Natural-language generative artificial intelligence (AI) technologies have revolutionised the way we interact with applications and platforms, offering businesses innovative solutions for content generation, customer support and decision-making, and enhancing overall productivity.
As I write this blog article, our research team is analysing a new generation of generative AI applications now flooding the market, including the Google Play Store and the Apple App Store.
We have grouped the generative AI landscape into three categories: Core Apps, Clones, and Combination Apps. In the age of the "Clone Wars," where numerous startups compete for investor attention and market share, organisations must exercise due diligence when sharing corporate data with generative AI providers, especially with Clones, which may have varying levels of privacy and security practices.
1. Core Apps:
These are the original developers and pioneers of generative AI technologies, such as OpenAI (ChatGPT), Google (Bard) and Meta (LLaMA). They offer cutting-edge AI technology, direct support, and continuous updates to users.
Additionally, many make their API (application programming interface) available to other developers, enabling them to build their own applications using the same underlying technology. However, using Core Apps may come with higher costs and limited customisation options.
2. Clones:
These are entities that use APIs such as the ChatGPT API to build their applications and innovate around them. Examples include Replika, Copy.ai, and Jasper (formerly Jarvis). While they offer easier integration, lower development costs, and diverse applications, they also present potential privacy and security risks due to their dependence on core AI providers, for reasons we will cover in this article.
3. Combination Apps:
These are existing applications that incorporate generative AI to enhance their functionality or introduce new features. Examples include popular apps from Adobe, Google, Meta, Microsoft and ServiceNow. They offer improved app features, a competitive edge, and increased user engagement, but may face integration complexity, increased regulatory attention and even ethical concerns.
This is Part One of our series on generative AI apps; Part Two and Part Three are also available.
Look at data through an ethical lens and learn how to manage large streams of data by taking our Data Ethics and AI Governance Frameworks course.
In the current "Clone Wars" of generative AI, numerous startups are leveraging generative AI APIs to create innovative applications and compete for investor attention. Many of these Clone apps have received significant venture capital funding, driving a race to monetise their offerings. The business models of these startups often revolve around personalised advertising, which relies heavily on the processing of personal identifiable information (PII).
These clone apps are developed by startups with varying levels of AI expertise. Some may have limited experience in AI, relying solely on the power of Generative AI APIs to build their applications. Consequently, privacy and security might not be their priority, and they may lack the necessary competencies to implement robust data protection measures. This situation creates potential risks for organisations sharing corporate data with these clone apps.
As an example, we found over 100 mobile apps in the Google Play Store that use the ChatGPT API, all with "ChatGPT" in their names. An API is a set of rules and protocols that enable different software applications to communicate and share data with each other. It acts as an intermediary, allowing developers to access the functionality and features of another service or platform, in this case ChatGPT, without having to build that technology themselves.
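To make this concrete, here is a minimal sketch of what such an integration typically looks like, assuming the openai Python library (pre-1.0 interface); the API key, model name and function name are illustrative placeholders, not taken from any particular app:

```python
# Minimal sketch of a clone app calling the ChatGPT API.
# Assumes the openai Python library (pre-1.0 interface); the key,
# model and function name below are illustrative placeholders.
import openai

openai.api_key = "sk-..."  # the clone developer's key, not the end user's

def summarise_document(document_text: str) -> str:
    # Whatever the user uploads is forwarded verbatim to the API provider.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarise the user's document."},
            {"role": "user", "content": document_text},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

The point to note is that the clone app sits between the user and the AI provider: every document passes through the developer's own code before it reaches ChatGPT.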
Many of these apps offer features that allow you to upload your corporate data, including business plans, policies, and spreadsheets, so that ChatGPT can analyse or summarise them for you.
Learn how good data governance can not only help you protect data in your organisation, but derive even greater value from it, by taking the modules of the Advanced Certificate in Data Governance Systems.
In the current generative AI landscape, many providers offer their services through an API-based approach, which enables the development of clone apps. While this approach can accelerate innovation and make generative AI more accessible, it also presents potential cybersecurity and privacy concerns.
These concerns arise because the vulnerability of a service depends on factors such as the security measures implemented by the API provider, the clone app developers' data handling practices, and the overall architecture of the integrated system.
As a reminder, many of the developers of clone apps are new startups or individuals riding the generative AI wave. In fact, with these APIs, anyone with programming knowledge can tap into these technologies to generate new content and features.
One significant concern is the potential privacy and confidentiality issues that arise when clone apps have full access to the information an individual or company shares with them. In order to generate synthetic content, the clone app must send the input data to the relevant APIs. This process grants the clone app access to potentially sensitive information, which could be mishandled or misused if the developers lack robust data protection measures.
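As a simple illustration of where things can go wrong, the sketch below shows a hypothetical clone-app request handler; the logging call stands in for any careless handling (logging, caching, analytics) a developer might add, and none of the names come from a real app:

```python
# Hypothetical clone-app request handler. The logging call stands in
# for any careless data handling a developer might introduce; names
# are illustrative only.
import logging
import openai

openai.api_key = "sk-..."  # illustrative placeholder
logging.basicConfig(filename="requests.log", level=logging.INFO)

def handle_user_request(user_input: str) -> str:
    # The app sees the full input before anything reaches the API,
    # so a single line like this can persist sensitive corporate data
    # in plain text on the developer's server.
    logging.info("User input: %s", user_input)

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_input}],
    )
    return response["choices"][0]["message"]["content"]
```

Nothing in the API itself prevents this; whether the input is stored, shared or discarded is entirely down to the clone developer's practices.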
Take a hands-on approach to understanding information and cyber security from a management perspective, and learn how to create an Information Security Policy in our Information & Cyber Security for Managers, EXIN Certification course.
Before sharing corporate data with a generative AI provider, it is essential for organisations to thoroughly review the provider's privacy policy and terms of use. These documents outline the provider's data handling practices, data retention policies, data sharing agreements, and other critical aspects of their service. Users often overlook key areas in these documents that may have significant implications for their data's security and privacy.
Some of the key areas to be wary of or to look for include:
1. Data collection: Understand what types of data the provider collects, whether it's necessary for their service, and how it's used. Do they collect your inputs or the entire conversation?
2. Data retention: Check the provider's data retention policy, including how long they store your data and the circumstances under which they delete it.
3. Data sharing: Look for clauses regarding sharing your data with third parties and under what conditions this might occur.
4. Data protection measures: Ensure that the provider employs adequate security measures to safeguard your data from unauthorised access or misuse, both at rest and in transit.
5. Rights and control: Verify if the provider offers you control over your data, such as the ability to access, correct, or delete it.
For access to news updates, blog articles, videos, events and free resources, please register for a complimentary DPEX Network community membership, and log in at dpexnetwork.org.
Kevin Shepherdson is the author of “99 Privacy Breaches to be Aware of”. He is the CEO and Founder of Straits Interactive.