Are Generative AI Chatbots Getting Too Close For Comfort?

2024-05-23
Article Banner

By Edwin Concepcion, Philippine Country Manager, Straits Interactive


Since late April, a new feature has quietly emerged in all of Meta’s apps - an AI assistant available at your beck and call in the search bar and within chats. Meta AI has made its debut across WhatsApp, Facebook Messenger and Instagram, and also exists on a separate website for direct access to its full scope of multimodal task capabilities, just like the web-based interfaces of ChatGPT and Gemini. 

While not yet available in most of Asia (only in Singapore), considering that India, Indonesia and the Philippines have some of the highest numbers of users of Facebook in the world, it should be safe to assume that Meta AI’s widespread arrival in the region is simply a matter of time. But here's the catch: Meta AI is a permanent feature, and it appears to be uninstallable.

This raises concerns about data privacy and security, especially with information from Meta AI users being fed into targeted advertising and content recommendations across Meta’s apps. Though Meta has stated that they do not have the ability to read messages outside of those that give instruction to Meta AI, Meta can share information given to its AI with third parties, such as Google’s search engine, to provide more relevant responses to a user’s prompt. Thus, sparking apprehensions about how our information is used and protected.

The Two Sides of the Coin: Convenience vs. Privacy

There is already a mix of government- and business-led initiatives that highlight the transformative potential of AI chatbots in addressing diverse challenges. 

In the Philippines, the Department of Trade and Industry of the Philippines has deployed a citizen support chatbot known as TIA (Trade & Industry Assistant) which automates administrative queries that the DTI previously had to address over email. It aims to free up human agents to focus on higher-level tasks, while also allowing the DTI to comply with the Philippine government’s No Wrong Door policy for citizens such that cases are always forwarded to the correct government agency. 

Likewise, businesses are increasingly adopting AI chatbots as frontdesk helpers in customer success due to their ability to handle multiple conversations and provide always-on personalised experiences. In many ways, Meta AI positions itself to fulfil the same intentions in much closer proximity with its users. 

Naturally, personalisation comes with a cost. AI chatbots are able to profile individuals swiftly and in detail through a massive amount of historical user interactions and data inputs. This ability raises privacy and manipulation concerns and poses a real threat if the information falls into the wrong hands.

Threats in Plain Sight

While Meta has mentioned that data is anonymised in the parts of the conversation that the AI saves to generate its response, experts believe that even anonymised data can sometimes be re-identified with a combination of data points. As such, users are advised to not share information that they don’t want kept in the system. But Meta AI isn’t the only chatbot out there that people should tread carefully around. As with all generative AI solutions, there are other vulnerabilities beyond data collection that may not be immediately apparent, which we must educate ourselves of.  

Accidental Data Leakage

For starters, there is an organisational risk of unintended data leakage posed by employees who might be using generative AI for their work with neither their company’s clearance nor knowledge that the data they input can be used to train the Large Language Model (LLM). By placing proprietary and sensitive data within these models, employees might inadvertently expose confidential data as well as personally identifiable information (PII) when using versions where model training is enabled.

Inference Attacks

Sensitive information and personal identifiers can be raked from seemingly innocuous conversations. Vulnerabilities in chatbot algorithms can be exploited to infer users’ personal or financial information. This is not only a breach of privacy, but also subjects the organisations behind the chatbots to legal repercussions and reputational damage. 

Adversarial Prompt Techniques

Through adversarial prompt techniques, unauthorised access can be gained to sensitive information, including passwords, personally identifiable information (PII) and even training data sets. Prompt injection (inserting malicious content to manipulate the AI's output), prompt leakage (unintentional disclosure of sensitive information in responses), and jailbreaking (tweaking prompts to bypass AI system restrictions) are now becoming synonymous with security issues in LLMs. Jailbreak-as-a-Service is being offered in the market to break the guardrails of AI chatbot models and divulge information safeguarded in the system prompts. Malicious tactics such as SQL injection, now extend to LLMs, posing new challenges. 

Rogue Chatbots

Malicious programs impersonating legitimate entities or services and can infiltrate chatbot platforms by tricking unsuspecting users into disclosing confidential information. Rogue chatbots are also the latest means of spreading malware, where victims are redirected to fake pages that invite users to download ‘legitimate’ software.

AI Biases

AI regulators and ethicists have also raised concerns about bias in AI, especially when deployed in recommendation or decision making systems. Generative AI systems are prone to biases inherent in their training data sets or algorithms, which can result in unfair or discriminatory outcomes. It is essential for businesses and developers, as “Human-in” and “over-the-loop” to recognise and address these biases to ensure fair and equitable use of AI technologies.

Using Chatbots Safely

Here are five considerations to keep in mind when interacting with chatbots:

1. Protect Sensitive Information 

Avoid sharing passwords, financial details, or any information you wouldn't feel comfortable giving out online.

2. Verify Authenticity & Security

Double-check that you are interacting with a legitimate chatbot from a trusted source and not a rogue chatbot. Organisations should also select a generative AI solution where there has been rigorous internal testing of adversarial prompts against its chatbots.

3. Stay Informed 

Be aware of potential security vulnerabilities associated with chatbots and emerging threats to mitigate risks.

4. Practice Vigilance & Validation

Be wary of suspicious requests that seem out of the ordinary, and refrain from clicking on links or downloading files from untrusted sources.

5. Review Privacy Notices 

Understand how chatbots collect, store, and use your data. Opt for chatbots that prioritise user privacy and data protection.

Creating a Safer Future for Us and AI

As AI regulations are being developed globally, businesses deploying chatbots will need to navigate data protection laws to safeguard user privacy and avoid legal pitfalls. Leading regulations like the EU AI Act and international frameworks like the ISO 42001 provide guidance for responsible development and deployment of AI systems that handle personal data. Here in the Philippines, organisations may also consider upskilling their staff to become skilled AI Business Professionals so that they may employ generative AI safely in their work.

By following these steps and staying informed, we can leverage the benefits of AI chatbots while minimising the risks to our privacy and security.


This article was first published on The Governance Age on 23 May 2024. 


Unlock these benefits
benefit

Get access to news, enforcement cases, events, and actionable tips and guides

benefit

Get regular email updates and offers

benefit

Job opportunities, mentorship and career guidance

benefit

Exclusive access to Data Protection community - ask questions, network and share knowledge with peers and experts via WhatsApp and Linkedin

Topics
Related Articles