With Generative AI coming into the mainstream globally, Generative AI chatbots are quickly gaining popularity. The case for deploying chatbots is strong – they offer efficiency and convenience, automate repetitive tasks and queries, and simulate human-like conversations with users - freeing up teams to focus on areas where a human touch is actually required.
Recent initiatives in the Philippines have underscored the transformative potential of AI chatbots in addressing societal challenges. For instance, the launch of a chatbot by Quezon City to combat domestic violence highlights the innovative applications of AI technology in social welfare. Across businesses, there is also a move towards adopting AI chatbots for customer service, spurred by the technology’s ability to handle multiple conversations at any time, and the always-on personalized experience it can offer customers.
However, for personalised experiences to be available, chatbots would need to collect and analyse data for improvement and insights into customer behaviour, preferences, and demographics for instance. While this can bring tailored responses to each customer, it also raises the issue of data privacy and security.
Sensitive information and personal identifiers can be raked from seemingly innocuous conversations. Inference attacks can also strike through the exploitation of vulnerabilities in chatbot algorithms to infer users’ personal or financial information. In the Philippine context, where data privacy is increasingly paramount, businesses utilising chatbots must adhere to strict compliance measures to safeguard customer information. Failure to do so not only exposes individuals to privacy breaches but also subjects organizations to legal repercussions and reputational damage. However, that would only serve as cold comfort to customers, should that information fall into the wrong hands.
There are many other ways that chatbots can extract personal or classified information. Rogue chatbots - programmed with malicious intent, can infiltrate chatbot platforms, impersonating legitimate entities or services to trick unsuspecting users into disclosing confidential information - can pose a very grave threat.
Vulnerabilities in AI chatbots, such as software bugs and weaknesses in authentication mechanisms, further exacerbate the risk of exploitation by malicious actors. Through adversarial prompt techniques, unauthorised access can be gained to sensitive information, including passwords, personally identifiable information (PII) and even training data sets through prompt injection (inserting malicious content to manipulate the AI's output), prompt leakage (unintentional disclosure of sensitive information in responses), and jailbreaking (tweaking prompts to bypass AI system restrictions).
Additionally, it's crucial to acknowledge the presence of bias in AI. Generative AI systems are prone to biases inherent in their training data sets or algorithms, which can result in unfair or discriminatory outcomes. It is essential for businesses and developers to recognize and address these biases to ensure fair and equitable use of AI technologies.
So how can businesses and individuals balance the benefits and the risks of using chatbots?
Here are five considerations to keep in mind when interacting with chatbots:
1. Don’t share sensitive personal information
Safeguard sensitive data by refraining from sharing confidential details like passwords or financial information unless you are certain of the chatbot's security measures. Be cautious of oversharing personal information.
2. Verify authenticity of the chatbot
Ensure you're interacting with a legitimate chatbot from a trusted source and not a rogue chatbot. Beware of imposters or bad actors posing as chatbots to deceive users and extract personal or sensitive information.
3. Understand the security risks
Be aware of potential security vulnerabilities associated with chatbots, such as data breaches or phishing attacks. Stay informed and updated about emerging threats to mitigate risks.
4. Practice Vigilance
Be wary of suspicious requests that seem out of the ordinary, and refrain from clicking on links or downloading files from untrusted sources
5. Review Privacy Notices
Familiarise yourself with the privacy notices that provide descriptions of the data processing practices of chatbot providers to understand how your data is collected, stored, and used. If possible, interact with chatbots that prioritise user privacy and adhere to stringent data protection regulations.
AI chatbots undoubtedly offer unprecedented convenience and efficiency, but also bring significant risks concerning data privacy and security. Users should exercise caution and vigilance when engaging with chatbots to safeguard personal information and privacy. As businesses increasingly rely on chatbots to streamline operations and enhance customer service, it is imperative to prioritise regulatory compliance, transparency, and user protection.By fostering a culture of responsible AI development and deployment, and implementing stringent security measures, we can harness the full potential of chatbots technology while safeguarding the privacy and security of individuals in the Philippines.
This article was first published on The Manila Times on 31 Mar 2024.
Get access to news, enforcement cases, events, and actionable tips and guides
Get regular email updates and offers
Job opportunities, mentorship and career guidance
Exclusive access to Data Protection community - ask questions, network and share knowledge with peers and experts via WhatsApp and Linkedin
DPEX Network is a Community Initiative of Straits Interactive.
Copyright © Straits Interactive Pte Ltd. All Rights Reserved.
All intellectual property rights to logos and brands featured on this website remain the property of their respective owners.