By Alvin Toh
Generative AI chatbots have risen in prominence as businesses begin to harness conversational AI. They are used for a variety of purposes, such as streamlining customer service, automating tasks, and personalising user experiences. One company, Klarna, made the bold claim that their Gen AI chatbots are able to do the work of 700 customer service agents, serving customers 24/7 worldwide. These levels of productivity are unprecedented, and could be very compelling for organisations as they consider deploying chatbots. Nonetheless, there are also associated security risks which must also be considered.
As global regulators begin to introduce policies to govern AI, businesses deploying chatbots must first navigate the data protection regulations to safeguard user privacy and avoid regulatory pitfalls. Various policies are being introduced globally such as the EU AI Act, which obliges high risk apps to be more transparent about data usage. Similarly, Singapore’s Model AI Governance Framework identifies 11 key governance dimensions around issues such as transparency, explainability, security, and accountability to safeguard consumer interests, while allowing space for innovation
The recent PDPC AI Guidelines in Singapore likewise encouraged businesses to be more transparent when seeking consent for personal data use, through disclosure and notifications. Businesses have to ensure that AI systems are trustworthy, which provides consumers with confidence over how their personal data is being used.
Internationally, the new ISO 42001 specifies requirements for establishing, implementing, maintaining, and continually improving Artificial Intelligence Management Systems within organisations.
Modern Gen AI-enabled chatbots are able to profile individuals very quickly through a massive amount of historical user interactions and data inputs, enabling detailed profiles to be constructed. Personal information such as interests, preferences, gender and personal identifiers can be deduced from seemingly innocuous conversations with chatbots. This ability raises privacy and manipulation concerns and poses a real threat if the information falls into the wrong hands.
Through adversarial prompt techniques such as prompt injection (inserting malicious content to manipulate the AI's output), prompt leakage (unintentional disclosure of sensitive information in responses), and jailbreaking (tweaking prompts to bypass AI system restrictions), unauthorised access can be gained to sensitive information, including passwords, personally identifiable information (PII) and even training data sets.
Rogue chatbots are also a concern, where malicious actors deploy chatbots with the intention of extracting sensitive information from unsuspecting users. These rogue chatbots may impersonate legitimate entities or services to deceive users into disclosing confidential information.
In addition to data leakages, AI regulators and ethicists are also concerned about bias in AI, especially when deployed in recommendation or decision making systems. Generative AI systems are likely to have biases inherent in their training data or algorithms, which can result in unfair or discriminatory outcomes. It is essential for chatbot developers, as ‘Human- in and over-the-Loop’, to recognise and address these biases in their development and deployment to ensure fair and equitable use of AI technologies.
Verify authenticity of the chatbot. Look for trusted domain, security protocols in place: ensure you're interacting with a legitimate chatbot from a trusted source. Beware of imposters or malicious entities posing as chatbots to deceive users and extract personal or sensitive information.
Practice Vigilance: be wary of suspicious requests or prompts that seem out of the ordinary, and refrain from clicking on links or downloading files from untrusted sources.
Review Chatbot domain Privacy Policies: familiarise yourself with the privacy policies of chatbot providers to understand how your data is collected, stored, and used. If possible, interact with chatbots that prioritise user privacy and adhere to stringent data protection regulations.
Businesses would benefit by referring to robust frameworks such as the IS42001 AI Management System, the ISO 23894 AI Risks and Controls or the guidelines from the regulators (EU AI Act, AI Model Governance Framework) for a more systematic approach in developing and deploying Gen AI chatbots that are public facing and process personal data. Be transparent to demonstrate accountability in adhering to best practices and legal obligations so users can be confident in their use.
Ultimately, while chatbots offer unparalleled convenience and efficiency, they can pose significant security risks and privacy concerns with lax practices and deployment by new ‘AI automation agencies or new AI no-code solution’ developers who are not versed in enterprise security or data protection guidelines. It is critical that users are aware of this in order to maximise the benefits from chatbots whilst also keeping their personal data safe.
This article was first published on The Fast Mode on 25 April 2024.
Get access to news, enforcement cases, events, and actionable tips and guides
Get regular email updates and offers
Job opportunities, mentorship and career guidance
Exclusive access to Data Protection community - ask questions, network and share knowledge with peers and experts via WhatsApp and Linkedin
DPEX Network is a Community Initiative of Straits Interactive.
Copyright © Straits Interactive Pte Ltd. All Rights Reserved.
All intellectual property rights to logos and brands featured on this website remain the property of their respective owners.