PDP Week 2024: Navigating the Crossroads: Where AI and Data Protection Converge

2024-07-23

As AI continues to advance and integrate into various sectors, the complexities of AI governance and data protection have become increasingly apparent. The first panel discussion at this year’s Personal Data Protection (PDP) Week 2024, “Navigating the Crossroads: Where AI and Data Protection Converge”, shed light on these challenges, highlighting the need for evolving frameworks and collaborative approaches.



The panel was helmed by industry leaders Jessica Gan Lee, Associate General Counsel & Head of Privacy Legal of OpenAI; Denise Wong, Assistant Chief Executive for IMDA Singapore & Deputy Commissioner, PDPC Singapore; Jason Tamara Widjaja, Executive Director of Artificial Intelligence for Singapore Tech Center, MSD; and Irene Liu, Regional Strategy and Consulting Lead, Finance, Risk and Compliance Practice (APAC) for Accenture. Shameek Kundu, Co-chair of Data Governance Group, Global Partnership on AI, moderated the discussion.


The panel opened with the importance of using good-quality data to train effective AI systems, then discussed what it means to ensure secure access to and control over multiple Large Language Models (LLMs), maintain data integrity while fine-tuning on internal data, and address complexities in content moderation and filtering. On the compliance front, connecting LLMs to internal data necessitates robust role-based access controls, while navigating international data privacy regulations has become crucial for deploying AI across different jurisdictions. This is where the discussion segued into AI governance.
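To make the role-based access control point concrete, here is a minimal sketch (not drawn from the panel itself) of how a retrieval layer might filter internal documents by a user's role before any content reaches an LLM prompt. The role names, document tags and function names are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: enforce role-based access control (RBAC) at the
# retrieval layer, so the LLM is only shown documents the requesting user is
# cleared to read. Role names, tags and functions here are assumptions.

@dataclass
class Document:
    doc_id: str
    content: str
    allowed_roles: set = field(default_factory=set)

# Which internal data tags each role may access (assumed mapping).
ROLE_GRANTS = {
    "hr_analyst": {"hr", "public"},
    "engineer": {"engineering", "public"},
}

def retrieve_for_user(user_role: str, corpus: list) -> list:
    """Return only documents whose access tags intersect the user's grants."""
    grants = ROLE_GRANTS.get(user_role, {"public"})
    # A real system would also run a vector or keyword search over the query;
    # the key point is that permission filtering happens before anything
    # reaches the model's context window.
    return [doc for doc in corpus if doc.allowed_roles & grants]

def build_prompt(question: str, docs: list) -> str:
    """Assemble a grounded prompt from the permitted documents only."""
    context = "\n".join(doc.content for doc in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The design choice this illustrates is that access decisions sit outside the model, rather than relying on the model itself to withhold information it should never have seen.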


Paradigm Shift Needed as AI Governance Evolves

Traditionally, AI projects involve multiple layers of assessments, including privacy impact and AI impact assessments. However, these methods are becoming “tricky” with the advent of general-purpose AI systems, said Jason Tamara Widjaja. With the barriers to entry falling, even non-specialists can develop and deploy powerful systems, making threat spaces such as security and education more challenging to navigate.


“We really need to evolve our paradigm to a more holistic approach requiring extensive collaboration and education, as general-purpose AI becomes accessible to a wider range of users,” he said, adding that although he runs the global AI governance team, he collaborates with more than 10 other functions to do so effectively. “Gen AI is not a specialist technology for a few data scientists - it's very much a consumer technology with a wide footprint. So I think there's lots of room for evolution in the way we think about this management space,” he said. 


Jessica Gan Lee said that as OpenAI builds new technologies, they test for both risks and beneficial use cases, partnering with experts to understand how users might use their tools and support those applications. Safeguards are implemented at every stage of training, development, and deployment to protect data and ensure safety, such as training models to avoid sharing private or sensitive information. They also publish extensively on risks and safeguards to help raise industry standards and guide companies in their own implementations. 


From a policy and regulatory standpoint, Minister Josephine Teo announced in her opening address that a set of safety guidelines would be launched. Denise Wong added that these guidelines would parse out “key conceptual elements” such as transparency and accountability. These are issues that regulators around the world are concerned about, but “even before we go into detailed obligations and requirements, we need to have an understanding of the broader architecture in place,” she said.


Emerging Challenges on the Horizon

AI Agents, or Agentic Workflows, capable of autonomously handling complex tasks, present new challenges. “These intelligent systems, being able to do these transactions that interact with internal data sets, require different levels of data access,” Widjaja explained. “With this trend just around the corner, there will be attention around securing not just data protection or privacy of the user, but of controlling the data access of these agents.”
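As an illustration of the access-control concern Widjaja raises (not a design described by the panel), one possible pattern is to give each agent its own least-privilege grant of data sets and actions, checked on every tool call. The agent IDs, data set names and functions below are hypothetical.

```python
# Hypothetical sketch: least-privilege data access for AI agents.
# Each agent carries its own scoped grant, separate from the end user's
# permissions, and every tool call is checked against that grant.

AGENT_GRANTS = {
    "expense_agent": {"datasets": {"expenses"}, "actions": {"read", "summarise"}},
    "hr_agent": {"datasets": {"hr_records"}, "actions": {"read"}},
}

class AccessDenied(Exception):
    """Raised when an agent attempts an action outside its grant."""

def execute_tool_call(agent_id: str, dataset: str, action: str) -> str:
    """Permit a tool call only if the agent's grant covers both the dataset and the action."""
    grant = AGENT_GRANTS.get(agent_id)
    if not grant or dataset not in grant["datasets"] or action not in grant["actions"]:
        raise AccessDenied(f"{agent_id} may not perform '{action}' on '{dataset}'")
    # The actual data access or downstream API call would happen here.
    return f"{agent_id} performed '{action}' on '{dataset}'"

# Example: the HR agent may read HR records but nothing beyond that grant.
print(execute_tool_call("hr_agent", "hr_records", "read"))
```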


Educating consumers about their rights and responsibilities is critical, so that they are mindful of the information that they share, said Liu. Gan Lee believes that communication and collaboration are essential, from the perspective of the company, the regulatory community and the broader privacy community. 


Widjaja said, “When you think about AI, you need to look at the whole spectrum of things. Sometimes there's no law, so you can't use regulatory guidelines - you’ll need to use internal compliance and privacy policy. Sometimes there's no policy, you’ll need to just use ethics. Everything's important.”


As a final note, Wong mentioned that there are already regulations that deal with misinformation. With Gen AI, she said, “Our approach is to use existing regulations to provide horizontal guidelines to guide industries and companies in the way that we envision. But we will always keep an eye on what society cares and worries about, as well as the risks they're facing.”


Close Collaboration Needed

AI governance is an evolving field that requires a multi-faceted approach. Collaboration, education, and a mix of governmental regulation and self-regulation are key to addressing the complex challenges posed by AI technologies. As the panellists emphasised, ongoing dialogue and practical solutions are essential to driving innovation and progress while protecting the privacy and security of data.

