By Alvin Toh, Co-Founder, Straits Interactive
In 2025, AI is expected to open up incredible opportunities for innovation and growth as more organisations adopt it into their workflows.
Alongside its potential, AI’s risks are also becoming better understood. Over the past year, foundational frameworks, regulations, and testing modalities have emerged, including the EU AI Act, Singapore’s Model AI Governance Framework, and the AI Verify initiative. These aim to address the ethical, legal, and societal challenges posed by AI.
This convergence of innovation and regulation underscores a critical need: enterprises will require more skilled AI governance professionals to manage and mitigate these risks effectively. For data professionals, staying ahead of this curve is no longer optional; it is essential. For professionals and organisations, investing now in AI governance training is not just about avoiding risks: it is what allows them to unlock AI’s full potential in a way that is ethical, transparent, and sustainable.
Here’s why AI governance is becoming indispensable and how data professionals can seize this opportunity.
Emerging AI Roles In A Growing Market
As AI is adopted across organisations, specialised roles focused on its ethical and responsible use will grow in tandem. Job titles such as AI Governance Officer, Ethical AI Specialist, and AI Risk Manager will soon become commonplace. According to the World Economic Forum’s Future of Jobs Report, jobs in AI and machine learning are among the fastest-growing roles, with anticipated growth of 40 per cent by 2027.
Despite this demand, many organisations will initially rely on existing Data Protection Officers (DPOs) and Data Governance Professionals to bridge the gap.
Skilling up as AI governance professionals will empower them to mitigate risks such as adversarial prompts or biased outputs, and to lead AI-driven digital transformation with confidence.
Urgent Need For Enhanced Knowledge And Skills To Oversee Upcoming And Diverse AI Applications
AI governance will encompass responsibilities such as defining and clarifying team roles, coordinating across departments, and monitoring AI applications.
With generative AI’s ability to process diverse data formats, including images, videos, and voice, governance professionals must develop GenAI skills and knowledge that go beyond traditional IT skills. These include understanding the operational aspects of generative AI and identifying privacy risks across different data types, since GenAI deployments cut across many more stakeholders than traditional IT or AI projects.
Data governance professionals must evolve from a purely compliance-focused role to a business-enabling one, applying GenAI training and skills across diverse departments and workflows to ensure AI-powered solutions align with organisational goals.
Mitigating AI’s Inherent Risks
AI can deliver remarkable results, but it undeniably also brings risks, from bias to ethical breaches. Without proper AI governance, organisations risk reputational harm, legal liabilities, and financial losses. A Deloitte report on Southeast Asia found that security vulnerabilities, including cyber and hacking risks, and data privacy were cited as the top concerns around using AI.
Prioritising AI governance training equips professionals with critical skills such as risk management, ethical framework development, and operational oversight – crucial for ensuring AI systems are transparent, trustworthy, and aligned with regulatory and ethical standards.
With employees typically adopting AI faster than official organisational rollouts, it is important for organisations to educate staff on how to use AI responsibly in the enterprise.
Traditional methods may fall short in addressing novel risks like adversarial attacks, biased datasets, and model vulnerabilities. AI governance training provides the tools to develop ethical frameworks, manage operational risks effectively, and monitor AI models for transparency and alignment with regulations.
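That kind of monitoring is easiest to picture with a concrete check. The sketch below, in Python with pandas, shows one simple way to screen model decisions for biased outcomes: measuring the demographic-parity gap across groups. The dataset, column names, and 20 per cent threshold here are hypothetical illustrations rather than a prescribed standard, and a real bias audit would use several complementary metrics.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the gap between the highest and lowest positive-outcome
    rates across groups; 0.0 means perfectly equal rates."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical example: loan decisions produced by an AI model.
decisions = pd.DataFrame({
    "applicant_group": ["A", "A", "B", "B", "B", "A"],
    "approved":        [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(decisions, "applicant_group", "approved")
if gap > 0.2:  # threshold is an illustrative policy choice, not a standard
    print(f"Escalate for review: approval-rate gap across groups is {gap:.0%}")
else:
    print(f"Approval-rate gap of {gap:.0%} is within the illustrative threshold")
```

The point is the workflow rather than the specific metric: a governance professional sets the threshold as policy, runs the check routinely, and escalates when a model drifts outside it.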
By addressing these challenges proactively, trained professionals safeguard their organisations from reputational, financial, and legal repercussions.
Staying Compliant With Evolving AI Regulations
AI regulations are still evolving across the world, with landmark frameworks like the EU AI Act and Singapore’s IMDA Model AI Governance Framework setting the stage for responsible AI deployment. Internationally, there is also the Council of Europe’s AI Treaty, adopted on May 17, 2024 with 11 signatories, including the United Kingdom, the United States, and the European Union. This is the first legally binding international treaty aimed at ensuring that AI systems are developed and used in ways consistent with human rights, democracy, and the rule of law. Its key features include:
1. Comprehensive legal framework: The Convention addresses the entire lifecycle of AI systems, from design and development to deployment and decommissioning, ensuring that each phase aligns with human rights and democratic values.
2. Fundamental principles: It emphasises principles such as human dignity, individual autonomy, equality, non-discrimination, privacy, data protection, transparency, accountability, and reliability.
3. Risk-based approach: The Convention adopts a methodology to assess and mitigate potential negative impacts of AI systems on human rights, democracy, and the rule of law, ensuring that risks are identified and addressed appropriately.
Organisations developing, deploying, or using GenAI across borders must ensure that their AI systems comply with these guidelines, making AI governance training indispensable. For instance, professionals trained in AI compliance can interpret frameworks, mitigate risks, and develop policies to align AI initiatives with regulatory standards, ensuring seamless international operations.
Safeguarding Internal AI Tools
Data poisoning—a malicious attack where adversaries manipulate training data to compromise AI models—poses a significant threat to the integrity of AI systems. There are many different forms of data poisoning, including targeted and non-targeted attacks, that can cripple the outputs of AI systems. Left unchecked, such vulnerabilities can lead to significant risks, inaccuracies, security breaches, and ethical concerns.
To mitigate the risks of such attacks, data professionals need the know-how to validate and sanitise training datasets. They should also be able to enforce strict access controls and put proactive safeguards in place, for example by developing systems that detect anomalies in data, as sketched below. Again, these skills go beyond traditional data protection and compliance training but are crucial for safeguarding the quality and integrity of training datasets, and for maintaining the reliability and effectiveness of AI systems.
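As one illustration of such a safeguard, the minimal sketch below uses scikit-learn’s IsolationForest to flag statistical outliers in a training set before the data reaches a model. The synthetic data, the simulated poisoned rows, and the 2 per cent contamination setting are assumptions for demonstration; real poisoning defences would combine several checks with data provenance tracking.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical numeric training set: rows are records, columns are features.
rng = np.random.default_rng(seed=42)
clean = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
poisoned = rng.normal(loc=8.0, scale=0.5, size=(5, 4))  # simulated injected rows
training_data = np.vstack([clean, poisoned])

# Fit an unsupervised anomaly detector over the training set itself.
detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(training_data)  # -1 = anomaly, 1 = normal

suspect_rows = np.where(labels == -1)[0]
print(f"Flagged {len(suspect_rows)} of {len(training_data)} rows for human review")

# Quarantine flagged rows rather than silently deleting them, so the
# exclusion decision leaves an auditable trail.
vetted_data = training_data[labels == 1]
```

Note that flagged rows are set aside for human review rather than deleted outright, preserving the audit trail that governance frameworks expect.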
As AI gains traction within organisations, closing the competency gap in GenAI-enhanced roles across the board and having data professionals skilled in AI governance will be the cornerstone of responsible, impactful innovation. A new generation of data professionals focused on risk mitigation rather than risk avoidance will thrive in the workforce of tomorrow.
For organisations and professionals alike, 2025 is not just a golden window; it is a call to action. Investing in AI governance capabilities sooner rather than later not only mitigates risks but also empowers professionals to unlock AI’s potential ethically, transparently, and sustainably.
This article was first published on e27 on 10 Feb 2025.