By Kevin Shepherdson, Founder & CEO, Straits Interactive
Generative AI, underpinned by the 6C trends—Collection of Data, Compute Power, Context Window, Chain of Thought, Customisation, and Control—is transforming how businesses operate. For Governance, Risk, and Compliance (GRC) professionals, this evolution demands a shift from traditional compliance-driven data protection to a broader focus on data governance that aligns with business objectives. This approach emphasises not just mitigating risks but also increasing the value and utility of organisational data.
In 2025, digital transformation enabled by Generative AI highlights the critical need for AI governance, particularly for small and medium-sized enterprises (SMEs) aiming to harness these tools responsibly. Data Protection Officers (DPOs) are uniquely positioned to lead this transformation, bridging compliance objectives with strategic business goals.
Quick Review of the 6C Trends in 2025
The 6C trends outline the key technological and operational shifts driving Generative AI in 2025. Here’s a summary for GRC professionals:
1. Collection of Data
Organisations are moving from relying solely on general knowledge LLMs to leveraging internal knowledge bases through techniques like Retrieval-Augmented Generation (RAG). This enables contextualised outputs tailored to specific organisational needs but raises challenges such as data privacy risks and bias in datasets.
- Implication: Strong data governance practices are essential to mitigate risks and unlock data's value.
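To make the RAG idea above concrete, here is a minimal sketch of the retrieval step: it uses a toy bag-of-words similarity in place of a production embedding model, and the documents, function names, and query are all invented for illustration.

```python
from collections import Counter
import math

# Illustrative internal knowledge base; in practice these would be
# chunks of governed, access-controlled company documents.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Employee data may only be processed for payroll and HR purposes.",
    "Customer PII must be masked before use in analytics datasets.",
]

def bow(text):
    """Bag-of-words vector (a stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    qv = bow(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Ground the model's answer in retrieved internal context."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the refund policy?", documents))
```

The governance point is visible in the sketch itself: whatever lands in `documents` can surface in a prompt, which is why data quality and access controls upstream of retrieval matter as much as the model.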
2. Compute Power
With innovations like NVIDIA’s Blackwell architecture and Amazon’s Ultracluster supercomputer and Trainium chips, scalable and energy-efficient AI compute resources are more accessible. SMEs increasingly use cloud-based AI solutions to manage costs and scale operations.
- Implication: Sustainable and cost-effective compute resources are critical for SMEs aiming to balance efficiency with ESG goals.
3. Context Window
Expanded context windows, such as those introduced by Google’s Gemini models, allow AI systems to retain and process vast amounts of information across sessions. This capability enhances collaboration and continuity in workplace interactions, enabling seamless multi-session workflows.
- Implication: AI tools must integrate privacy safeguards to prevent the unintentional inclusion of sensitive data.
4. Chain of Thought
AI systems with advanced reasoning capabilities assist in multi-step problem-solving, scenario analysis, and strategic decision-making. OpenAI’s o3 and upcoming models showcase the potential of AI to automate reasoning-heavy tasks.
- Implication: Explainable AI (XAI) becomes vital to ensure traceable and justifiable decisions in high-stakes scenarios.
5. Customisation
The rise of no-code and low-code platforms has democratised AI customisation, enabling SMEs to tailor solutions for specific workflows and departments. Customised AI tools, from HR to marketing, deliver immediate relevance and ROI.
- Implication: Ethical considerations must guide AI customisation to avoid biased or non-compliant outputs.
6. Control
As AI takes on agentic tasks, robust oversight mechanisms are critical. Real-time monitoring, incident response protocols, and compliance with regulations like the EU AI Act are necessary to align AI operations with organisational values.
- Implication: Governance frameworks must evolve to manage the complexity of autonomous AI systems.
Leveraging the 6C Trends for GRC Excellence
1. Data Governance: Ethical Collection of Data
Generative AI thrives on quality data. SMEs must prioritise robust frameworks for data collection, storage, and processing to ensure ethical practices.
Scenario: An SME in retail uses RAG to extract insights from customer data. By implementing stringent data governance policies, the company prevents the inadvertent inclusion of personally identifiable information (PII) in its AI training datasets.
Challenges
- Inconsistent data practices across departments.
- Limited expertise in data annotation and preparation.
Actionable recommendations:
- Conduct regular audits to identify risks like PII leakage and biases.
- Train employees on ethical data handling and privacy regulations such as GDPR and the EU AI Act.
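The audit recommendation above can be sketched in code. This is a deliberately minimal illustration, assuming only two regex patterns (emails and phone numbers); a production audit would use a dedicated PII-detection tool covering many more identifier types.

```python
import re

# Illustrative patterns only; real PII detection covers far more types
# (national IDs, addresses, names, account numbers, and so on).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
}

def audit_record(text):
    """Return the PII types found in a record, if any."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def redact(text):
    """Mask detected PII before the record enters a training dataset."""
    for name, pat in PII_PATTERNS.items():
        text = pat.sub(f"[{name.upper()} REDACTED]", text)
    return text

record = "Customer jane.doe@example.com called from +65 9123 4567."
print(audit_record(record))  # lists which PII types were detected
print(redact(record))
```

Running such a check over candidate training data, and logging what it finds, is one concrete form the recommended "regular audits" can take.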
2. Ethical and Privacy Practices: Energy-Efficient Compute Power
Generative AI systems require substantial compute power, raising concerns about sustainability and privacy.
Scenario: A logistics SME adopts a cloud-based AI solution using energy-efficient hardware. This reduces both operational costs and environmental impact while ensuring compliance with data protection laws.
Challenges
- Verifying third-party compliance with data protection and cybersecurity standards.
- Balancing compute performance with sustainability goals.
Actionable recommendations:
- Choose AI vendors with transparent privacy and energy efficiency policies.
- Include contractual clauses requiring compliance with data protection standards and sustainability benchmarks.
3. Context Management: Avoiding Sensitive Data Misuse
Extended context windows in Generative AI allow for detailed outputs but risk the unintentional inclusion of sensitive information.
Scenario: A professional services SME uses AI to draft reports. Clear policies ensure the system avoids including proprietary client data or sensitive employee information in outputs.
Challenges
- Lack of understanding of how context windows work in AI systems.
- Difficulty in monitoring sensitive data inclusion in AI-generated outputs.
Actionable recommendations:
- Develop and enforce policies on data inclusion for AI systems.
- Use monitoring tools to flag unintended data exposure in AI outputs.
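As a sketch of the monitoring recommendation above, the check below holds an AI-generated draft for review when it contains deny-listed terms or an email address. The deny-list entries, pattern, and draft text are invented for illustration; a real deployment would combine a DPO-maintained term list with automated PII detection.

```python
import re

# Illustrative deny-list of confidential names; maintained by the DPO
# in practice and extended with automated PII detection.
CONFIDENTIAL_TERMS = ["Project Falcon", "Acme Corp"]
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def flag_output(text):
    """Return the reasons an AI-generated output should be held for review."""
    reasons = [f"confidential term: {t}" for t in CONFIDENTIAL_TERMS if t in text]
    if EMAIL_PATTERN.search(text):
        reasons.append("possible PII (email address)")
    return reasons

draft = "Summary of Project Falcon for the client (contact: a.lee@acme.com)."
issues = flag_output(draft)
if issues:
    print("Hold for review:", issues)
```

A gate like this, placed between the model and the document that leaves the organisation, turns the policy on data inclusion into an enforceable control rather than a guideline.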
4. Chain of Thought and Explainable AI: Ensuring Transparent Decision-Making
For GRC professionals, advocating for explainable AI (XAI) systems is crucial, particularly in high-stakes scenarios.
Scenario: An SME in healthcare uses AI to prioritise patient treatment plans. Explainable AI provides traceable reasoning, ensuring recommendations align with clinical and ethical standards.
Challenges
- Limited availability of explainable AI solutions for SMEs.
- Employee skepticism about the reliability of AI recommendations.
Actionable recommendations:
- Choose AI tools with explainability features to increase stakeholder trust.
- Train staff to interpret and validate AI outputs effectively.
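The kind of traceability the healthcare scenario calls for can be illustrated with a toy rule-based triage score that reports which rules contributed to its decision. The rules, weights, and patient fields here are entirely invented; they stand in for the auditable reasoning trail an explainable system should expose.

```python
# Invented triage rules: (description, condition, weight).
RULES = [
    ("age over 65", lambda p: p["age"] > 65, 2),
    ("abnormal vitals", lambda p: p["vitals"] == "abnormal", 3),
    ("chronic condition", lambda p: p["chronic"], 1),
]

def triage(patient):
    """Return a priority score together with the rules that fired."""
    fired = [(name, weight) for name, cond, weight in RULES if cond(patient)]
    score = sum(w for _, w in fired)
    return score, [name for name, _ in fired]

patient = {"age": 72, "vitals": "abnormal", "chronic": False}
score, reasons = triage(patient)
print(f"priority {score}: " + ", ".join(reasons))  # prints "priority 5: age over 65, abnormal vitals"
```

The point is not the scoring logic, which is trivial, but the second return value: every recommendation carries its justification, which is what lets clinicians and auditors validate the output.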
5. Compliance in Customisation: Aligning AI with Ethical Principles
Customised AI systems tailored to specific workflows must align with ethical and legal requirements.
Scenario: A finance SME customises an AI-powered fraud detection tool. Ethical customisation ensures unbiased detection without disproportionately targeting specific demographics.
Challenges
- Lack of in-house expertise to implement ethical AI customisations.
- Risks of embedding biases during the customisation process.
Actionable recommendations:
- Integrate ethical principles into customisation processes from the outset.
- Partner with third-party experts to audit customised AI tools for compliance.
6. Control Mechanisms: Monitoring Agentic AI Tasks
As AI systems take on autonomous, agentic tasks, robust oversight mechanisms are vital for safe and effective deployment.
Scenario: A construction SME uses AI to schedule and allocate resources for projects. Real-time monitoring ensures decisions align with both business objectives and ethical standards.
Challenges
- Limited resources to implement real-time tracking systems.
- Lack of clear protocols for addressing AI errors or misalignments.
Actionable recommendations:
- Implement real-time monitoring systems for agentic AI tasks.
- Establish incident response protocols to address AI-related risks quickly.
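A minimal sketch of the two recommendations above: a policy gate checks each proposed agent action before execution, and violations are written to an incident log for follow-up. The policy fields, action names, and budget figure are assumptions for illustration only.

```python
import datetime

# Illustrative policy limits for an agentic scheduling assistant.
POLICY = {"max_budget": 10_000, "allowed_actions": {"schedule", "allocate"}}

incident_log = []

def approve(action, cost):
    """Check a proposed agent action against policy before execution."""
    violations = []
    if action not in POLICY["allowed_actions"]:
        violations.append(f"action '{action}' not permitted")
    if cost > POLICY["max_budget"]:
        violations.append(f"cost {cost} exceeds budget {POLICY['max_budget']}")
    if violations:
        incident_log.append({
            "time": datetime.datetime.now().isoformat(),
            "action": action,
            "violations": violations,
        })
        return False
    return True

print(approve("schedule", 2_500))   # within policy: True
print(approve("purchase", 50_000))  # blocked and logged: False
```

Keeping the gate and the log together means every blocked action automatically produces the evidence an incident response protocol needs.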
The Role of Data Protection Officers in 2025
Data Protection Officers (DPOs) are uniquely positioned to lead the integration of AI governance into organisational practices. Moving beyond compliance, DPOs can:
- Champion Data Governance: Align data practices with business objectives, unlocking value from organisational data.
- Advocate for Ethical AI: Ensure AI systems align with privacy, transparency, and ethical standards.
- Drive Digital Transformation: Collaborate with business leaders to embed AI governance into broader digital strategies.
Conclusions: Key Takeaways for GRC Professionals
- Data Governance: Establish robust frameworks for ethical data collection, storage, and processing, with regular audits to mitigate risks such as bias, PII leakage, and non-compliance.
- Ethical and Privacy Practices: Adopt energy-efficient and privacy-conscious compute resources, ensuring third-party compliance with contractual obligations and data protection standards.
- Context Management: Develop clear policies to avoid unintended inclusion or misuse of sensitive information in AI systems.
- Explainable AI: Advocate for transparent reasoning in AI systems to build trust and ensure traceability, especially in high-stakes scenarios.
- Compliance in Customisation: Integrate ethical principles and legal requirements into AI customisation processes to prevent bias and ensure alignment with regulatory standards.
- Control Mechanisms: Implement real-time monitoring for agentic AI tasks and establish protocols to manage errors or misalignments effectively.
Final Thought
GRC professionals and DPOs are critical to the responsible deployment of Generative AI in 2025. By aligning AI governance with ethical, regulatory, and business objectives, they can help SMEs harness the transformative power of Generative AI while mitigating risks and fostering trust. Through robust governance, transparency, and compliance, GRC leaders can ensure that Generative AI becomes a force for positive change in their organisations.