In our previous article on Personal Data Protection (PDP) Week 2024, we covered the panel discussion on “Navigating the Crossroads: Where AI and Data Protection Converge”. During the session, Irene Liu, Regional Strategy and Consulting Lead, Finance, Risk and Compliance Practice (APAC) for Accenture, went on to outline the steps to successful AI system deployment. This involves careful consideration of three key stages: Data Acquisition, Model Development, and Model Utilisation.
Data Acquisition: The Foundation of AI
With data acquisition, the type of data being used should be correctly identified; proper consent and rights to use personal and third-party data must be ensured; and data localisation and sovereignty laws must be addressed. Irene stressed the importance of good data quality and of implementing measures to detect and correct data errors.
The initial step in the AI development process is the gathering and preparation of data. This phase is critical as the quality and relevance of data directly impact the performance of the AI model. Several key considerations arise during this stage:
1. Data Sourcing
Organisations often rely on both first-party and third-party data. While first-party data is owned and collected by the organisation itself, third-party data is acquired from external sources. The acquisition of third-party data must align with the intended purpose of the AI system.
2. Data Privacy and Compliance
Handling personal data requires strict adherence to privacy regulations and obtaining necessary consents. Additionally, navigating complex data sovereignty laws across different countries is essential when working with global datasets.
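One practical way to operationalise this is to gate training data behind a consent check. The sketch below is illustrative only: the `consents` field and the `"ai_training"` purpose are hypothetical names, not part of any specific regulation or tool.

```python
# Minimal consent-gating sketch: keep only records whose data subjects
# granted consent for the relevant purpose. Field names are hypothetical.

def with_consent(records, purpose):
    """Filter records to those whose recorded consents include `purpose`."""
    return [r for r in records if purpose in r.get("consents", [])]

records = [
    {"id": 1, "consents": ["ai_training", "marketing"]},
    {"id": 2, "consents": ["marketing"]},
    {"id": 3},  # no consent record at all: excluded by default
]
usable = with_consent(records, "ai_training")
```

Defaulting to exclusion when no consent record exists keeps the pipeline conservative, which is usually the safer posture under privacy regulations.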
3. Data Quality
Ensuring data accuracy and consistency is paramount for model performance. Implementing robust data quality checks and leveraging tools like BigID can help identify and rectify data issues.
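Such quality checks can be as simple as automated rules run over incoming records. The sketch below assumes hypothetical rules (required fields and numeric ranges) purely for illustration; it is not how any particular tool implements its checks.

```python
# Minimal data-quality sketch: flag records with missing required fields
# or out-of-range values. Rules and field names are hypothetical.

def check_record(record, required_fields, ranges):
    """Return a list of quality issues found in a single record."""
    issues = []
    for field in required_fields:
        if record.get(field) in (None, ""):
            issues.append(f"missing:{field}")
    for field, (lo, hi) in ranges.items():
        value = record.get(field)
        if value is not None and not (lo <= value <= hi):
            issues.append(f"out_of_range:{field}")
    return issues

def audit(records, required_fields, ranges):
    """Map record index -> issues, keeping only problematic records."""
    report = {}
    for i, rec in enumerate(records):
        issues = check_record(rec, required_fields, ranges)
        if issues:
            report[i] = issues
    return report

records = [{"age": 34, "country": "SG"},
           {"age": 240, "country": ""}]  # implausible age, missing country
report = audit(records, required_fields=["country"], ranges={"age": (0, 120)})
```

Keeping the output as a per-record report, rather than silently dropping bad rows, leaves an audit trail that supports the governance requirements discussed below.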
4. Data Ownership and Governance
Clearly defining data ownership and establishing governance frameworks is crucial to mitigate risks associated with data management.
Model Development: Building Trustworthy AI
Model development means assessing the model's suitability and confidence for the intended purpose, establishing accountability for model development and testing, and implementing backtesting to ensure model efficacy.
Once the data is prepared, the next phase involves developing the AI model. This stage focuses on creating a model that meets the specific needs of the organisation while maintaining high standards of reliability and accuracy. Key considerations include:
1. Model Accountability
Assigning clear responsibilities for model development is essential. This includes defining roles for model testing, backtesting, and evaluation.
2. Model Efficacy
The model should be rigorously tested to ensure it delivers the desired outcomes. Backtesting helps assess the model's performance on historical data.
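At its core, backtesting means replaying the model over historical records whose outcomes are already known and measuring how often it was right. The sketch below uses a hypothetical threshold rule as the "model" purely for illustration.

```python
# Minimal backtesting sketch: replay a model over historical data with
# known outcomes and report the fraction of correct predictions.

def backtest(model, history):
    """history: list of (features, actual_outcome) pairs.
    Returns the hit rate of the model's predictions."""
    if not history:
        return 0.0
    hits = sum(1 for features, actual in history if model(features) == actual)
    return hits / len(history)

def flag_large(tx):
    """Hypothetical model: flag transactions above a fixed amount."""
    return tx["amount"] > 10_000

past = [({"amount": 15_000}, True),
        ({"amount": 2_000}, False),
        ({"amount": 12_000}, False)]  # a known false positive
hit_rate = backtest(flag_large, past)
```

In practice the scoring would use richer metrics than a raw hit rate, but the principle is the same: historical outcomes serve as ground truth for evaluating model efficacy before deployment.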
Model Utilisation: Maximising AI Value
In terms of model utilisation, organisations should ensure the model is used as intended and only by authorised users. Providing necessary training for external parties using the model is crucial, focusing on proper usage and prompt understanding.
The final stage involves deploying the AI model and ensuring its effective use. This includes:
1. Authorised Access
Controlling access to the AI model is crucial to protect sensitive information and prevent unauthorised use.
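A common way to enforce this is an allow-list that maps authorised users to roles and gates each action on that role. The users, roles, and actions below are purely illustrative.

```python
# Minimal access-control sketch: gate model actions behind an allow-list
# of authorised users and roles. Names and roles are hypothetical.

AUTHORISED = {"analyst_a": "user", "admin_b": "admin"}

def can_invoke(user, action):
    """Allow 'predict' for any authorised user, 'retrain' only for admins."""
    role = AUTHORISED.get(user)
    if role is None:
        return False  # unknown users are denied everything
    if action == "predict":
        return True
    return action == "retrain" and role == "admin"
```

Denying by default and requiring explicit elevation for sensitive actions such as retraining mirrors the least-privilege principle that underpins most access-control frameworks.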
2. User Training
If the AI model is used by external parties, providing adequate training on how to use the model and interpret its outputs is essential.
Liu also emphasised the importance of unbiased datasets, especially when working with data from multiple countries. This is a critical aspect of responsible AI development and should be considered throughout the entire process. Overall, by carefully addressing these three stages, organisations can increase their chances of successfully deploying AI systems that deliver value while mitigating risks.