The intersection of artificial intelligence (“AI”) and data protection law has become increasingly relevant in today’s digital landscape. As AI technologies evolve, they pose unique challenges that necessitate a thorough understanding of legal frameworks, particularly the GDPR and the newly adopted AI Regulation 2024/1689 (“EU AI Act”).
In light of such challenges, the Belgian Data Protection Authority (“BDPA”) has just published a 23-page brochure (available in French, Dutch and English), titled “Artificial Intelligence Systems and the GDPR - A Data Protection Perspective”, to explain the GDPR requirements specifically applicable to the development and deployment of AI systems. Alongside this brochure, the BDPA is launching a brand-new section on its website dedicated to AI. Below you will find a summary of this brochure and some key takeaways.
Context of AI and data protection law
The rapid advancement of AI technologies has transformed various sectors, improving efficiency and enabling innovative solutions. However, this progress raises significant concerns regarding data privacy, transparency and accountability. The GDPR provides a robust framework for protecting personal data within the EU. Concurrently, the EU AI Act, which entered into force on 1 August 2024 (as outlined in greater detail here), introduces additional provisions specifically addressing the complexities associated with high-risk AI systems.
Understanding AI systems
An AI system is defined under the EU AI Act as, broadly speaking, a machine-based system designed to operate with varying levels of autonomy. Such a system infers, from the input it receives, how to generate outputs – such as predictions, content, recommendations or decisions – that can influence physical or virtual environments.
Examples include recommendation systems in streaming services: film streaming platforms use AI systems that analyse a user’s past viewing habits, as well as the viewing habits of similar users, to recommend content likely to be of interest to them. Another example is medical imaging analysis that assists healthcare professionals (such as those employed by hospitals and healthcare providers) in diagnosing conditions: trained on large datasets of medical images (such as X-rays, scans and MRIs), these systems identify patterns and possible anomalies.
What are the key principles of the GDPR relevant to AI?
The GDPR establishes several principles that are critical for ensuring lawful processing of personal data within AI systems:
- Lawfulness. Both the GDPR and the EU AI Act require that AI systems adhere to lawful processing principles. The GDPR outlines six legal bases for processing personal data, which remain applicable under the EU AI Act. Notably, the EU AI Act prohibits certain AI practices altogether – such as social scoring or, subject to narrow exceptions, real-time remote biometric identification in publicly accessible spaces – due to their potential for abuse and discrimination.
- Fairness. Although the EU AI Act does not contain a section titled “fairness”, it builds on the GDPR’s fair processing principle by focusing on mitigating bias and discrimination in the development, deployment and use of AI systems.
- Transparency. Transparency is particularly crucial; users should be informed when interacting with AI systems. For example, a conversational agent or chatbot might initiate an interaction with a message such as “Hi, I'm Nelson, a chatbot. How can I help you today?”. For high-risk AI systems (such as recruitment technologies, medical devices and biometric identification), the EU AI Act requires an even higher level of transparency. The system must be accompanied by instructions for use that set out its capabilities, limitations, intended purposes and more.
- Purpose limitation and data minimisation. Personal data may only be collected for specified purposes that are legitimate and clearly defined. These principles ensure that AI systems do not use data for purposes other than those for which they were designed or that they do not collect excessive data. The EU AI Act strengthens the purpose limitation principle for high-risk AI systems by emphasising the need for a well-defined and documented purpose.
- Data accuracy and up-to-dateness. Personal data must be accurate and, where necessary, kept up to date. The EU AI Act builds on this GDPR principle by requiring high-risk AI systems to use high-quality and objective data to avoid discriminatory outcomes.
- Storage limitation. Personal data should not be retained longer than necessary for its intended purpose. The EU AI Act does not explicitly introduce an extra requirement on storage limitation for high-risk AI systems.
- Automated decision-making. The GDPR gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, while the EU AI Act requires proactive human oversight of high-risk AI systems to safeguard against potential biases and ensure the responsible development and use of such systems.
- Security of processing. To protect personal data processed by AI systems, organisations must implement robust technical and organisational measures. The GDPR requires such measures to mitigate the risks associated with data processing activities; the EU AI Act goes further, mandating ongoing monitoring for biases in training data and for vulnerabilities specific to AI technologies, as well as proactive measures such as identifying and planning for potential problems, continuous monitoring and testing, and human oversight.
- Data subject rights. The EU AI Act reinforces the GDPR rights by emphasising the importance of clear explanations about how data is used in AI systems. With this transparency, individuals can make informed decisions about their data and utilise their data subject rights more effectively.
- Accountability. While the EU AI Act does not have a dedicated section on demonstrating accountability, it builds upon the GDPR’s principles by requiring organisations to assess risks, document AI systems, implement human oversight and report incidents.
How can organisations translate the legal obligations into actionable steps?
With the help of a ‘user story’ concerning a car insurance company, the BDPA illustrates how organisations can turn the above-mentioned legal obligations into practical steps when designing or implementing AI systems. Here are some useful, practical tips:
- Assess and document the correct legal basis for collecting and using personal data in the AI system.
- Ensure the system complies with the prohibitions in the GDPR and the EU AI Act for processing sensitive personal data.
- Ensure fair and non-discriminatory processing of data and guarantee the use of unbiased, undistorted data. This can be done by:
a. analysing the data sources used to train the AI system to identify and mitigate potential biases;
b. regularly testing the AI system for potential biases in its output (see the sketch below);
c. implementing a human judgement process for high-impact decisions made by the AI system – for example, in the insurance sector, decisions that significantly increase premiums or reject policies.
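By way of illustration of point (b), here is a minimal Python sketch of an output-bias check: it compares approval rates across groups and flags any group that falls well below the best-performing one. The group labels, the four-fifths-style threshold and the sample data are all hypothetical illustrations, not taken from the brochure.

```python
# Minimal sketch of an output-bias check. Group labels, threshold and
# sample data are hypothetical illustrations, not from the brochure.
from collections import defaultdict

DISPARITY_THRESHOLD = 0.8  # illustrative "four-fifths"-style ratio


def outcome_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}


def disparity_alerts(decisions):
    """Flag groups whose approval rate falls well below the best group's."""
    rates = outcome_rates(decisions)
    best = max(rates.values(), default=0)
    return [g for g, r in rates.items() if best and r / best < DISPARITY_THRESHOLD]


if __name__ == "__main__":
    sample = ([("A", True)] * 80 + [("A", False)] * 20
              + [("B", True)] * 55 + [("B", False)] * 45)
    print(disparity_alerts(sample))  # -> ['B'] with this sample data
```

Run on each batch of decisions, such a check feeds the human judgement process in point (c): flagged groups are a signal to investigate, not an automated verdict.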
- Be transparent about how the data is used. This can be done by:
a. clearly explaining in your privacy policy how data is collected, used and stored in the AI system;
b. using simple language, images or frequently asked questions to explain the AI decision-making process;
c. implementing mechanisms for customers to easily access information about the data points used in their specific case (see the sketch below).
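As a minimal sketch of point (c), the hypothetical explain_decision function below returns the data points an AI system actually used in a specific customer’s case, so they can be surfaced in a self-service portal or on request. All field names, identifiers and the in-memory store are invented for illustration.

```python
# Minimal sketch: expose, per customer, the data points an AI system
# used in their specific case. Field names and the record store are
# hypothetical illustrations, not part of the BDPA brochure.
from dataclasses import dataclass, field


@dataclass
class DecisionRecord:
    customer_id: str
    decision: str                                # e.g. "premium_increase"
    inputs: dict = field(default_factory=dict)   # data points actually used
    model_version: str = "unknown"


# Stand-in for wherever decisions are persisted in a real system.
_DECISIONS: dict[str, DecisionRecord] = {}


def record_decision(rec: DecisionRecord) -> None:
    _DECISIONS[rec.customer_id] = rec


def explain_decision(customer_id: str) -> dict:
    """Summarise the data points used for this customer's decision,
    in a shape suitable for a portal or a subject access response."""
    rec = _DECISIONS.get(customer_id)
    if rec is None:
        return {"error": "no decision on file for this customer"}
    return {
        "decision": rec.decision,
        "data_points_used": rec.inputs,
        "model_version": rec.model_version,
    }


if __name__ == "__main__":
    record_decision(DecisionRecord(
        customer_id="C-123",
        decision="premium_increase",
        inputs={"claims_last_3_years": 2, "annual_mileage": 18000},
        model_version="2024-07",
    ))
    print(explain_decision("C-123"))
```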
- Develop processes to ensure the accuracy and up-to-dateness of the data used in the AI system, such as:
a. user-friendly mechanisms to verify and update the personal data in the AI system, e.g. via an online portal, mobile app or dedicated phone line;
b. procedures for regularly refreshing the data used in the AI system, such as asking customers to periodically update their information or providing an integration with an external data source so that data is automatically updated;
c. providing alerts for missing or inaccurate data points (see the sketch below);
d. clearly communicating the right to rectification under the GDPR.
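And a minimal sketch of point (c) in this list: flagging missing or implausible data points before they feed the AI system. The required fields and plausibility ranges are invented for illustration.

```python
# Minimal sketch: flag missing or implausible data points in a customer
# profile before it is fed to the AI system. Field names and plausibility
# ranges are hypothetical illustrations.
REQUIRED_FIELDS = ("date_of_birth", "postcode", "annual_mileage")
PLAUSIBLE_RANGES = {"annual_mileage": (0, 200_000)}  # illustrative bounds


def data_quality_alerts(profile: dict) -> list[str]:
    """Return human-readable alerts for missing or out-of-range fields."""
    alerts = [f"missing field: {f}"
              for f in REQUIRED_FIELDS if profile.get(f) is None]
    for field_name, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = profile.get(field_name)
        if value is not None and not lo <= value <= hi:
            alerts.append(f"implausible value for {field_name}: {value}")
    return alerts


if __name__ == "__main__":
    print(data_quality_alerts({"postcode": "1000", "annual_mileage": 450_000}))
    # -> ['missing field: date_of_birth',
    #     'implausible value for annual_mileage: 450000']
```

Alerts like these can then trigger the verification mechanisms in points (a) and (b), prompting the customer or an external source to correct the record.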
- Implement appropriate security measures such as data encryption, access control, regular penetration testing, and logging and auditing. Such measures may also include:
a. developing processes to ensure data validation and quality assurance, such as verifying data provenance or detecting output anomalies;
b. establishing a framework for human oversight, e.g. ensuring that high-risk data points are reviewed by humans, that system performance is monitored for fairness and accuracy, and that humans can intervene at critical decision moments (see the sketch below).
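To illustrate point (b) – and the human judgement process for high-impact insurance decisions mentioned earlier – here is a minimal sketch of a decision gate that routes high-impact outcomes to human review and writes a structured audit log entry. The review threshold, decision shape and log format are hypothetical, not requirements from the brochure.

```python
# Minimal sketch: route high-impact AI decisions to human review and
# keep an audit trail. Threshold, decision shape and log format are
# hypothetical illustrations, not from the brochure.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

PREMIUM_INCREASE_REVIEW_THRESHOLD = 0.20  # e.g. >20% increase needs a human


def needs_human_review(decision: dict) -> bool:
    """High-impact outcomes (rejections, big premium increases) go to a human."""
    return (
        decision.get("rejected", False)
        or decision.get("premium_increase", 0.0) > PREMIUM_INCREASE_REVIEW_THRESHOLD
    )


def process_decision(decision: dict) -> str:
    status = "pending_human_review" if needs_human_review(decision) else "auto_applied"
    # Audit entry: what was decided, by which model version, when,
    # and whether a human was put in the loop.
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "customer_id": decision.get("customer_id"),
        "model_version": decision.get("model_version"),
        "outcome": {k: v for k, v in decision.items()
                    if k in ("rejected", "premium_increase")},
        "status": status,
    }))
    return status


if __name__ == "__main__":
    print(process_decision({"customer_id": "C-123", "model_version": "2024-07",
                            "premium_increase": 0.35}))  # -> pending_human_review
```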
- Conduct a Fundamental Rights Impact Assessment (FRIA) to assess the impact of AI systems on fundamental rights such as privacy and freedom of expression, and recommend measures to manage any risks.
Conclusion
As AI technologies continue to evolve, ensuring compliance with both the GDPR and the new EU AI Act is essential for organisations leveraging these systems. The BDPA’s brochure is a useful resource to help businesses understand the data protection framework in the context of AI and thus better navigate the potential pitfalls while harnessing the benefits of innovative technologies responsibly.