The Information Commissioner’s Office (ICO) has launched new guidance on the relationship between artificial intelligence (AI) technology and data protection.
An accompanying blog written by Simon McDougall, deputy commissioner for regulatory innovation and technology at the ICO, explained that the pandemic has driven innovation in the use of technology and data – but some of the challenges for organisations using AI are constant.
“AI offers opportunities that could bring marked improvements for society, but shifting the processing of personal data to these complex and sometimes opaque systems comes with inherent risks,” he commented.
Understanding how to assess compliance with data protection principles can be challenging in the context of AI, the ICO stated.
“From the exacerbated, and sometimes novel, security risks that come from the use of AI systems, to the potential for discrimination and bias in the data – it is hard for technology specialists and compliance experts to navigate their way to compliant and workable AI systems.”
Therefore, the ICO’s new guidance contains recommendations on best practice and technical measures that organisations can use to mitigate those risks caused or exacerbated by the use of this technology.
It is the culmination of two years of research and consultation by Professor Reuben Binns and the ICO AI team.
The guidance aims to provide a clear methodology for auditing AI applications and ensuring they process personal data fairly. It comprises auditing tools and procedures that the ICO will use in its own audits and investigations, and a toolkit designed to provide further practical support to organisations auditing the compliance of their own AI systems.
It is aimed at both those with a compliance focus – such as data protection officers, general counsel, risk managers, senior management and the ICO’s own auditors – and technology specialists – including machine learning experts, data scientists, software developers and engineers, and cyber security or IT risk managers.
While data protection and ‘AI ethics’ overlap, the guidance does not provide generic ethical or design principles for your use of AI. It corresponds to data protection principles and covers accountability and governance in AI; lawful and transparent processing - including assessing and improving AI system performance and mitigating potential discrimination; data minimisation and security; and compliance with individual rights – including rights related to automated decision-making.
The ICO stated that in an AI context, accountability requires organisations to be responsible for the compliance of a system; to assess and mitigate its risks; and to document and demonstrate how the system is compliant, as well as justifying the choices made.
“As part of striking the required balance between the right to data protection and other fundamental rights in the context of your AI systems, you will inevitably have to consider a range of competing considerations and interests,” noted the guidance, adding that during the design stage, you need to identify and assess what these may be.
“You should then determine how you can manage them in the context of the purposes of your processing and the risks it poses to the rights and freedoms of individuals,” it added, noting that if the AI system processes personal data, it must comply with the fundamental data protection principles, and cannot 'trade' this requirement away.
In terms of security risks, the ICO warned that AI can increase the potential for loss or misuse of the large amounts of personal data often required to train AI systems, and for software vulnerabilities to be introduced through new AI-related code and infrastructure.
By default, the standard practices for developing and deploying AI involve processing large amounts of data, the guidance stated. “There is a risk that this fails to comply with the data minimisation principle – a number of techniques exist which help both data minimisation and effective AI development and deployment.”
“We will keep seeking feedback on the guidance to help us to achieve this goal as well as continuing to engage with experts to explore the frontiers of this technology whilst also growing our own expertise,” concluded McDougall’s blog.
“It is my hope this guidance will answer some of the questions I know organisations have about the relationship between AI and data protection, and will act as a roadmap to compliance for those individuals designing, building and implementing AI systems.”