The Information Commissioner’s Office (ICO) has launched a consultation on draft guidance for an artificial intelligence (AI) auditing framework.
The guidance contains advice on how to understand data protection law in relation to AI and recommendations for organisational and technical measures to mitigate the risks AI poses to individuals. It also provides a methodology to audit AI applications and ensure they process personal data fairly.
Aimed at both technology specialists developing AI systems and risk specialists whose organisations use them, the guidance is intended to help assess the risks to rights and freedoms that AI can pose, and to identify the appropriate measures that can be implemented to mitigate them.
The ICO stressed that this is the first guidance it has published with a broad focus on managing the range of risks arising from AI systems, as well as on governance and accountability measures.
“It is essential for the guidance to be both conceptually sound and applicable to real life situations as it will shape how the ICO will regulate in this space,” read a statement, adding: “This is why feedback from those developing and implementing these systems is essential.”
The data regulator is seeking feedback both from those with a compliance focus, such as data protection officers, general counsel and risk managers, and from technology specialists, including machine learning experts, data scientists, software developers and engineers, and cyber security or IT risk managers.
In March 2019, the ICO launched a call for views about its initial thinking in relation to auditing AI.
“Since then, our thinking has developed and we have established a more practical approach to the guidance so, if you have already engaged with us please feel free to feedback once again,” the statement concluded.
The consultation closes on 1 April.