The Information Commissioner’s Office (ICO) has launched a new AI and biometrics strategy to ensure that organisations develop and deploy new technologies lawfully.
The regulator said the move would support the growth of organisations whilst also protecting the public.
It added that the increased supervision of AI and biometric technology will ensure that companies use sensitive personal information responsibly in new products and services.
According to research by the ICO, 54 per cent of people have concerns that the use of facial recognition technology (FRT) by the police will infringe on their right to privacy.
The research also found that the public expect to understand exactly how and when AI-powered systems affect them and are concerned about the consequences if things go wrong.
Under the new strategy, the ICO said it will focus on a number of themes including the potential for misuse.
It will review the use of automated decision making (ADM) systems by the recruitment industry and set clear expectations to protect people’s personal information when used to train generative AI foundation models.
The regulator also plans to develop a statutory code of practice for organisations developing or deploying AI, with the aim of supporting responsible innovation while safeguarding privacy.
In addition, it will examine emerging AI risks and trends, such as the rise of agentic AI systems capable of acting autonomously.
“The same data protection principles apply now as they always have – trust matters and it can only be built by organisations using people’s personal information responsibly,” said John Edwards, UK information commissioner. “Public trust is not threatened by new technologies themselves, but by reckless applications of these technologies outside of the necessary guardrails.”