ICO and Alan Turing Institute open AI consultation

The Information Commissioner’s Office (ICO) and the Alan Turing Institute have opened a consultation on guidance for explaining decisions made by artificial intelligence (AI).

The first draft of regulatory guidance on the use of AI is out for consultation until 24 January. It is aimed at data scientists, app developers, business owners and data protection practitioners whose organisations are using, or thinking about using, AI to support, or to make, decisions about individuals.

Simon McDougall, the ICO’s executive director of technology and innovation, explained: “The potential for AI is huge, but its implementation is often complex, which makes it difficult for people to understand how it works – and when people don’t understand a technology, it can lead to doubt, uncertainty and mistrust.”

ICO research shows that over half of people are concerned about machines making complex automated decisions about them. Its citizen jury study found that most people believe that, in contexts where a human would usually provide an explanation, explanations of AI decisions should be similar to those a human would give.

“The decisions made using AI need to be properly understood by the people they impact,” said McDougall. “This is no easy feat and involves navigating the ethical and legal pitfalls around the decision-making process built into AI systems.”

The draft guidance lays out four key principles, rooted in the General Data Protection Regulation (GDPR), that organisations must consider when developing AI decision-making systems:

• Be transparent: make your use of AI for decision-making obvious and appropriately explain the decisions you make to individuals in a meaningful way.
• Be accountable: ensure appropriate oversight of your AI decision systems, and be answerable to others.
• Consider context: there is no one-size-fits-all approach to explaining AI-assisted decisions.
• Reflect on impacts: ask and answer questions about the ethical purposes and objectives of your AI project at the initial stages of formulating the problem and defining the outcome.

An interim report the ICO released in June stated that context was key to the explainability of AI decisions.

“Our draft guidance goes into detail about different types of explanations, how to extract explanations of the logic used by the system to make a decision, and how to deliver explanations to the people they are about,” read the statement. “It also outlines different types of explanation and emphasises the importance of using inherently explainable AI systems.”

Real-world applicability is at the centre of the guidance, so the ICO and the Alan Turing Institute said that feedback is crucial to its success.

The final version of the guidance will be published later in the year, taking the feedback into account.
