ICO and Alan Turing Institute open AI consultation

The Information Commissioner’s Office (ICO) and the Alan Turing Institute have started a consultation to build guidance about explaining the decisions made by artificial intelligence (AI).

The first draft of regulatory guidance on the use of AI is out for consultation until 24 January, and is aimed at data scientists, app developers, business owners and data protection practitioners whose organisations are using, or thinking about using, AI to support, or to make, decisions about individuals.

Simon McDougall, the ICO’s executive director of technology and innovation, explained: “The potential for AI is huge, but its implementation is often complex, which makes it difficult for people to understand how it works – and when people don’t understand a technology, it can lead to doubt, uncertainty and mistrust.”

ICO research shows that over half of people are concerned about machines making complex automated decisions about them. Its citizens' jury study found that most people believe that, in contexts where a human would usually provide an explanation, explanations of AI decisions should be similar to those a human would give.

“The decisions made using AI need to be properly understood by the people they impact,” said McDougall. “This is no easy feat and involves navigating the ethical and legal pitfalls around the decision-making process built into AI systems.”

The draft guidance lays out four key principles, rooted in the General Data Protection Regulation (GDPR), that organisations must consider when developing AI decision-making systems:

• Be transparent: make your use of AI for decision-making obvious and appropriately explain the decisions you make to individuals in a meaningful way.
• Be accountable: ensure appropriate oversight of your AI decision systems, and be answerable to others.
• Consider context: there is no one-size-fits-all approach to explaining AI-assisted decisions.
• Reflect on impacts: ask and answer questions about the ethical purposes and objectives of your AI project at the initial stages of formulating the problem and defining the outcome.

An interim report the ICO released in June stated that context was key to the explainability of AI decisions.

“Our draft guidance goes into detail about different types of explanations, how to extract explanations of the logic used by the system to make a decision, and how to deliver explanations to the people they are about,” read the statement. “It also outlines different types of explanation and emphasises the importance of using inherently explainable AI systems.”

Real-world applicability is at the centre of the guidance, so the ICO and Alan Turing Institute said that feedback is crucial to its success.

The final version of the guidance, taking this feedback into account, will be published later in the year.
