MPs say government must ‘fundamentally change’ the way it thinks about AI

The UK government must “fundamentally change” the way in which it thinks about AI, according to a new report by the House of Commons Science, Innovation and Technology Committee.

MPs support the current, regulator-led approach to governing the technology, but have called on the next government to stand ready to introduce further legislation if gaps emerge in any regulator's powers to protect the public interest.

The report also flagged concerns about the fact that the AI Safety Institute has been unable to access some developers’ models for testing.

It said that the government needs to identify any developers that refuse access to their models, a refusal that contravenes the agreement made at the November 2023 AI Safety Summit at Bletchley Park.

The report adds that developers should be required to state their justification for refusing.

The report says that UK regulators need to have the tools with which to hold AI developers to account, suggesting that current funding for regulators such as Ofcom is “clearly insufficient to meet the challenge”, especially when compared to the UK revenues of AI developers.

Commenting on the news, Greg Clark, chair of the Science, Innovation and Technology Committee, said the overarching “black box” challenge of some AI models requires a change in the way people think about assessing AI, and that there needs to be more testing of models to check whether they have “unacceptable consequences”.

“We are calling for the next government to publicly name any AI developers who do not submit their models for pre-deployment safety testing,” he added. “It is right to work through existing regulators, but the next government should stand ready to legislate quickly if it turns out that any of the many regulators lack the statutory powers to be effective.

“We are worried that UK regulators are under-resourced compared to the finance that major developers can command.”

In a report released in August last year, the Committee said the UK government needed to pass laws to regulate AI or risk “falling behind” the EU and the US.

In the report, the Committee detailed 12 challenges of AI governance that need to be addressed by policy makers, including the bias challenge, where AI can introduce or perpetuate biases that society finds unacceptable, and the misrepresentation challenge, where AI allows the generation of material which deliberately misrepresents someone’s behaviour or opinions.

Earlier this week, The Alan Turing Institute urged regulators to counter threats to the general election posed by AI “before it’s too late.” According to its research, Ofcom and the Electoral Commission have a “rapidly diminishing window of opportunity” to preserve trust in the democratic process.
