EU poised to ban ‘high risk’ AI

The European Union is set to ban the use of artificial intelligence (AI) systems for “high risk” applications such as mass surveillance or ranking social behaviour.

Companies could face fines of 4 per cent of global revenue - or €20 million, whichever is greater - if they fail to comply with the new rules or to report correct information, mirroring the penalty structure of the GDPR.

The European Commission is expected to officially release the measures next week, although the details could change in the interim.

The EU will, however, make exceptions to the rules for certain public security concerns, including military use.

The legislation would prohibit the use of AI for “social credit scores” or for modelling a person’s trustworthiness or creditworthiness based on their behaviour or personality traits.

Authorities will also need special authorisation for the use of biometrics such as facial recognition in public spaces under the new rules.

Organisations will also be required to notify people when they are interacting with an AI system.

The regulations also place restrictions on “high risk” AI systems - those that could directly threaten someone’s life or livelihood - such as self-driving cars or AI systems used in recruitment.

The EU also proposed that “high risk” AI systems include a “kill switch” that would instantly disable the system if necessary.

The proposals include setting up a European Artificial Intelligence Board, which would support the application of the regulation by issuing specific recommendations.

The news comes amid strong support in the private sector for AI legislation, with senior figures at many large technology firms, including Elon Musk, calling for increased global regulatory scrutiny of the technology.

“The fact that the EU is reportedly considering imposing broad restrictions on the deployment of AI technologies reflects understandable concerns with the idea of machines making decisions that impact people,” said David Naylor, partner, Wiggin LLP. “However, the proposed rules appear wide-ranging and intrusive.”

“The benefits that might be achieved from prohibiting or heavily regulating the use of AI to protect human autonomy may well be more than offset by the consequences of such a restrictive approach.”

“First, it will put European AI businesses at a significant competitive disadvantage relative to competitors in less heavily regulated jurisdictions.”

He added: “Second, and at least as importantly, where the AI would in fact have delivered benefits overall for people and society, these will be lost if the AI is never developed in the first place, or at least not deployed in the EU.”

“Now there is no question in my mind that artificial intelligence needs to be regulated,” wrote Alphabet chief executive Sundar Pichai in a March column in the Financial Times. “It is too important not to. The only question is how to approach it.”
