EU AI regulations will force UK to scale up quickly, warns BCS

Tough EU rules on using AI in high-risk situations will mean Britain’s professional and ethical standards must scale up quickly, BCS, the Chartered Institute for IT, has said.

The IT industry professional body said that the “sweeping legislation” will impact British businesses providing AI services to the EU.

It warned that the new rules will require organisations to meet “unprecedented standards of ethics and transparency.”

Last week it was announced that the European Union would ban the use of AI for high-risk applications such as mass surveillance and social scoring.

Companies could face fines of up to 4 per cent of global revenue or €20 million, whichever is greater, if they fail to comply with the new rules or supply incorrect information, echoing the penalty model of the GDPR.

“The EU has realised that AI can be of huge benefit or of huge harm to society, and has decided to regulate on standards for the design, development, and adoption of AI systems to ensure we get the very best out of them,” said Dr Bill Mitchell, director of policy at BCS. “There will be a huge amount of work to do to professionalise large sections of the economy ready for this sweeping legislation.”

Mitchell said that these ambitious plans will be impossible to deliver without a fully professionalised AI industry.

“Those with responsibility for adopting and managing AI will need to ensure their systems comply with these new regulations, as well as those designing and developing these systems,” he urged. “The IT profession, and particularly those involved in AI, will in future need to evidence they have behaved ethically, competently and transparently. In principle this is something we should all welcome, and it will help restore public trust in AI systems that are used to make high stakes decisions about people.”

The EU legislation will set Europe on a different path to the US and China by directly prohibiting the use of AI for indiscriminate surveillance and social scoring.

BCS said that the law would identify a number of use cases for AI, including education, financial services and recruitment, and designate them as high risk.

“For these high-stakes examples, regulation would include mandatory third-party audit of both actual data and quality management systems,” the organisation said. “The new rules are likely to take some years to become law but need to be prepared for now, especially in the areas of staff training and development.”
