Most European firms rolling out AI without proper safety or governance, ISACA warns

European organisations are adopting artificial intelligence systems at a rapid pace, but are failing to implement adequate governance and safety controls to keep them in check.

That is the finding of a new study from IT governance trade association ISACA, which surveyed 681 European technology and business professionals between 6 and 22 February.

Alarmingly, 59 per cent of respondents admitted they were unsure whether their organisation would be capable of stopping an AI system if it were impacted by a security incident.

Just over one-fifth of respondents (21 per cent) said they would be able to do this within 30 minutes. ISACA says this indicates that, if an AI system were “compromised or malfunctioning”, most organisations would be unable to intervene for at least half an hour.

The core reason behind organisations’ inability to rein in problematic AI systems appears to be that they are not taking their governance obligations seriously enough.

Currently, 33 per cent of organisations have no formal policy or procedure requiring employees to disclose when they have used AI at work.

Meanwhile, 20 per cent of organisations are unsure who would be held accountable if an AI system failed. Just 38 per cent said that responsibility would lie with the company’s board or an executive.

According to ISACA, these statistics are concerning because the European Union’s AI Act requires organisations deploying AI systems to be transparent about how they use AI to help improve employees’ understanding, and to be accountable when issues arise.

There is also a growing expectation among global regulators that leadership teams should be the parties held accountable for AI-related issues, showing that the safety of this technology is now a top boardroom priority.

When it comes to AI oversight, the study paints a slightly better picture. Forty per cent of organisations have implemented rules ensuring AI systems cannot make decisions without prior approval from a human. That aligns with regulatory expectations.

Chris Dimitriadis, chief global strategy officer at ISACA, said: “What this research reflects is that our thirst to innovate is not matched by our desire to govern change, exposing us to critical risks.

“The tools to govern AI responsibly already exist. Risk management, prevention controls, detection mechanisms, incident response and recovery strategies are the foundations of good cybersecurity practice, and they need to be applied to AI with the same rigour and urgency.”
