EC publishes guidelines for trustworthy AI

The European Commission (EC) has published its own guidelines outlining the need for “trustworthy AI”.

The report, published by the EC’s expert group on artificial intelligence, sets out basic ethical principles governing the use of AI, as well as seven key requirements to be met for machine learning tools to meet an EU-wide standard and ensure they remain “human centric”.

It warns that AI should be robust “from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm”.

The report falls short of calling for new laws and regulations to govern the use of AI at this stage, but stresses that AI should comply with existing laws and regulations in order to meet the “trustworthy” standard.

The seven key requirements for trustworthy AI as set out in the report are: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal well-being; and accountability.

It stated: “AI systems need to be human-centric, resting on a commitment to their use in the service of humanity and the common good, with the goal of improving human welfare and freedom.

“While offering great opportunities, AI systems also give rise to certain risks that must be handled appropriately and proportionately. We now have an important window of opportunity to shape their development.”

Particular attention should also be paid to ensuring that vulnerable groups such as children, people with disabilities and historically disadvantaged groups are not put at risk of exclusion by AI and algorithm-driven processes, the report stated.

It also advised that organisations should develop, deploy and use AI systems “in a way that adheres to the ethical principles of: respect for human autonomy, prevention of harm, fairness and explicability”.

The guidelines come days after Google scrapped its own AI ethics council, just a week after its launch, following controversy over certain board members.

The EC report urged organisations using AI to: “Acknowledge that, while bringing substantial benefits to individuals and society, AI systems also pose certain risks and may have a negative impact, including impacts which may be difficult to anticipate, identify or measure (e.g. on democracy, the rule of law and distributive justice, or on the human mind itself.)”
