Beware the malicious use of AI, warns major new report

Governments must collaborate with technical researchers to investigate, prevent and mitigate potential malicious uses of artificial intelligence (AI) and machine learning technologies, according to the authors of a new report. They suggest that AI could increase the incidence of some existing cyber threats by lowering the cost of attacks, as well as introduce new threats and risks.

The authors warn that “there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute and likely to exploit vulnerabilities in AI systems.”

They consider how these threats might play out across the digital, physical and political security domains, including via cyber attacks and the use of drones or other physical systems. Examples include the malicious use of AI to cause autonomous vehicles to crash, to direct a swarm of micro-drones, or to carry out surveillance and other forms of social manipulation. They note that while the latter concerns apply most directly to authoritarian states, they “may also undermine the ability of democracies to sustain truthful public debates.”

The report, entitled The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation, has 26 authors, including representatives of the Centre for the Study of Existential Risk at the University of Cambridge, the Future of Humanity Institute at the University of Oxford, Yale University, Stanford University and the not-for-profit research company OpenAI. It is also supported by the Electronic Frontier Foundation.

The report also recommends that AI researchers and engineers factor misuse-related considerations into decisions about research priorities and norms, and calls for norms and institutions around the openness of research to be reimagined in light of the “dual-use nature” of AI and machine learning technologies.

“The proposed interventions require attention and action not just from AI researchers and companies but also from legislators, civil servants, regulators, security researchers and educators,” say the report’s authors. “The challenge is daunting and the stakes are high.”
