Beware the malicious use of AI, warns major new report
Written by David Adams
Governments must collaborate with technical researchers to investigate, prevent and mitigate potential malicious uses of artificial intelligence (AI) and machine learning technologies, according to the authors of a new report. The authors suggest AI may increase the incidence of some existing cyber threats by reducing the cost of cyber attacks, and may also introduce new threats and risks.
The authors warn that “there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute and likely to exploit vulnerabilities in AI systems.”
They consider how these threats might play out in the digital, physical and political security domains, including via cyber attacks and the use of drones or other physical systems. Examples include the malicious use of AI technologies to cause autonomous vehicles to crash, to direct a swarm of micro-drones, or to carry out tasks such as surveillance and forms of social manipulation. They note that while the latter concerns are most applicable in the context of authoritarian states, they “may also undermine the ability of democracies to sustain truthful public debates.”
The report, entitled The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation, has 26 authors, including representatives of the Centre for the Study of Existential Risk at the University of Cambridge; the Future of Humanity Institute at the University of Oxford; Yale University; Stanford University; and the not-for-profit research company OpenAI. It is also supported by the Electronic Frontier Foundation.
The report also recommends that AI technology researchers and engineers factor misuse-related considerations into decisions on research priorities and norms, and calls for norms and institutions built around the openness of research to be re-examined, and potentially reimagined, in light of the “dual-use nature” of AI and machine learning technologies.
“The proposed interventions require attention and action not just from AI researchers and companies but also from legislators, civil servants, regulators, security researchers and educators,” say the report’s authors. “The challenge is daunting and the stakes are high.”