Regulators must ‘counter AI threats’ before general election, warns Alan Turing Institute

The Alan Turing Institute has urged regulators to counter threats to the general election posed by AI “before it’s too late.”

According to new research from The Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS), Ofcom and the Electoral Commission have a “rapidly diminishing window of opportunity” to preserve trust in the democratic process.

CETaS said that advances in AI technology have raised widespread concern that the technology could be used to spread disinformation, influence voters, and disrupt the integrity of election processes, whether to manipulate the outcome of elections or to erode trust in democracy.

The study found that while there is limited evidence of AI changing the result of a specific election, there are early signs of damage to the broader democratic system. These include confusion over whether AI-generated content is real, deepfakes inciting online hate against political figures, and politicians exploiting AI disinformation for potential electoral gain.

Additionally, CETaS said that current electoral laws on AI are ambiguous, which could lead to its misuse. Examples include using ChatGPT to create fake campaign endorsements, which could damage the reputations of the individuals involved and undermine public trust.

The study recommended that the Electoral Commission should ensure any voter information contains advice on how to remain vigilant about AI-based election threats such as attempts to cause confusion over the time and place of voting.

It also urged the Electoral Commission and Ofcom to create guidelines and request voluntary agreements for political parties detailing how they should use AI technology for campaigning.

Sam Stockwell, research associate at The Alan Turing Institute and lead author of the report, said: “With a general election just weeks away, political parties are already in the midst of a busy campaigning period. Right now, there is no clear guidance or expectations for preventing AI being used to create false or misleading electoral information.

“That’s why it’s so important for regulators to act quickly before it’s too late.”

Earlier this month, the National Cyber Security Centre (NCSC) launched a new personal internet protection service to increase the digital security of political candidates, election officials and other people at high risk of being targeted ahead of the general election.

The opt-in service aims to prevent these individuals from falling victim to phishing, malware and other cyber threats. The NCSC said it will provide an extra layer of security on personal devices by warning users when they visit a domain known to be malicious and by blocking outgoing traffic to these domains.
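The filtering described above can be illustrated with a minimal sketch: outgoing requests are checked against a blocklist of known-malicious domains and refused before any traffic is sent. This is an assumption about the general technique, not the NCSC's actual implementation, and the domain names and function names here are purely hypothetical.

```python
# Hypothetical blocklist of known-malicious domains (illustrative names only).
MALICIOUS_DOMAINS = {"phish-example.test", "malware-example.test"}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname, or any parent domain, is on the blocklist."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check the full hostname and each parent domain in turn,
    # e.g. login.phish-example.test -> phish-example.test -> test
    for i in range(len(labels)):
        if ".".join(labels[i:]) in MALICIOUS_DOMAINS:
            return True
    return False

def check_outgoing(hostname: str) -> str:
    """Warn about and block traffic to blocklisted domains; allow everything else."""
    if is_blocked(hostname):
        return f"BLOCKED: {hostname} is a known-malicious domain"
    return f"ALLOWED: {hostname}"
```

Checking parent domains as well as the exact hostname means a blocklisted domain also covers any subdomains an attacker might spin up under it.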
