Mainstream media should ‘fight misinformation’ on polling day, say tech experts

BCS, The Chartered Institute for IT, has urged mainstream media organisations to “fight misinformation” on polling day, despite a ban on coverage.

Ofcom rules state that election-related discussion or analysis, including the results of opinion polls, must stop when polling stations open and may only resume once they close.

A new poll conducted by the Institute finds that nearly two thirds of tech experts think that the ban should include an exception to allow mainstream media to rebut fraudulent misinformation.

The poll, which surveyed 1,200 people, also reveals that 65 per cent of technologists are concerned that deepfakes will influence the result of the upcoming UK general election.

In March, The Center for Countering Digital Hate (CCDH) published a report warning that generative AI (GenAI) tools could be used to spread election-related disinformation.

The non-profit organisation said that it tested popular AI image tools, including Midjourney, ChatGPT Plus, and Microsoft's Image Creator, finding that they generated election disinformation in 41 per cent of tests overall.

“As we approach the General Election, it is essential that broadcasters are more active in the fight against misinformation and disinformation, especially when it comes to those misleading the electorate," said Adam Leon Smith, BCS fellow and international AI standards expert. “By enabling reputable media outlets to fact-check and correct misleading content in real time, they can provide the public with accurate information, thereby fostering a more informed electorate and upholding democratic values.”

The comments come as TikTok announces plans to label images and videos uploaded to its platform that have been generated using artificial intelligence (AI) technology. The video-sharing service will employ a digital watermarking system known as Content Credentials to identify AI-generated content.
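Content Credentials is the provenance technology developed by the Coalition for Content Provenance and Authenticity (C2PA); in JPEG images, C2PA manifests are embedded as JUMBF boxes inside APP11 marker segments. As a rough illustration only, and not a description of TikTok's actual pipeline, a minimal detector for such an embedded manifest marker might look like this (the function name and the simplified segment walk are this sketch's own assumptions):

```python
def has_c2pa_manifest(data: bytes) -> bool:
    """Illustrative sketch: walk JPEG marker segments and report whether
    an APP11 (0xFFEB) segment carries the 'c2pa' JUMBF label.
    Real verification would parse and cryptographically validate the
    full C2PA manifest, not just look for the label."""
    if not data.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # lost sync with marker structure
            break
        marker = data[i + 1]
        if marker == 0xD9:  # EOI: end of image, no manifest found
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")  # includes the 2 length bytes
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 segment with C2PA label
            return True
        i += 2 + length  # skip marker bytes plus segment body
    return False
```

Detection alone is the easy half: the accord's signatories also committed to surfacing that provenance to users, which is what TikTok's labelling does.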

Many within the UK government have in recent months expressed concerns about how AI could be used to influence the upcoming UK general election.

Back in February 2024, Home Secretary James Cleverly warned that “malign actors” could use technology such as deepfakes to influence the next general election. Cleverly said such attacks could originate from countries including Iran and Russia.

Earlier this year, Big Tech firms including Microsoft and Google pledged to help prevent deceptive AI content from interfering in this year's global elections.

This year will see a record number of voters heading to the polls around the world, with more than 60 countries holding elections.

At the Munich Security Conference (MSC), 20 companies signed the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters.

Those who signed have pledged to work collaboratively on tools to detect and address online distribution of such AI content, drive educational campaigns, and provide transparency, among other concrete steps.
