Deepfakes: is perception more dangerous than reality?

Paying too much attention to fake videos could undermine the notion of truth when it comes to online content. Alexandra Leonards explores the dangers of perception, the rise of ‘shallowfakes,’ and discovers how to tackle deepfakes while they’re still in their infancy.

The internet is already a hotbed for misinformation, and amid rising fears over election integrity and public health messaging in a global pandemic, the last thing it needs is an influx of doctored videos, advanced enough to convince the public a politician or scientist said something they didn’t.

The prospect of fake videos becoming sophisticated enough to completely hoodwink audiences is certainly troubling. But is this type of content really a major threat to society right now?

Left unchecked, convincing deepfakes could eventually appear across social media and other online platforms. But for now, according to experts, the AI technology powering these kinds of videos is not sophisticated enough to trick people into believing they're legitimate.

“Currently, the majority of deepfake videos available online are created by amateurs with low-grade technology, producing low-quality material, and the process that creates deepfakes often leaves very visible artifacts around the face,” says Jonathan Miles, head of strategic intelligence and security research at Mimecast, the cloud cybersecurity business. “This can include blurring or flickering, which is common when the face changes angles very rapidly.”

Another weakness of the technology currently used to develop deepfakes is that it often produces videos in which the subject’s eyes move independently of each other. To mask flaws like these, creators tend to render their videos at a low resolution, which in itself makes it obvious they are fake.
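Artifacts like these can be checked mechanically. The sketch below, which assumes OpenCV and a hypothetical input file, flags frames where the detected face region is markedly blurrier than the rest of the image: the telltale softness around the face that Miles describes. It is a crude heuristic rather than a reliable detector, but it illustrates the kind of low-level cue current deepfakes leave behind.

```python
# Illustrative heuristic: flag frames where the face is much blurrier than
# its surroundings. Assumes OpenCV (pip install opencv-python); the input
# file name is hypothetical and the 0.5 threshold is arbitrary.
import cv2

def sharpness(gray):
    """Variance of the Laplacian: a standard, crude sharpness measure."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical input
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_sharp = sharpness(gray[y:y + h, x:x + w])
        frame_sharp = sharpness(gray)
        # A face far softer than the frame around it is one weak hint of tampering.
        if frame_sharp > 0 and face_sharp / frame_sharp < 0.5:
            print(f"frame {frame_idx}: face unusually blurry "
                  f"({face_sharp:.0f} vs {frame_sharp:.0f})")
    frame_idx += 1
cap.release()
```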

The term deepfake was first coined in 2017, when a Reddit user of the same name started uploading digitally altered pornographic videos to the site. Indian journalist Rana Ayyub fell victim to a doctored video of this kind, which was shared over 40,000 times, causing her great distress.

Although many fear that deepfakes will be developed to change or promote particular political narratives, a few years on the primary malicious use of the technology remains the creation of pseudosexual imagery.

The threat of perception

However, the threat cuts both ways: regardless of the actual prevalence of deepfake content online, if internet users become fixated on the idea that these videos are more common than they in fact are, people may start to disbelieve what they see online, even when it is real. With political deepfakes still in their infancy, an exaggerated perception of their pervasiveness could be a more urgent threat than the videos themselves.

“We shouldn't overegg the problem,” says Benedict Dellot, head of AI monitoring at the Centre for Data Ethics and Innovation (CDEI). “There's a potential danger of talking too much about the problem because it undermines truthful content on the internet; everyone's then second-guessing it, asking whether or not it’s real.”

He adds: “You've [then] got a kind of perverse situation where although you might not have that many fake videos and content on the internet, or on platforms like Twitter and Facebook, the very perception [that] they are [widespread] is destructive in its own right.”

A culture that perceives and presents deepfakes as a major problem also hands repressive or authoritarian regimes a useful tool: they can take down anything they like, or claim that genuine footage of prisoners, abuse or other state-sanctioned violence is merely a staged piece of propaganda.

“The international policies of Facebook, Twitter and others, I think, have to be quite careful about their directness and how forthright they are with dealing with the problem,” warns Dellot. “They don’t want to lay the groundwork and create a precedent for these regimes to do what they want and call everything bad content and take it down.”

The danger of ‘shallowfakes’

Shallowfakes are videos that have been subtly altered using simple techniques, such as changing the speed of footage or the caption attached to it.
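To see how little effort is involved, consider the sketch below: a few lines of Python that slow a clip to 75 per cent speed simply by writing the same frames out at a lower frame rate. It assumes OpenCV, and the file names are hypothetical; a real shallowfake would also stretch the audio track, which OpenCV does not handle. The point is that no AI is involved at any stage.

```python
# Minimal speed-based shallowfake: the frames are untouched, only the
# playback timing changes. Assumes OpenCV; file names are hypothetical.
import cv2

cap = cv2.VideoCapture("original.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))

# Writing the same frames at 75% of the original rate slows the footage.
out = cv2.VideoWriter("slowed.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps * 0.75, size)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)
cap.release()
out.release()
```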

“From a misinformation perspective, in my view this is more of a concern, because they are easier to produce, and we have seen many examples of ‘creative editing’ being used to imply politicians are saying certain things,” says Andy Phippen, professor of IT ethics and digital rights at Bournemouth University, and fellow of BCS, The Chartered Institute for IT. “They are less of a concern for abuse or harm, but certainly in the broader realm of misinformation they are a far bigger issue.”

A few years ago, current speaker of the US House of Representatives Nancy Pelosi fell victim to a shallowfake video. Someone slowed down footage of her to make it look like she was slurring her words at a conference, to convince viewers that she had been drinking at the event.

“The thing with shallowfakes is it's really easy to do, it’s easy to defend as just part of the stylistic editing,” says the CDEI’s Dellot. “That's a problem - subtle shifts that are damaging but also not so obvious as to merit taking down.”

He adds: “Like with Nancy slurring her words, it's not as if she's saying something ludicrous, as a deepfake might show her to say, these are minor changes that could happen over thousands of videos - I don't know how tech companies will deal with that as easily [as deepfakes].”

How to tackle deepfakes

Even if, as the experts suggest, deepfakes are not yet sophisticated enough to pose a serious threat to society at large, that doesn’t mean technology companies and governments shouldn’t be addressing, and preparing for, an increase in malicious deepfakes while the technology behind them is still in its infancy.

Deepfake detection tools are perhaps the first port of call for addressing fake videos. Of course, a lot of the responsibility falls on the shoulders of tech companies like Facebook and Google, which host perfect online environments for the spread of misinformation.

Behind the scenes, these big tech companies are already addressing the misuse of deepfakes.

“Governments and social media platforms are taking steps to prevent the use of deepfakes to deceive the public and influence elections, but it remains to be seen how effective those efforts will be,” says Mimecast’s Jonathan Miles. “One way to prevent deepfakes spreading is for social media platforms to identify and tag only valid content from legitimate outlets.”

He explains: “As we’ve seen with some of these social media platforms, some controversial posts now come with warnings about their legitimacy. In the future we might see social media platforms immediately delete ‘fake’ posts, but this will require monitoring and appropriate AI or ML algorithms to allow content to be identified and removed.”

Facebook recently produced a large body of deepfakes of its own, which it used to train an algorithm to identify doctored videos. The social network then shared that data set with other platforms and researchers to help them develop their own detection tools.
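Below is a toy sketch of the kind of supervised detector such a labelled data set enables. It is emphatically not Facebook's pipeline: it assumes face crops have already been extracted into dataset/real/ and dataset/fake/ folders, and it simply fine-tunes an off-the-shelf image classifier using PyTorch and torchvision.

```python
# Toy deepfake detector: fine-tune a stock ResNet to separate pre-extracted
# face crops labelled real or fake. The folder layout (dataset/real/,
# dataset/fake/) and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("dataset", transform=tfm)  # real/ and fake/ subfolders
loader = DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real vs fake

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # deliberately short; real training runs far longer
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```

The hard part in practice is not the training loop but the data: detectors trained on one generation of fakes tend to miss the next, which is why sharing fresh data sets matters.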

“There’s a difference between what you're capable of doing as a tech platform, and what you choose to do,” says the CDEI’s Benedict Dellot. “We've seen that recently with the takedown of Donald Trump; they could have easily pulled the cord on Trump's Twitter account [earlier] but they decided not to.”

He explains: “It's the same with every domain and with deepfakes too, they could do something to reduce this level; if it did become an issue I expect they'd be able to rein it in, but it comes down to whether they want to or not, it’s subjective, and probably for them a political question.”

Detection once relied on identifying physiological inconsistencies in footage, such as the absence of blinking. The problem with this method was that once perpetrators discovered which flaw organisations were looking for, they could simply correct it.
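The blink check worked roughly as follows: from facial landmarks, compute an ‘eye aspect ratio’ that collapses when an eye closes, then flag footage in which no detected face ever blinks. The sketch below assumes the dlib library and its 68-point landmark model file (shape_predictor_68_face_landmarks.dat, a separate download).

```python
# Classic blink check via the eye aspect ratio (EAR): the ratio drops
# sharply when an eye closes, so a face that never blinks over a long
# stretch of video is suspect. Assumes dlib and its 68-point landmark model.
from math import dist
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    """pts: six (x, y) landmarks around one eye, in dlib's ordering."""
    return (dist(pts[1], pts[5]) + dist(pts[2], pts[4])) / (2 * dist(pts[0], pts[3]))

def blink_in_frame(gray_frame, ear_threshold=0.2):
    """True if any detected face in this greyscale frame has a closed eye."""
    for face in detector(gray_frame):
        shape = predictor(gray_frame, face)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(36, 48)]
        left, right = pts[:6], pts[6:]  # landmarks 36-41 and 42-47
        if min(eye_aspect_ratio(left), eye_aspect_ratio(right)) < ear_threshold:
            return True
    return False
```

Once forgers learned that detectors were counting blinks, they simply trained on footage that included them, which is exactly the cat-and-mouse dynamic described above.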

But methods have vastly improved; detection now relies less on hand-picked physiological cues and human judgment, and more on machine learning models trained to spot manipulation directly.

Tackling malicious deepfakes with legislation is another viable approach.

“Legislation to outlaw the production of pseudosexual imagery would be a start,” says Andy Phippen, BCS, The Chartered Institute for IT. “This exists in the UK for indecent images of children, but it is less well defined for adults.”

He adds: “As with all of these issues, it takes a multi-stakeholder view – legislation to protect individuals, platform providers implementing clear and transparent reporting routes, users aware of, and confident to use, reporting routes, and providers adding identified deepfakes to hash lists.”
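The hash lists Phippen mentions typically rely on perceptual hashing: once a deepfake has been identified, a compact fingerprint of its frames is stored, and new uploads are compared against the list by Hamming distance. The sketch below uses the open-source imagehash library with hypothetical file paths; platforms use more robust proprietary schemes, but the principle is the same.

```python
# Hash-list matching with perceptual hashes: small visual edits such as
# re-encoding or resizing barely change the hash, so near-duplicates of a
# known fake still match. Assumes imagehash and Pillow; paths are hypothetical.
import imagehash
from PIL import Image

# Fingerprints of frames from previously identified deepfakes.
known_fakes = [imagehash.phash(Image.open("known_fake_frame.png"))]

def matches_known_fake(frame_path, max_distance=6):
    h = imagehash.phash(Image.open(frame_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(h - known <= max_distance for known in known_fakes)

print(matches_known_fake("uploaded_frame.png"))
```

Unlike a cryptographic hash, a perceptual hash changes only slightly when content is re-encoded, which is what makes list-based matching workable.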

But there is an argument that enforcing legislation is not an appropriate way to deal with the issue.

A legal approach relies on platforms being able to detect the footage in the first place, and on identifying the person who uploaded it. It also raises the question of what to do with perpetrators located outside the jurisdiction in which the legislation was passed.

Data regulation analyst Chiara Rustici, from BCS' law specialist group, says that it’s important to avoid the temptation of “regulatory nationalism” or one-upmanship.

“The correct way to regulate this area of technology is to strive for interoperability of rules by passing laws relying on internationally agreed parameters,” she says. “As other areas of technology regulation have taught us, illegality thrives in the cracks between jurisdictions and disconnect between countries' legislation; when it comes to technology, we now know the only way to regulate effectively is to regulate globally.”

It’s also very difficult to determine what is malicious and what is satirical. Not all altered videos are created with bad intentions; there is plenty of grey area. Take, for instance, Channel 4’s deepfake of the Queen over Christmas, which divided opinion on the broadcast’s propriety.

“The first recommendation is to understand and regulate deepfakes as part of the same tech as facial recognition,” says Rustici. “Both consist in an application of machine learning to personally identifiable data, often biometric data, obtained by scraping privately owned and publicly available images without clear knowledge and permission by the persons affected of the subsequent uses their facial data is put to.”

She warns against regulating deepfake technology in isolation, recommending instead that it be treated as an ecosystem issue.

“It is vital to understand this facial data ecosystem involves several seemingly unrelated market actors: data brokers and ordinary businesses who supply data to them, data mining and analytics businesses, and platforms,” she explains. “It is also vital to understand this technology takes shape in this ecosystem along at least three market stages: collection of input data, non-consensual manipulation of those data, and non-consensual dissemination of the final outcome.”

Věra Jourová, the European Commission’s vice president for values and transparency, presents an alternative approach. She says that to address misinformation and harmful content there needs to be a focus on how it is distributed and shown to people rather than a push for removal.

“[One way to address the issue is to] educate the public and those that use Facebook and Instagram etc about how to watch out for this kind of content, particularly at a young age,” says Dellot. “[They should] question content, not just videos, but what they read. But you can't rely on education by any means.”

Deepfakes aren’t yet an urgent danger, but this kind of content does raise some important questions about how society should move forward in tackling misinformation.

These doctored videos are also a reminder of the conflicting roles technology plays, both spreading false information and policing misleading content.
