With recent advances in generative AI, it is now easier than ever to create highly realistic but fake media content: full-length videos with convincing voiceovers that mimic real people. In the short time since generative AI entered the mainstream, there have been numerous incidents of malicious actors using the technology to create fake videos that spread misinformation or enable fraud.
What is most alarming is how easily accessible this technology is. A skilled cybercriminal can leverage widely available online tools to create deepfakes within minutes. This has created a genuine trust crisis in the digital world, with growing questions about how to spot a deepfake. Given the rapid evolution of AI technology and the rise of deepfakes, can online content still be trusted?
This article explores the various ways deepfakes have been eroding trust and the measures that have been helping us safeguard the digital space.
How Deepfakes Are Eroding Trust
Deepfakes are created by applying artificial intelligence and machine learning techniques to digital media. The results go well beyond simply editing a photo or altering a video clip: malicious actors now create entirely synthetic content that mimics both the appearance and behaviour of real people. The most common approach relies on Generative Adversarial Networks (GANs), AI models in which two neural networks, a generator and a discriminator, are trained against each other on extensive datasets until the generated output becomes convincingly realistic yet highly deceptive.
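For readers curious about the mechanics, the sketch below shows the adversarial setup in PyTorch in its most stripped-down form. The layer sizes, image dimensions, and batch size are arbitrary assumptions for illustration; this is not a working deepfake model.

```python
import torch
import torch.nn as nn

# Generator: turns random noise into a fake (flattened 64x64 grayscale) image.
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)

# Discriminator: scores how likely an input image is to be real.
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
noise = torch.randn(16, 100)            # a batch of random latent vectors
fake_images = generator(noise)          # the generator tries to fool the discriminator
scores = discriminator(fake_images)     # the discriminator tries to flag them as fake
# The generator is rewarded when the discriminator scores its fakes as real (close to 1).
generator_loss = loss_fn(scores, torch.ones(16, 1))
```

Training alternates between the two networks until the generator's output is hard to tell apart from real footage, which is exactly what makes deepfakes so convincing.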
The proliferation of this technology has raised legitimate concerns about trust in today’s digital landscape. These days, people mostly consume information through digital platforms, which is why the authenticity of visual and audio content has become crucial. The following are some of the ways deepfakes have been eroding trust in today’s digital age:
- Spread of misinformation
- Decline in trust and public confidence
- Reputational damage
- Political manipulation
- Fueling conspiracy theories
- Manipulation of public opinion
How Trust Is Being Safeguarded in the Age of Deepfakes
In the current digital age, there are now numerous tools for manipulating content, meaning people can no longer simply trust what they read, hear, or see on the internet. The rise of deepfakes poses a real challenge to digital content consumers. Fortunately, some measures can help verify the authenticity of digital content and safeguard user trust. The following are some of the effective tools for identifying and protecting users from manipulated content.
Digital Signatures
The challenge of establishing the authenticity of digital content is as old as the Internet itself. So, while AI-generated deepfakes are a relatively recent problem, the same technologies and principles that have long been used to verify digital content can still be applied to them.
Organizations have been using Public Key Infrastructure (PKI) and digital signatures to secure and authenticate servers, protect websites, and ensure the integrity of online transactions for years. Digital certificates issued through PKI can be used to verify the identity of the individuals and organizations from whom content originates.
Similarly, digital signatures can be used to create a unique “fingerprint” of any digital file, making it easier to trace a piece of content’s provenance and verify its authenticity. We are still a long way from a robust, publicly available verification infrastructure of the kind the AI era demands. However, existing solutions can already be used to create an immutable manifest for digital files, with tamper-proof metadata that anyone can review to confirm authenticity.
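As a rough illustration, the sketch below hashes a media file and signs the hash with a private key using Python's hashlib and the third-party cryptography package. The file name and key handling are assumptions made for the example; in practice the key would come from a certificate issued to the publisher.

```python
# pip install cryptography
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# 1. Fingerprint: a SHA-256 hash uniquely identifies the file's exact contents.
with open("video.mp4", "rb") as f:  # hypothetical media file
    fingerprint = hashlib.sha256(f.read()).digest()

# 2. Sign the fingerprint with the publisher's private key.
private_key = ec.generate_private_key(ec.SECP256R1())
signature = private_key.sign(fingerprint, ec.ECDSA(hashes.SHA256()))

# 3. Anyone holding the matching public key can confirm the file was not altered:
#    verify() raises an exception if either the file or the signature was tampered with.
public_key = private_key.public_key()
public_key.verify(signature, fingerprint, ec.ECDSA(hashes.SHA256()))
```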
Leveraging Social Trust Indicators
There’s no way to completely control or prevent deepfake content from making it to the internet. However, the social aspects of digital trust can still be managed by leveraging indicators that verify the identity of individuals and authenticate information sources.
For instance, the padlock icon in your browser’s address bar indicates that the connection to the site is encrypted and that its certificate has been validated. Similar indicators can be applied to digital content to verify authenticity. Another common indicator is the blue check next to the identity of verified users on social media, which signals that the account holder has been verified to be who they claim to be.
With indicators like these, the task becomes ensuring that everyone involved in publishing content, including social media platforms, publishers, media editing software vendors, and even AI services, adopts a consistent mechanism that makes AI-generated content easy to identify. A good real-world example is TikTok automatically labeling AI-generated videos and images.
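As a simplistic sketch of what a machine-readable label could look like (not how TikTok or any other platform actually implements it), the example below embeds an “AI-generated” flag in a PNG file’s metadata using Pillow. The file names and metadata keys are assumptions for illustration.

```python
# pip install Pillow
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.open("generated_portrait.png")  # hypothetical AI-generated image

metadata = PngInfo()
metadata.add_text("ai_generated", "true")           # the label itself
metadata.add_text("generator", "example-model-v1")  # hypothetical provenance hint

image.save("generated_portrait_labeled.png", pnginfo=metadata)

# A platform could then check the label before deciding how to display the file.
labeled = Image.open("generated_portrait_labeled.png")
print(labeled.text.get("ai_generated"))  # -> "true"
```

A real scheme would need the label to be cryptographically bound to the file, since plain metadata like this can be stripped or edited trivially.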
AI as a Solution
Another strategy that might prove effective in authenticating digital content is to use the technology to police itself. Many developers have been focusing their efforts on training AI models that detect manipulated media and validate the authenticity of content.
Human users can inspect deepfake videos for inconsistencies in lighting, video quality, and audio that may serve as obvious giveaways. However, deepfake models have grown more advanced over the years, making it harder to spot manipulation through ordinary human inspection alone.
Fortunately, AI-powered analysis tools have been created to detect more subtle anomalies in videos and audio, such as speech patterns, facial expressions, and other digital artefacts that give them away as manipulated content.
The table below lists some AI detection tools designed for identifying deepfakes:
| AI Tool | Description | How It Works |
| --- | --- | --- |
| DeepFake-o-meter | Open-source tool that uses AI algorithms to analyse the authenticity of a digital file | Automated multi-modal analysis of video frames |
| Sensity AI | Advanced AI platform for deepfake detection | Multimodal detection with real-time monitoring and analysis |
| Intel’s FakeCatcher | Real-time deepfake detection software | Uses photoplethysmography (PPG) to analyse blood flow in faces within videos |
| Resemble Detect | Deepfake audio and video detection software | Uses deep neural networks to analyse media files and search for imperceptible signs of manipulation |
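To give a sense of how frame-level screening might be wired up, here is a minimal sketch that samples frames from a video and feeds them to a stand-in binary classifier. The untrained model, file name, and simple score averaging are purely illustrative assumptions and bear no relation to how the tools above work internally.

```python
# pip install opencv-python torch torchvision
import cv2
import torch
import torchvision.transforms as T

# Stand-in detector that outputs a "probability of being fake" per frame.
# A real detector would be a purpose-trained network, not this toy model.
detector = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 224 * 224, 1),
    torch.nn.Sigmoid(),
)
detector.eval()

preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

video = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical input video
scores = []
while True:
    ok, frame = video.read()
    if not ok:
        break
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV loads frames as BGR
    with torch.no_grad():
        scores.append(detector(preprocess(frame).unsqueeze(0)).item())
video.release()

# Average the per-frame scores into a simple overall verdict.
if scores:
    print(f"Mean fake-probability across {len(scores)} frames: {sum(scores) / len(scores):.2f}")
```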
Establishing Standards and Regulations
A more overarching approach to curbing the proliferation of deepfakes and maintaining digital trust is to establish standards and enforce them through regulatory instruments. One such standard that has been in the works recently comes from the Coalition for Content Provenance and Authenticity (C2PA).
This specification, created through a collaboration between media and tech leaders, defines how to verify the source of digital media and identify when it has been generated by AI. The standard essentially assigns a birth certificate to a piece of media in the form of a provenance manifest, recording when it was created, where, and the type of device used to create it.
This manifest will also record any changes made to the original file, providing tamper-proof evidence that can help distinguish between authentic and manipulated content. Regulatory changes may also be required to enforce these standards so that the appropriate technology companies comply with them.
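For illustration only, the sketch below builds a greatly simplified provenance manifest in Python. It is not the actual C2PA data model, which embeds cryptographically signed structures inside the asset itself; the field names here are assumptions chosen to mirror the "birth certificate" idea.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_manifest(path: str, device: str, edits: list[str]) -> dict:
    """Bundle a content hash with basic provenance claims (simplified, illustrative)."""
    with open(path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "asset": path,
        "content_sha256": content_hash,               # ties the claims to these exact bytes
        "created": datetime.now(timezone.utc).isoformat(),
        "capture_device": device,
        "edit_history": edits,                        # every change appended over time
    }

manifest = build_manifest("photo.jpg", "ExampleCam X100", ["crop", "colour-correct"])
print(json.dumps(manifest, indent=2))
```

In a real provenance system the manifest would itself be signed, so that any later modification to either the file or its recorded history becomes detectable.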
Conclusion
Generative AI models have created a world where anything is possible, with immense potential for misuse. Fortunately, there are measures that can be implemented to introduce a degree of trust and restore sanity to the digital content world. Only time will tell if these measures will advance fast enough to combat the mistrust that has already been created by the proliferation of deepfakes.