5 key reasons to read the article
- Photos, videos and audio recordings can now be easily fabricated, but we are not ready for what comes next.
- Because of AI, reality is becoming negotiable: the line between what happened and what never did is disappearing.
- Verifying the truth is taking longer and becoming more difficult, while lies are instant and cost nothing.
- When facts collapse, democracy follows suit.
- Are we witnessing the moment humanity loses its last shared reality?
For decades, photos, videos and audio recordings have served as the closest thing we had to proof. They could be used in courtrooms, newsrooms and public debate as strong evidence of what happened and who was responsible. Today, that foundation appears to be cracking.
Advances in generative artificial intelligence (AI) have made it easy to create convincing fake images, videos and audio clips. Researchers, journalists and policymakers warn that the growing volume of this material is beginning to undermine confidence in what is real, with consequences for public trust, politics, the economy and the justice system.
Cases that test trust
One of the first signs of what was to come appeared in March 2022, when a deepfake video of Ukrainian President Volodymyr Zelensky went viral, falsely showing him urging Ukrainian troops to surrender. Although the video was crude, it marked a significant moment: it was the first widely reported use of a synthetic video in an active war, and an indication that manipulated footage could be used not just to mislead, but to try to influence events in real time.
Since then, the technology has improved dramatically. In recent years, a string of incidents has shown that humanity is witnessing the end of the maxim “seeing is believing”.
In Germany, AI-generated images flooded the internet in early 2026, distorting historical material relating to the concentration camps and the murder of more than six million Jews during World War Two.
In late 2025, Google’s AI Overviews were found to be giving dangerous medical advice. By stripping lab results of their context, the AI-generated summaries gave patients false reassurance about serious conditions. Google eventually removed the summaries, but health experts warned that AI’s tendency to oversimplify complex medical data could lead to real-world harm.
That same year, Deloitte’s Australian arm faced a backlash over a report it produced for the Australian government that contained AI-generated errors, fake citations, non-existent quotes and fabricated research, which resulted in a partial refund of its fees.
Breaking news events have also proved vulnerable. In several recent cases, including shootings and attacks in the United States, social media platforms were flooded with fake AI-generated images that were shared before the authorities or media outlets could verify what had actually happened.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) has described this as a “crisis of knowing”, warning that synthetic media is beginning to destabilize how humans establish what is true and what is not.
The liar’s dividend
One of the more subtle effects of this shift is what researchers call the “liar’s dividend”: as fake content becomes increasingly common, people accused of wrongdoing can claim that genuine evidence has been AI-generated. This creates a new kind of uncertainty.
According to Hany Farid, a digital forensics expert at the University of California, Berkeley, the rise of deepfakes means that all evidence can now be questioned.
U.S. courts have already heard cases in which lawyers argued that audio or video recordings might have been altered by AI. The broader risk, experts explain, is that the actual value of visual proof will begin to erode.
A challenge for journalism
For news organizations, the spread of deepfakes is a major practical and ethical challenge.
Traditionally, verification has focused on sources, context and corroboration. Now journalists must also verify the technical integrity of every file, not just the credibility of the source, an exhausting task that stretches resources thin.
Some large newsrooms, such as the BBC and the Wall Street Journal, have created forensic teams or brought in external experts. Whatever the strategy, verification is taking more time and becoming more expensive, while producing false material is getting faster and cheaper.
Fact-checking organizations confirm that the rise in AI-generated material has made verification more time-consuming and technically demanding, forcing them to invest more resources in determining whether media is genuine or fabricated. According to Full Fact, even with improved detection tools, fact-checkers must still carry out careful human analysis of AI-related claims, given the growing challenge posed by synthetic content.
Politics and public trust
When people cannot agree on basic facts, democracy begins to fray. The World Economic Forum has warned that “the rapid spread of false and misleading information, facilitated by digital technologies and social media, is eroding trust in institutions, exacerbating societal polarization, and undermining efforts to address critical global issues.”
The risk is not just that people will believe things that are untrue, but that they will begin to doubt everything, including reliable reporting and official evidence, researchers warn.
What can be done?
Technology companies are working on watermarking and provenance standards that attach metadata to generated content, making it possible to verify its origin and authenticity. Meanwhile, policymakers around the world are considering new regulations on transparency, labelling and liability.
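To make the idea concrete, here is a minimal, hypothetical sketch of how a provenance check can work: a publisher binds metadata to a file’s cryptographic fingerprint and signs it, and anyone can later confirm that the file and its metadata are unchanged. Real standards such as C2PA use public-key certificates and embed signed manifests inside the media itself; the shared-secret HMAC scheme, function names and fields below are illustrative assumptions, not an actual API.

```python
import hashlib
import hmac
import json

# Illustrative sketch only: real provenance standards (e.g. C2PA) use
# public-key signatures and embed the signed manifest in the media file.
# An HMAC with a shared secret stands in for the signature step here.

SECRET_KEY = b"publisher-signing-key"  # assumption: held only by the publisher


def create_manifest(media_bytes: bytes, source: str, tool: str) -> dict:
    """Bind provenance metadata to the file's fingerprint and sign it."""
    payload = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "source": source,    # e.g. the newsroom that captured the image
        "generator": tool,   # e.g. "camera" or "generative-model"
    }
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}


def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the file is unmodified and its metadata was signed."""
    payload = manifest["payload"]
    if hashlib.sha256(media_bytes).hexdigest() != payload["sha256"]:
        return False  # the file was altered after signing
    body = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])


if __name__ == "__main__":
    photo = b"...raw image bytes..."
    manifest = create_manifest(photo, source="Example Newsroom", tool="camera")
    print(verify_manifest(photo, manifest))            # True: intact
    print(verify_manifest(photo + b"x", manifest))     # False: tampered
```

The point the sketch illustrates is a limit as much as a capability: provenance does not prove that content is true, only where it claims to come from and that it has not been altered since it was signed.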
UNESCO has also stressed the importance of media and information literacy as a response to the challenges posed by generative AI. The organization argues that citizens must approach digital content with the same critical skills long used for written sources.
A new relationship with evidence
In the past, a liar had to work hard to make a lie look true. Now, the person telling the truth often has the harder job. Yet, despite all these problems, the rise of generative AI does not mean that truth is lost completely. It does mean, however, that photos, video and audio can no longer be taken at face value as evidence.
In a world where seeing is no longer automatically believing, trust need not disappear, but it may have to be rebuilt on more explicit and more carefully defended foundations.