As hyper-realistic photos purporting to be the latest news become more prevalent online, they threaten the credibility of objective reality and, with it, the foundations of our democracy. To prevent fake from becoming the new real, technology, education, journalism, and government must work together on a solution.
A picture of a man from the Greek special disaster unit EMAK holding a small child amid collapsed buildings in the aftermath of the recent earthquakes in Turkey and Syria elicits strong emotions and is being widely shared on social media as breaking news, captioned "Humanity In One Photo". Closer inspection reveals that the man in the picture has six fingers, a common flaw in AI-generated images.
Investigations reveal that the photo was created by Panagiotis Kotridis, a former fire brigade commander, who used the artificial intelligence program Midjourney to produce the image from a textual description and then edited it in Photoshop to add various details, such as the flags, the EMAK acronym and the corps insignia. He then signed the fake image and posted it on his Facebook page "to honor his colleagues who are doing their best to save those affected by the earthquakes in Turkey". Despite the photo being labeled as fake, it quickly went viral and was widely shared online as a real photo. Interestingly, a large group of people don't care at all whether the photo is real or fake: it conveys a powerful message that has deeply moved millions of people. Some have even suggested that the image should be proclaimed "photo of the year".
The AI-generated photo is not the only one of its kind. Several hyper-realistic but completely fake images of newsworthy events can be found online. In addition to fake photos of earthquake victims, AI images of the demonstrations against pension reform in Paris (which the creator claims were made to raise awareness of robot-generated images) are circulating online, and a BBC news producer published an AI-generated photo of old men in Yemen as if it were real. Most famous is the poignant photo of a crying boy in a coat with a Ukrainian flag. This image was distributed via the official English-language Twitter channel of the Ukrainian parliament on Monday morning, January 16. The child had survived a Russian missile attack on an apartment building in the Ukrainian city of Dnipro. The image was labeled as an illustration, but few people noticed. The Ukrainian investigative journalist network Behind The News used the fake image to start a discussion on Facebook about the ethical use of photos generated by artificial intelligence. Such photorealistic images pose a real danger, according to these fact-checkers: "It is not surprising that people will think that our real photos and videos are also made with artificial intelligence."
These fake photos make it clear once again that our society is coming apart. It is not the fake images themselves that we should fear, but the way they pollute the information ecosystem and create an alternate reality. The danger is not so much that the lie is elevated to the truth, but that the credibility of the truth is damaged. The lie does not have to be convincing at all; if it is repeated and spread often enough, people naturally become confused. Scientists call this phenomenon the liar's dividend. This advantage for the liar plays into the hands of those with malicious intent: anyone can now dismiss anything as a lie, a conspiracy theory, fake news or a deepfake. In this way we are conditioned to develop an apathy toward reality. If people are constantly inundated with mis- and disinformation on a large scale, it becomes increasingly difficult to distinguish real from fake, and there is a good chance that people will erect a wall around their own rightness and become indifferent to objective reality. Reinforced by the algorithms of social media, people increasingly withdraw into their own reality bubble. In addition, this form of computer kitsch easily hits people in the gut. Where previously a press photographer had to go out to capture the perfect picture, the drama can now simply be conjured up by AI manipulation, and the sensational potential of these fake images is enormous. The net result is that there is no longer one shared truth, which poses an existential threat to our democracy.
To counter the problem, a three-pronged solution is needed. First of all, technology lends a helping hand. For example, proof of authenticity can be provided through the issuance and acceptance of digital signatures. Secondly, an important task is reserved for education and journalism. Media literacy should be high on their agenda. For example, young people in Finland are prepared at an early age for a future in which real and fake merge seamlessly. Finally, the government must act adequately against the spread of fake news. Laws and regulations should primarily address the question of how the general public can know whether something is real or fake. So transparency first. The recent exponential growth of AI tools, such as ChatGPT, Dall-E and Midjourney, which allow everyone to play with reality, makes it clear once again that time is running out. The recent statement by the creator of ChatGPT, calling for increased government involvement in the AI explosion, highlights the urgency of the situation. Our society must act now to prepare for a future where the line between real and fake is no longer clear, lest we risk threatening the very foundations of our democracy.
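The digital-signature idea mentioned above can be sketched in a few lines of code. This is a toy illustration only, with deliberately tiny, insecure hard-coded RSA numbers: a publisher signs the hash of an image with a private key, and anyone can check with the public key that the image has not been altered since signing. Real provenance systems (such as industry content-authenticity initiatives) use vetted cryptographic libraries, not textbook RSA.

```python
# Toy sketch of image provenance via digital signatures.
# The RSA key below uses tiny demo primes and is NOT secure;
# it only illustrates the sign/verify principle.
import hashlib

# Tiny RSA key pair: n = p*q, e is the public exponent, d the private one.
p, q = 61, 53
n = p * q       # modulus (3233)
e = 17          # public exponent
d = 413         # private exponent: (e * d) % lcm(p-1, q-1) == 1

def sign(image_bytes: bytes) -> list[int]:
    """The publisher signs the image's SHA-256 hash with the private key."""
    digest = hashlib.sha256(image_bytes).digest()
    return [pow(b, d, n) for b in digest]   # sign each hash byte separately

def verify(image_bytes: bytes, signature: list[int]) -> bool:
    """Anyone can verify with the public key that the image is unmodified."""
    digest = hashlib.sha256(image_bytes).digest()
    return len(signature) == len(digest) and all(
        pow(s, e, n) == b for s, b in zip(signature, digest)
    )

photo = b"\x89PNG...original pixel data"
sig = sign(photo)
print(verify(photo, sig))              # True: image is authentic and untampered
print(verify(photo + b"edit", sig))    # False: any edit breaks the signature
```

The key point for news photos is the second check: even a one-byte edit changes the hash, so the signature no longer verifies, while anyone holding only the public key can confirm an untouched original.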
This article was written together with Menno van Doorn. It is part of the research we have done for our book "Real Fake — Playing with Reality in the Age of AI, Deepfakes and the Metaverse".