People are creating records of fake historical events using AI
By Chloe Xiang From vice.com
The Midjourney subreddit is being flooded with images depicting fake historical events like “The infamous Blue Plague Incident” that occurred in the 1970s in the Soviet Union, the “July 2012 solar superstorm and blackout” in the U.S., and “The 2001 Great Cascadia 9.1 Earthquake & Tsunami” that devastated the West Coast of North America.
Each post showcases a slideshow of images that share various perspectives of the event, as well as important moments following the catastrophe, including press conferences and cleanups. The catch: these are all AI-generated images, and none of these events ever occurred.
Midjourney is a text-to-image AI generator similar to OpenAI’s DALL-E and Stability AI’s Stable Diffusion. Currently, Midjourney is running on its fifth version, after V1 was first released in July 2022. The AI image generator, which used to make mistakes such as depicting hands with weirdly morphed fingers, has improved vastly. Now, Midjourney has nearly perfected the human hand and is the most hyper-realistic version of the model thus far.
“This ‘what-if’ historcial [sic] event is based on the real-life events of July 23, 2012. At 2:08 UT on the 23rd, an extremely powerful coronal mass ejection was detected, estimated to be as strong as the 1859 Carrington Event,” Arctic_Chilean wrote in the comments. “This alternate scenario considers what could have happened if Earth took a glancing blow from this event, with it occurring a few days earlier. In this alternate timeline, this event would serve as a wakeup call to fortify global power grids and take the threat of solar events more seriously.”
“As for the MidJourney prompts, I feel it did great at capturing the early 2010 news footage aesthetics, as well as the photographs of aurora over famous skylines and landmarks. The biggest issue was the total inability of the program to generate images of a blackout (city without power),” the user added.
In another post, a user named FinewithIX, whose real name is Jordan Rhone, posted a collage of four AI-generated images that depicted “Staging the Moon Landing, 1969.” The images mimicked the grainy, film-like quality of photos taken in the late ’60s and showed the behind-the-scenes of people filming and photographing a fake moon landing.
“This was my [sic] essentially my goal here. Mocking the conspiracy theorists,” Rhone replied. “Right now, conspiracies are popular because it’s all just chatter among like-minded and often ignorant people. Throwing these things in their face is essentially making a mockery of them, which I think will actually quiet them down.”
“I fully believe that the more people who visualize conspiracies, the fewer who will believe in them. If you show them what they’ve always wanted to see, it doesn’t live up to the hype created in their own minds. AI-generated art will lead conspiracy theorists to become desensitized to their own theories. Society will rightfully start to question the credibility of these images and become less likely to believe in what they see,” Rhone told Motherboard.
While this is a good sentiment from Rhone, it seems that people are already falling for AI-generated images. An image of the pope wearing a stylish white puffy coat and a large statement cross necklace went viral over the weekend. Since being posted on Reddit on Friday, the image has made its way to Twitter, where many users thought it was real.
“I thought the pope’s puffer jacket was real and didn’t give it a second thought. no way am I surviving the future of technology,” model Chrissy Teigen tweeted to her nearly 13 million followers.
Last Monday, Eliot Higgins, the co-founder of the investigative journalism group Bellingcat, tweeted fake images of Trump getting arrested without explicitly labeling them as AI-generated, leading a lot of people to believe that he had indeed been arrested.
Now, AI experts and social media companies are focused on coming up with ways to curtail the spread of misinformation while continuing to support the ability to use new technologies. Sam Gregory, the executive director of the human rights organization Witness, told The Washington Post that his team came together with experts to identify ways to “support the nascent power of synthetic media for advocacy, parody and satire, while confronting the gray areas of deception, gaslighting and disinformation.”
Even if these images are hyper-realistic, a closer examination will still reveal the telltale signs of AI generation. “The Trump arrest image was really just casually showing both how good and bad Midjourney was at rendering real scenes,” Higgins wrote in an email to PBS. Higgins pointed out that in the images, Trump is wearing a police utility belt, and faces and hands are distorted.
Platforms have been amending their policies to address synthetic media. TikTok announced last week that all deepfakes or manipulated content that show realistic scenes have to be labeled as fake or altered, and deepfakes of public figures are not allowed to be used for political or commercial endorsements. Twitter’s policy says that “you may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (“misleading media”).”
However, as AI innovation occurs at a speed that is hard to keep up with, creating and maintaining adequate safeguards will be challenging. People will still fall for fake images, especially when they are shared without context. Experts are warning users to be on the lookout for AI-generated images and to be more critical than ever when seeing an image, rather than automatically assuming that it is real.
Update: This article was updated with Reddit user Jordan Rhone.