- snitzoid
Hey, the Pope looks great in that $2,000 down ski jacket!
What if there is no Pope and he's just some AI-generated dude? Pope Hal?
Paparazzi Photos Were the Scourge of Celebrities. Now, It’s AI.
Researchers say advancements in artificial intelligence could be used to stoke misinformation about public figures. A recent image had even experts fooled.

A fabricated photo of Pope Francis in a big coat set off a conversation about the implications of viral AI-generated imagery. (Guerrero Art)
By Ashley Wong, WSJ
April 3, 2023 12:39 pm ET
A fabricated image of Pope Francis wearing a fashionable white puffer was a funny viral moment for many. But artificial-intelligence experts say its implications are no laughing matter.
Several researchers said the picture, created using the AI image generator Midjourney, is a sign of a coming misinformation wave in which fake photos will be more convincing than ever. Even those who specialize in AI and social media can’t always tell what’s real from what’s not.
“I thought it was excellent,” said Jeff Hancock, professor of communication at Stanford University and founding director of the Stanford Social Media Lab, of the pope image’s photorealism. “Everything fit right, there were no obvious distortions,” he added.
AI has entered everyday life, optimizing everything from meal planning to essay writing to cancer detection. Though the technology’s full potential has yet to be realized, tools like Midjourney, ChatGPT, DALL-E and Stable Diffusion are already widely used to create text and images that spread easily online. (DALL-E users were previously prevented from editing images of real human faces, but that restriction was lifted last September.)
They have also made it easier for people to use and abuse public figures’ likenesses online. Last fall, celebrities including Leonardo DiCaprio and Elon Musk saw their faces used in advertisements without their permission, and last month NBC News reported that actress Emma Watson’s face had been used to create sexually provocative videos in an advertisement for a deepfake mobile app.
“Any internet troll now can, with only a few keystrokes and a click of a button, create convincing images that might fool a human,” said Andrew Owens, assistant professor of electrical engineering and computer science at the University of Michigan.
In a statement, a spokesperson for Stability AI, the company that created Stable Diffusion, said the company is searching for ways to combat AI’s potential for creating misinformation.
“We are currently working with leading companies and researchers in the digital security space to implement a secure, long-term solution to this concern,” the spokesperson said. Representatives for Midjourney and for OpenAI, the maker of ChatGPT and DALL-E, did not respond to requests for comment.
Last week, as news swirled about the possible arrest of Donald Trump, images of the former president being arrested, tackled and carried away by police filled Twitter. Many of the images were created with Midjourney, the same service used to create the fake pope photo.
To Ari Lightman, professor of digital media and marketing at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy, the proliferation of increasingly sophisticated AI-generated imagery marks a new frontier in spreading fake news.
“I think that it’s sort of an evolution, if you will,” Mr. Lightman said.
Images are often posted on social media with little to no context, experts said, and may be taken at face value. As AI image generation continues to improve, experts said, the usual telltale signs of fake images, such as hands with too many fingers or odd-looking eyeballs, will start to disappear.
“When it’s presented in a context of, ‘Oh look, this could be fake,’ then of course your spidey senses are active,” said Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute. “But in regular cases, if you’re just browsing, you might just chuckle at the image and keep scrolling rather than keep questioning the image.”
Mickinzy Seneff, a 21-year-old student at the University of California, Santa Cruz, paused when a video of pop star Harry Styles kissing model, author and podcaster Emily Ratajkowski appeared all over her Twitter feed. It looked odd, Ms. Seneff said, and she assumed the celebrities had been misidentified.
After reading a viral Twitter post suggesting that the video was AI-generated, she started to believe it.
“AI is really huge,” Ms. Seneff said. “So it wasn’t that far of a reach for me to be like, ‘Oh, this video looks like fake people.’” If the theory were true, she said, it could explain why the video was so blurry and why the pair’s posture was awkward, not to mention the extremely public nature of the display: the pair appeared to be leaning against a van on a Tokyo street. A spokesperson for Ms. Ratajkowski did not respond to requests for comment, and Mr. Styles could not be reached. Neither has publicly commented on the photos, which first appeared in the Daily Mail without a photographer credited.
When images of people are grainy and the faces are difficult to distinguish, experts said, it’s hard to tell with certainty if they’re authentic. In theory, it would be relatively easy to generate still images of a real celebrity couple caught in a clinch.
“With powerful computer capability, we can get the images to do pretty much whatever we’d like them to,” Mr. Lightman said. “Contort them in specific ways, add specific pieces of clothing like the pope’s parka.”
But creating convincing videos of real human beings is still a difficult task for AI, they added, since the human eye can detect even the smallest irregularities in movement. A video is also essentially a series of images, every one of which has to be convincing, something machine learning hasn’t quite achieved yet. The technology is on the way, though, experts said, and will likely develop faster than expected.
After seeing more images of the pair taken from different angles, Ms. Seneff said she now believes the footage is real. But the experience has left her even more disturbed about a future rife with AI-generated visual misinformation.
“I think it’s scary to be like, the things that are AI look so real,” she said. “And these things that are real can be seen as AI.”
Debates over the authenticity of celebrity photos are just the tip of the iceberg, said Baobao Zhang, assistant professor of political science at Syracuse University. Dr. Zhang referenced images she had seen recently on Reddit of people using AI to generate fake versions of real historical events.
“I think there’s real stakes on social media,” Dr. Zhang said.
Mr. Lightman said he was also concerned about the ability of AI-generated images to manipulate the economy. For instance, he said, a fake image of a CEO doing something suspicious could cause a stock’s value to drop, and potentially damage an entire financial sector.
“I think we’re going to hit an inflection point where the societal toll of misinformation is going to become stark and evident,” Mr. Lightman said.
Extreme skepticism over an image’s authenticity can also give way to paranoia. In Gabon, a failed coup attempt in 2019 began after suspicious residents spread rumors that the president was dead and that a video he had posted on New Year’s Eve was a deepfake.
Mr. Hancock was more optimistic, saying he believed that humans would adapt to spot the differences. Even the fact that people are already sounding the alarm about AI-generated images’ potential for misinformation, he said, is a positive sign.
Write to Ashley Wong at ashley.wong@wsj.com