Taylor Swift (Photo credit: Shutterstock)

Taylor Swift has been dominating the news with her worldwide Eras Tour, her support for her new boyfriend, Travis Kelce of the Kansas City Chiefs, and the surprising fact that her father once played football for the University of Hawaiʻi. Now she is making headlines as the victim of a disturbing social media trend: explicit images generated by artificial intelligence (AI).

“These AI tools with image generation are still kind of a wild west,” said UH Mānoa Communication Program Professor Wayne Buente, who specializes in social media. “This is one of the classic situations where the technology is moving far faster than the law and policy.”

Related story: What is Taylor Swift’s tie to UH?, November 2023

The AI-generated explicit images of Swift were shared on X (formerly Twitter) and received more than 47 million views in less than 24 hours before the account was suspended and X removed the option to search for the images, according to multiple media reports. The use of social media to spread false information and deep fake images (realistic computer-generated visuals produced by artificial intelligence) has grown exponentially in recent years. As AI improves, discerning what is true or false will become even more difficult, according to Buente.

“In the end, the responsibility is really going to fall on the end user to kind of do their homework about the content that they’re seeing, the sourcing of it and what we know about where this image is coming from,” Buente said.

The Swift images were created with free Microsoft software, according to Buente. Even though Microsoft prohibits using its AI tools to create explicit images, the user likely found ways around that restriction. He said once an image is posted on a social media site and begins to gain traction, algorithms take over and further amplify the content.

AI imaging could be used against everyday people

Buente also said what happened to Swift could happen to anyone.

“I hope that the understanding that AI image generation and deep fakes is a real serious concern now, not just for celebrities,” Buente said. “I’ve read stories where kids are taking photos of their peers and putting them through these tools to generate explicit pictures and that’s causing a lot of trauma in these situations.”

While X has announced plans to hire content moderators, Buente said more needs to be done, including addressing the expected rise in deep fake videos and false narratives ahead of the 2024 presidential election.

“When we also think about bad actors that operate in our social media information landscape that are deliberately out there to put disinformation because they have particular interests in the outcome, it’s something that I’m really concerned about,” said Buente.

Educating our next generation

Buente teaches a variety of courses in the Communication Program related to social media and information and communications technology. In his social media courses, undergraduate and graduate students learn about algorithms, which builds their social media literacy. Each year, false information, hoaxes, conspiracy theories and more have proliferated. Being able to rationally assess such claims, their origins and the motives behind sharing them has become increasingly important as the trend continues.

The Communication Program is in the School of Communication and Information in the College of Social Sciences.
