Taylor Swift has taken on record labels and fought to own her music. She has won Grammy Awards and headlined the world’s first billion-dollar tour.
But overnight, her power was punctured in the crudest, most depressingly familiar way possible for a woman: through deepfake pornography.
Degrading women by reducing them to sexual objects through explicit imagery is nothing new. More than 100 years ago, the New York Evening Graphic used composite photographs that superimposed some people’s heads on other people’s torsos to present sexualized images of women. But the combination of a global megastar like Swift and the rapid, unbridled rise of generative AI tools marks a new low: the images went viral on social media overnight, and regulators and companies are being warned about what the future of online sexual abuse holds if they don’t act.
While many of Swift’s fans have banded together to flood social media search results with innocuous images to limit the damage, the truth, a depressing one for victims of revenge porn and of the fast-growing wave of deepfake imagery, is that these AI-generated images are now out in the wild.
Carolina Are, a platform governance researcher at Northumbria University’s Center for Digital Citizens, says the episode highlights a number of issues. One is society’s treatment of women’s sexuality, in particular the non-consensual sharing of images stolen from women or created with artificial intelligence. “It just leads to this powerlessness,” she says. “It’s literally having no power over your own body in digital spaces. Unfortunately, it looks like that’s going to be par for the course.”
Platforms, including X, where the images were first shared, have attempted to stem their spread, including by suspending accounts that posted the content, but users reposted the images to other platforms such as Reddit, Facebook, and Instagram. The photos are still being reposted to X, which did not immediately respond to a request for comment.
“What I’ve noticed about all this worrying AI-generated content is that when it’s non-consensual, when people haven’t asked for that content to be created and aren’t sharing their bodies consensually and willingly as part of their work, that’s when it seems to thrive,” says Are. “But the same kind of content is heavily policed when consent is given.”
Swift is far from the first person to be victimized by AI-generated images made without her consent. (A number of media outlets have previously reported on the existence of platforms that allow the creation of deepfake pornography; Fast Company does not link to them.) In October, a Spanish town was rocked by a deepfake-image scandal whose victims included more than 20 women and girls, the youngest of them just 11 years old. Deepfake pornography has also been used as a means of extorting money from victims.
But Swift is perhaps the most high-profile victim yet, and the one most likely to have the resources to fight back against tech platforms; that pressure could push some generative AI tools to change course.
An independent, non-academic analysis of nearly 100,000 deepfake videos posted online last year found that 98 percent were pornographic and that 99 percent of the people featured in them were women; 94 percent of those depicted worked in the entertainment industry. A related survey of more than 1,500 American men found that three in four of those who had consumed AI-generated deepfake porn felt no guilt about it. A third said that because they knew it wasn’t really the person depicted, it wasn’t harmful and wouldn’t hurt anyone as long as it was kept for their personal use.
But it is harmful, and people should feel guilty: New York law makes it illegal to share deepfaked, non-consensual sexual images, punishing those who do so with up to a year in prison, and provisions in the UK’s Online Safety Act make sharing such images illegal as well.
“An ongoing and systemic disregard for bodily sovereignty is fueling the harm caused by deepfakes like these, which are plainly sexual exploitation and sexual harassment,” says Seyi Akiwowo, founder and CEO of Glitch, a charity that campaigns for greater digital rights for people marginalized online.
“We need tech companies to take decisive action to protect women and marginalized communities from the potential harm of deepfakes,” says Akiwowo.