Fake Taylor Swift audios on TikTok reveal AI's potential to spread misinformation
Should social media platforms and tech companies be responsible for upholding truth? Should journalists?
“I couldn’t give less of a f*ck about my tickets being over a thousand dollars,” Taylor Swift said. “I don’t perform to poor b*tches.”
No, Taylor Swift did not say this—but it sure sounds like she did. A recent TikTok trend has fans making AI-generated audios in Swift’s voice. These include everything from the fake bit above to Swift complimenting fans or taking jabs at Kim Kardashian.
Fans are using a simple program called “Instant Voice Cloning” by ElevenLabs, according to an article published in The Atlantic just last week. The program charges a monthly subscription, but the first month costs only $1—certainly affordable for a young Taylor Swift fan, though ElevenLabs’ terms of service note that minors must have a parent’s permission to use it.
Swifties seem to have a collective understanding that these audios are fake and have been using them purely as a joke. While the audio quality is just imperfect enough for any reasonable listener to infer falsehood, the fact that this technology is so accessible is itself cause for concern.
What is a deepfake?
A deepfake is a fabricated photo, video, or audio clip made to look as real as possible. Video and audio fakes, which are arguably the most concerning, still have a long way to go in terms of quality; if the context of a video is absurd enough, most reasonable viewers will presume falsehood, and its flaws become more noticeable. Photoshop, by contrast, has been around long enough to be close to perfect, making fake photos difficult to spot.
On the darker side of Instant Voice Cloning, The Atlantic reports that users can make celebrities say racist or abusive things. The Washington Post has reported on scammers who clone loved ones’ voices, have them plead for help over the phone, and then demand ransom money.
There are plenty of voice cloning programs out there, and though they cost money to use, they’re fairly accessible to the public and don’t require much technical knowledge.
Should tech companies be responsible for fighting the spread of disinformation, and to what extent are they to blame?
The simple answer is yes. ElevenLabs says it can trace audios back to the users who made them to prove falsity if the need ever arose. But what about the social platforms that promote these fakes to the public? ElevenLabs can find the source of a deepfake, but what’s the point if it has already spread like wildfire on TikTok?
Some tech leaders, like Mark Zuckerberg, have shown little interest in combating misinformation, as he demonstrated during a 2019 congressional hearing in which he was questioned about political ads and the Cambridge Analytica scandal.
During this hearing, Congresswoman Alexandria Ocasio-Cortez grilled Zuckerberg over Facebook’s lack of fact-checking on political ads. She asked whether she would be able to run false ads saying her Republican opponents had voted for the Green New Deal. Zuckerberg responded, “Probably.”
“I think lying is bad, and I think if you were to run an ad that had a lie that would be bad. That’s different from it being the right thing to do to prevent your constituents from seeing that you have lied,” Zuckerberg said.
AOC asked if Facebook would take down blatant lies. “Yes or no?” she said.
“In a democracy, I believe people should be able to see for themselves what politicians are saying and assess their character for themselves,” Zuckerberg said in response.
Is there some truth to Zuckerberg’s statement? The internet has drastically changed how we think about communication and speech in relation to the First Amendment. For a long time, the concept of a marketplace of ideas reigned supreme: it held that the best way to counter “bad speech” in a democracy was with more speech, not censorship. If you disagree with something someone says, or deem it harmful to the greater good, speak out against it, exercise “counter speech,” instead of removing it from the so-called marketplace.
Well, the marketplace is a lot bigger now, and scarier, with darker corners and limitless potential for mistruths. And we’re not just talking about opinions; we’re talking about hate and lies. The fight against misinformation isn’t concerned with posts in favor of abortion bans, but rather with posts claiming that most abortions happen in the third trimester or that all abortion procedures are unsafe (neither is true).
With right-wing accusations of bias and censorship, it’s hard for tech companies to moderate false content without being chastised—or bought out in a multibillion-dollar deal, as in the case of Twitter.
“In terms of public perception, it puts into question our credibility as scientists. If we put something on Twitter like a research study, and you say it’s wrong, it invalidates everything we’re saying.”
- Johanna Po, an oral pathologist with a PhD in molecular biology who had factual tweets wrongly flagged for misinformation, according to The Washington Post
The other issue with content moderation is that there is no exact science behind it—we tend to forget just how young social media is. The two ways tech companies have dealt with moderation are 1) using AI, which is far from perfect and can lead to problems that only fuel far-right talking points, or 2) using a team of humans, which can be tedious and begs the question of how to assemble a perfectly unbiased group to decide which posts come down. See my post on Spotify’s playlist takedown policy (spoiler: Spotify uses neither AI nor manpower, but relies on users to report content).
Or, maybe deciding what to take down isn’t that difficult at all. If the vaccine is safe, and has been proven to be so, why do we need a post claiming it isn’t? Why not just take it down? Likewise, maybe fake audios of Taylor Swift saying she doesn’t perform for “poor b*tches” contribute nothing and should be taken down on principle.
What role should journalists play in fighting fake news?
From a business perspective, journalists should strive to stay on good terms with the people they interview, whether they’re politicians, celebrities, or musicians. Record labels and publicists are reluctant enough to work with journalists as it is, and I’m sure Taylor Swift would appreciate it if any reporter she works with set the record straight on incriminating misinformation. Upholding the truth while keeping a source’s best interest in mind is well within journalism ethics.
At the end of the day, the notion of tech companies as objective third parties is a myth. Whether you believe social platforms should allow users to run amok with speech and counter speech and decide for themselves what is real or harmful doesn’t really matter. If Elon Musk doesn’t like what he sees on his new playground of an app, he can just take it down. The Zuck can do the same.
Just as journalists act as a watchdog over the government, they need to be a watchdog over Big Tech, too. Whatever’s happening or trending on the internet—be it Twitter, Facebook, or TikTok—journalists should be there to make sense of it and set the record straight, much like I’m doing now by explaining the Taylor Swift AI trend.