🔮 We predicted this
Deepfakes of Taylor Swift are not surprising.
👋🏻 Hey, friends! I'm happy to announce that comments are now available for this newsletter thanks to the hard work of the team at Buttondown. So feel free to comment away!
Let's jump right in.
Some racy AI-generated images of Taylor Swift were being shared on X last week. While the images were fake, they still stirred up quite the controversy. The Verge called it the "latest example of the proliferation of AI-generated fake pornography."
But we predicted this would happen. Privacy experts and tech enthusiasts alike have been saying fake AI-generated images would be used to spread misinformation, disrupt elections, and create chaos.
One study from 2020 published in Crime Science "identified 20 ways AI could be used to facilitate crime over the next 15 years." The study's authors "said fake content would be difficult to detect and stop, and that it could have a variety of aims – from discrediting a public figure to extracting funds by impersonating a couple’s son or daughter in a video call," according to University College London (UCL), where the study was conducted. "Such content, they said, may lead to a widespread distrust of audio and visual evidence, which itself would be a societal harm."
We're in the beginning phases of such harm today, especially with the public release of a handful of generative AI tools. Shirin Ghaffary writes in Vox that "tools like ChatGPT, DALL-E, Midjourney, and even new AI feature updates to Photoshop have supercharged the issue by making it easier and cheaper to create hyperrealistic fake images, video, and text, at scale." And this is exactly what's happening.
While the Taylor Swift deepfakes created chaos on X, future deepfakes may have greater impacts on society. Take, for example, deepfakes intended to upend an election, or to provoke violence or terrorism.
Last week, Pope Francis made an appeal to the international community "to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms."
This comes after Pope Francis was featured wearing a puffer jacket in an AI-generated image created using Midjourney. He called AI tech "both exciting and disorienting," adding that "I too have been an object of this."
Hopefully the Taylor Swift deepfakes will shed light on the urgent need to regulate deepfakes created for nefarious purposes.
The White House responded to the issue, calling on Congress to come up with a solution. Press Secretary Karine Jean-Pierre said: "We are alarmed by the reports of the circulation of the...false images."
Moreover, U.S. Representative Joe Morelle called the deepfakes "appalling." And Congressman Tom Kean Jr. said it is "clear that AI technology is advancing faster than the necessary guardrails."
Meanwhile, Taylor Swift is considering suing X. 👩‍⚖️
👀 In surveillance news, a report from Ars Technica shows that the "National Security Agency (NSA) has admitted to buying records from data brokers detailing which websites and apps Americans use."
This comes after Senator Ron Wyden's office released documents confirming the NSA's purchase of personal data on Americans.
"The U.S. government should not be funding and legitimizing a shady industry whose flagrant violations of Americans’ privacy are not just unethical, but illegal," Senator Wyden wrote in a letter to the Director of National Intelligence. "To that end, I request that you adopt a policy that, going forward, IC elements may only purchase data about Americans that meets the standard for legal data sales established by the FTC."
The NSA aside, I still find it surprising how easy it is for companies or governments to purchase personal data through data brokers.
📖 What I'm reading
X is blocking searches for Taylor Swift (The Verge)
Big Tech is surviving despite layoffs (Axios)
iOS 18 may be the biggest software update ever (TechCrunch)
Netflix is different now — and there’s no going back (The Verge)
Reed Hastings gives $1.1b in shares to Silicon Valley charity (WSJ)