AI deepfakes are cheap, easy, and coming for the 2024 election
Image: The Verge
Our new Thursday episodes of Decoder are all about deep dives into big topics in the news, and this week, we’re continuing our miniseries on one of the biggest topics of all: generative AI.
Last week, we took a look at the wave of copyright lawsuits that might eventually grind this whole industry to a halt. Those are basically a coin flip — and the outcomes are off in the distance, as those cases wind their way through the legal system. A bigger problem right now is that AI systems are really good at making just believable enough fake images and audio — and with tools like OpenAI’s new Sora, maybe video soon, too.
And of course, it’s once again a presidential election year here in the US. So today, Verge policy editor Adi Robertson joins the show to discuss how AI might supercharge misinformation and lies in an election that’s already as contentious as any in our lifetimes — and what might be done about it.
The conversation around media manipulation on social platforms comes and goes with every election cycle. The 2016 election was marked by Russian disinformation campaigns on Facebook; the 2020 campaign brought a reckoning over the Hunter Biden laptop story on Twitter, followed by the major platforms finally banning Donald Trump after the January 6th attack.
Those bursts of attention have receded, with little or nothing to show for it — and in the case of Twitter, a wholesale retreat from any moderation at all as Elon Musk turned the platform into what’s now X. And X is where fake pornographic images of Taylor Swift have been most widely distributed — a preview of the problems facing every major platform.
Dirty political tactics are already in the mix. There was a fake Joe Biden robocall in New Hampshire last month. There have always been lies in campaigns, but a magician in New Orleans claims it took him just 20 minutes and $1 to make that fake audio, which says a lot about how generative AI lets people lie more cheaply, more quickly, and at greater scale.
The misinformation conversation in 2024 does seem more nuanced than in previous cycles. But as with any thorny issue surrounding online speech, there is the First Amendment to contend with. There are also existing policy and platform moderation debates over how best to combat the creation and spread of nonconsensual pornography and where the line falls between protected commentary and malicious misinformation.
None of this is easy. But these problems aren't going away, and it's important to take stock of how AI companies, social media platforms, and policymakers are trying to deal with them, and what we as individuals should keep in mind as the election cycle kicks into high gear.