In 2024, an artificial intelligence (AI)-generated parody video of former Vice President Kamala Harris sparked controversy across the internet. California officials, not thrilled with its spread, responded by enacting AB 2839, known as the “Defending Democracy From Deepfake Deception Act of 2024,” which imposed restrictions on election-related parody videos. The law faced immediate backlash and legal challenges, culminating in a ruling on Tuesday from U.S. District Judge John Mendez, who ultimately struck it down.
Tech billionaire and X CEO Elon Musk was a key player in the charge against the law. He filed a 65-page lawsuit on behalf of his social media platform shortly after the bill’s passage. Musk argued that the law violated free speech rights by mandating that large online platforms remove content deemed “materially deceptive,” particularly in election campaigns and political speech. Breitbart News reported that Musk’s suit warned the law would lead to “widespread censorship of political speech” and “infringes upon First Amendment rights.”
Notably, Judge Mendez did not address the free speech arguments. Instead, his decision was grounded in federal protections under Section 230 of the Communications Decency Act, which shields online platforms from liability for third-party content. Reportedly, Mendez also signaled his intent to overturn a related California law requiring labels on digitally altered campaign materials and advertisements, deeming it a First Amendment violation. As Mendez stated, “They don’t have anything to do with these videos that the state is objecting to.”
According to Mendez, there may be better ways for California to accomplish its goals. However, he was concerned that such policies too easily become “censorship” laws, “and there is no way that is going to survive.”
Supporters of Tuesday’s ruling have hailed it as a win for free speech, not just for Musk and his platform, but for similar satirical content creators like the Babylon Bee and video platform Rumble. In fact, the original challenge came from Christopher Kohls, the creator of the Kamala Harris parody video, who likewise argued on First Amendment grounds.
A spokesperson for California Governor Gavin Newsom’s (D) office, Tara Gallegos, said they’re still reviewing Tuesday’s decision. However, she noted that they “remain convinced that commonsense labeling requirements for deep fakes are important to maintain the integrity of our elections.” A spokesperson from California Attorney General Rob Bonta’s (D) office said they are also reviewing the ruling.
Kristin Liska, arguing on the attorney general’s behalf, “noted the Berman law only applied to large platforms with 1 million or more users,” Politico reported. As such, she “asked Mendez to limit his order to plaintiffs X and Rumble.” To this, one of Kohls’ attorneys stated they would work to protect other sites, such as Facebook and YouTube, which were not included in the original lawsuit.
According to Chris Gacek, senior fellow for Regulatory Affairs at Family Research Council, “The judge seems right.” As he explained to The Washington Stand, “Political speech in election periods is pretty sacrosanct — parody seems to have a special place in Americans’ hearts.” Jared Bridges, vice president for Brand Advancement at Family Research Council, elaborated on this idea.
As he shared with TWS, “The concern about AI deepfakes is certainly valid. We don’t want ourselves or our fellow citizens to be presented with images and audio that bear false witness.” But as he went on to explain, “The ad that sparked the California law was clearly parody — no one with voting capacity would mistake it for a real Kamala Harris campaign ad.”
“Parody,” Bridges concluded, “AI-generated or not, can be a very effective tool for speech, but doesn’t fall into the category of deception.”
Sarah Holliday is a reporter at The Washington Stand.