Spreading AI-generated content could lead to expensive fines
AI-generated “deepfake” materials are flooding the internet, sometimes with dangerous results. In just the last year, AI has been used to create deceptive voice clones of a former US president and to spread fake, politically charged images depicting children in natural disasters. Nonconsensual, AI-generated sexual images and videos, meanwhile, are leaving a trail of trauma affecting everyone from high schoolers to Taylor Swift. Large tech companies like Microsoft and Meta have made some efforts to identify instances of AI manipulation, but with only muted success. Now, governments are stepping in to try to stem the tide with something they know quite a bit about: fines.
This week, lawmakers in Spain advanced new legislation that would fine companies up to $38.2 million, or between 2 percent and 7 percent of their global annual turnover, if they fail to properly label AI-generated content. Within hours of that bill’s advancement, lawmakers in South Dakota pushed forward their own legislation seeking to impose civil and criminal penalties on individuals and groups who share deepfakes intended to influence a political campaign. If it passes, South Dakota will become the 11th US state since 2019 to enact legislation criminalizing deepfakes. All of these laws use the threat of potentially drained bank accounts as an enforcement lever.
According to Reuters, the Spanish bill follows guidelines set by the broader EU AI Act that officially took effect last year. Specifically, this bill is intended to add punitive teeth to provisions in the AI Act that impose stricter transparency requirements on certain AI tools deemed “high risk.” Deepfakes fall into that category. Failing to properly label AI-generated content would be considered a “serious offense.”
“AI is a very powerful tool that can be used to improve our lives … or to spread misinformation,” Spain’s Digital Transformation Minister Oscar Lopez said in a statement sent to Reuters.
In addition to its rules on deepfake labeling, the legislation also bans the use of so-called “subliminal techniques” on certain groups classified as vulnerable. It would also place new limits on organizations attempting to use biometric tools like facial recognition to infer individuals’ race, political views, religious beliefs, or sexual orientation. The bill still needs to be approved by Spain’s lower house to become law. If it does, Spain will become the first country in the EU to enact legislation enforcing the AI Act’s guidelines around deepfakes. It could also serve as a template for other nations to follow.
A handful of US states are taking the lead on deepfake enforcement
The newly proposed South Dakota bill, by contrast, is more narrowly tailored. It requires individuals or organizations to label deepfake content if it is political in nature and created or shared within 90 days of an election. The version of the bill that advanced this week includes exemptions for newspapers, broadcasters, and radio stations, which had reportedly expressed concerns about potential legal liability for unintentionally sharing deepfake content. The bill also includes an exception for deepfakes that “constitute satire or parody,” a potentially broad and difficult-to-define carveout.
Still, watered down as it may be, South Dakota’s bill represents the latest addition to a growing patchwork of state laws aimed at curbing the spread of deepfakes. Texas, New Mexico, Indiana, and Oregon have all enacted legislation specifically targeting deepfakes designed to influence political campaigns. Many of these efforts gained momentum in 2024 after an AI-generated “digital clone” of President Joe Biden’s voice called voters in New Hampshire, urging them not to participate in the state’s presidential primary. The Biden fakery was reportedly commissioned by Steve Kramer, a political consultant who at the time was working for rival presidential candidate Dean Phillips’ campaign. Phillips later condemned the deepfake and said he wasn’t responsible for it. The Federal Communications Commission, meanwhile, hit Kramer with a $6 million fine for allegedly violating the Truth in Caller ID Act.
“I expect that the FCC’s enforcement action will send a strong deterrent signal to anyone who might consider interfering with elections, whether through the use of unlawful robocalls, artificial intelligence, or any other means,” New Hampshire Attorney General John Formella said in a statement.
Four states—Florida, Louisiana, Washington, and Mississippi—have enacted laws criminalizing the distribution of nonconsensual, AI-generated sexual content. This type of material, sometimes referred to as “revenge porn” when directed at an individual, is the most common form of harmful deepfake content already spreading widely online. An independent researcher speaking with Wired in 2023 estimated that 244,625 deepfake pornography videos had been uploaded to the top 35 deepfake porn websites over the previous seven years. Nearly half of those videos were uploaded in the final nine months of that period.
The surge in uploads suggests that easier-to-use, more convincing generative AI tools—combined with a lack of meaningful safeguards—are making nonconsensual deepfakes more commonplace. Lawmakers have a personal stake in the issue as well. A study released last year by the American Sunlight Project (ASP) found that one in six women in Congress had been targeted by AI-generated sexual deepfakes.
Efforts to rein in deepfakes at the federal level have been slower moving, though that may be about to change. Earlier this month, First Lady Melania Trump spoke out in support of the “Take It Down Act,” a controversial bill that would make it a federal crime to post nonconsensual intimate imagery (NCII) on social media platforms. If passed, the bill would also require platforms to remove NCII content—and any duplicates—within 48 hours of it being reported. The bill has already passed the Senate and could come up for a vote in the House in the coming weeks or months.
“It’s heartbreaking to witness young teens, especially girls, grappling with the overwhelming challenges posed by malicious online content, like deepfakes,” Melania Trump said. “This toxic environment can be severely damaging.”
The persistent problem with anti-deepfake laws
Though the goal of limiting deepfakes is a laudable one, critics worry some of the laws and bills being pursued go too far. The Electronic Frontier Foundation (EFF) has spent years warning that the overly broad language in several state laws targeting political deepfakes could be abused, potentially criminalizing ads that simply use dramatic music or piece together authentic video clips in ways perceived as damaging to a candidate. The EFF also takes issue with the Take It Down Act and bills like it, which it says create an incentive to falsely label legal speech as a nonconsensual deepfake in order to have it censored.
“While protecting victims of these heinous privacy invasions is a legitimate goal, good intentions alone are not enough to make good policy,” EFF Senior Policy Analyst Joe Mullin said.
The start of 2025 could mark an inflection point in global efforts to combat AI-generated deepfakes. More European countries are likely to follow Spain’s lead and propose new legislation criminalizing the creation or spread of deepfakes. While the specifics of these laws may vary, they are united in part by the underlying frameworks established in the EU AI Act.
Meanwhile, the U.S. is on the verge of passing its first federal bill banning deepfakes. In all of these cases, however, it remains to be seen how effectively lawmakers can wield these new legal tools. Deep-pocketed tech companies and political campaigns targeted by the laws will likely pursue legal challenges that strain government resources. The outcome of those potentially lengthy legal battles could determine whether deepfake laws can actually accomplish what they set out to do without curtailing free speech in the process.