How to Protect Yourself From Deepfakes, AI Sexual Harassment

Last week, sexually explicit deepfake images of Taylor Swift, reportedly made using artificial intelligence, were disseminated across social media. It was just the latest — and most high-profile — example of the growing dangers of deepfakes and digital harassment.

Swift is not the first woman this has happened to, nor will she be the last. It’s safe to say that almost every other victim of pornographic deepfakes has fewer resources than a billionaire pop star with millions of fans around the world who are very much online and internet-savvy. But the fact that this can happen even to Taylor Swift has brought renewed attention to the issue from a wider audience. It’s even jumpstarted federal legislation to combat the abuse, advocates say.

On Tuesday, US senators introduced the bipartisan Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (DEFIANCE Act) to address and deter nonconsensual deepfake pornography. The bill would allow victims of “digital forgery” — imagery “created through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means to falsely appear to be authentic” — to sue the perpetrators in civil court, giving them an opportunity to seek justice for the first time.

Victims of nonconsensual deepfake pornography have not had much success in suing people who create, distribute, or solicit deepfake pornography, according to Omny Miranda Martone, who worked with legislators on the bill as founder and CEO of the Sexual Violence Prevention Association (SVPA). The technology is still fairly new, and as with other societal issues involving tech, the law has yet to catch up. In the meantime, the technology has only advanced, making it even easier for anyone to create AI-generated deepfakes.

“The amount of sexual violence against everyday individuals has skyrocketed.”

“Until about two years ago . . . somebody needed to have a high-powered computer, they needed to know a lot about technology and be able to work a confusing system and potentially even build the system themselves. And they needed a lot of photos of the victim for it to look realistic,” Martone explains. “But AI technology has significantly advanced, and it’s at the point where anybody can create really realistic-looking nonconsensual, deepfake pornography, using just one or two photos. They don’t need to know a lot about technology, they can just go onto an app or a website on their phone or computer and just make it in a matter of seconds.”

That accessibility is all the more alarming given the scale of the problem even before these tools existed: by 2017, one in eight social media users reported being targets of nonconsensual pornography, and one in 20 adults on social media had been perpetrators, according to the Cyber Civil Rights Initiative.

Given these technological advancements — and the fact that “anyone with internet access can easily create” it — “the amount of sexual violence against everyday individuals has skyrocketed,” Martone says. Celebrities had been the most common targets because of the ready availability of their images. “But now [perpetrators] only need a couple of photos which they can get from somebody’s LinkedIn or social media or their company’s website,” Martone explains.

This is disproportionately damaging for women, according to Margaret Mitchell, the former co-lead of Google’s Ethical Artificial Intelligence team. “It’s extremely violating, it has an extremely traumatic psychological effect on you if you’re a victim of it. It also is objectifying, so it influences other people to see you through a sexual lens as opposed to a professional lens. Then that means it can affect opportunities,” she tells POPSUGAR.

“Women already struggle to be seen as leaders, to be taken seriously in the workplace. When this kind of content is floating around, that can affect how women are able to be promoted, get into more leadership positions,” Mitchell, who is now chief ethics scientist at Hugging Face, an open platform for AI builders, continues. “It seeps into so many things in a way that can sometimes be really hard to trace, but fundamentally influences how you see yourself, how you feel about yourself, and how other people treat you.”

In Swift’s case, the images originated from 4chan and a group on Telegram used for distributing abusive images of women, reported 404 Media. Designer, a free Microsoft AI generator, is often used to produce the nonconsensual content, according to the outlet. In response to the reporting, Microsoft closed a loophole that allowed Designer to make AI-generated celebrity porn.

But in the meantime, the images quickly flooded X, formerly known as Twitter. In an unusual example of content moderation in the Elon Musk era, X instituted a temporary block on searches for “Taylor Swift,” which slowed the spread, but the platform was not able to remove all of the images, CBS News reported. X’s response followed legions of Swifties reporting the posts and demanding the site take them down. It also came after the posts were already widespread — one had clocked 45 million views, 24,000 retweets, and hundreds of thousands of likes and bookmarks in the 17 hours before that verified user’s account was suspended, according to The Verge. By Monday, Jan. 29, the block had expired.

Although SVPA has been working with Congress for a while — Martone says the organization sent lawmakers an open letter last year urging them to take on the issue — it seems to Martone that “they didn’t take action until the Taylor Swift news got really big.”

“Even if you feel like nobody cares about your images, AI cares.”

Still, most victims of nonconsensual pornography aren’t high profile. Most are women, and many hold marginalized identities, whether related to their race, ethnicity, or sexual orientation. SVPA, for example, has worked with a college student who was hit on by a man at the gym and turned him down. A few weeks later, another gym member pulled her aside to let her know the man had made deepfake pornography of her, shown it to other members at the gym, and posted it on Instagram, where she was able to get it taken down. Martone says she herself recently had deepfake porn made of her, given her work on the issue.

Without legislation, victims have little legal recourse. Because the problematic content is acknowledged to be fake, it isn’t considered defamation, which requires a false statement presented as fact, Martone adds.

In any case, experts say, we should all work on preventing nonconsensual content from being made and shared. The fact that the problem is growing, and that the genie can’t be put back in the bottle, shouldn’t foster a sense of inevitability or numbness.

For those who think they’d never find themselves in this type of situation, Mitchell stresses that the technology has made this abuse so commonplace that anyone could be affected.

As Mitchell explains it: “People should be aware that any content they put online could potentially be swept up by private individuals, or by corporations in order to be used as part of AI training. So it might not be that some family pic on Facebook is something that the general public will care about. But that’s not the question. The question is, will this content be picked up by a massive tech corporation in a way that can help further the objectification of women?”

Ultimately, it’s the technology itself that you need to be aware of, and wary of. “Even if you feel like nobody cares about your images, AI cares,” Mitchell says. “AI systems can still sweep them up, they can still be used to misrepresent you, and they can still be used in any sort of revenge situation, and any sort of situation where you have someone who’s going to be a malicious actor like that can harm you personally.”

How to Prevent and Fight Nonconsensual Deepfakes

  1. Educate yourself so you can understand what’s happening. Know that sharing personal information online is potentially sharing that information with the public. “There needs to be a kind of a reimagining or re-understanding of what it is to share content online: when it’s appropriate to share stuff and when it’s not. It’s also about parents talking to their children about why it’s not okay to make deepfake porn of their classmates,” Mitchell says.
  2. Support journalism. “It’s important to know the role that the press plays, because sometimes that’s like the only counterbalance for really problematic behavior from companies or problematic behavior that companies are enabling,” Mitchell explains. “Because when there’s bad press that affects their stocks, that affects what the shareholders think, that affects what the venture capitalists think and how much money they can get. This is how we make change. We have journalists write about it. Companies pay attention to that — it reverberates throughout the market.”
  3. Be a good digital bystander. If someone makes or shares nonconsensual pornography with you — deepfake or not — don’t look at it, engage with it, share it, or even search for it online, because it will train the algorithms to provide more of it to more people, says Martone.
  4. Do a digital inventory of your image and information online. Be proactive about checking the sites where you post images of your face or body. Use the safety features available on each site or app, down to basics like making sure only people you know can see your photos on Facebook. If something does happen, Martone says, report it to the sites immediately. SVPA has a guide to taking down content you don’t want on sites, as well as a guide to preventing digital sexual violence aimed at content creators, but the principles apply to anyone. You may also find it useful to review the extensive resources on the Cyber Civil Rights Initiative website, including its FAQ, state-law guide, and research findings.
  5. Look for tools and apps that alter or protect images. It’s still a new area, but developers are working on technology to protect and “poison” images posted online, including tools like Nightshade, which makes subtle changes to an image that disrupt AI models trained on it. Mitchell says that as the need grows, more and better tools will be developed to combat these problems.
