
Photos of your children are being used to train AI without your permission, and there’s nothing you can do about it 


Human Rights Watch just completed a sweeping audit of AI training materials and revealed that pictures of children scraped from the internet were used to train models — without the consent of the children or their families.  

This already isn’t great, but it gets much worse.  

According to HRW: “Some children’s names are listed in the accompanying caption or the URL where the image is stored. In many cases, their identities are easily traceable, including information on when and where the child was at the time their photo was taken.” 

Did I mention it gets worse? Many of the images that were scraped weren’t publicly available on the internet but were hidden behind privacy settings on popular social media sites.  

In other words, some parents who thought they were doing everything right in sharing images of their kids are about to find out just how wrong they were.  

I’m not unsympathetic. I’m from Australia. I live with my wife and kids in the States. There was a time when social media seemed like the perfect vehicle to keep friends and loved ones up to date on my growing family. Ultimately, I realized that I was violating my kids’ privacy — and that later in life, they might not want these pictures online and available. 

Sharenting — posting information, pictures and stories about your kids’ lives online — has increasingly come under fire for a lot of very legitimate reasons. A three-year-old can’t meaningfully consent to their parents sharing their potty training fail video for the world to see. It might seem like innocent enough fun, but a three-year-old doesn’t stay three years old forever, and today’s children will have extensive information about them online well before they’re old enough to consent.  

But consent aside, HRW’s report reveals that even parents have no way of knowing what the long-term implications of sharenting might be. Ten years ago, nobody imagined that the photo album they shared of their family vacation might be ingested into machine learning. Real unintended consequences are already rolling out.  

Of course, a reasonable reading might be that this shouldn’t be allowed at all. Why do for-profit AI companies have the right to train on anybody else’s data? Let alone children’s? Let alone data hidden behind privacy settings?  

Surely the Federal Trade Commission will have something to say about this. Except that, as of last month, the FTC and every other federal agency has had its hands tied behind its back since the Supreme Court ruled against the Chevron doctrine — taking power out of the hands of federal agencies and giving it to the courts.  

“In one fell swoop, the majority today gives itself exclusive power over every open issue—no matter how expertise-driven or policy-laden—involving the meaning of regulatory law,” wrote Justice Elena Kagan in her dissent from the ruling. “As if it did not have enough on its plate, the majority turns itself into the country’s administrative czar.”  

If a federal privacy law wasn’t cooked before, it’s certainly cooked now. The overwhelming result will be to push privacy legislation back to the states. Meanwhile, federal decisions will stay in limbo as understaffed courts with no special insight into privacy try to wade through a workload they are neither prepared nor equipped for.  

While we wait, AI will continue scraping kids’ data — and, ultimately, whether or not that’s a perfectly legal thing to do will come down to the state you live in.  

Sharing photos of your kids’ Little League game might be a fun way to stay connected to family near and far, but until meaningful protections are in place, it’s a risk I wouldn’t advise anybody to take. We deserve data dignity; we deserve ethical technology; we deserve sound and responsible guardrails for AI. At present, we have none of that — and the Supreme Court’s decision adds a significant hurdle to ever achieving those things.  

In the meantime, Big Tech has been left to make its own rules. Perhaps the only way to get their attention is to delete the apps, stop posting and cease feeding the beast.  

State legislators can’t act fast enough. 

Jonathan Joseph is a board member of The Ethical Tech Project. 
