SAFEGUARDS

Tech Giants Promise to Crack Down on AI-Generated Child Porn

Deepfake nudes have plagued middle and high schools this year, and AI has made it easier for pedophiles to access explicit images of kids.

Artificial intelligence leaders including OpenAI, Meta, and Google have agreed to adopt child protection safeguards in response to an alarming rise in AI-generated child porn, The Wall Street Journal reported. Organized by the child-safety group Thorn and the ethical tech nonprofit All Tech Is Human, the agreement asks AI labs to avoid training models on data sets that could include explicit images of children and calls for more vigilance in shutting down back doors that allow such content to be generated. For instance, Thorn wants AI platforms and search engines to cut links to services that generate nude images of kids, a practice that has caused serious privacy problems at middle and high schools this year. Big tech, which famously moves at a breakneck pace, worries that sweeping safeguards could hinder innovation or lead to less useful models, but there is a dire need for self-regulation. In a report released Monday, the Stanford Internet Observatory found that the volume of AI-generated child porn is on the brink of overwhelming the single organization that monitors crimes against children.

Read it at The Wall Street Journal