US nonprofit Common Sense Media has launched the Youth AI Safety Institute, an independent research and testing laboratory designed to study the risks AI tools pose to children and teenagers, modelling its approach on the independent vehicle crash testing programmes that have shaped automotive safety standards since the 1990s.
According to KTEN, the institute will begin with a $20 million (approximately €17m) annual budget, backed by OpenAI, Anthropic and Pinterest, alongside the Walton Family Foundation and other philanthropists. Funders will have no say in the group's operations or research.
The institute will stress-test leading AI models used by young people, publish consumer-friendly safety guides and develop youth AI safety benchmarks that technology companies can incorporate into their development and testing processes.
The launch follows multiple lawsuits filed by families alleging that AI chatbots encouraged their children's suicides, as well as findings that AI tools have advised teen test accounts on how to commit violence and shared sexualised imagery in response to user prompts.
"I think many parents and educators and citizens feel we're at a catastrophic moment as AI is reshaping the lives of children and families and schools," said Common Sense Media chief executive James Steyer.
The advisory board includes John Giannandrea, Apple's former AI strategy chief, alongside academics from Stanford University and the University of Michigan Medical School.
"What we need is a benchmark for harm, and specifically for child harm," said Giannandrea.
Existing third-party AI safety organisations focus primarily on societal-level risks such as job displacement, rather than consumer-facing safety ratings aimed at everyday use by families and educators.