Details

  • OpenAI introduced new safety measures for ChatGPT on September 16, 2025, deploying automatic age prediction to give users under 18 a distinct, more protective experience with stricter content policies.
  • Parental controls will be added by the end of September 2025, letting parents link their accounts to their teens' accounts (ages 13+), manage features such as memory and chat history, and receive alerts when a teen shows signs of severe distress.
  • Key new safeguards include blackout hours that limit teen access, stricter blocking of explicit content, and escalation to law enforcement in rare emergencies when parents cannot be reached.
  • When age cannot be confidently established, the system defaults to the under-18 experience, and adults must verify their age to access the full platform, a design that prioritizes minor safety over privacy and user autonomy.
  • The rollout coincides with a Senate Judiciary Committee hearing on AI harms, follows a lawsuit alleging ChatGPT contributed to a teen's death, and comes amid an active FTC investigation into chatbot safety protocols.

Impact

OpenAI’s new safeguards mark a pivotal move in AI safety, arriving as lawmakers and regulators intensify scrutiny of chatbots and child protection. The introduction of age prediction could set a new baseline for industry standards and prompt rivals like Google and Microsoft to strengthen their own youth protections. These changes underscore the pressure on tech firms to balance innovation against the evolving demands of safety, regulation, and public trust.