As concerns grow over the impact of artificial intelligence on minors, OpenAI has rolled out an “age prediction” feature within ChatGPT, aimed at identifying younger users and applying appropriate content restrictions.
This introduction comes in the wake of significant criticism directed at OpenAI over the risks ChatGPT may pose to children. Several tragic incidents, including adolescent suicides, have been linked to interactions with the chatbot. OpenAI also faced scrutiny last April after a flaw allowed the AI to generate adult content for users younger than 18.
OpenAI’s “age prediction” feature continues its efforts to make the service safer for younger audiences. The capability uses AI algorithms to evaluate various “behavioral and account-level signals” to estimate a user’s age. According to the company’s blog post, these signals include the user’s declared age, how long the account has existed, and the typical hours of activity.
To bolster protection, OpenAI has already implemented content filters that block discussions of sexual topics, violence, and other sensitive subjects for users under 18. If the age prediction algorithm classifies a user as underage, these filters are applied automatically.
In cases where a user is incorrectly tagged as underage, OpenAI offers a recourse: the user can verify their adult status by submitting a selfie through its ID verification partner, Persona.
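To make the described flow concrete, here is a minimal, purely illustrative sketch of how an age gate combining these kinds of signals might be structured. The signal names, thresholds, and functions below are assumptions for illustration; OpenAI has not published its actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AccountSignals:
    declared_age: int                # age the user stated at sign-up
    account_created: datetime        # used to derive account longevity
    typical_active_hours: list[int]  # hours of day (0-23) the account is usually active

def predict_is_minor(signals: AccountSignals) -> bool:
    """Toy heuristic combining the signal types OpenAI describes.

    The real system is not public; these thresholds are invented.
    """
    if signals.declared_age < 18:
        return True
    account_age = datetime.now() - signals.account_created
    is_new_account = account_age < timedelta(days=30)
    # Treat heavy after-school / evening activity as a weak indicator of a minor.
    evening_hours = sum(1 for h in signals.typical_active_hours if 15 <= h <= 22)
    evening_heavy = evening_hours > len(signals.typical_active_hours) / 2
    return is_new_account and evening_heavy

def content_filters_enabled(signals: AccountSignals, verified_adult: bool) -> bool:
    """Under-18 filters apply when the classifier flags the user,
    unless the user has completed adult ID verification."""
    if verified_adult:
        return False
    return predict_is_minor(signals)
```

In this sketch, a user flagged as underage gets the stricter content filters by default, and completing ID verification (the Persona appeal path described above) overrides the classification.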
Key Points:
– New “age prediction” feature aims to protect minors on ChatGPT.
– Developed in response to concerns about AI’s effects on youth.
– Utilizes behavioral data to assess user age.
– Existing content filters provide additional safeguards.
– Users can appeal incorrect age classifications through ID verification.
