Character.AI Enforces Age Restrictions on Chatbot Access for Teen Users
Character.AI has announced that it will restrict teenagers from engaging with its chatbots, responding to growing concerns regarding the safety of younger users in the rapidly evolving field of artificial intelligence. Effective November 25, the platform will discontinue open-ended chat interactions for individuals under 18, transitioning to a controlled environment aimed at fostering creativity rather than companionship.
In the lead-up to these changes, Character.AI will roll out an interim experience for users under 18 that promotes activities such as video creation and streaming while capping chatbot interaction at two hours per day, a limit that will shrink as the November deadline approaches.
To further enhance user safety, Character.AI is launching an internal age verification tool designed to ensure that interactions are age-appropriate. Additionally, the company is establishing an “AI Safety Lab” that will facilitate collaboration among tech companies, researchers, and academics, aimed at advancing AI safety protocols.
The necessity for such measures has been underscored by recent concerns about the potential risks associated with teens relying on AI for guidance. This follows a high-profile lawsuit alleging that a major AI platform contributed to a tragic incident involving a teenager. As AI technology continues to permeate daily life, Character.AI’s proactive approach seeks to balance innovation with user protection.
Key Takeaways:
– Beginning November 25, Character.AI will end open-ended chat interactions for users under 18.
– A temporary experience for teens will encourage safe and creative uses of AI.
– Interaction with chatbots will be limited to two hours daily, decreasing as the deadline nears.
– An internal age assurance tool will be introduced to help ensure age-appropriate interactions.
– An “AI Safety Lab” aims to promote research and safety improvements within the industry.
