OpenAI and Anthropic, two prominent AI companies, have announced updates aimed at making their chatbots safer, particularly for teenage users. OpenAI has introduced new guidelines for ChatGPT covering interactions with users aged 13 to 17. These guidelines prioritize teen safety over other objectives, such as intellectual freedom, and direct the chatbot to steer teens toward safer options when needed.
The guidelines also emphasize promoting real-world support, encouraging offline relationships, and setting clear expectations for interactions with younger users. OpenAI stresses the importance of treating teens with care and respect, tailoring responses to their age group rather than adopting a condescending or overly adult tone.
As part of these changes, OpenAI is also developing an age prediction model to estimate users' ages and automatically apply appropriate safeguards to anyone identified as under 18. The aim is to provide stronger protective measures and to prompt users, especially teens, to seek offline support when conversations veer toward higher-risk topics or situations.
These updates reflect a concerted effort by OpenAI and Anthropic to improve the safety and well-being of young users engaging with AI-powered chatbots, and they illustrate the evolving landscape of AI ethics and responsible deployment.
Source: The Verge