OpenAI has introduced a new ChatGPT feature that aims to safeguard young users by predicting their age and enforcing appropriate content restrictions. The ‘age prediction’ capability responds to growing concern about AI’s impact on minors.
Recognizing the risks that AI interactions can pose to individuals under 18, OpenAI built the feature to identify underage users and apply content filters to their conversations.
Recent incidents linking ChatGPT to teenage suicides and to inappropriate conversations with minors have brought heightened scrutiny of OpenAI’s practices. In response, the company has added an algorithm that evaluates behavioral patterns and account-level signals to estimate each user’s age.
The ‘age prediction’ mechanism weighs factors such as the user’s self-reported age, account creation date, and typical activity hours to assign an age category. When an account is identified as belonging to someone under 18, ChatGPT automatically applies content filters that block exposure to sensitive topics.
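OpenAI has not published how these signals are combined, so the following is purely an illustrative sketch: a hypothetical rule-based classifier showing how the three reported inputs (self-reported age, account creation date, typical activity hours) could feed an age-category decision. Every threshold and rule here is an assumption for demonstration, not OpenAI's actual logic.

```python
from datetime import datetime, timezone

# Hypothetical sketch only -- OpenAI's real system is not public.
# Combines the three signals the article names into a coarse decision.

def classify_age_category(self_reported_age, account_created, active_hours):
    """Return 'under_18' or 'adult' from coarse account signals.

    self_reported_age: int or None (age entered at sign-up)
    account_created:   timezone-aware datetime of account creation
    active_hours:      list of ints 0-23, hours the user is typically active
    """
    signals_under_18 = 0

    # Signal 1: a self-reported age under 18 is taken at face value.
    if self_reported_age is not None and self_reported_age < 18:
        return "under_18"

    # Signal 2: very new accounts get less benefit of the doubt
    # (30-day cutoff is an invented example value).
    account_age_days = (datetime.now(timezone.utc) - account_created).days
    if account_age_days < 30:
        signals_under_18 += 1

    # Signal 3: usage concentrated in after-school / evening hours
    # (the 3pm-10pm window and 80% threshold are invented, too).
    school_evening = set(range(15, 23))
    if active_hours:
        evening_share = sum(h in school_evening for h in active_hours) / len(active_hours)
        if evening_share > 0.8:
            signals_under_18 += 1

    # When signals are ambiguous, err on the side of restriction --
    # consistent with the article's description of defaulting to filters.
    return "under_18" if signals_under_18 >= 2 else "adult"
```

The "default to restriction" tie-break at the end mirrors the article's account of the feature: when in doubt, filters are applied and the user can later verify their age to lift them.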
To rectify any misclassifications, users can undergo an account verification process by submitting a selfie to OpenAI’s partner, Persona. This additional layer of security underscores OpenAI’s commitment to fostering a safer online environment for young individuals engaging with AI-powered platforms.
Source: TechCrunch