
OpenAI details safety measures for teenagers in India

Artificial intelligence (AI) company OpenAI has released a new comprehensive safety framework for teen users in India. The Teen Safety Blueprint for India, published days ahead of the India AI Impact Summit, emphasizes age-aware AI behavior, parental controls and what OpenAI calls industry-leading prevention of AI-generated child sexual abuse material (CSAM) and child sexual exploitation material (CSEM). A key element of the framework is the stated principle that OpenAI will “prioritize safety ahead of privacy and freedom” for teens, which signals the philosophical calibration behind the approach.

The OpenAI logo is displayed on a mobile phone in front of a computer screen with the output of ChatGPT. (AP file)

The framework notes, “Teens are growing up with AI but aren’t grown up yet. We believe ChatGPT should meet them where they are – the way ChatGPT responds to a 15-year-old should be different from the way it responds to an adult.” The blueprint builds on three earlier steps: teen safety and privacy principles set out in September, the first global teen safety framework released in November, and a model spec update for teen safety in December. The India-specific iteration emphasizes that AI models must understand teens are not yet adults, and respond accordingly.

Given that India is one of OpenAI’s largest and fastest-growing markets, an India-specific blueprint is hardly surprising, and it will also have to account for the country’s diversity and local specifics. The blueprint outlines clear guardrails for users under 18, prohibiting responses that encourage suicide or self-harm, facilitate dangerous stunts, enable access to illegal substances, reinforce harmful body ideals, or allow graphic and immersive sexual or violent scenarios.

Beyond proposing better content moderation, OpenAI calls for a structural, layered redesign of how AI platforms identify, classify, and treat users under the age of 18. Central to this is the age prediction system that ChatGPT uses. “It looks at various signals associated with your account. For example, it may look at common topics you talk about or the time of day you use ChatGPT,” the technical documentation explains, while acknowledging that “no system is perfect”.
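To make the mechanism concrete, here is a minimal, hypothetical Python sketch of how weak account signals like those quoted above might be combined into an under-18 decision. The signal names, weights, and threshold are illustrative assumptions, not OpenAI’s actual system.

```python
# Hypothetical sketch of signal-based age scoring, loosely modeled on the
# quoted documentation ("common topics you talk about", "time of day").
# None of these signals, weights, or thresholds come from OpenAI.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    minor_topic_ratio: float          # share of chats about school, exams, homework
    daytime_use_ratio: float          # share of sessions during daytime/school hours
    stated_age: Optional[int] = None  # self-declared age, if provided

def estimate_is_minor(signals: AccountSignals, threshold: float = 0.5) -> bool:
    """Combine weak account signals into a single under-18 decision.

    When the score is ambiguous, the safer (teen) experience applies,
    mirroring the "no system is perfect" caveat in the documentation.
    """
    score = 0.0
    if signals.stated_age is not None and signals.stated_age < 18:
        score += 0.6                          # self-declaration is a strong signal
    score += 0.3 * signals.minor_topic_ratio  # topic mix, per the quoted docs
    score += 0.1 * signals.daytime_use_ratio  # time-of-day pattern
    return score >= threshold

# An account that mostly discusses homework during daytime hours and
# self-declares as 16 crosses the (assumed) threshold:
print(estimate_is_minor(AccountSignals(0.8, 0.9, stated_age=16)))  # True
```

In practice such a classifier would rely on far richer signals and machine learning rather than fixed weights; the sketch only shows why “no system is perfect” follows naturally from inference over noisy behavioral signals.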

Parental controls, such as letting parents link their account to their teen’s account, set blackout hours, and turn off memory and chat history, are key to keeping kids safe when interacting with AI.
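As a rough illustration of those controls, the hypothetical Python sketch below models a linked teen account with blackout hours and memory and chat-history toggles. The field names, defaults, and blackout logic are assumptions for illustration, not OpenAI’s actual schema.

```python
# Hypothetical representation of the parental controls described above.
# Field names and defaults are illustrative assumptions only.
from dataclasses import dataclass
from datetime import time
from typing import Optional

@dataclass
class ParentalControls:
    linked_parent_account: str             # parent account linked to the teen's
    blackout_start: Optional[time] = None  # start of nightly "blackout hours"
    blackout_end: Optional[time] = None
    memory_enabled: bool = False           # parents can turn off memory...
    chat_history_enabled: bool = False     # ...and chat history

def is_within_blackout(controls: ParentalControls, now: time) -> bool:
    """Return True if the teen's access should be blocked at this moment."""
    if controls.blackout_start is None or controls.blackout_end is None:
        return False
    start, end = controls.blackout_start, controls.blackout_end
    if start <= end:
        return start <= now < end
    return now >= start or now < end       # window wraps past midnight

controls = ParentalControls("parent@example.com", time(22, 0), time(6, 0))
print(is_within_blackout(controls, time(23, 30)))  # True: inside 22:00-06:00
```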

“We believe AI companies should identify adolescents on their platforms by using privacy-protective, risk-based age estimation tools to distinguish between adolescents and adults. Age estimation will help AI companies ensure they are implementing the right protections for the right users. This will facilitate age-appropriate experiences,” the framework states, proposing that more AI companies adopt a similar methodology so that it can eventually become an industry standard.
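One way to read “the right protections for the right users” is as a mapping from an estimated age band to a protection tier, with ambiguity resolved toward the safer tier. The Python sketch below is a hypothetical illustration of that idea; the bands, categories, and defaults are assumptions, not any company’s published policy.

```python
# Illustrative mapping from an estimated age band to a protection tier.
# The bands, rule categories, and actions are assumptions for this sketch;
# the categories echo the guardrails listed earlier in the article.
from enum import Enum

class AgeBand(Enum):
    LIKELY_MINOR = "likely_minor"   # estimation says under 18
    UNCERTAIN = "uncertain"         # signals are ambiguous
    LIKELY_ADULT = "likely_adult"

TEEN_PROTECTIONS = {
    "graphic_sexual_or_violent_content": "block",
    "self_harm_topics": "redirect_to_help",  # surface crisis resources
    "dangerous_stunts": "block",
    "harmful_body_ideals": "soften",
}

def protections_for(band: AgeBand) -> dict:
    # Risk-based default: ambiguous accounts get the teen experience,
    # which a verified adult could later lift.
    if band in (AgeBand.LIKELY_MINOR, AgeBand.UNCERTAIN):
        return TEEN_PROTECTIONS
    return {}

print(protections_for(AgeBand.UNCERTAIN))  # ambiguous users get teen rules
```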

There is a challenge unique to India that could complicate things: the RATI Foundation’s Ideal Internet Report 2024-25 states that 62% of Indian teens use shared devices. Shared-device use disrupts the assumptions built into most globally designed digital safety systems, which presume individual device ownership and English as the primary language. An Indian teen on a shared device is often one of several multilingual family members using the same account, which undermines per-account signals such as the topics discussed and the time of day a device is used.

Other AI companies also have policy guidelines for protecting child users. Google’s Gemini uses an age verification method to gate some conversations, and this age data synchronizes with Android and YouTube restrictions. In Search and AI responses, explicit self-harm, sexual content, illegal behavior and graphic violence are filtered by existing SafeSearch and moderation layers, which are extended into generative responses.

Meta, like OpenAI, requires users to be at least 13 years old before setting up an account. Meta uses machine learning to detect suspected minors, and generative AI responses to users under 18 default to a safe, non-graphic, and often non-directive mode.

Major questions remain. First, can generative AI systems that are open-ended, creative, and context-driven deliver restrictive, age-specific experiences without losing what makes them useful? Second, can age estimation stay accurate across cultural and regional nuances, where developmental patterns may diverge from chronological age?

When OpenAI says teens “are growing up with AI but aren’t grown up yet”, that is both a description and a warning.
