OpenAI thinks ChatGPT could read your kid a bedtime story, as it introduces age-group GPTs

These GPTs would be meant for kids, young adults, and families.


Last week, OpenAI announced its partnership with Common Sense Media, the largest advocacy group for children and families in the US, in a bid to create, develop and release age-group GPTs.

The two organizations will work together to build GPTs that follow appropriate guidelines and that children, young adults, and families can use safely. OpenAI also plans to release a dedicated section of the GPT Store for these categories.

Common Sense Media, the nation’s leading advocacy group for children and families, announced a partnership with OpenAI to help realize the full potential of AI for teens and families and minimize the risks. The two organizations will initially collaborate on AI guidelines and education materials for parents, educators and young people, as well as a curation of family-friendly GPTs in the GPT Store based on Common Sense ratings and standards.

Common Sense Media

Sam Altman, OpenAI’s CEO, who was recently fired and then reinstated, also said that this partnership is exciting and will prove essential for kids and young adults seeking to expand their educational opportunities.

AI offers incredible benefits for families and teens, and our partnership with Common Sense will further strengthen our safety work, ensuring that families and teens can use our tools with confidence.

Sam Altman

The company wants to build age-group GPTs that can sustain a responsible environment for education and entertainment. But isn’t it risky?

Are OpenAI’s age-group GPTs a good idea?

Speaking of common sense, let’s start with the obvious question: if you are a parent, would you let AI read a bedtime story to your kid and trust that it won’t do any harm?

Young people are indeed more tech-savvy than older generations, and kids growing up with devices such as smartphones and tablets would surely embrace the idea.

However, just how safe could it be? ChatGPT relies on user input to work, and the tool can easily hallucinate. When it does, it can not only produce abusive language but also offer false information.

OpenAI says it will work with Common Sense Media to develop safe age-group GPTs that won’t hallucinate improper content. However, the tools will need to be checked and updated regularly to keep them within clear boundaries.

Another major concern is the privacy of kids and young adults, already a delicate issue for OpenAI: the company faces a series of investigations by both American and European authorities over possible violations of regional privacy laws.

It will be interesting to see just how the partnership between OpenAI and Common Sense Media addresses these issues.

But even with these issues addressed, let’s not forget that GPTs could be vulnerable to cyberattacks from external actors, and the resulting harm might not be easily traceable at first.

With all this in mind, maybe it’s best if we, as parents, keep bedtime stories a human interaction. AI is better used where the context is appropriate: schools, universities, or other settings where it can be managed and supervised.

What do you think?
