Meta Announces New AI Parental Controls for Teens on Instagram, WhatsApp & Facebook
Parental oversight is slowly becoming the norm in the AI industry
As AI usage continues to rise among teens, major companies are starting to hand parents more control over how their children interact with these tools. OpenAI, for instance, recently launched new parental controls for ChatGPT, letting families link accounts, restrict certain features, and filter unsafe content when teens use the AI assistant.
Meta has now followed suit, today announcing new AI safety measures across its social platforms: Instagram, WhatsApp, Threads, and Facebook. The new parental controls are designed to give parents greater oversight and peace of mind.
Per the update, parents will be able to block one-on-one chats with Meta’s AI characters, monitor general conversation themes, and disable specific AI assistants altogether. Meta says its main AI assistant will remain available with age-appropriate restrictions, while one-on-one chats can be completely turned off.
Parents will also gain visibility into what kinds of topics their teens are exploring with AI, without breaching privacy. In a statement, Meta emphasized that AI is meant to complement real-world experiences, not replace them. “We believe AI can support learning and exploration with proper guardrails,” the company said.
The new changes come amid increasing global scrutiny of how social media platforms handle teen mental health and AI interactions. Meta says the updated parental supervision tools will first roll out on Instagram next year, starting in English for users in the U.S., U.K., Canada, and Australia, before expanding to more regions and languages.