Microsoft and Google offer new Child Safety Commitments for AI

Developing AI models to protect kids from AI models



Two of the biggest brands offering generative artificial intelligence services have come together to address the harm their AI platforms can pose to child safety.

In a post on its Microsoft on the Issues blog, the company announced its partnership with Thorn, a non-profit organization committed to protecting children from sexual abuse, and All Tech Is Human, which aims to tackle the risks generative AI poses to children.

As part of a new safety pact dubbed Safety by Design, Microsoft has committed to the following three tenets to transparently address how it protects children from harm by its own AI services:

  • DEVELOP: Develop, build, and train generative AI models to proactively address child safety risks.
  • DEPLOY: Release and distribute generative AI models after they have been trained and evaluated for child safety, providing protections throughout the process.
  • MAINTAIN: Maintain model and platform safety by continuing to actively understand and respond to child safety risks.

Across the business divide, Google also penned an update to its Safety and Security blog that echoes Microsoft’s partnership with Thorn and All Tech Is Human.

Google’s voluntary commitment to address AI-generated child sexual abuse material (CSAM) with Thorn and All Tech Is Human includes:

  • Training datasets: We are integrating both hash-matching and child safety classifiers to remove CSAM as well as other exploitative and illegal content from our training datasets (a minimal sketch of the hash-matching idea follows this list).
  • Identifying CSAE-seeking prompts: We use machine learning to identify prompts seeking child sexual abuse and exploitation (CSAE) material and block them from producing outputs that may exploit or sexualize children.
  • Adversarial testing: We conduct adversarial child safety testing across text, image, video and audio for potential risks and violations.
  • Engaging experts: We have a Priority Flagger Program where we partner with expert third parties who flag potentially violative content, including for child safety, for our teams’ review.
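To make the hash-matching step concrete, here is a minimal sketch of how a training-data pipeline might drop files whose hashes appear on a blocklist of known abusive material. This is an illustration only: the blocklist file, corpus directory, and use of exact SHA-256 digests are assumptions, and production systems match against perceptual-hash services such as CSAI Match or PhotoDNA through vetted APIs rather than local cryptographic hashes.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 hex digest of a file, reading in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_blocklist(path: Path) -> set[str]:
    # "blocklist.txt" is a hypothetical stand-in for a vetted hash list,
    # one hex digest per line.
    return {line.strip().lower() for line in path.read_text().splitlines() if line.strip()}

def filter_training_files(candidates: list[Path], blocklist: set[str]) -> list[Path]:
    """Keep only files whose hash is not on the blocklist."""
    kept = []
    for path in candidates:
        if sha256_of(path) in blocklist:
            # A real pipeline would also log and report the match,
            # not just silently drop the file.
            continue
        kept.append(path)
    return kept

if __name__ == "__main__":
    blocklist = load_blocklist(Path("blocklist.txt"))    # hypothetical hash list
    files = sorted(Path("raw_corpus").glob("**/*.jpg"))  # hypothetical corpus
    clean = filter_training_files(files, blocklist)
    print(f"kept {len(clean)} of {len(files)} files")
```

Exact-hash matching only catches byte-identical copies of known material, which is why Google pairs it with classifiers that can generalize to novel content.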

Google is building on its decades of work with other like-minded NGOs, industry peers, and law enforcement to combat CSAM by opening up the dedicated APIs in its free-to-license Child Safety Toolkit. Third-party partner organizations can use the toolkit’s APIs to detect and report CSAE content that could surface through its generative AI services.
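As an illustration of what a partner integration might look like, here is a hedged sketch of sending an image to a content-classification endpoint for review prioritization. The endpoint URL, request fields, and response shape below are hypothetical placeholders, not Google’s documented interface; real access to the Child Safety Toolkit requires applying to Google and receiving partner credentials.

```python
import requests

# Hypothetical endpoint and key for illustration only; the actual
# Child Safety Toolkit APIs are documented to approved partners by Google.
API_URL = "https://example.googleapis.com/v1/content:classify"  # placeholder
API_KEY = "YOUR_API_KEY"  # issued to vetted partner organizations

def classify_image(image_bytes: bytes) -> dict:
    """POST an image and return the service's JSON verdict."""
    response = requests.post(
        API_URL,
        params={"key": API_KEY},
        files={"image": image_bytes},  # hypothetical field name
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    with open("upload.jpg", "rb") as f:  # hypothetical user upload
        verdict = classify_image(f.read())
    # A high-priority verdict would route the content to trained human
    # reviewers and, where the law requires, to bodies such as NCMEC.
    print(verdict)
```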

Microsoft is also looking to engage with policymakers to cement future standards for addressing CSAM in light of expanding AI capabilities that are producing unforeseen outcomes.

We will also continue to engage with policymakers on the legal and policy conditions to help support safety and innovation. This includes building a shared understanding of the AI tech stack and the application of existing laws, as well as on ways to modernize law to ensure companies have the appropriate legal frameworks to support red-teaming efforts and the development of tools to help detect potential CSAM.

Courtney Gregoire – Chief Digital Safety Officer
