Microsoft wants you to know it will legally deal with abusive AI-Generated content

The crackdown also covers AI-based cyberthreats.





Microsoft’s Digital Crimes Unit (DCU) has filed a complaint in the Eastern District of Virginia to halt cybercriminals from manipulating the safety protocols of generative AI services, including Microsoft’s, to produce harmful and abusive content.

Generative AI is a technology that can produce media like images, text, and videos that are often indistinguishable from what a human would create. However, the same benefits individuals and organizations can gain from generative AI can also be exploited by bad actors to produce abusive and harmful content.

As we saw last year, cybercriminals continued to innovate their tools and techniques to bypass even the most robust security measures, using all sorts of technology, including AI. The Redmond-based tech giant has even observed a foreign-based threat actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites.

In doing so, they sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services. Cybercriminals then used these services and resold access to other malicious actors with detailed instructions on using these custom tools to generate harmful and illicit content.

Upon discovery, Microsoft revoked cybercriminal access, implemented countermeasures, and enhanced its safeguards to block such malicious activity in the future.

In a blog post, Microsoft says it is working to disrupt and deter these and other malevolent actors from weaponizing its AI services and has taken legal action to do so, including:

  1. Seeking a court order to seize a website instrumental to the criminal operation.
  2. Working to decipher how the creators of the illicit tools and services are monetizing their efforts.
  3. Engaging with industry partners to strengthen safeguards against the abuse of generative AI across the tech sector.

Beyond legal action and the ongoing strengthening of its safety guardrails, Microsoft continues to pursue additional proactive measures and partnerships to tackle online harms, while advocating for new laws that give government authorities the tools they need to combat the abuse of AI, particularly when it is used to harm others. Microsoft recently released an extensive report, “Protecting the Public from Abusive AI-Generated Content,” which sets forth recommendations for industry and government to better protect the public, and specifically women and children, from actors with malign motives.

You can read Microsoft’s statement on the topic here.
