It was just a matter of time: Microsoft says hackers are using AI to enable cyberattacks

Microsoft detected such hackers from Russia, China, North Korea, and Iran.




In a recently published report, Microsoft and OpenAI say they detected multiple threat actors from Russia, China, North Korea, and Iran using generative AI to enable and launch cyberattacks.

While it was only a matter of time until AI was used this way, it still comes as a shock, given AI's ability to convincingly emulate real identities, styles of communication, and so on.

However, Microsoft, in partnership with OpenAI, is researching these cyberattacks to understand how they work and how they can be stopped before they wreak havoc on their targets.

The two companies identified no fewer than five threat actors: the Russian Forest Blizzard, the Iranian Crimson Sandstorm, the North Korean Emerald Sleet, and the Chinese Charcoal Typhoon and Salmon Typhoon.

All of them use AI in various ways, from generating malware content to researching vulnerabilities, in order to infiltrate, disrupt, and damage targets such as government institutions. However, Microsoft was able to identify and disable all accounts related to these threat actors.

To further combat these attacks, Microsoft and OpenAI will continue researching AI-based cyberattacks, which includes making ChatGPT and Copilot more secure.

The objective of Microsoft’s partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse. As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models. 

Microsoft

The Redmond-based tech giant recently released Microsoft Security Copilot, which the company says will be updated to detect AI-based cyberattacks.

Hackers using AI: How should customers prepare for AI-based cyberattacks?

Organizations using Microsoft services and products should deploy Microsoft Security Copilot and keep it updated, along with the rest of their Windows devices and Microsoft apps.

As with any other cyberattack, customers are advised not to open anything that seems suspicious, from emails and attachments to Teams meeting links.

Per Microsoft’s report, AI-based cyberattacks are fairly complex and can bypass many security authentication methods. However, both ChatGPT and Copilot should be safe, as Microsoft and OpenAI will regularly update their security measures.

As we said earlier, hackers using AI to launch cyberattacks was only a matter of time; being aware that it can happen is the first step to preventing it.

More about the topics: AI, Cybersecurity