Microsoft says threat actors won't be able to use official AI tools to create deepfakes anymore
2024 is expected to be the biggest election year in history so far.
Microsoft and 20 other leading tech companies, including Adobe, Amazon, Google, IBM, Meta, OpenAI, TikTok, and X, have pledged to make it harder for threat actors to use legitimate AI tools to create deepfakes.
The pledge, announced a few days ago, is meant to combat the rising use of deepfakes in elections. 2024 is set to be the biggest election year in history so far, with 64 countries around the world electing leaders, and given the current global political climate, these companies say combating false information is essential.
As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections. AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.
Brad Smith, Microsoft
The pledge, which takes effect immediately, can be found here, and it comes days after Microsoft and OpenAI released a report chronicling the rise of AI-generated malware among global threat actors, including but not limited to Russian, Iranian, North Korean, and Chinese groups that have used AI to infiltrate systems and compromise information.
While Microsoft is planning to use Copilot for Security to detect and deal with AI-generated malware, the pledge also rests on three key points:
- The signatories plan to make it more difficult for bad actors to use legitimate tools to create deepfakes.
- The tech sector will work together to detect and respond to deepfakes in elections globally.
- The accord will help advance transparency and build societal resilience to deepfakes in elections.
These steps will be supported by the release of several new platforms: Content Credentials as a Service, which aims to give politicians around the world an official way to authenticate their content; the Microsoft-2024 Elections site, where politicians are encouraged to report any deepfakes of themselves; and the Global Internet Forum to Counter Terrorism, where governments and tech officials will work together to ensure elections are carried out in a just and democratic way.
Microsoft says it’s essential for companies to work together with governments so that countries and people don’t fall victim to fake news and false information.
Among other areas, this will be essential to address the use of AI deepfakes by well-resourced nation-states. As we’ve seen across the cybersecurity and cyber-influence landscapes, a small number of sophisticated governments are putting substantial resources and expertise into new types of attacks on individuals, organizations, and even countries. Arguably, on some days, cyberspace is the space where the rule of law is most under threat. And we’ll need more collective inter-governmental leadership to address this.
Microsoft
What do you think of this? Will Microsoft be successful in combating AI deepfakes?