Microsoft pitched ChatGPT and DALL-E to the US Department of Defense
DALL-E could generate images to train battlefield management systems
Microsoft pitched the US Department of Defense (DoD) on using OpenAI and Azure AI tools, such as ChatGPT and DALL-E, to build software and support military operations. The Pentagon could also apply the tools to tasks like document analysis and machine maintenance.
According to The Intercept, Microsoft made its proposal to the DoD in 2023, while OpenAI's ban on military use was still in place; OpenAI removed that ban in 2024. Even so, company spokesperson Liz Bourgeois said that OpenAI's policies still don't allow its tools to be used to harm others.
There is a catch, however: OpenAI's models are also available through Microsoft Azure. So even if OpenAI won't sell them directly because of its policies, Microsoft can still offer its Azure OpenAI versions for warfare.
How is AI used in the military?
Microsoft's presentation to the DoD includes several examples of how the tools could be used for warfare. For instance, DALL-E could generate synthetic images to improve the training of battlefield management systems.
In addition, Azure OpenAI (AOAI) tools could help identify patterns, make predictions, and support strategic decisions. The DoD could also use AOAI for surveillance, scientific research, and other security purposes.
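For a sense of what generating synthetic training imagery through Azure OpenAI looks like in practice, here is a minimal, hypothetical sketch using the official openai Python SDK's AzureOpenAI client. The endpoint, deployment name, and prompt are illustrative placeholders, not details from Microsoft's presentation.

```python
import os

from openai import AzureOpenAI  # pip install openai

# Hypothetical setup: endpoint and key come from environment variables.
client = AzureOpenAI(
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

# Generate one synthetic image from an assumed DALL-E 3 deployment
# (the deployment name "dalle3" is a placeholder).
result = client.images.generate(
    model="dalle3",
    prompt="Aerial view of varied terrain for a simulation exercise",
    n=1,
    size="1024x1024",
)

# The service returns a URL to the generated image, which a training
# pipeline could download and add to a simulator's image library.
print(result.data[0].url)
```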
According to Anna Makanju, OpenAI's vice president of global affairs, the company began working with the Pentagon after it lifted the ban on military use. OpenAI still prohibits the use of its AI tools for warfare, but the Pentagon can use them for tasks like analyzing surveillance footage.
Could AI be a threat to humans?
The proposal has stirred some controversy. Brianna Rosen, who studies technology ethics, points out that a battle system is designed to cause harm, so if it is built on OpenAI's tools, their use would most likely breach the company's policies.
Heidy Khlaaf, a machine learning safety engineer, suggested that AI tools in the hands of the Pentagon and the DoD could become a threat. AI doesn't always generate accurate results, and output quality deteriorates when models are trained on AI-generated content. On top of that, image generators often fail to render the correct number of limbs or fingers, so they can't produce a realistic picture of the field.
Another concern is AI hallucination; the trouble with Google's image generator is a recent reminder of how badly generation can go wrong. And because models lean on prediction to fill gaps in their answers, a battle management system built on them could end up faulty.
Ultimately, Microsoft and OpenAI stand to earn billions from AI contracts with the US Department of Defense and the Pentagon. Their tools risk causing harm, especially when used for warfare training and surveillance, so both companies should work on reducing AI errors before mistakes turn into disasters. The US government should also be cautious with Microsoft's services, especially after the Azure data breaches.
What are your thoughts? Should governments use AI to enhance their military power? Let us know in the comments.