Is AI a looming threat? Microsoft's President Brad Smith expresses concern

There is a wide array of risk factors we haven't considered yet




AI and its implications

When we think of AI (Artificial Intelligence), ChatGPT or Microsoft Copilot is usually the first thing that comes to mind. That’s primarily because, for most of us, these chatbots are the only interaction we have had with one of the most significant developments of the 21st century. But AI is much more than that!

In recent months, we have witnessed considerable improvements, OpenAI’s Sora being a prime example. At the same time, the number of people raising concerns has been on the rise, and many of them have worked closely with the technology right from the start.

Geoffrey Hinton, often called the Godfather of AI, quit Google last year to warn the world about the risks AI poses in the absence of stringent regulations.

Another ex-Google employee, Mustafa Suleyman, has expressed concern that AI could be used to create another pandemic-like situation, one much worse than Covid-19.

What did Microsoft’s President Brad Smith say about AI?

In an interview with EL PAIS, Smith was asked whether AI could leave a large share of the workforce without jobs, and his response was rather reassuring.

Use this new technology to exercise better judgment, to be more creative, to develop ideas, to help with writing but not delegate or outsource thinking or writing to a machine. If we use it well, it can be an accelerator for people and the kinds of work they do.

This has been the stance of most experts in the field. AI is a tool, and it should be used as such to boost productivity and improve output!

Smith also emphasized adapting to the changing world of technology, tapping into AI’s potential, and putting it to use in whatever role you hold. Harnessing the technology is key!

Turning to the AI regulation introduced by the European Union (EU), Smith was optimistic that it would set a benchmark for safety and security standards and offer protection.

When we buy a carton of milk in the grocery store, we buy it not worrying about whether it’s safe to drink, because we know that there is a safety floor for the regulation of it. If this is, as I think it is, the most advanced technology on the planet, I don’t think it’s unreasonable to ask that it have at least as much safety regulation in place as we have for a carton of milk.

Smith also emphasized the need for a safety brake for AI, a subject he had previously raised in the U.S. Senate.

We better have a way to slow down or turn off AI, especially if it’s controlling an automated system like critical infrastructure.

Another concern raised during the interview was how deepfakes could affect us, especially in a year when roughly 64 countries head to the polls. Smith underscored the steps Microsoft has taken to tackle deepfakes and the need to educate people.

The more powerful the technology becomes, the stronger the safeguards and controls need to become with it. I think that all of us are going to have to push ourselves. The biggest mistake the tech sector could make is to think that it’s already doing enough and it can just do what it needs to be done if it’s left alone.

Regarding countries using AI to conduct cyberattacks, Smith had an insightful response, noting how AI itself can be used to counter such threats.

We’re not going to let nation state actors that engage in this kind of adversarial and harmful conduct have use of our applications because we regard that as something that is likely to do harm to the world. But we also need to use AI to fight back and create stronger cybersecurity protection.

Is AI really a threat?

Believe it or not, AI is still in its infancy, and its integration into everyday tasks will only increase over the years. It will certainly lead to some job losses, but so has every technological advancement!

It’s rather early to say whether AI poses an existential threat to humanity. But safeguards and regulations are still necessary, because they can address more pressing issues of the day, be it deepfakes, cyber fraud, or harassment!

Another concern is bias in the training of AI models. A recent example, at the time of writing, is Google’s Gemini image creator, which many reported was biased. After a massive uproar, Google temporarily halted the AI tool’s ability to generate images of people.

So, we must learn how to harness and better utilize AI and, at the same time, introduce safeguards to protect us from any unfavourable scenarios.

What do you think of AI and the threats posed by it? Share with our readers in the comments section.

