More than a third of code generated by GitHub Copilot is insecure, research shows

36% of AI-generated code has security flaws






Microsoft is working hard on Copilot, and even though it’s a revolutionary tool, it’s not without its flaws.

According to the latest reports, code that is generated by GitHub Copilot might not be as safe as you think.

42% of applications have long-term security flaws

According to Help Net Security, 42% of applications and 71% of organizations suffer from security flaws that haven’t been addressed in more than a year.

To make matters worse, 46% of organizations have critical security debt that can put both businesses and users at risk.

As for the applications themselves, 63% have flaws in their own code, while 70% of third-party libraries contain security flaws.

Despite these alarming numbers, there is some good news: according to the research, the number of critical flaws has dropped by 50% since 2016.

AI is also a major contributor to modern codebases, and many developers use it daily. However, 36% of code written by GitHub Copilot contains security flaws, which is concerning.

It’s worth mentioning that 64% of applications have the capacity to fix their security flaws within a year, yet the majority of developers, despite having that capacity, are ignoring them.

Of all the security flaws found, only 3% are considered critical, so things aren’t as bleak as they seem in terms of security.

Hopefully, developers will use AI to address both long-standing and emerging issues more efficiently.

Microsoft is already using AI to combat cyberattacks, and it seems that other developers will have to follow suit.
