Microsoft, Nvidia, Google, and many others create CoSAI to oversee the secure development of AI

Sponsors include Amazon, Anthropic, Cisco, Chainguard, Cohere, GenLab, OpenAI, and Wiz.






To ensure artificial intelligence (AI) technology is developed and utilized safely and securely, Microsoft, NVIDIA, Google, and many other renowned tech companies have aligned to establish the Coalition for Secure AI (CoSAI).

CoSAI aims to guarantee that AI systems operate within boundaries defined by human values while promoting technological advancements that benefit everyone.

According to the announcement, CoSAI will serve as an open-source forum where developers can find guidance and tools for building AI systems that are secure from the start. Its goal is straightforward: to ensure that AI systems are safe, ethical, and fair for everyone who uses or interacts with them.

CoSAI envisions a world where safety measures in AI development are not an afterthought but a fundamental part of the process, much like building a house on a sturdy foundation from the start. The idea is simple: if people everywhere are to benefit from powerful future technologies such as Artificial General Intelligence (AGI), those systems must be designed to remain beneficial rather than harmful, no matter how capable they become.

CoSAI's main goal is to ensure that any AGI system developed benefits everyone, preventing these technologies from being used in ways that harm humanity or lead to an unfair concentration of power. Because AGIs could eventually outperform humans in almost every field, caution during development is urgent: without adequate safety measures built in from the start by researchers worldwide, such systems might not act in our best interests once they surpass human-level abilities.

But how does CoSAI plan to make this happen? First and foremost, it emphasizes strengthening the software supply chain for AI systems. That means tracking the provenance of everything that goes into an AI application and ensuring its components are protected against tampering.
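CoSAI has not published any tooling yet, so as an illustrative sketch only, here is one common supply-chain control of the kind the announcement alludes to: pinning an artifact's cryptographic digest and verifying it before use, so any alteration is detected. The file contents and the idea of a signed manifest here are hypothetical.

```python
# Hypothetical example: verify an AI artifact (e.g., model weights) against a
# pinned SHA-256 digest before loading it, rejecting any tampered copy.
import hashlib


def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256


model_bytes = b"example model weights"
# In practice the pinned digest would come from a signed manifest or registry.
pinned = hashlib.sha256(model_bytes).hexdigest()

assert verify_artifact(model_bytes, pinned)             # untampered artifact passes
assert not verify_artifact(model_bytes + b"x", pinned)  # any modification fails
```

Real deployments typically go further, signing the manifest itself so the pinned digest cannot be swapped alongside the tampered artifact.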

Another crucial area is preparing defenders for a changing cybersecurity landscape. As AI and classical systems become increasingly interconnected, integrating them securely gets harder. CoSAI aims to chart a path that developers and organizations can follow with confidence.

AI security governance is just as important. The field needs standard methods and risk-assessment frameworks to navigate the complexities of AI security, and CoSAI's objective is to make that process clear and simple enough for everyone to put the necessary safeguards in place.

Guiding this effort are professionals from industry and academia, united in steering CoSAI toward a more secure AI future. Backed by the biggest players in technology and led by experienced practitioners, CoSAI is well placed to make a noticeable difference in how safety is handled for AI systems.

In a world where AI holds great promise but also serious risks, the creation of CoSAI is a welcome step. It shows how seriously the tech industry takes ensuring that the future of AI is not only promising but also safe and secure. As we stand at the start of a new era for AI, CoSAI's efforts offer hope that the technology can grow without security threats hanging over it.
