The new Azure safety features will protect your AI model and generations

The new system will block malicious prompts, and prevent AI hallucinations

Reading time icon 3 min. read


Readers help support Windows Report. We may get a commission if you buy through our links. Tooltip Icon

Read our disclosure page to find out how can you help Windows Report sustain the editorial team Read more

Azure being defended by new safety features

Microsoft brings new safety features for Azure customers. Thus, you can use the Azure AI Studio tools to detect AI hallucinations and malware. Additionally, they can block malicious prompts from any model from the Azure platform. On top of that, you can access and use them with ease.

The new Azure safety features will protect those of us who don’t hire red teamers to test the AI services. In addition, Microsoft aims to defend users from injection attacks and hateful comments. Also, you can take a look at the video below to learn how to protect your Azure AI applications.

Microsoft learned from past errors

According to The Verge, Microsoft learned from AI controversies like Gemini’s historically inaccurate images or Designer’s celebrity fakes. So, the new safety features for Azure will restrict prompts. Thus, users won’t be able to generate misleading content.

Azure AI has three safety features you can preview: Prompt Shields, Groundedness Detection, and safety evaluations. The first feature prevents malicious queries that trick AI into generating content against their training and restrictions. The second one finds and blocks hallucinations. The third one assesses vulnerabilities in your AI model and generations.

AI hallucinations are errors. Thus, a learning model might generate them upon encountering new types of content that don’t fit its training data. Threat actors can use hallucinations to trick the AI into believing that spammy emails or malware are harmless. This way, they can bypass security measures.

Microsoft also trained Azure’s safety features to check for banned words in queries or hidden prompts. If the system finds any, it won’t send them to the AI model. Afterward, it will verify the answer received to see if the AI hallucinated information that isn’t present in the prompt.

Azure’s safety features will help system administrators

The company added a user report that warns administrators about who’s responsible for triggering unsafe outputs. This feature is helpful to organizations using red teamers because it aids them in distinguishing them from wrongdoers.

By the way, if you use a less popular AI model, you might need to enable the safety features manually.

In a nutshell, the new Azure safety features protect you from injection attacks and hateful content. In addition, they prevent malicious users from using AI hallucinations to trick the models. Furthermore, they will generate a user report for you and assess the security vulnerabilities of your AI model and generations.

What are your thoughts? Are you an Azure user? Let us know in the comments.

More about the topics: AI, microsoft, Microsoft Azure