Microsoft details Purview classifiers that can help prevent abuse, harassment, and other illegal activities on Teams


It has been known for some time that Microsoft has been working on more ways for IT admins to monitor Teams chats and channels, and as spotted by Neowin, we now know the specific work being done under the hood. The Microsoft 365 Roadmap recently added several new classifiers for Microsoft Purview, the solution that scans a company's Microsoft 365 services for security and other risks against pre-set compliance standards.

To be specific, Microsoft is working on a total of eight new classifiers for Purview. The list covers sexual harassment, corporate sabotage, gifts and entertainment, money laundering, stock manipulation, workplace collusion, leavers, and unauthorized disclosures. Most of these classifiers are self-explanatory, but we’ve included a sample explanation below.

The sexual harassment classifier detects explicit instances of sexual harassment as may be outlined in your organization’s policies and code of conduct, such as sexual advances, sexual comments and sexual favors. Microsoft Purview Communication Compliance helps organizations detect explicit code of conduct and regulatory compliance violations, such as harassing or threatening language, sharing of adult content, and inappropriate sharing of sensitive information.

Once configured, if Purview’s AI spots anything relating to any of these themes, the messages or content are sent for moderation. Do note that the feature is built with privacy by design. This means that usernames are pseudonymized by default, role-based access controls are built in, investigators must be explicitly opted in by an admin, and audit logs are in place to help ensure user-level privacy.
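The flow described above (scan a message, match it against a classifier theme, send the hit to moderation with a pseudonymized sender) can be illustrated with a minimal sketch. This is a hypothetical illustration only, not Microsoft's implementation: the theme keywords, the `pseudonymize` helper, and the `flag_for_moderation` function are all invented for this example.

```python
import hashlib

# Hypothetical theme phrases, loosely based on the classifier
# categories listed in the article (illustrative only).
THEMES = {
    "sexual harassment": ["sexual advances", "sexual comments", "sexual favors"],
    "money laundering": ["shell company", "layering funds"],
}

def pseudonymize(username: str) -> str:
    """Replace a username with a stable pseudonym (privacy by design)."""
    digest = hashlib.sha256(username.encode()).hexdigest()[:8]
    return f"user-{digest}"

def flag_for_moderation(sender: str, message: str):
    """Return a moderation item if the message matches a theme, else None.

    The real product uses trained AI classifiers; simple keyword
    matching here just stands in for that detection step.
    """
    text = message.lower()
    for theme, phrases in THEMES.items():
        if any(phrase in text for phrase in phrases):
            # Only the pseudonym reaches the moderation queue, so
            # reviewers do not see the sender's real identity by default.
            return {"sender": pseudonymize(sender), "theme": theme}
    return None
```

Note that the pseudonym is derived with a stable hash, so repeated messages from the same user map to the same pseudonym without exposing the underlying name.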

These classifiers are set to hit preview in June 2022, with general availability expected to follow in September 2022.