Slack denied using customer data to train its AI, but users remain skeptical

Its Privacy Principles documentation suggests otherwise





Recently, Slack was accused of using user data, such as files and messages, to train its AI and machine learning models.

The reports were based on the company’s Privacy Principles documentation, which users pointed out on Twitter and in which Slack states:

Machine learning (ML) and artificial intelligence (AI) are useful tools that we use in limited ways to enhance our product mission. We do not develop LLMs or other generative models using customer data. To develop non-generative AI/ML models for features such as emoji and channel recommendations, our systems analyze Customer Data (e.g. messages, content and files) submitted to Slack as well as Other Information (including usage information) as defined in our privacy policy and in your customer agreement.

The last line implies that the company uses Customer Data, including content, files, and messages, to develop non-generative AI/ML models for features such as emoji and channel recommendations. This is not a new practice; most companies working with AI and ML rely on similar data, so the news itself was not shocking. Still, it left many users upset.

When Neowin reached out to Slack for a comment, a Slack representative said:

Slack has used machine learning for other intelligent features (like search result relevance, ranking, etc.) since 2017, which is powered by de-identified, aggregate user behavior data. These practices are industry standard, and those machine learning models do not access original message content in DMs, private channels, or public channels to make these suggestions.

Slack also outlined the types of data that could be used to train its global models:

  • A timestamp of the last message sent in a channel can help Slack recommend channels to archive.
  • The # of interactions between two users is incorporated into the user recommendation list when a user goes to start a new conversation.
  • The # of words overlapping between a channel name and other channels can inform its relevance to that user.

The company further explains that it uses industry-standard, privacy-protective machine learning techniques and that data does not leak across workspaces, so its models cannot learn, memorize, or reproduce customer data.
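
For context, the signals Slack describes above amount to simple aggregate counts and timestamps rather than message content. The short Python sketch below is purely illustrative and based on our own assumptions (it is not Slack’s code, and the function names are made up); it shows roughly what such de-identified features might look like:

```python
# Purely illustrative sketch of de-identified, aggregate signals like those
# Slack describes above (hypothetical code, not Slack's implementation).
from datetime import datetime, timezone


def channel_name_overlap(channel_a: str, channel_b: str) -> int:
    """Count the words two channel names share, e.g. 'proj-alpha-design' vs 'proj-alpha-eng'."""
    return len(set(channel_a.lower().split("-")) & set(channel_b.lower().split("-")))


def days_since_last_message(last_message_ts: float) -> float:
    """How stale a channel is, based only on the timestamp of its last message."""
    now = datetime.now(timezone.utc).timestamp()
    return (now - last_message_ts) / 86400


# Only counts and timestamps are involved; no message text is read.
print(channel_name_overlap("proj-alpha-design", "proj-alpha-eng"))  # 2
print(round(days_since_last_message(1_700_000_000.0)))              # days since that timestamp
```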

Customers who still don’t want their data used in any way can opt out. However, the fact that opting out requires composing an email, rather than flipping a switch in the Settings section, only annoyed the already upset customers further.

If you’re wondering how to opt out, the company explains:

To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at [email protected] with your workspace/org URL and the subject line ‘Slack global model opt-out request’. We will process your request and respond once the opt-out has been completed.

What are your views on this matter? Share your thoughts with our readers in the comments section below.
