In a controversial move, Slack has been training the models it uses for its generative AI capabilities on user messages, files, and more, by default and without users’ explicit consent.
Instead (per Engadget), those who wish to opt out must do so through their organization’s Slack administrator, who must email the company to request an end to data usage.
The revelation that potentially sensitive information is being used to train Slack’s AI highlights the technology’s darker side: generative AI has already come under fire for improperly citing sources and for generating content that may infringe copyright.
Slack criticized for its use of customer data to train AI models
An excerpt from the company’s privacy principles page reads:
“To develop non-generative AI/ML models for features like emoji and channel recommendations, our systems analyze customer data (e.g., messages, content, and files) sent to Slack, as well as other information (including usage information) as defined in our Privacy Policy and in your customer agreement.”
Another passage reads: “If you would like to unsubscribe, please have your organization, workspace owners, or primary owner contact our Customer Experience team at feedback@slack.com…”
The company does not provide a timeframe for processing such requests.
In response to the community uproar, the company published a separate blog post to address the concerns, adding: “We do not build or train these models in such a way that they can learn, remember or reproduce customer data of any kind.”
Slack confirmed that user data is not shared with third-party LLM providers for training purposes.
Ny Breaking asked Slack’s parent company, Salesforce, to clarify a few details, but the company did not immediately respond.