Don’t blame Slack for training its AI on your sensitive data

Slack has come under fire for using customer data to train its global AI models and its generative AI add-on. Sure, requiring customers to opt out manually via email seems sneaky (isn't avoiding email the whole point of Slack?), but the messaging app doesn't bear all the responsibility here. The most popular workplace apps have all integrated AI features, including Slack AI, the Jira AI-Powered Virtual Agent, and Gemini for Google Workspace. Anyone using technology today, especially for work, should assume that their data will be used to train AI. It is therefore up to individuals and companies to prevent sensitive data from being shared with third-party apps. Anything less is naive and risky.

Rohan Sathe

Co-founder and CTO of Nightfall AI.

Trust no one

There's a valid argument circulating online that Slack's opt-out policy sets a dangerous precedent for other SaaS apps to automatically opt customers in to sharing data with AI models and LLMs. Regulators are likely to investigate, especially where companies operate in regions covered by the General Data Protection Regulation, which generally requires opt-in consent, rather than the California Consumer Privacy Act, which allows companies to process personal data until a user opts out. Until then, anyone using AI (more than 40% of enterprises, by IBM's estimates) should assume that shared information will be used to train models.