Zoom can use your private calls and messages to train its AI systems thanks to new terms and conditions you’ve agreed to

Private video calls, text messages and meetings on Zoom can be used to ‘train’ artificial intelligence models.

The San Jose company’s new terms and conditions — which went into effect in March but were spotted this month — have sparked an outcry online, with users threatening to cancel their accounts over the change.

Part of the new T&Cs states that customers consent to Zoom using their data for purposes such as “machine learning or artificial intelligence (including for training and tuning algorithms and models).”

Artificial intelligence models are usually trained on large amounts of publicly available data, often sourced from the internet, but Zoom’s move would use personal customer data, raising privacy fears.

The changes have sparked a privacy storm (Reuters)

The changes came in Section 10.4 of Zoom’s Terms and Conditions (Zoom)

Zoom has responded this week with a blog post claiming that the data will only be used to train AI models to more accurately summarize meetings, and only with customer consent.

In a blog post, Zoom’s Chief Product Officer Smita Hashim wrote, “To reiterate, we do not use audio, video, or chat content to train our models without customer consent.”

But that has not stopped users from worrying about their privacy.

Elliot Higgins of the investigative news organization Bellingcat said: “We teach our training workshops on Zoom, so Zoom effectively plans to train its AI on our full workshop content without compensation. So goodbye Zoom.”

The data used to “train” AI models has become a legal battlefield in the wake of rapid advances in generative AI technology, such as OpenAI’s ChatGPT.

Users and companies fear that private information will be inadvertently disclosed – or artistic works plagiarized – if their data is used to train AI models.

Google recently changed its privacy policy to allow the company to train AI models on publicly available information on the web.

The new policy says, “For example, we use publicly available information to help train Google’s AI models and build products and features such as Google Translate, Bard, and Cloud AI capabilities.”

High-profile users threatened to delete their Twitter accounts (X)

Users were outraged by the privacy rule changes (X)

Elon Musk revealed a plan to train his xAI model on publicly available content on Twitter, saying, “We’ll be using the public tweets — obviously not anything private — for training…just like everyone else has.”

In response, artists threatened to delete their Twitter accounts, fearing their work could be used to train AI systems that would then copy their art styles.

In April, Europe’s national privacy watchdogs created a task force to address issues with ChatGPT after Italian regulator Garante took the service offline, accusing OpenAI of violating the EU’s GDPR, a privacy regulation that came into effect in 2018.

ChatGPT was reinstated after OpenAI agreed to install age verification features and allow European users to block their information from being used to train the AI model.

Jake Hurfurt, head of research and investigations at privacy organization Big Brother Watch, told DailyMail.com: “Companies need to be transparent about how they use their customers’ data.

“It is unacceptable to hide massive changes in how they handle user data in dense privacy policies.

“People should have the power to choose how their sensitive data, including biometric data, is processed, and should be given clear information and the ability to opt-out.”

In Zoom’s blog post, Hashim explained that customer data would be used to train the AI models that summarize meetings, and that customers would first need to give their consent.

She wrote, “When you choose to enable Zoom IQ Meeting Summary or Zoom IQ Team Chat Compose, you will also be presented with a transparent consent process for training our AI models using your customer content.

“Your content is used solely to improve the performance and accuracy of these AI services. And even if you choose to share your data, it will not be used to train third-party models.”