LinkedIn says that if you share fake AI-generated content, that is your responsibility
LinkedIn is passing the buck to users for sharing misleading or inaccurate information created by its own AI tools, rather than taking responsibility for the tools themselves.
An update to the Service Agreement, effective November 2024, will hold users accountable for sharing misinformation created by AI tools if that content violates the platform's policies.
Because no one can guarantee that the content generative AI produces is truthful or correct, companies are covering themselves by putting the onus on users to moderate the content they share.
Inaccurate, misleading or not fit for purpose
The update follows in the footsteps of LinkedIn’s parent company Microsoft, which previously updated its own terms of service in 2024 to remind users not to take AI services too seriously and to address the AI’s limitations, advising that it ‘is not designed or intended to be used as a substitute for professional advice’.
LinkedIn will continue to offer features that can generate automated content, but with the caveat that these may not be reliable.
“Generative AI Features: By using the Services, you can interact with features we offer that automate the generation of content for you. The content generated may be inaccurate, incomplete, delayed, misleading or unsuitable for your purposes,” the updated passage reads.
The new policy reminds users to double-check all information and make changes as necessary to adhere to community guidelines.
“Review and edit such content before sharing it with others. Like any content you share on our Services, you are responsible for ensuring that it complies with our Professional Community Policies, including not sharing misleading information.”
The social networking site likely expects its genAI models to improve in the future, especially since it now uses user data by default to train its models, requiring users to opt out if they don’t want their data used.
The move drew considerable pushback, as GDPR concerns have clashed with generative AI training across the board, but the recent policy update shows that the models still need quite a bit of training.
Via The Register