Employees pass more secrets to AI than to their friends

Employees are more likely to pass trade secrets to a workplace AI tool than to their friends, according to a new report.

In a survey of more than 1,000 US and UK office workers, data analytics company CybSafe found that many are positive about generative AI tools, with some using them even after their company banned them.

69% of all respondents also said that the benefits of such tools outweigh the security risks. US workers were the most optimistic, with 74% of them agreeing with this statement.

AI dangers

Half of all respondents reported using AI at work, a third weekly and 12% daily. Among US employees, the most common use cases were research, copywriting, and data analytics, at 44%, 40%, and 38%, respectively. AI tools were also used for other tasks, such as assisting with customer service (24%) and writing code (15%).

CybSafe believes this is cause for concern, claiming that companies are not properly warning their employees about the dangers associated with using such tools.

In its report, CybSafe notes that “As AI cyberthreats increase, businesses are at risk. From phishing scams to accidental data breaches, employees need to be informed, guided and supported.”

An alarming 64% of US employees have entered information related to their work into generative AI tools, and a further 28% were unsure whether they had. CybSafe claims that, taken together, as many as 93% of employees may be sharing confidential information with AI. And the icing on the cake is that 38% of US employees admit to sharing data with AI that they wouldn’t give “to a friend in a bar”.

“Emerging changes in employee behavior should also be taken into account,” says Dr Jason Nurse, CybSafe’s director of science and research and current associate professor at the University of Kent.

“If employees enter sensitive data on a daily basis, this can lead to data leaks. Our behavior at work is changing and we are increasingly relying on generative AI tools. Understanding and managing this change is crucial.”

Another issue from a cybersecurity perspective is the inability of employees to differentiate between content created by a human and content created by an AI, even though 60% of all respondents said they were confident they could do this accurately.

“We see barriers to cybercrime crumbling as AI makes more and more convincing phishing lures,” added Nurse. “The line between real and fake is blurring, and without immediate action, businesses will face unprecedented cybersecurity risks.”

These concerns are reinforced by the fact that the adoption of AI in the workplace is increasing at a rapid pace. A new report from management consultancy McKinsey labeled 2023 as the breakthrough year for AI, with nearly 80% of its survey respondents claiming to have had at least some exposure to the technology at home or at work.
