Nearly a third of GenAI users give their bots confidential information – and that’s a worrying problem
New research shows that consumer awareness of privacy laws and regulations has steadily increased and now stands at 53%, up from 36% in 2019.
Cisco’s 2024 Consumer Privacy Survey claims that this awareness translates into greater confidence: 81% of consumers who are aware of their country’s privacy laws feel able to protect their data, compared to just 44% of those who do not understand the regulations.
Predictably, there is a strong correlation between age and awareness: 65% of 18-24-year-olds are aware of data privacy laws, compared to just 24% of those aged 75 and over. However, almost everyone cares about data privacy, with 89% wanting more control over their data and to see others protected online.
Stop telling ChatGPT your bank details
As many as 80% of respondents are concerned that GenAI might be “bad for humanity,” and 72% worry that it will replace jobs. Concerns about disinformation are widespread too: 86% fear that GenAI’s output could be wrong, and 80% worry about its potential to undermine elections.
Nevertheless, users still hand GenAI tools enormous amounts of personal data: 37% enter health information, 29% provide financial information, and 27% even tell chatbots their account numbers.
There have been reports of GenAI data being stolen and accounts being hacked, so it’s definitely not a good idea to give your chatbot (or anyone else) sensitive information.
The report recommends checking GenAI’s output against other sources of information to make sure it is correct, and gaining a clear understanding of how your data is used and who it may be shared with.
To protect your data, you should also be vigilant about keeping your privacy settings up to date and researching your privacy rights.