The use of AI apps is increasing – and that could be a major security problem
Netskope has unveiled new research showing that more than one in 10 enterprise workers now access at least one generative AI application every month, up from just one in 50 a year earlier.
The rise in the use of generative AI apps has been attributed to the immense success of apps like ChatGPT, to the point where Netskope described 2023 as the year of generative AI. ChatGPT itself proved to be the most popular generative AI app.
As the technology continues its rapid growth, Netskope expects the number of power users to rise in 2024, with the top 25% of users predicted to significantly increase their adoption of genAI.
With generative AI come security concerns
Beyond generative AI specifically, broader cloud adoption is also climbing: companies were found to be using an average of 20 different cloud apps, up from 14 roughly two years earlier.
Half interacted with between 11 and 33 cloud apps monthly, and the top 1% were found to use more than 96 different cloud apps. This growth in cloud interactions raises concerns about potential security issues.
Against this backdrop of a growing online footprint, Netskope shared a key insight from 2023: social engineering emerged as the most common way for attackers to gain initial access.
Users were three times more likely to fall for phishing scams than to download Trojans. Cybercriminals also targeted a wide range of sectors, with cloud apps and retail sites among the top targets, underlining the need for stronger online security measures.
Criminally motivated attacks dominated in 2023, with Russian groups leading the way, while Chinese threat actors were responsible for most geopolitically motivated attacks, which most heavily targeted Asian countries, Singapore in particular.
Ray Canzanese, Director of Threat Research at Netskope, emphasized the need for organizations to secure their use of AI apps, limit app access to legitimate business purposes, and invest in security awareness training to mitigate social engineering risks.
Canzanese added: “This trend is likely to continue into 2024.”