AI, including its generative variant, has already found its place in the average enterprise. However, using these tools poses serious security risks that companies must learn to manage and minimize, or those risks will come back to haunt them.
These are the conclusions of Zscaler’s ‘2024 AI Security Report’, which noted that AI is now ‘business as usual’, with companies leveraging and integrating new features and tools into their daily workflows.
This is clearly visible in the volume of transactions and data generated: enterprise AI/ML transactions grew by roughly 600% between September 2023 and January 2024, with 569 terabytes of business data sent to AI tools over that period.
Countless risks
Of all the industries using AI tools, manufacturing sits at the top, accounting for more than 20% of all AI/ML transactions, according to Zscaler. It is followed by finance and insurance (17%), technology (14%), service industries (13%) and retail/wholesale (5%).
But companies must be careful not to leak sensitive data or create other risks. Zscaler says using generative AI brings the risk of leaking intellectual property and non-public information; a larger attack surface, new vectors for delivering threats and increased supply chain risk; and the risk of generating poor-quality data.
“At the same time, companies are constantly exposed to a barrage of cyber threats, some of which are now AI-driven,” the researchers said. “To meet this challenge, enterprises and cybersecurity leaders must effectively navigate the rapidly evolving AI landscape to leverage its revolutionary potential while mitigating risk and defending against AI-powered attacks.”
To protect their environments against AI-driven threats, companies should use tools that provide full visibility into AI tool usage, create granular AI access policies, apply fine-grained data security to AI applications, and enforce strong controls such as browser isolation, Zscaler concludes.
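To make the notion of granular AI access policies more concrete, the following is a minimal sketch in Python. It is not taken from the Zscaler report; the app names, user groups and rules are hypothetical, and a real secure web gateway would enforce such policies in its own configuration language. The sketch shows how requests to AI apps could be mapped to allow, block or isolate decisions, with a data-security check taking precedence:

```python
from dataclasses import dataclass

# Hypothetical policy actions a secure web gateway might support.
ALLOW, BLOCK, ISOLATE = "allow", "block", "isolate"

@dataclass(frozen=True)
class Request:
    user_group: str                # e.g. "engineering", "finance"
    app: str                       # e.g. "chatgpt", "github-copilot"
    contains_sensitive_data: bool  # result of an upstream DLP scan

# Illustrative rules: (user_group, app) -> action. "*" matches any group.
POLICY = {
    ("engineering", "github-copilot"): ALLOW,
    ("finance", "chatgpt"): ISOLATE,   # allowed, but in an isolated browser
    ("*", "unvetted-ai-app"): BLOCK,
}

def decide(req: Request) -> str:
    """Return the action for a request, blocking sensitive uploads first."""
    if req.contains_sensitive_data:
        return BLOCK                   # data security overrides access policy
    for key in ((req.user_group, req.app), ("*", req.app)):
        if key in POLICY:
            return POLICY[key]
    return ISOLATE                     # default: unknown AI apps run isolated

if __name__ == "__main__":
    print(decide(Request("finance", "chatgpt", False)))            # isolate
    print(decide(Request("engineering", "github-copilot", True)))  # block
```

The default-to-isolation choice mirrors the report's emphasis on browser isolation as a strong control: unfamiliar AI apps remain usable, but without direct data exchange with the endpoint.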