Enterprise AI applications are threatening security

Over the past year, AI has emerged as a transformative productivity tool with the potential to revolutionize industries across the board. AI applications such as ChatGPT and Google Bard are becoming common tools in the business world, used to streamline operations and improve decision-making. However, the surge in AI’s popularity brings with it a new set of security risks that organizations must address to prevent costly data breaches.

The rapid introduction of generative AI

Just two months after its public launch, ChatGPT became the fastest-growing consumer-facing application in history, using generative AI to answer questions and serve users’ needs. With a range of benefits that streamline everyday tasks – suggesting recipes, writing birthday inscriptions, and acting as a go-to knowledge encyclopedia – its wider application and benefit to the workplace was quickly recognized. Today, many employees in offices around the world rely on generative AI systems to compose emails, suggest calls to action, and summarize documents. Netskope’s recent Cloud and Threat Report shows that the use of AI apps in enterprises around the world is growing rapidly, increasing 22.5% over May and June 2023. At the current growth rate, usage of these applications will double by 2024.

Ray Canzanese is director of Netskope Threat Labs.

The hacker’s honey pot