Enterprise AI applications are threatening security
Over the past year, AI has emerged as a transformative productivity tool, potentially revolutionizing industries across the board. AI applications, such as ChatGPT and Google Bard, are becoming common tools within the business world to streamline operations and improve decision making. However, the surge in AI’s popularity brings with it a new set of security risks that organizations must address to prevent costly data breaches.
The rapid introduction of generative AI
Just two months after its public launch, ChatGPT became the fastest-growing consumer-facing application in history, using generative AI technology to answer questions and serve users’ needs. With a range of benefits that streamline tasks for the individual – suggesting recipes, writing birthday inscriptions and acting as a go-to knowledge encyclopedia – ChatGPT’s wider application and benefit to the workplace was quickly recognized. Today, many employees in offices around the world rely on generative AI systems to compose emails, suggest calls to action, and summarize documents. Netskope’s recent Cloud and Threat Report shows that the use of AI apps is growing rapidly in enterprises around the world, with usage rising 22.5% over May and June 2023. At that growth rate, usage of these applications is on pace to double by early 2024.
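To see where that projection comes from, here is a quick back-of-the-envelope check, written as a short Python sketch. It simply assumes the 22.5% two-month growth rate holds constant, which is an illustrative assumption rather than anything the report states.

import math

# Netskope observed 22.5% growth in enterprise AI app usage over May-June 2023.
# Assume, purely for illustration, that usage keeps compounding at the same
# rate per two-month period, and estimate how long it takes to double.
growth_per_period = 0.225      # 22.5% per two-month period
months_per_period = 2

periods_to_double = math.log(2) / math.log(1 + growth_per_period)
months_to_double = periods_to_double * months_per_period

print(f"Doubling time: {months_to_double:.1f} months")  # roughly 7 months, i.e. early 2024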
The hacker’s honeypot
An online poll of 2,625 U.S. adults conducted by Reuters and Ipsos found that as many as 28% of workers have embraced generative AI tools and use ChatGPT regularly throughout the workday. Unfortunately, after proving itself as a useful tool for proofreading documents and checking code for errors, ChatGPT has become an exposure point for sensitive information as employees cut and paste confidential corporate content into the platform. The sheer volume of sensitive information being aggregated in generative AI systems is hard to ignore.
With 1.43 billion visits to ChatGPT in August, it’s no surprise that its hype and popularity are attractive to malicious actors, who want to use LLMs to achieve their own malicious goals and to exploit the hype surrounding LLMs to lure their targets.
Business leaders are trying to find a way to use third-party AI apps safely. Earlier this year, JPMorgan blocked employee access to ChatGPT, saying it wasn’t in line with company policy, and Apple followed the same path after unveiling plans to create its own model. Other companies, such as Microsoft, have simply advised their staff not to share confidential information with the platform. There is not yet a strong regulatory recommendation or best practice for generative AI use, and the most concerning consequence is that 25% of U.S. employees have no idea whether their company allows ChatGPT or not.
Many different types of sensitive information are being uploaded to generative AI applications at work. According to Netskope, the most commonly uploaded information is source code: the text that defines how a computer program works, and usually a company’s intellectual property.
ChatGPT’s uncanny ability to assess, explain, and even train users in complex coding makes this trend unsurprising. However, uploading source code to these platforms is a high-risk activity that could lead to the exposure of serious trade secrets. Samsung faced this exact problem in April this year, when one of its engineers used ChatGPT to check internal source code for errors, leading to a total ban on ChatGPT across the company.
Common scams
Removing generative AI from corporate networks comes with its own risks. In this scenario, employees are pushed toward third-party ‘shadow’ applications (those not approved for safe use by the employer) to streamline their workflows. Capitalizing on this trend, a growing number of phishing and malware distribution campaigns have appeared online, seeking to exploit the generative AI hype. In these campaigns, websites and proxies pose as offering free, unauthenticated access to the chatbot. In reality, all user input is visible to the proxy operator and is collected for future attacks.
Securing the workplace
Fortunately for businesses, there is a middle ground that enables AI adoption in the workplace within security boundaries, and it involves a combination of cloud access controls and user awareness training.
First, data loss prevention policies and tools should be implemented to detect uploads that contain potentially sensitive information such as source code and intellectual property. This can then be combined with real-time user coaching to notify employees when an action appears likely to violate company policy, giving them the opportunity to assess the situation and respond appropriately.
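As a minimal illustration of that idea, the Python sketch below shows what a crude, pattern-based upload check and coaching prompt might look like. The patterns, function names, and wording are assumptions made for this example; commercial DLP engines rely on far more sophisticated classifiers and data fingerprinting.

import re

# Crude, illustrative patterns suggesting a prompt contains source code or
# other sensitive material. Real DLP tools use ML classifiers and exact
# fingerprints; these regexes are placeholders only.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(def |class |import |function\s*\()"),    # code-like keywords
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),   # embedded secrets
    re.compile(r"\b(confidential|internal use only|trade secret)\b", re.IGNORECASE),
]

def violates_policy(upload_text: str) -> bool:
    """Return True if text a user is about to paste into an AI app matches
    any sensitive-content pattern."""
    return any(p.search(upload_text) for p in SENSITIVE_PATTERNS)

def coach_user(upload_text: str) -> str:
    """Real-time coaching: warn the user rather than silently blocking,
    giving them the chance to reconsider the upload."""
    if violates_policy(upload_text):
        return ("This upload appears to contain source code or confidential "
                "content and may violate company policy. Do you want to continue?")
    return "OK"

print(coach_user("def decrypt(key): ..."))  # triggers the coaching prompt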
To reduce the threat of scam websites, companies should scan web traffic and URLs and coach users to spot attacks that abuse cloud and AI apps.
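One simplified way to surface the proxy scams described above is to flag hostnames that trade on AI-brand keywords but are not official domains. The domain list, keywords, and function below are illustrative assumptions, not an authoritative blocklist.

from urllib.parse import urlparse

# Official domains for popular generative AI services (illustrative, not exhaustive).
LEGITIMATE_AI_DOMAINS = {"chat.openai.com", "openai.com", "bard.google.com"}

# Keywords scam sites commonly borrow to impersonate AI chatbots.
LURE_KEYWORDS = ("chatgpt", "openai", "gpt", "bard")

def is_suspicious(url: str) -> bool:
    """Flag URLs whose hostname uses AI-brand keywords but is not an official
    domain, e.g. a lookalike such as 'free-chatgpt-access.example.net'."""
    host = (urlparse(url).hostname or "").lower()
    if host in LEGITIMATE_AI_DOMAINS or host.endswith((".openai.com", ".google.com")):
        return False
    return any(keyword in host for keyword in LURE_KEYWORDS)

for url in ("https://chat.openai.com/", "http://free-chatgpt-access.example.net/login"):
    print(url, "->", "suspicious" if is_suspicious(url) else "ok")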
The most effective way to implement strict security measures is to ensure that AI app activity and trends are regularly monitored to identify the most critical vulnerabilities for your specific business. Security should not be an afterthought, and with the right care and attention, AI can continue to benefit the enterprise as a force for good.