How to defeat ‘shadow AI’ within your organization
GenAI is the most disruptive technology to hit society since the internet. Two years after the launch of ChatGPT, the most popular large language model tool, GenAI has fundamentally and forever changed the way we consume information, create content, and interpret data.
Since then, the rapid rise and development of AI tools has left many companies behind when it comes to the regulation, management and governance of GenAI.
This environment has allowed ‘Shadow AI’ to run rampant. According to Microsoft, 78% of knowledge workers regularly use their own AI tools to complete their work, but a huge 52% do not disclose this to employers. As a result, companies are exposed to a multitude of risks, including data breaches, compliance violations and security threats.
Addressing these challenges requires a multifaceted approach, consisting of strong governance, clear communication, and versatile monitoring and management of AI tools, all without sacrificing workforce freedom and flexibility.
General Manager at Kolekti.
Trust is of the utmost importance and works both ways
Employees will use GenAI tools whether their employer sanctions them or not. Blanket bans, or severe restrictions on how the technology can be used, will likely only worsen the challenge of 'Shadow AI'. In fact, a recent survey found that 46% of employees would refuse to give up AI tools even if they were banned.
GenAI is an incredibly accessible technology that has the power to significantly improve efficiency and bridge skills gaps. These transformative tools are at the fingertips of time-pressured staffers, and employers cannot, without reasonable justification, tell them not to use them.
So the first step for employers seeking the right balance between efficiency and authenticity is to establish a blueprint for how GenAI can and should be used within the business.
Extensive training is therefore essential to ensure employees know how to use AI tools safely and ethically.
This goes beyond technical knowledge: it also includes training staff on the potential risks associated with AI tools, such as privacy concerns, intellectual property issues and compliance with regulations such as GDPR.
Clearly explaining these risks will go a long way in getting staffers on board with restrictions that may seem too strict at first glance.
Provide clear usage scenarios
Defining clear use cases for AI within a given organization is also extremely important, not only to tell employees how not to use AI, but also how to use it. In fact, a recent survey found that a fifth of staff are not currently using AI because they don’t know how to do so.
With the right training, awareness, and understanding of how to use AI tools, employees can avoid unnecessary experimentation that could expose their organization to risk, while still reaping the efficiency benefits that AI naturally brings.
Clear guidelines also need to be established on which AI tools are acceptable for use. These may differ by department and workflow, so it's important that organizations take a flexible approach to AI management.
Once use cases are defined, it is critical to measure AI performance accurately. This means setting benchmarks for how AI tools are integrated into daily workflows, tracking productivity improvements, and ensuring alignment with business goals. With clear success metrics in place, companies can better track adoption and confirm that AI tools are being used both effectively and in line with business objectives.
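As a purely illustrative sketch (the event fields, tool names, and metric names below are assumptions, not a prescribed system), adoption tracking can start from something as simple as a log of usage events aggregated into a few headline metrics:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class UsageEvent:
    user: str
    tool: str       # e.g. "copilot", "chatgpt" (illustrative names)
    approved: bool  # whether the tool is on the company-sanctioned list

def adoption_metrics(events: list[UsageEvent]) -> dict:
    """Summarize AI usage: active users, share of approved use, top tools."""
    users = {e.user for e in events}
    approved = sum(1 for e in events if e.approved)
    return {
        "active_users": len(users),
        "approved_share": approved / len(events) if events else 0.0,
        "top_tools": Counter(e.tool for e in events).most_common(3),
    }

# Hypothetical sample: three events, two via an approved tool
events = [
    UsageEvent("alice", "copilot", True),
    UsageEvent("bob", "chatgpt", False),
    UsageEvent("alice", "copilot", True),
]
print(adoption_metrics(events))
```

In practice these events would come from gateway logs or tool telemetry rather than a hand-built list, but the aggregation step is the same: a small set of agreed metrics reviewed against the benchmarks set above.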
Tackling BYO-AI
One of the main reasons why Shadow AI is proliferating is because employees can bypass IT departments and implement their own solutions through unapproved AI tools. The decentralized, plug-and-play nature of many AI platforms allows employees to easily integrate AI into their daily work routines, leading to a proliferation of shadow tools that may not adhere to company policies or security standards.
The solution to this problem lies in versatile API management. By implementing robust API management practices, organizations can effectively manage how internal and external AI tools are integrated into their systems.
From a security perspective, API management allows companies to regulate access to data, monitor interactions between systems, and ensure that AI applications only interact with the right data sets in a controlled and secure manner.
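To illustrate the idea (the department names and endpoint hosts below are hypothetical), the core of such a gateway check is a per-department allowlist consulted before any request is forwarded to an external AI service:

```python
# Hypothetical per-department allowlist of external AI API hosts.
# A real gateway would load this from policy config, not a literal dict.
APPROVED_AI_ENDPOINTS = {
    "marketing": {"api.openai.com"},
    "engineering": {"api.openai.com", "api.anthropic.com"},
}

def is_request_allowed(department: str, host: str) -> bool:
    """Forward the call only if the target host is approved for the department."""
    return host in APPROVED_AI_ENDPOINTS.get(department, set())

# Example checks a gateway might run before proxying a request
print(is_request_allowed("engineering", "api.anthropic.com"))  # allowed
print(is_request_allowed("marketing", "api.anthropic.com"))    # blocked
```

Centralizing this decision in the gateway, rather than on each employee's machine, is what lets IT see and govern AI traffic without banning it outright.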
However, it is important not to cross the line into workplace surveillance by monitoring the specific inputs and outputs of company-approved tools. Doing so will likely only drive AI users back into the shadows.
A good middle ground is to configure sensitive-data alerts that prevent accidental leaks of confidential information. For example, AI tools can be set up to detect when personal data, financial details or other proprietary information is inappropriately entered into or processed by AI models. Real-time alerts provide an extra layer of protection, ensuring breaches are identified and contained before they escalate into full-blown security incidents.
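A minimal sketch of such an alert, assuming simple regex patterns stand in for a real data-loss-prevention engine (the patterns and category names here are illustrative only, not production-grade detection):

```python
import re

# Hypothetical patterns for data that should never reach an external model.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data found in an outgoing prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def check_and_alert(prompt: str) -> bool:
    """Block the prompt and raise an alert if sensitive data is detected."""
    hits = scan_prompt(prompt)
    if hits:
        print(f"ALERT: blocked prompt containing {hits}")
        return False
    return True
```

Note that this checks only *categories* of risky content at the point of egress; it does not log or review what employees are actually writing, which keeps it on the right side of the surveillance line described above.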
A well-executed API strategy gives employees the freedom to use GenAI tools productively, while protecting source data and ensuring AI use complies with internal governance policies. This balance can drive innovation and productivity without compromising security or control.
Finding the right balance
By establishing strong governance with defined use cases, leveraging versatile API management for smooth integration, and continuously monitoring AI usage for compliance and security risks, organizations can find the right balance between productivity and protection. This approach will enable companies to embrace the power of AI while minimizing the risks of 'Shadow AI', ensuring GenAI is used in ways that are safe, efficient and compliant, while unlocking critical value and return on investment.
This article was produced as part of Ny BreakingPro’s Expert Insights channel, where we profile the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of Ny BreakingPro or Future plc. If you are interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro