Four Keys to Help Companies Combat Shadow AI

The advent of Artificial Intelligence (AI) has ushered in an era of unprecedented innovation and efficiency across industries. Alongside these advances, however, a new challenge has emerged: “shadow AI,” the unapproved use of consumer AI tools by employees in corporate environments. With nearly 50% of the general population using generative AI, shadow AI raises critical concerns around data security, compliance, and privacy.

Organizations must learn to navigate the complexities of this emerging trend to protect their operations and maintain control over their technology infrastructure. With that in mind, here are four key strategies organizations can use to combat the threat of shadow AI.

Marcin Kleczynski

Founder and CEO, Malwarebytes.

1. Proactive web filtering

Currently, most AI usage happens in a web browser, where employees risk sharing highly sensitive data or intellectual property. Proactive web filtering can stop the use of online AI tools. This strategy relies on Domain Name System (DNS) filtering, a technique that controls access to websites and online content by filtering DNS queries against predefined criteria.

In other words, it intercepts DNS requests and allows or blocks access to specific websites or categories of websites based on policies defined by administrators. Organizations can use DNS filtering to enforce acceptable use policies, restrict access to inappropriate or non-work-related content, and support productivity and compliance.

In this case, IT teams can use DNS filtering to block access to AI websites such as OpenAI’s ChatGPT and Google’s Gemini. If an organization wants to reduce the risk of employees entering confidential corporate information into these AI tools via a browser, it can block access to those web pages at the DNS level, significantly reducing the attack surface and the chance of losing confidential data.
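To make the idea concrete, here is a minimal sketch of the kind of policy logic a DNS filter applies. The domain list, query names, and the is_blocked helper are illustrative assumptions for this example, not the configuration or API of any particular DNS-filtering product.

```python
# Minimal sketch of DNS-filtering policy logic (illustrative only).
# The blocked domains and helper below are assumptions for the example,
# not the configuration of any specific filtering product.

# Example category: generative AI tools the organization has not sanctioned.
BLOCKED_AI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
}

def is_blocked(query_name: str, blocklist: set[str] = BLOCKED_AI_DOMAINS) -> bool:
    """Return True if the queried name matches a blocked domain or one of its subdomains."""
    name = query_name.lower().rstrip(".")
    return any(name == domain or name.endswith("." + domain) for domain in blocklist)

if __name__ == "__main__":
    # A filter would apply this check to each incoming DNS query and
    # return a block page or NXDOMAIN instead of the real answer.
    for query in ["chatgpt.com", "api.chatgpt.com", "example.com"]:
        action = "BLOCK" if is_blocked(query) else "ALLOW"
        print(f"{action}\t{query}")
```

In practice this policy lives in the DNS-filtering service itself; the sketch only shows the matching decision, including the subdomain handling that keeps employees from sidestepping the rule via alternate hostnames.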

2. Regular audits and compliance checks

For any cybersecurity threat, regular audits and compliance reviews are essential for organizations to meet security standards and regulatory requirements. These audits serve as a proactive measure to identify and address potential vulnerabilities.

For shadow AI, the audit process begins with assessments tailored to AI tools and infrastructure: systematic testing and analysis to identify weaknesses and potential entry points. Compliance checks then ensure that AI initiatives align with industry-specific regulations as well as data protection and cybersecurity standards. These checks verify that AI systems meet regulatory requirements such as data privacy laws and industry guidelines.
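As one illustration of what an AI-focused audit check might look like in practice, the sketch below scans an exported DNS query log for hits on known generative AI domains and summarizes which hosts queried them. The CSV layout (timestamp, client, query columns), the file name, and the domain list are assumptions for the example, not a prescribed audit format.

```python
# Illustrative audit check: flag hosts whose DNS queries hit generative AI domains.
# The CSV layout (timestamp,client,query), file name, and domain list are
# assumptions for this sketch, not a standard log format.
import csv
from collections import Counter

AI_DOMAINS = ("chatgpt.com", "openai.com", "gemini.google.com", "claude.ai")

def matches_ai_domain(query: str) -> bool:
    """Return True if the queried name is an AI-tool domain or a subdomain of one."""
    name = query.lower().rstrip(".")
    return any(name == domain or name.endswith("." + domain) for domain in AI_DOMAINS)

def audit_dns_log(path: str) -> Counter:
    """Count AI-tool DNS queries per client host in an exported query log."""
    hits = Counter()
    with open(path, newline="") as log_file:
        for row in csv.DictReader(log_file):  # expects timestamp,client,query headers
            if matches_ai_domain(row["query"]):
                hits[row["client"]] += 1
    return hits

if __name__ == "__main__":
    for client, count in audit_dns_log("dns_queries.csv").most_common():
        print(f"{client}: {count} queries to AI tool domains")
```

A report like this does not prove data was shared, but it gives auditors a concrete starting point for follow-up conversations and policy enforcement.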

In addition, employees must be equipped with clear policies on AI use. This promotes the ethical, responsible, and consistent application of AI technologies, while also protecting data privacy and facilitating regulatory compliance.

3. Ongoing staff training and awareness

Auditing and compliance checks are essential, but they are not sufficient without ongoing education. A lack of awareness leaves an organization vulnerable to cyberattacks and hampers remediation efforts, especially as these threats grow in frequency and sophistication. Training and awareness are critical components of any comprehensive cybersecurity strategy, particularly for emerging threats such as zero-days.

Regular training sessions are essential to educate employees about potential security challenges. These sessions not only help employees more easily recognize threats, but also promote a better understanding of the consequences of a breach. Additionally, employees should be educated about the dangers of shadow AI to support policies on sanctioned AI use. This ensures that all AI initiatives are approved and comply with security measures.

Raising awareness gives employees the insight and confidence to recognize and report suspicious activity. This proactive approach mitigates threats faster and adds a crucial extra layer of defense.

4. Promoting a culture of transparency and openness

Finally, there’s no doubt that a collaborative approach strengthens any organization and is a critical part of improving your overall cybersecurity posture. That’s why encouraging transparency and openness is key to effectively managing shadow AI risks. Just as a culture of open communication between IT teams and employees fosters a better understanding of security threats and protocols, the same applies to AI applications: sanctioned and shadow alike, along with learning to tell the difference between them.

So, where do we go from here? With nearly two-thirds (64%) of CEOs concerned about cybersecurity risks associated with AI and 71% of employees already using generative AI at work (and that number is only set to grow), there’s no time to waste. Delaying the implementation of these strategies will only expose your organization to further threats. It’s time to step up, acknowledge the challenges, and take action.


This article was produced as part of TechRadar Pro’s Expert Insights channel, where we showcase the best and brightest minds in the technology sector today. The views expressed here are those of the author and do not necessarily represent those of TechRadar Pro or Future plc. If you’re interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
