Organizations face a critical gap between their data protection protocols and actual practices

From streamlining operations to automating complex processes, AI has revolutionized the way organizations approach tasks. However, as the technology becomes more mainstream, organizations are discovering that the rush to embrace AI can have unintended consequences.

A report from Swimlane reveals that while AI offers enormous benefits, its adoption has outpaced many companies’ ability to protect sensitive data. As companies integrate AI more deeply into their operations, they also face associated risks, including data breaches, regulatory non-compliance, and shortcomings in security protocols.

Generative AI relies on Large Language Models (LLMs) that are trained on massive datasets, which often contain publicly available information. These datasets draw on text from sources such as Wikipedia, GitHub and many other online platforms, which provide a rich corpus for training the models. In practice, this means that if a company’s data is available online, it may well end up in an LLM’s training data.
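
To make that concrete, here is a minimal sketch of how an organization might check whether its own pages appear in a public web crawl such as Common Crawl, one of the corpora commonly drawn on for LLM training. The crawl ID and domain below are assumptions for illustration only; the list of current crawls is published at https://index.commoncrawl.org/.

```python
"""Check whether pages from a given domain appear in a public web crawl."""
import json
import urllib.parse
import urllib.request

CRAWL_ID = "CC-MAIN-2024-10"   # example crawl ID; substitute a current one
DOMAIN = "example.com"         # hypothetical company domain

# Query the Common Crawl CDX index for captures of the domain.
query = urllib.parse.urlencode({"url": f"{DOMAIN}/*", "output": "json", "limit": "5"})
url = f"https://index.commoncrawl.org/{CRAWL_ID}-index?{query}"

try:
    with urllib.request.urlopen(url, timeout=30) as resp:
        # The index returns one JSON record per line for each captured page.
        for line in resp.read().decode("utf-8").splitlines():
            record = json.loads(line)
            print(record.get("url"), record.get("timestamp"))
except urllib.error.HTTPError as exc:
    # A 404 here typically means no captures of the domain in this crawl.
    print(f"No results for {DOMAIN} in {CRAWL_ID}: {exc}")
```

Any page that shows up in such a crawl was, by definition, publicly reachable and could have been swept into a training corpus.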

Data Processing and Public LLMs

The research revealed a gap between protocol and practice when it comes to sharing data with public LLMs. While 70% of organizations claim to have specific protocols in place to govern the sharing of sensitive data with public LLMs, 74% of respondents are aware that individuals within their organizations are still entering sensitive information into these platforms.

This discrepancy highlights a critical weakness in the enforcement of, and employee compliance with, established security measures.
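
Below is a minimal sketch of the kind of guardrail the report implies is missing: screening outbound prompts for obviously sensitive strings before they reach a public LLM. The patterns and the blocking policy are illustrative assumptions, not anything described in the Swimlane report; a real deployment would use a proper DLP or PII-detection service.

```python
"""Screen prompts for likely sensitive data before sending them to a public LLM."""
import re

# Hypothetical patterns for a few common sensitive-data types.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def send_to_public_llm(prompt: str) -> str:
    findings = screen_prompt(prompt)
    if findings:
        # Block (or redact) instead of forwarding to the external service.
        raise ValueError(f"Prompt blocked: possible sensitive data ({', '.join(findings)})")
    # Placeholder for the actual call to a public LLM API.
    return "LLM response"

if __name__ == "__main__":
    try:
        send_to_public_llm("Summarize this for jane.doe@example.com, key sk_live_abcdef1234567890")
    except ValueError as exc:
        print(exc)
```

The point of the sketch is that such checks have to sit in the path employees actually use; a written protocol alone is exactly the gap the survey numbers describe.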

Beyond the enforcement gap, the constant barrage of AI-related messaging is exhausting professionals: 76% of respondents agree that the market is currently saturated with AI-related hype. This overexposure is breeding a form of AI fatigue, with more than half (55%) of respondents reporting that they feel overwhelmed by the continued focus on AI, an indication that the industry may need to change how it promotes the technology.

Interestingly, despite this fatigue, experience with AI and machine learning (ML) technologies is becoming a crucial factor in hiring. A striking 86% of organizations indicate that familiarity with AI plays an important role in determining the suitability of candidates. This shows how deeply ingrained AI is becoming, not only in cybersecurity tools but also in the workforce needed to manage them.

In the cybersecurity sector itself, AI and LLMs have had a positive impact: according to the report, 89% of organizations credit AI technologies with increasing the efficiency of their cybersecurity teams.
