
Organizations are facing a critical gap between their data security protocols and actual practices.



Computer face among abstract lines and dots representing AI.

Photo: Shutterstock/IrenaR

From streamlining operations to automating complex processes, AI has revolutionized the way organizations solve problems. However, as the technology becomes more widespread, organizations are discovering that hasty adoption of AI can lead to unintended consequences.

Swimlane’s report shows that while artificial intelligence offers enormous benefits, its adoption has outpaced many companies’ ability to protect sensitive data. As enterprises deeply integrate AI into their operations, they must also contend with associated risks, including data breaches, compliance violations, and security protocol failures.

Generative AI tools are built on large language models (LLMs), which are trained on large datasets that often include publicly available information. These datasets may draw on text from sources such as Wikipedia, GitHub, and other online platforms, which together provide a rich corpus for training. This means that if company data is publicly available online, it may well end up in LLM training data.

Data Security and Public LLMs

The study identified a gap between protocol and practice when it comes to sharing data with public large language models (LLMs). While 70% of organizations say they have specific protocols in place to govern the sharing of sensitive data with public LLMs, 74% of respondents are aware that individuals in their organizations still enter sensitive information into these platforms.

This discrepancy highlights a serious lack of enforcement of, and employee compliance with, established security measures. Additionally, a constant stream of AI-related messaging is overwhelming professionals, with 76% of respondents agreeing that the market is currently saturated with AI-related hype.
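Closing that enforcement gap is partly a technical problem: some organizations place an automated filter between employees and public LLMs to catch sensitive values before a prompt leaves the network. The sketch below shows the idea with a few simple regular expressions; the patterns, function name, and example values are illustrative assumptions, not taken from the Swimlane report, and real deployments use far richer detectors.

```python
import re

# Illustrative patterns an organization might flag before a prompt
# is sent to a public LLM (assumptions for this sketch, not a real policy).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace any matched sensitive value with a labeled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, key sk_live1234567890abcdef"))
```

A filter like this only enforces what the regexes can see; it does not catch free-text secrets such as unreleased financials, which is why the protocol-versus-practice gap the report describes is as much a training problem as a tooling one.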

This overexposure is producing a degree of AI fatigue: more than half (55%) of those surveyed reported feeling overwhelmed by the constant focus on AI, signaling that the industry may need to change how it promotes the technology.

Interestingly, despite this fatigue, experience with artificial intelligence and machine learning (ML) technologies is becoming a deciding factor in hiring decisions. A staggering 86% of organizations reported that familiarity with AI plays an important role in determining candidate suitability. This shows how deeply ingrained AI is not only in cybersecurity tools, but also in the workforce needed to manage them.

In the cybersecurity sector, AI and LLMs have had a positive impact, with the report claiming that 89% of organizations believe AI technologies improve the effectiveness of their cybersecurity teams.
