Breaking News: Only 17% of Organizations Can Actually Stop AI Data Leaks, Kiteworks Survey Reveals

Every day, employees and contractors at thousands of organizations feed sensitive data into ChatGPT, Claude, Gemini, and dozens of other AI tools. Customer records, financial data, trade secrets, strategic plans—all flowing freely into public AI systems. But what if the vast majority of organizations have no real way to stop it?

Kiteworks, which empowers organizations to effectively manage risk in every send, share, receive, and use of private data, aims to expose this hidden crisis with the release of its “AI Data Security and Compliance Risk Survey” report. At the heart of this study is a stark revelation: While 27% of organizations admit that over 30% of the data their employees feed into AI tools is private or confidential, only 17% have implemented technical controls capable of actually preventing it. Using responses from 461 cybersecurity, IT, risk management, and compliance professionals across industries, the survey uncovers a fundamental disconnect between AI adoption speed and security readiness that threatens to become the next great data breach crisis.

In this exclusive Q&A, Kiteworks CMO Tim Freestone and VP of Corporate Marketing and Research Patrick Spencer unpack the survey’s most alarming findings, explore why traditional security approaches fail catastrophically with AI, and explain how organizations can move from hope-based policies to real technical protection before regulators and attackers exploit this massive security gap.

Q: Your survey found that only 17% of organizations have technical controls to prevent sensitive data from entering AI systems. Why is this number so alarmingly low?

Tim Freestone: The speed of AI adoption has completely outpaced security planning. Organizations rushed to embrace generative AI for competitive advantage without understanding the unique data risks it presents. Traditional security frameworks weren’t designed for systems that learn from and potentially reproduce every piece of data they ingest. Most companies are still applying yesterday’s security playbook to tomorrow’s technology—relying on training and policies when they need technical controls that block data at the network level. The 17% represents organizations that understood early that hope isn’t a security strategy.
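
For readers wondering what blocking data at the network level can actually look like, here is a minimal sketch written as a mitmproxy addon. The domain list is purely illustrative, and the snippet is a generic example of the technique, not Kiteworks' implementation.

```python
# deny_ai_egress.py -- minimal sketch of a network-level control as a
# mitmproxy addon. Hypothetical domain list; not Kiteworks' product.
from mitmproxy import http

# Illustrative blocklist of public AI endpoints; a real deployment would
# pull this from a managed, regularly updated feed.
BLOCKED_AI_HOSTS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "api.anthropic.com",
    "gemini.google.com",
}

def request(flow: http.HTTPFlow) -> None:
    # Reject any request whose host, or parent domain, is on the blocklist.
    host = flow.request.pretty_host.lower()
    if any(host == h or host.endswith("." + h) for h in BLOCKED_AI_HOSTS):
        flow.response = http.Response.make(
            403,
            b"Blocked by policy: unsanctioned AI endpoint.",
            {"Content-Type": "text/plain"},
        )
```

Running this script on an egress proxy (mitmproxy -s deny_ai_egress.py) is what turns a written policy into an enforced control rather than a hope.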

Q: The report shows 27% of organizations admit over 30% of their AI data inputs are private. How did we get to a point where companies are feeding such massive amounts of sensitive data into public AI tools?

Patrick Spencer: It’s a perfect storm of factors. Employees and contractors discovered AI tools make them dramatically more productive—writing reports faster, analyzing data more efficiently, solving problems more creatively. Without technical barriers, they naturally use whatever data produces the best results, including sensitive information. Meanwhile, IT and cybersecurity departments are playing catch-up, trying to create policies for tools that employees are already using dozens of times daily. The 27% figure is likely conservative—remember, another 17% of organizations don’t even know what percentage of their data is private, suggesting the real exposure could be much higher.

Q: One striking finding is that 44% of organizations either have high private data exposure or no visibility into their AI data usage. What makes this visibility gap so dangerous?

Tim Freestone: You can’t protect what you can’t see. This 44% represents organizations operating completely blind to their risk—they’re essentially running a business without knowing if their most sensitive data is being broadcast to the world. Unlike traditional data breaches that leave forensic trails, AI ingestion happens silently through browsers, apps, and APIs. By the time organizations realize their trade secrets, customer data, or strategic plans have been fed into AI systems, that information has already been processed, learned, and potentially used to train models that millions access. It’s continuous, invisible exposure.
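
As a concrete illustration of how a first visibility pass might work, the short script below tallies requests to a handful of well-known AI endpoints found in proxy access logs. The hostnames are illustrative, the log format is assumed to be line-oriented, and a real inventory would cover far more services.

```python
# ai_visibility.py -- rough visibility pass over line-oriented proxy access
# logs. The endpoint list is an illustrative sample, not an inventory.
import collections
import re
import sys

AI_HOST_PATTERN = re.compile(
    r"(openai\.com|anthropic\.com|claude\.ai|gemini\.google\.com)", re.I
)

def summarize(log_path: str) -> None:
    # Count how often each AI endpoint appears in the log file.
    hits = collections.Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = AI_HOST_PATTERN.search(line)
            if match:
                hits[match.group(1).lower()] += 1
    for host, count in hits.most_common():
        print(f"{count:8d}  {host}")

if __name__ == "__main__":
    summarize(sys.argv[1])
```

Even a crude tally like this moves an organization out of the "no visibility" category: it answers the basic question of whether AI traffic is leaving the network at all.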

Kiteworks CMO Tim Freestone

Q: The survey reveals legal firms have the highest concern about data leakage at 31%, yet they show the same poor implementation rates as other industries. Why this disconnect?

Patrick Spencer: Legal firms perfectly exemplify the industry-wide paralysis we’re seeing. They handle attorney-client privileged information, understand liability better than anyone, and recognize the catastrophic consequences of data exposure. Yet they’re caught between competing pressures—clients expect them to leverage AI for efficiency while maintaining absolute confidentiality. The result is this frozen middle ground where they worry intensely but can’t move decisively. Their 31% concern rate shows they understand the risk; their implementation failures show they don’t know how to solve it without hampering productivity.

Q: Your data shows organizations worry most about data coming OUT of AI systems (28%) rather than going IN. Why is this backwards prioritization so dangerous?

Tim Freestone: It’s like installing security cameras on your store’s exit while leaving the front door wide open. Once sensitive data enters an AI system, the game is essentially over—that information becomes part of the model’s training data, influences its outputs, and can surface in unexpected ways. Organizations focusing on output monitoring are trying to stuff the genie back in the bottle. The 11-point gap between output concerns and input controls represents a fundamental misunderstanding of how AI systems work. Prevention at the point of entry is the only effective strategy.
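
To make prevention at the point of entry concrete, here is a minimal input-side sketch that scrubs recognizable sensitive tokens before a prompt ever leaves the network. The patterns are illustrative starting points, not a complete DLP policy, and a real deployment would pair redaction with blocking and audit logging.

```python
# redact_before_send.py -- illustrative input-side control: scrub obvious
# identifiers before a prompt reaches any AI endpoint. Patterns are a
# starting point only, not a full DLP rule set.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),          # payment card numbers
]

def redact(prompt: str) -> str:
    """Return the prompt with recognizable sensitive tokens replaced."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Invoice for jane.doe@example.com, card 4111 1111 1111 1111"))
# -> Invoice for [EMAIL], card [CARD]
```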

Q: The report found only 12% of organizations consider compliance violations a top AI concern, despite 59 new AI regulations in 2024 alone. How do you explain this dangerous complacency?

Patrick Spencer: Organizations are overwhelmed by the pace of change and choosing to ignore what they can’t immediately grasp. These 59 new regulations aren’t simple checkbox exercises—they require demonstrable control over AI data flows, audit trails, and the ability to explain AI decision-making. The 12% figure reveals most companies are betting regulators won’t enforce these rules aggressively. That’s a catastrophic miscalculation. When GDPR enforcement began, companies thought they had more time—until the first major fines hit. AI regulation will follow the same pattern but faster and with higher stakes.
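
As a rough illustration of what demonstrable control over AI data flows can mean in practice, the snippet below emits the kind of per-event audit record such regulations increasingly expect. Every field name here is an assumption chosen for illustration, not a regulatory schema.

```python
# ai_audit_log.py -- sketch of a per-request audit record for AI data
# flows. Field names are illustrative assumptions, not a mandated format.
import datetime
import json

def audit_record(user: str, destination: str, action: str, reason: str) -> str:
    """Serialize one audit entry as JSON, ready for an append-only log."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "destination": destination,  # e.g. the AI endpoint contacted
        "action": action,            # "allowed" | "blocked" | "redacted"
        "reason": reason,            # which policy rule fired
    })

print(audit_record("jdoe", "api.openai.com", "blocked", "unsanctioned endpoint"))
```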

VP of Corporate Marketing and Research Patrick Spencer

Q: Manufacturing, healthcare, and financial services all show the same 17% technical control rate despite handling vastly different sensitive data types. What does this uniformity tell us?

Tim Freestone: It tells us this isn’t an industry-specific problem—it’s a fundamental technology adoption failure across the board. Whether you’re protecting patient records, financial algorithms, or manufacturing trade secrets, organizations are taking the same inadequate approach. This uniformity is frightening because it means there’s no industry leader showing the way forward. Everyone’s equally vulnerable, equally unprepared, and equally at risk. The attacks and breaches that are coming won’t discriminate by industry.

Q: Nearly half of organizations (43%) either have no privacy controls or only act reactively while racing to adopt AI. How should companies balance innovation with security?

Patrick Spencer: The phrase “balance innovation with security” is part of the problem—it implies they’re opposing forces when they should be complementary. The organizations succeeding with AI aren’t choosing between innovation and security; they’re recognizing that sustainable AI adoption requires both. Technical controls don’t slow innovation—they enable it by creating safe spaces for experimentation. The 43% operating reactively are building on quicksand. When the inevitable breach or regulatory action hits, their AI initiatives will collapse overnight, taking their competitive advantage with them.

Q: Based on your findings, what’s the single most important action organizations should take immediately?

Tim Freestone: Deploy technical controls that block unauthorized AI access at the network level—full stop. Not next quarter, not after the next board meeting, but immediately. The data shows 83% of organizations are exposed right now, with sensitive data flowing into AI systems every minute of every day. While competitors debate policy nuances and governance frameworks, the 17% with real controls are building sustainable AI advantages. This isn’t about perfection; it’s about moving from completely exposed to basically protected. Every day of delay multiplies the risk and the potential damage. The window for action is closing faster than most organizations realize.
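
For organizations that want to gauge their exposure today, even a crude self-check like the sketch below, with an illustrative endpoint list, will show whether network controls bite at all.

```python
# exposure_check.py -- quick self-test: can this machine still reach public
# AI endpoints? Endpoint list is illustrative, not exhaustive.
import socket

ENDPOINTS = ["api.openai.com", "api.anthropic.com", "gemini.google.com"]

for host in ENDPOINTS:
    try:
        # Attempt a TCP connection to port 443 with a short timeout.
        with socket.create_connection((host, 443), timeout=3):
            print(f"REACHABLE  {host}  (control gap)")
    except OSError:
        print(f"blocked    {host}")
```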

The complete Kiteworks AI Data Security and Compliance Risk Report provides detailed breakdowns by industry, company size, and region, along with actionable recommendations for implementing effective AI security controls. Download the full report here.
