In this TechBullion Q&A, we speak with Tim Freestone, Chief Strategy Officer at Kiteworks, and Patrick Spencer, SVP of Americas Marketing & Industry Research, about Kiteworks’ newly released Data Security and Compliance Risk: 2026 Forecast Report and why many organizations are entering a critical inflection point. Based on a global survey of security, IT, compliance, and risk leaders, the report argues that enterprises are moving faster than their ability to control sensitive data—especially as AI-driven workflows become autonomous. Freestone and Spencer explain why visibility alone is no longer enough, how enforcement gaps are widening, and what leaders must do now to avoid costly failures in the year ahead.
Q: What’s the clearest signal that 2026 will feel different from “normal” cyber risk?
Tim Freestone: Agentic AI is the signal—not because it’s flashy, but because it fundamentally changes the pace of risk. We’re moving from tools that suggest actions to systems that actually take them. That compresses the time between a mistake and real-world impact, especially when those systems interact with sensitive data or downstream workflows. The risk isn’t just that AI might leak data. It’s that AI is being embedded into everyday business processes—routing, summarizing, extracting, and making decisions—where even small policy gaps can turn into large incidents. Many organizations are adopting these systems to gain speed and productivity, while governance is expected to catch up later. In 2026, teams that treat agentic AI like a standard SaaS rollout will learn quickly that autonomy doesn’t wait for quarterly control reviews.
Q: Your report suggests data security posture management (DSPM) is becoming table stakes. Why isn’t that enough?
Patrick Spencer: Because visibility is not the same as control, and too many organizations stop at visibility. DSPM can show you where sensitive data lives and how it moves, but if you can’t consistently enforce classification and tagging across channels, you’re still making decisions with incomplete authority. That’s how sensitive information drifts into unmanaged workflows, poorly governed shares, or partner exchanges that don’t apply the same controls. It’s also why incident response slows down—teams spend the first day or two debating what the data actually was and where it went instead of containing the problem. Some organizations buy monitoring to feel more confident, when what they really need is enforcement that actually makes them safer.
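To make the visibility-versus-enforcement distinction concrete, here is a minimal sketch in Python. The labels, channel names, and functions are hypothetical illustrations, not a Kiteworks API: an untagged object or an unapproved channel is denied outright, which is the authority visibility alone doesn’t provide.

```python
# Minimal sketch of classification-based enforcement (illustrative only;
# labels, channels, and function names are hypothetical assumptions).
from dataclasses import dataclass

# Which destination channels are approved for each classification label.
APPROVED_CHANNELS = {
    "public":       {"email", "file_share", "partner_exchange"},
    "internal":     {"email", "file_share"},
    "confidential": {"file_share"},  # managed, governed share only
}

@dataclass
class DataObject:
    name: str
    classification: str | None  # None means the object was never tagged

def enforce_transfer(obj: DataObject, channel: str) -> bool:
    """Allow a transfer only if the object is tagged and the channel is
    approved for that tag. Visibility tells you a transfer happened;
    enforcement decides whether it happens at all."""
    if obj.classification is None:
        # Untagged data is the gap monitoring alone can't close: with no
        # label there is no authority to decide, so default to deny.
        return False
    return channel in APPROVED_CHANNELS.get(obj.classification, set())

# Example: a confidential file drifting toward a partner exchange is blocked.
report = DataObject("q3_financials.xlsx", "confidential")
assert enforce_transfer(report, "file_share") is True
assert enforce_transfer(report, "partner_exchange") is False
assert enforce_transfer(DataObject("notes.txt", None), "email") is False
```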
Q: Why does the report emphasize centralized AI data gateways so strongly?
Tim Freestone: Because AI control sprawl is already happening, and sprawl is where accountability breaks down. When each team deploys its own AI tools and point controls, you end up with inconsistent policies, uneven logging, and unclear responsibility when something goes wrong. A centralized AI data gateway provides a control plane—a single place to apply policies consistently across copilots, agents, APIs, and integrations as they scale. It also forces discipline around questions many organizations postpone: what data can be used, for what purpose, with what retention, and with what evidence trail. Centralization doesn’t remove risk, but it prevents hundreds of quiet exceptions from becoming the operating model. In 2026, patchwork AI governance won’t scale—it will fail precisely where leaders assume they’re covered.
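As a rough illustration of the control-plane idea, the sketch below assumes a hypothetical gateway design; the policy fields and caller names are illustrative, not drawn from the report. Every copilot, agent, or integration asks one place for a decision, and every decision leaves an evidence record.

```python
# Minimal sketch of a centralized AI data gateway (hypothetical design;
# policy fields and names are illustrative assumptions).
import time
from dataclasses import dataclass, field

@dataclass
class Policy:
    allowed_purposes: set[str]  # what the data may be used for
    max_classification: str     # highest label this caller may read
    retention_days: int         # how long derived outputs may be kept

@dataclass
class Gateway:
    policies: dict[str, Policy]  # one policy per caller identity
    evidence_log: list[dict] = field(default_factory=list)

    def request(self, caller: str, purpose: str, classification: str) -> bool:
        policy = self.policies.get(caller)
        order = ["public", "internal", "confidential"]
        allowed = (
            policy is not None
            and purpose in policy.allowed_purposes
            and order.index(classification) <= order.index(policy.max_classification)
        )
        # Every decision, allow or deny, leaves an evidence record.
        self.evidence_log.append({
            "ts": time.time(), "caller": caller, "purpose": purpose,
            "classification": classification, "decision": allowed,
        })
        return allowed

# One control plane, consistent answers for every copilot, agent, or API.
gw = Gateway({"support_copilot": Policy({"summarization"}, "internal", 30)})
assert gw.request("support_copilot", "summarization", "internal") is True
assert gw.request("support_copilot", "training", "internal") is False
```

The point of routing everything through one object is exactly the discipline Freestone describes: the purpose, classification, and retention questions get answered once, in one place, instead of per team.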

Q: If you had to identify one missing control that will hurt teams first, what would it be?
Patrick Spencer: Containment. It’s what protects you when the “unlikely scenario” becomes a routine incident. Monitoring and human review matter, but they’re upstream controls; containment is what you rely on when something moves too fast or behaves unexpectedly. Many organizations talk about responsible AI while lacking practical safeguards like purpose limitation or the ability to immediately isolate or terminate a misbehaving agent. When sensitive data is involved, that’s not a theoretical gap—it’s an operational and financial one. If an agent pulls too much data or routes it incorrectly, you don’t want deliberation; you want a hard stop that works instantly. In 2026, the difference between observing risk and stopping it will define outcomes.
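Here is a minimal sketch of that kind of hard stop, assuming hypothetical limits and names rather than any vendor feature: a wrapper enforces purpose limitation and a volume ceiling, and terminates the agent the moment either is exceeded.

```python
# Minimal sketch of agent containment via a hard stop (hypothetical;
# limits and names are illustrative assumptions, not a vendor feature).
class AgentTerminated(Exception):
    """Raised to halt a misbehaving agent immediately, no deliberation."""

class ContainmentWrapper:
    def __init__(self, agent_id: str, allowed_purpose: str, max_records: int):
        self.agent_id = agent_id
        self.allowed_purpose = allowed_purpose  # purpose limitation
        self.max_records = max_records          # volume ceiling
        self.records_read = 0
        self.terminated = False

    def read(self, n_records: int, purpose: str) -> None:
        if self.terminated:
            raise AgentTerminated(f"{self.agent_id} already isolated")
        if (purpose != self.allowed_purpose
                or self.records_read + n_records > self.max_records):
            # Hard stop: isolate first, investigate later.
            self.terminated = True
            raise AgentTerminated(
                f"{self.agent_id} halted: purpose={purpose}, "
                f"requested={n_records}, already_read={self.records_read}")
        self.records_read += n_records

# An agent that suddenly pulls far too much data is stopped mid-action.
agent = ContainmentWrapper("invoice_bot", "invoice_extraction", max_records=1000)
agent.read(400, "invoice_extraction")       # within limits
try:
    agent.read(5000, "invoice_extraction")  # exceeds the ceiling
except AgentTerminated as err:
    print(err)                              # containment, not deliberation
```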
Q: You describe evidence-quality audit trails as a “keystone.” Why are they so critical?
Tim Freestone: Because governance without evidence is just an opinion—and auditors, regulators, and customers don’t accept opinions. Evidence-quality audit trails let organizations answer fundamental questions quickly and defensibly: who accessed the data, what happened to it, where it went, what controls applied, and what the result was. When sensitive data moves across multiple channels with uneven logging, you don’t have a coherent narrative—you have fragments. That’s why incident response drags on and communication breaks down. Strong audit trails also influence internal behavior because actions are provable, not just “logged somewhere.” In 2026, “show me the proof” will be the default expectation, which makes building for proof no longer optional.
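One way to picture an evidence-quality trail is the minimal sketch below, with hypothetical field names: each record captures who, what, where, which control applied, and the result, and is hash-chained to the previous record so editing or deleting any entry is detectable.

```python
# Minimal sketch of an evidence-quality audit trail (illustrative;
# field names are assumptions). Each record is hash-chained to the
# previous one, so the trail is tamper-evident, not just "logged somewhere".
import hashlib, json, time

class AuditTrail:
    def __init__(self):
        self.records: list[dict] = []
        self._prev_hash = "0" * 64  # genesis anchor

    def record(self, actor: str, action: str, data_id: str,
               destination: str, control: str, result: str) -> None:
        entry = {
            "ts": time.time(), "actor": actor, "action": action,
            "data_id": data_id, "destination": destination,
            "control": control, "result": result,
            "prev_hash": self._prev_hash,
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._prev_hash
        self.records.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted record breaks it."""
        prev = "0" * 64
        for rec in self.records:
            if rec["prev_hash"] != prev:
                return False
            body = {k: v for k, v in rec.items() if k != "hash"}
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["hash"] != prev:
                return False
        return True

trail = AuditTrail()
trail.record("agent_7", "export", "doc-123", "partner_s3",
             "dlp_scan", "allowed")
assert trail.verify()            # coherent narrative, provable end to end
trail.records[0]["actor"] = "x"  # tampering...
assert not trail.verify()        # ...is immediately detectable
```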
Q: What’s the most underestimated aspect of third-party risk heading into 2026?
Patrick Spencer: The coordination gap. Many organizations still treat third-party risk as a documentation exercise—questionnaires and attestations—while real risk shows up during an incident that requires partners to act together under pressure. Without shared response playbooks and aligned controls, the first true collaboration often happens during a breach. AI complicates this further because data can be transformed, summarized, or retained in ways traditional controls weren’t designed to capture. If you don’t understand how partners handle your data inside AI systems, you’re accepting risk you can’t measure or explain later. You can outsource work, but you can’t outsource accountability.
Q: If you could mandate one board-level discussion in early 2026, what would it be?
Tim Freestone: Accountability for AI governance—who owns it, how it’s measured, and what “good” actually looks like in plain language. Boards don’t need to debate model architectures, but they should demand enforceable controls, defensible evidence, and clear escalation paths when AI-driven processes fail. The most important question isn’t “Are we using AI?” It’s “Can we prove we’re controlling it everywhere sensitive data moves?” When regulators, customers, or partners ask for proof, you either have it or you don’t—and that moment usually arrives under stress. Boards that treat AI governance as a strategic risk will drive investment in enforcement and evidence. Those that don’t will be surprised by outcomes they can’t explain. In 2026, ambiguity isn’t a strategy—it’s a liability.
For a deeper dive into these findings and what they mean for enterprise security leaders, explore Kiteworks’ Data Security and Compliance Risk: 2026 Forecast Report.