Inside the Race to Secure GenAI: Why Data Loss Prevention Leads the SaaS Battle

In early 2024, an international financial services firm discovered that an internal analyst had uploaded proprietary earnings projections to a GenAI chatbot for formatting help. The chatbot, a consumer-tier tool not vetted for enterprise use, stored the query in its training logs. Within weeks, fragments of the data surfaced in unrelated user outputs. It was not a breach in the traditional sense. But for the company’s legal and compliance teams, it was worse: a shadow AI incident, invisible until it was too late.

These risks are not isolated. As enterprises scramble to deploy artificial intelligence tools across workflows, new vulnerabilities are emerging. Shadow AI, the unsanctioned use of GenAI software by employees, is becoming one of the most pressing concerns for chief information security officers. Automated agents, APIs, and unsupervised plug-ins now move data at volumes legacy systems cannot meaningfully track. One misstep can trigger multimillion-dollar fines under global regulations like GDPR and SOX.

Achal Singi, Vice President at WestBridge Capital, has been watching this shift from the front lines. With over a decade of experience in enterprise software and AI infrastructure, Singi focuses on early-stage and growth investments in companies with infrastructure depth. A judge for the Globee Awards for Technology, he has evaluated a range of cybersecurity platforms responding to the shadow AI crisis. “Enterprises are not asking how fast they can deploy copilots, they are asking how safe their data is once they do,” he says.

His work with Turing, a key provider to OpenAI, Anthropic, Nvidia, and other leading AI labs, has reinforced this reality. “Training models is one thing. Turning those models into safe, compliant, production-ready tools for highly regulated industries is where the real challenge lies,” Singi notes. WestBridge supported Turing’s expansion into LLM safety, audit copilots, and enterprise AI infrastructure, helping the company scale from under $10 million to over $300 million in annualized revenue.

Why Data Loss Prevention Is Back in the Spotlight

For years, data loss prevention felt like a checkbox rather than a shield. Today, it is central to enterprise security posture. The explosion of SaaS applications, the permanence of hybrid work, and the rapid deployment of GenAI have created vast new threat surfaces. Tools built for last decade’s networks are struggling to protect this decade’s workflows.

Enterprise buyers are shifting toward solutions that integrate directly into daily systems. API-native platforms like Nightfall AI are thriving by embedding into Slack, Google Workspace, and Salesforce. They detect and remediate leaks with minimal disruption. Over two billion items have been scanned, and more than 80 percent of incidents are resolved automatically.
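The detect-and-remediate workflow described above can be sketched in simplified form. The patterns and redaction logic here are illustrative assumptions, not Nightfall's actual detection engine, which relies on ML-based detectors rather than bare regular expressions:

```python
import re

# Illustrative patterns only; production DLP engines add ML detectors,
# validation logic (e.g., Luhn checks), and contextual scoring.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_(?:live|test)_[A-Za-z0-9]{16,}\b"),
}

def scan_message(text: str) -> list[dict]:
    """Return a finding for each sensitive pattern matched in a message."""
    findings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append({"type": label, "span": match.span()})
    return findings

def redact(text: str) -> str:
    """Auto-remediate by masking matched spans, mirroring the
    automatic-resolution workflow described above."""
    for pattern in PATTERNS.values():
        text = pattern.sub("[REDACTED]", text)
    return text
```

Embedding this kind of scan directly into a Slack or Salesforce integration, rather than at the network perimeter, is what lets the remediation happen where the data actually moves.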

“Buyers no longer want gatekeepers, they want smart filters,” says Singi. “Tools that understand how people work and intervene only when necessary.”

This buyer expectation is mirrored in other WestBridge-backed companies. Innovaccer, now used by seven of the top 10 US health systems, has embedded DLP and patient data safeguards into its Healthcare AI modules. From billing code automation to CRM-based engagement tracking, the company’s platform prioritizes compliance by design, not by patchwork.

Rethinking Security Investment Through Strategic Signals

Investors are evolving how they think about security. The pivot from legacy, rules-based engines to modern, ML-native platforms is accelerating. Security now must cover the application layer, the collaboration layer, and the growing ecosystem of APIs and agents that surround every enterprise stack.

This shift has shaped WestBridge’s approach. Singi and his team worked closely with Nightfall to move beyond network detection and toward endpoint exfiltration monitoring, real-time risk scoring, and dynamic policy enforcement. These features were built in response to direct input from enterprise security leaders.
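The risk-scoring and dynamic policy enforcement pattern described above can be sketched as follows. The signal names, weights, and thresholds are hypothetical, chosen for illustration rather than drawn from any vendor's actual policy engine:

```python
from dataclasses import dataclass

# Hypothetical weights; real platforms tune these per customer,
# often from ML models rather than fixed constants.
RISK_WEIGHTS = {
    "unmanaged_device": 30,
    "external_destination": 25,
    "sensitive_content": 35,
    "off_hours": 10,
}

@dataclass
class TransferEvent:
    """One observed data movement (upload, share, paste into a GenAI tool)."""
    unmanaged_device: bool
    external_destination: bool
    sensitive_content: bool
    off_hours: bool

def risk_score(event: TransferEvent) -> int:
    """Sum the weights of every risk signal present on the event."""
    return sum(w for name, w in RISK_WEIGHTS.items() if getattr(event, name))

def enforce(event: TransferEvent) -> str:
    """Dynamic policy: escalate the response as the score rises."""
    score = risk_score(event)
    if score >= 70:
        return "block"        # stop the transfer outright
    if score >= 40:
        return "quarantine"   # hold for security review
    if score >= 20:
        return "warn"         # nudge the user, log the event
    return "allow"
```

The graduated responses reflect the "smart filter" philosophy Singi describes: most events pass through untouched, and intervention scales with risk rather than blocking by default.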

A similar dynamic played out at Turing. Singi supported the company’s development of GenAI audit copilots for sectors like banking and pharma, where compliance is not optional. Customers now use these copilots to meet regulations like SOX 404, HIPAA, and FDA site inspection protocols. “Security that slows down workflows won’t last. The systems that scale are the ones that understand what enterprises can’t afford to get wrong,” Singi says. As an Upekkha Accelerator judge, Singi continues to see usability and extensibility, not just threat detection, as leading indicators of long-term adoption.

Compliance Is the New Infrastructure

The AI revolution is not just about speed. It is about trust. Regulations are tightening, and enterprises are adapting. HIPAA, FedRAMP, and Basel III are being updated to address AI explicitly. The result is a new kind of infrastructure demand. Companies are not only adopting AI, they are rebuilding their stack to govern it.

Singi recently wrote that compliance is no longer a documentation exercise. It is a product decision. That insight is visible in companies like Innovaccer, whose Healthcare AI modules have helped systems like CHI Health and Franciscan Alliance reduce readmissions, close coding gaps, and manage Medicare lives with confidence. These systems were architected to meet regulatory scrutiny from the first line of code. Turing followed a similar path, building alignment and RLHF tooling that made LLMs safer and enterprise-ready. Its audit copilots in healthcare and finance have reduced inspection prep time by up to 50 percent, while increasing compliance accuracy.

In his DZone article, “A Step-by-Step Guide to Enterprise Application Development,” Singi expands on this mindset. He argues that resilient architecture, grounded in real-world constraints, matters more than flashy features. In environments where uptime is measured in SLAs with seven-figure penalties, guardrails are not optional. They are foundational to the product itself, not separate from it.

The Quiet Architecture Behind GenAI’s Enterprise Future

The next breakout enterprise software category may not be the flashiest. It may be the safest. The rise of data security platforms for AI reflects a deeper market shift. Tools are no longer judged solely by features. They are judged by their ability to contain risk.

“Security has quietly become the foundation on which all other innovation rests,” Singi says. “The next generation of enterprise AI will not be judged by its creativity, but by its containment.”

Turing’s trajectory—from powering foundational LLM training to enabling Fortune 500 production rollouts—illustrates this perfectly. So does Innovaccer’s ability to unify patient data across fragmented health systems while maintaining airtight compliance. Both companies are helping define what safe AI actually looks like in practice.

SaaS security is no longer a side conversation. It is a launch condition. The companies that understand this, that make security intrinsic to product design, will define enterprise AI in 2025 and beyond. The loudest copilots may get the headlines. But it is the quietest safeguards that will win the trust.
