AI Coding Tools Handle More Code Than Engineers, But Trust Is Still a Handshake

AI coding assistants now touch more lines of production code than most engineers write by hand. 

One recent survey found that a third of security leaders said more than 60% of their organization’s code was AI-generated in 2024, and that share has almost certainly grown since.

With that much code flowing through AI systems, security teams have understandably focused on what comes out. 

Researchers have documented how AI-generated output can carry vulnerabilities, insecure defaults, and supply chain risks that conventional review processes weren’t built to flag.

But there’s also a much less visible problem underneath all of that: every time a developer sends a prompt to an AI coding assistant, that prompt carries context with it — source code, internal logic, architecture decisions, sometimes even credentials. 

All of it goes through cloud infrastructure, API layers, and model providers that the developer’s company doesn’t own and often can’t inspect. The code leaves the building every time someone hits Enter.

The Gap Between Policy and Proof

Most companies believe they’ve addressed this risk by choosing AI vendors that promise zero data retention or have passed standard compliance audits. 

One recent survey found that 79% of organizations using AI for automated workflows have no visibility into what data those systems actually touch or where they send it.

So, there’s a significant gap between what vendors promise and what teams can verify.

Yaser Bishr, COO of ORGN and a former technology executive at Al Jazeera with an engineering background rooted in defense systems at Lockheed Martin, sees this pattern playing out across the industry. 

The biggest misconception teams have, he says, is that opting out of model training solves the problem. In reality, training is only one piece of the puzzle. 

The more important questions are what happens to the code while it’s being processed, how many systems it passes through, where it sits in plaintext, and whether the safeguards around it are enforced by the system itself or merely promised on paper.

What makes this especially sensitive is what source code actually contains: business logic, internal architecture, proprietary workflows, and years of accumulated engineering decisions.
Once any of that leaves a company’s control, the damage can reach intellectual property, customer trust, and enterprise value. And in most cases, teams can’t even confirm where the data went.

Why Regulated Industries Can’t Wait This Out

The problem is far bigger in regulated industries. Finance, healthcare, defense, and government all operate under rules that demand auditability, data residency, and documented control over every sensitive workflow.

And the regulatory pressure around AI has accelerated fast. 

In the EU, the AI Act entered its phased implementation period in August 2025, with obligations already in effect for general-purpose AI model providers. In the US, states like California, Texas, and Colorado have moved ahead with their own AI legislation in the absence of a federal bill.

At the same time, operational-resilience rules like the EU’s Digital Operational Resilience Act require banks to remain responsible for sensitive customer data even when outsourcing technology. 

That means a financial institution using an AI coding tool can’t transfer liability to the vendor if something goes wrong with how the code or data was handled.

So when a regulator or auditor asks how sensitive code was handled inside an AI workflow, most organizations today have no answer. The tools their engineers depend on weren’t built to produce that kind of evidence.

A Verifiable Trust Layer for AI-Generated Code

A category of infrastructure called confidential computing was designed for exactly this kind of problem. It processes sensitive data inside hardware-isolated environments known as Trusted Execution Environments (TEEs), where everything remains encrypted during execution and invisible to the system’s own operators. 

A verification step called remote attestation allows outside parties to confirm the environment was intact before any data entered it.
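To make the attestation idea concrete, here is a deliberately simplified sketch in Python. Real TEEs (such as Intel SGX or AMD SEV) use hardware-rooted keys and vendor-signed quotes; the names here (`HARDWARE_KEY`, `EXPECTED_MEASUREMENT`, the report format) are illustrative stand-ins, not any real attestation API. The point is the shape of the check: the client verifies both who signed the report and what code the environment is running before any sensitive data goes in.

```python
import hashlib
import hmac

# Stand-in for the hardware root of trust; in a real TEE this key
# never leaves the silicon, and reports are verified against a
# vendor-published certificate chain instead.
HARDWARE_KEY = b"simulated-root-of-trust-key"

# The measurement (hash) of the enclave code the client expects to see.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-binary-v1").hexdigest()

def make_report(enclave_binary: bytes) -> dict:
    """What the TEE side produces: a measurement of the loaded code,
    signed with a key only the hardware holds."""
    measurement = hashlib.sha256(enclave_binary).hexdigest()
    signature = hmac.new(HARDWARE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}

def verify_report(report: dict) -> bool:
    """What the client side checks before sending any source code in."""
    expected_sig = hmac.new(HARDWARE_KEY, report["measurement"].encode(),
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, report["signature"]):
        return False  # report was not produced by the trusted hardware
    # Even a genuine report is rejected if the enclave runs unexpected code.
    return report["measurement"] == EXPECTED_MEASUREMENT

print(verify_report(make_report(b"approved-enclave-binary-v1")))  # True
print(verify_report(make_report(b"modified-enclave-binary")))     # False
```

The two failure modes mirror what attestation guards against in practice: a report signed by something other than the trusted hardware, and a genuine environment that is nonetheless running code the client never approved.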

The concept isn’t new, but its application to AI development workflows is. As AI coding tools move deeper into production, the need for verifiable execution boundaries has grown faster than most infrastructure providers anticipated.

ORGN, a San Francisco-based company that launched in April 2026, has built what it calls the first confidential AI development environment. 

Engineers can use AI coding assistants the way they normally would, but sensitive code stays inside a protected boundary the entire time. When a workflow requires stronger guarantees, inference runs inside a TEE, and the platform produces a cryptographic record proving that it did.

The goal behind all of that, Bishr says, is to change the kind of conversation companies have with their AI infrastructure providers.

Instead of asking what a vendor’s policy says about data handling, teams should be able to ask for proof. Where did the code go? What touched it? And can the system itself demonstrate that?

The Next Phase of AI Coding Won’t Run on Promises

AI coding tools are already part of how software gets built, and that’s not changing. But the infrastructure underneath them is still catching up.

Right now, most companies are still taking their vendors at their word when it comes to how sensitive code is handled. 

ORGN is building for the moment when that stops being good enough. And given how fast the regulatory and security landscape is moving, that moment probably isn’t far off.
