
Building AI Agents That Actually Work: Dushyant Singh Parmar’s Safety-Critical Approach

The engineers best positioned to build trustworthy agentic AI systems probably aren’t coming from machine learning labs.

Sounds counterintuitive. But consider: Dushyant Singh Parmar spent years at Alstom engineering autonomous metro systems for Lille and Shenzhen. We’re talking infrastructure that now moves millions of passengers daily. In that world, “it works most of the time” can lead to disasters. Systems have to be deterministic, with failure modes mapped before anyone writes a line of code, and that mapping has to be done obsessively, because some commuter’s Tuesday morning depends on getting it right.

Dushyant is now applying that thinking to AI agents at Mili. Different context, same paranoia.

The Demo-to-Deployment Gap

Here’s the thing about AI right now: demos are everywhere. Genuinely impressive prototypes. And then you ask “cool, who’s running this in production?” and the room gets quiet.

It’s not a capability problem. AI models can reason and plan well enough. The problem is that reasoning well enough 94% of the time means falling apart catastrophically the other 6%. Dushyant’s belief is straightforward: the same principles that make autonomous trains trustworthy—redundancy, graceful degradation, deterministic behaviour under specified conditions—apply directly to AI agent architecture. The context is different; the engineering philosophy transfers.
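
To make the transfer concrete, here is a minimal sketch of redundancy and graceful degradation wrapped around an agent’s model call. Everything named here (`call_primary_model`, `call_fallback_model`, the confidence threshold) is a hypothetical stand-in, not Mili’s implementation:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

@dataclass
class AgentResult:
    answer: str | None
    confidence: float
    degraded: bool  # True when we fell back or abstained

def call_primary_model(prompt: str) -> AgentResult:
    """Hypothetical call to the primary model; may raise on failure."""
    raise TimeoutError("primary model unavailable")  # simulate an outage

def call_fallback_model(prompt: str) -> AgentResult:
    """Hypothetical simpler fallback with known, narrower limits."""
    return AgentResult(answer=None, confidence=0.0, degraded=True)

def run_with_degradation(prompt: str, min_confidence: float = 0.8) -> AgentResult:
    # Redundancy: try the primary path first, then a simpler fallback.
    try:
        result = call_primary_model(prompt)
    except Exception as exc:
        log.warning("primary path failed (%s); degrading to fallback", exc)
        result = call_fallback_model(prompt)
    # Graceful degradation: below threshold, abstain rather than guess.
    if result.confidence < min_confidence:
        log.info("confidence %.2f below %.2f; escalating to a human",
                 result.confidence, min_confidence)
        return AgentResult(answer=None, confidence=result.confidence, degraded=True)
    return result
```

The point of the structure is that the 6% failure case has a named, boring path through the code instead of an undefined one.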

At Mili, where he leads AI agents engineering, he’s testing that thesis against the demands of wealth management—an industry where advisors collectively oversee trillions in client assets and where errors carry regulatory and fiduciary consequences.

Day to day, Dushyant obsesses over three things. Persistent context management: can the system process five years of client interaction history without degrading? Intent interpretation: when an advisor says something ambiguous, can the system decompose it into actual executable steps? And then the action layer, which is really about API integrations. CRMs, custodians, calendar systems. The plumbing.
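
On the first of those, one plausible shape for fitting years of history into a fixed context window is to keep recent notes verbatim and compress the long tail. A sketch under stated assumptions: the character budget stands in for a real token budget, and `summarize` is a hypothetical hook (for example, a model call), not Mili’s approach:

```python
def build_context(history: list[str], budget_chars: int, summarize) -> str:
    """Fit a long interaction history into a fixed context budget:
    keep the most recent notes verbatim, summarize everything older."""
    recent: list[str] = []
    older: list[str] = []
    used = 0
    room = True
    for note in reversed(history):              # walk newest-first
        if room and used + len(note) <= budget_chars // 2:
            recent.append(note)                 # keep verbatim while room remains
            used += len(note)
        else:
            room = False                        # from here back, compress everything
            older.append(note)
    summary = summarize(list(reversed(older))) if older else ""
    # Summary of the long tail first, then recent history in original order.
    return "\n".join(([summary] if summary else []) + list(reversed(recent)))
```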

In practice this means an advisor can say something like “pull everything we’ve discussed with the Hendersons about estate planning, find the loose ends, draft prep materials for a follow-up, and get something on the calendar.” One instruction. The system handles retrieval, analysis, document generation, scheduling. Seconds instead of hours.
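
Under the hood, that one sentence has to become an ordered plan. Here is a minimal sketch of what the decomposed plan might look like as data; the step names, fields, and dependency scheme are hypothetical, not Mili’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str          # which capability to invoke
    inputs: dict         # arguments for that capability
    depends_on: list[int] = field(default_factory=list)  # prerequisite step indices

# One ambiguous instruction, decomposed into ordered, executable steps.
plan = [
    Step("retrieve_history", {"client": "Henderson", "topic": "estate planning"}),
    Step("find_open_items", {"source_step": 0}, depends_on=[0]),
    Step("draft_prep_doc", {"open_items_step": 1}, depends_on=[1]),
    Step("schedule_meeting", {"attendees": ["advisor", "Hendersons"]}, depends_on=[2]),
]

def executable(step: Step, done: set[int]) -> bool:
    # A step may run only once every prerequisite has completed.
    return all(dep in done for dep in step.depends_on)
```

Explicit dependencies matter here: if retrieval fails, the draft and the calendar invite never fire, which is exactly the deterministic behaviour the transit world demands.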

The Integration Problem

Technical capability means little without ecosystem connectivity. Enterprise AI agents live or die by their ability to work within existing workflows, not replace them.

This is where Dushyant’s partnership work becomes critical. He’s built AI agents with integrations spanning major wealth management platforms (Salesforce, Microsoft Dynamics, eMoney, TradePMR, Advyzon, Redtail), creating the connective tissue that lets those agents operate within advisor workflows rather than alongside them.
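
A common way to keep that connective tissue maintainable is a thin adapter per platform, so each system’s quirks stay quarantined behind one shared interface. A sketch of the pattern, not Mili’s actual code; the interface and method names are hypothetical:

```python
from abc import ABC, abstractmethod

class CRMAdapter(ABC):
    """One interface, many platforms: the agent codes against this,
    and each platform's quirks stay inside its own adapter."""

    @abstractmethod
    def get_contact(self, contact_id: str) -> dict: ...

    @abstractmethod
    def log_interaction(self, contact_id: str, note: str) -> None: ...

class RedtailAdapter(CRMAdapter):
    # Hypothetical: would map the shared interface onto Redtail's API shapes.
    def get_contact(self, contact_id: str) -> dict:
        raise NotImplementedError("call the platform's contact endpoint here")

    def log_interaction(self, contact_id: str, note: str) -> None:
        raise NotImplementedError("call the platform's notes endpoint here")
```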

Anyone who’s spent time in enterprise API trenches knows how this goes. You read the docs, build to spec, and then discover the docs were aspirational at best. One platform handles contact objects differently depending on when the record was created. Another has auth token refresh logic that times out under conditions nobody bothered to document. Dushyant has lost count of the late nights spent debugging against a client’s staging environment—sometimes that’s the only window where you can test.
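
The defensive pattern that falls out of those late nights looks roughly like this: never trust a token’s advertised lifetime, refresh early, and put an explicit timeout and backoff on the refresh call itself. A standard-library sketch; the endpoint, payload, and field names are placeholders, not any real platform’s API:

```python
import json
import time
import urllib.error
import urllib.request

class TokenManager:
    """Refresh OAuth tokens early and defensively. The URL and payload
    shape here are placeholders, not any specific platform's API."""

    def __init__(self, refresh_url: str, refresh_token: str, safety_margin_s: int = 120):
        self.refresh_url = refresh_url
        self.refresh_token = refresh_token
        self.safety_margin_s = safety_margin_s  # refresh well before expiry
        self._access_token = None
        self._expires_at = 0.0

    def get_token(self) -> str:
        # Treat a token nearing expiry as already expired.
        if self._access_token is None or time.time() > self._expires_at - self.safety_margin_s:
            self._refresh()
        return self._access_token

    def _refresh(self, attempts: int = 3) -> None:
        body = json.dumps({"refresh_token": self.refresh_token}).encode()
        req = urllib.request.Request(self.refresh_url, data=body,
                                     headers={"Content-Type": "application/json"})
        for attempt in range(1, attempts + 1):
            try:
                # Explicit timeout: the undocumented failure mode is usually a hang.
                with urllib.request.urlopen(req, timeout=10) as resp:
                    data = json.load(resp)
                self._access_token = data["access_token"]
                self._expires_at = time.time() + data.get("expires_in", 300)
                return
            except (urllib.error.URLError, TimeoutError):
                if attempt == attempts:
                    raise
                time.sleep(2 ** attempt)  # back off before retrying
```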

The Equivital Detour

Before Mili, Dushyant was at Equivital building wearable physiological monitoring systems. Firefighters, military personnel, people operating in environments where equipment failure has consequences. Hardware-constrained, battery-limited, dealing with hostile RF interference.

Completely different domain. Same underlying obsession: what happens when this breaks?

Here’s what I’ve noticed about people who’ve worked in safety-critical systems. They have this reflexive pessimism that, paradoxically, coexists with genuine optimism about what’s possible. They want to see the failure modes before they’ll trust the happy path. They treat integration points with the same seriousness as core logic, because they’ve learned that’s usually where things fall apart.

Autonomous trains, ruggedised wearables, enterprise AI agents. On paper, nothing in common. In practice, same engineering muscle.

What “Production-Ready” Should Mean

The term gets abused constantly in AI. Dushyant has a more specific definition, borrowed from transportation systems. Production-ready means: doesn’t behave erratically under real traffic. Fails predictably when it fails. Admits what it doesn’t know. Fits into existing operations without breaking the workflows people depend on.
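
Read as an engineering requirement rather than a slogan, that definition suggests agents should fail through a small set of typed, expected errors instead of whatever exception happens to bubble up. A sketch of the idea; the error classes and helper functions are hypothetical:

```python
class AgentError(Exception):
    """Base class: every failure the agent surfaces should be one of these."""

class UnsupportedRequest(AgentError):
    """The agent admits the request is outside what it does reliably."""

class UpstreamUnavailable(AgentError):
    """A dependency (CRM, custodian, calendar) is down; safe to retry later."""

def is_supported(request: str) -> bool:
    # Hypothetical capability check against a whitelist of known verbs.
    return request.split(" ", 1)[0] in {"retrieve", "draft", "schedule"}

def execute(request: str) -> str:
    # Hypothetical execution path against downstream systems.
    return f"done: {request}"

def handle_request(request: str) -> str:
    if not is_supported(request):
        raise UnsupportedRequest(request)   # refuse loudly instead of guessing
    try:
        return execute(request)
    except ConnectionError as exc:
        # Map raw infrastructure errors onto one predictable failure mode.
        raise UpstreamUnavailable(str(exc)) from exc
```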

By that standard? Most AI agent projects are science experiments with a marketing budget.

The companies that figure out how to actually deploy this stuff—reliably, at scale, in regulated industries—are going to be built by people who understand both the AI and the engineering discipline. Model architecture matters less than you’d think. Prompting techniques, even less. What matters is whether you can ship something that works on the 10,000th execution as well as it worked on the demo.
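
One way to make “works on the 10,000th execution” testable, at least for the deterministic layers of the stack, is a replay harness: record real requests, pin expected outputs, and fail the build on drift. A sketch; the corpus format and the `run_agent` hook are made up for illustration:

```python
import json

def replay_corpus(corpus_path: str, run_agent) -> list[str]:
    """Replay recorded requests against the current build and report drift.
    Each corpus line is JSON like {"request": ..., "expected": ...};
    run_agent is the system under test."""
    failures = []
    with open(corpus_path) as f:
        for i, line in enumerate(f):
            case = json.loads(line)
            actual = run_agent(case["request"])
            if actual != case["expected"]:
                failures.append(f"case {i}: expected {case['expected']!r}, got {actual!r}")
    return failures

if __name__ == "__main__":
    # Gate the release: any drift from pinned behaviour fails the build.
    problems = replay_corpus("corpus.jsonl", run_agent=lambda r: f"done: {r}")
    raise SystemExit("\n".join(problems) if problems else 0)
```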

Dushyant’s background suggests he’s well-positioned for exactly this kind of work. Engineers who’ve built systems where failure isn’t an option tend to approach problems differently—and that discipline is starting to show in what Mili is shipping.
