Suryakant Kaushik is Senior Business Operations Manager – Global Support Experience at Samsara, Inc., an industry leader in connected operations and fleet management. Working at the intersection of AI, operations, and customer experience, he applies a decade of specialized experience in fleet safety and connected operations to develop and deliver proven cross-functional programs that combine technology, analytics, and strategy to drive performance and efficiency at enterprise scale.
Suryakant holds multiple U.S. patents implemented in large-scale commercial fleet-safety operations. His original innovation, the Event Analysis and Review Tool, helps fleets prioritize and review safety incidents such as harsh braking or mobile phone usage, replacing manual review workflows with structured, data-driven processes. Another technical innovation is his Monitoring Safe Distance Between Vehicles patent, which uses sensor data to identify unsafe following-distance behavior, addressing one of the leading causes of commercial fleet collisions.
Suryakant’s technical articles about AI-driven connected operations have been published in peer-reviewed journals and presented at prestigious international technology and fleet management industry conferences. He has also contributed chapters on AI-driven decision support and operational research in fleet management to academic books published globally. He serves as Vice President of the Product Development and Management Association (PDMA) Texas Chapter and is a Full Member of the distinguished Sigma Xi Scientific Research Honor Society, among other professional affiliations. Suryakant further contributes his time and expertise by serving as a judge in technology award competitions and as an IGI Global Peer Reviewer. As a recognized subject matter expert and thought leader, he also contributes to industry-wide discussions as a member of the AI Advisory Council at Products That Count, focused on AI governance, measurement, and operational readiness.
Suryakant earned a Bachelor of Technology degree in Mechanical Engineering from the National Institute of Technology in Silchar, India, and received his MBA with a major in Operations from Texas A&M University in College Station, Texas (US). More recently, he has deepened his AI fluency through a post-graduate program in Generative AI for Business Applications at the University of Texas at Austin’s McCombs School of Business. Suryakant is also a Certified COPC Customer Experience Performance Leader.
ELLEN WARREN: Suryakant, your work sits at a rare intersection: AI model validation, human judgment, and operational scale. When you designed Samsara’s Safety Event Review (SER) program as a human-in-the-loop system, what were the hardest tradeoffs between automation, accuracy, and reviewer trust – and how did you resolve them in practice?
SURYAKANT KAUSHIK: When I took over the Safety Event Review program in 2022 after joining Samsara, one of the first things I noticed was that both speed and accuracy were struggling. At first, that looked like a typical scaling issue, but once we started digging into reviewer decisions, it became clear that inconsistency was the bigger problem.
To understand where that inconsistency was coming from, we ran a simple experiment. We sent the same safety events to multiple reviewers and compared their decisions. That slowed our turnaround time initially, but it gave us a clear picture of where variance was showing up. A lot of it came down to cases where our guidelines weren’t specific enough. For example, with seatbelt events, some reviewers marked an event as non-coachable, while others marked the exact same behavior as coachable. We saw similar variance with inattentive driving; some reviewers flagged it, others didn’t, and customer thumbs-down feedback made it clear that this inconsistency was hurting perceived accuracy.
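The kind of experiment he describes can be scored with a simple inter-rater agreement check. Here is a minimal sketch in Python, where the event IDs, labels, and reviewer names are hypothetical stand-ins rather than Samsara’s actual data or tooling:

```python
from itertools import combinations

# Hypothetical reviewer decisions: event_id -> {reviewer: label}.
decisions = {
    "evt-001": {"rev_a": "coachable", "rev_b": "coachable", "rev_c": "coachable"},
    "evt-002": {"rev_a": "coachable", "rev_b": "non_coachable", "rev_c": "coachable"},
    "evt-003": {"rev_a": "non_coachable", "rev_b": "non_coachable", "rev_c": "coachable"},
}

def pairwise_agreement(labels_by_reviewer: dict) -> float:
    """Fraction of reviewer pairs that assigned the same label to an event."""
    labels = list(labels_by_reviewer.values())
    pairs = list(combinations(labels, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# Low-agreement events point at guideline gaps, not bad reviewers.
for event_id, labels in decisions.items():
    score = pairwise_agreement(labels)
    if score < 1.0:
        print(f"{event_id}: agreement={score:.2f} -> audit guideline coverage")
```

Events with less-than-perfect agreement are exactly the ones worth auditing for ambiguous guidelines, which is the pattern the seatbelt and inattentive-driving examples surfaced.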
Once we saw that pattern, the fix became clearer. Instead of asking reviewers to rely on intuition, we added more objective anchors to guide decisions. For inattentive driving, we surfaced clearer indicators so reviewers weren’t making calls based on a single moment. For seatbelt-related events, we clarified how certain behaviors should be treated from a coachability standpoint, reducing ambiguity. Reviewers still applied judgment, but they were doing it against the same reference points. That change reduced variance, improved customer feedback, and helped us scale the program without sacrificing trust.
EW: You’ve led initiatives that reduced SER turnaround times by orders of magnitude in enterprise environments. From an operations and systems-design perspective, what architectural decisions made that level of compression possible without sacrificing quality or compliance?
SK: When I looked closely at SER turnaround times, it was clear that pushing reviewers harder wasn’t the answer. Most of the delay was coming from the system around them, not from the decisions themselves. A lot of time was being lost in pre- and post-processing – how events were prepared, enriched, queued, and finalized – so we reengineered those flows first to remove latency that had nothing to do with human judgment.
We also simplified the review experience itself. Reviewers didn’t need to watch entire videos end-to-end in most cases, so we surfaced the trigger point first while keeping full context available when needed. We reduced unnecessary clicks, embedded guidance directly into the review flow, and introduced skill-based routing so reviewers handled familiar event types instead of everything. Together, those changes shifted effort away from navigation and context-gathering and toward judgment. By letting automation handle preparation and routing while humans focused on decisions, we were able to compress turnaround times significantly without sacrificing accuracy or compliance.
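Skill-based routing of this kind can be expressed as a small queue-assignment function. The sketch below is illustrative only – the event types, reviewer pools, and least-loaded heuristic are assumptions, not Samsara’s implementation:

```python
# Hypothetical skill-based router: send each event to a reviewer pool
# trained on that event type, falling back to a generalist queue.
REVIEWER_POOLS = {
    "seatbelt": ["rev_a", "rev_b"],
    "following_distance": ["rev_c"],
    "inattentive_driving": ["rev_d", "rev_e"],
}
FALLBACK_POOL = ["rev_f"]

def route_event(event_type: str, queue_depths: dict) -> str:
    """Pick the least-loaded qualified reviewer for this event type."""
    pool = REVIEWER_POOLS.get(event_type, FALLBACK_POOL)
    return min(pool, key=lambda rev: queue_depths.get(rev, 0))

depths = {"rev_a": 12, "rev_b": 7, "rev_d": 3}
print(route_event("seatbelt", depths))  # -> rev_b (shorter queue)
```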
EW: Across your career, you have led initiatives that delivered dramatic margin expansion – for example, taking a program from roughly 40% to over 85% margins – while increasing accuracy and coverage. What does that experience teach about the misconception that AI-driven quality and cost efficiency are inherently at odds?
SK: When I took over the program, the prevailing assumption was that it was already operating close to its margin ceiling. The perception was that if we expanded coverage or pushed for higher accuracy, margins would naturally get worse, so the program had to be tightly constrained to avoid dilution.
One of the first things I did was work closely with our finance and strategy teams to understand the actual economics of the operation. We broke it down to something very simple: the cost of reviewing a single event. From there, we tied that cost back to license volume and revenue. That exercise was eye-opening. It gave us a true picture of the program’s margins for the first time, and it also showed us exactly which levers mattered if we wanted to improve them. It sounds like basic math, but no one had really looked at the program through that lens before.
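The per-event math he describes is simple enough to sketch. Every figure below is invented for illustration; only the structure of the calculation mirrors the exercise:

```python
# Illustrative unit economics: all numbers are made up for this sketch.
reviewer_cost_per_hour = 25.0       # fully loaded labor cost
events_reviewed_per_hour = 40       # reviewer throughput
events_per_license_per_month = 30   # review volume a license generates
revenue_per_license_per_month = 50.0

cost_per_event = reviewer_cost_per_hour / events_reviewed_per_hour
cost_per_license = cost_per_event * events_per_license_per_month
margin = 1 - cost_per_license / revenue_per_license_per_month

print(f"cost per event:   ${cost_per_event:.2f}")         # $0.62
print(f"cost per license: ${cost_per_license:.2f}/month")  # $18.75
print(f"program margin:   {margin:.0%}")                   # 62%
```

The levers fall straight out of the formula: automation raises events per hour, lower-cost regions reduce cost per hour, and margin moves without constraining coverage.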
That clarity exposed a number of guardrails we had put in place based on the wrong assumptions. We were limiting scale because we thought margins would collapse, when in reality they didn’t have to. Once we understood that, we focused on taking the easiest, most repetitive work away from reviewers through AI and automation, reducing the cost per event, and then shifting the remaining work to a lower-cost region. As a result, margins increased significantly while accuracy and coverage improved at the same time. That experience made it very clear to me that quality and efficiency only appear to be at odds when you’re operating on assumptions instead of real economics.
EW: Your three U.S. patents address event analysis, vehicle spacing, and dynamic geofencing – each a safety-critical problem with real-world consequences. How do you approach inventing in domains where false positives and false negatives both carry operational risk, and how does that mindset differ from purely academic AI research?
SK: Most of my invention work started in customer business reviews, not in model development. In those reviews, I kept seeing the same pattern: customers had a lot of data, but no clear, holistic view of their fleet that told them where to act and when. The impact only really landed when I pulled together insights for them and walked through what had happened over the previous week or month or quarter. That was a clear signal to me that we weren’t presenting information in a way that helped customers understand risk or take action on their own. My first patent came directly out of that realization. It focused on how safety events are analyzed and surfaced so customers can see patterns and act without relying on an SME to explain the data.
The second patent came from a similar place. In multiple business reviews and support tickets, customers told us that our following-distance alerts didn’t feel accurate or actionable. When we dug into it, we realized part of the issue was how we defined correctness. Our evaluation wasn’t precise enough at highway speeds, and we were losing important signals because of how coarse that definition was. Once we increased the resolution of how we evaluated the following distance, the system aligned much better with real-world driving behavior.
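Following distance is commonly evaluated as time headway – the gap to the vehicle ahead divided by speed – and a small sketch shows why resolution matters at highway speed. The threshold and sample values here are illustrative assumptions, not the patented method:

```python
# Time headway: headway_s = gap_m / speed_mps.
UNSAFE_HEADWAY_S = 1.5  # a commonly cited minimum safe following time

def headway_seconds(gap_m: float, speed_mps: float) -> float:
    return gap_m / speed_mps if speed_mps > 0 else float("inf")

# At 30 m/s (~67 mph), a 40 m gap is only ~1.3 s of headway.
samples = [(40.0, 30.0), (60.0, 30.0), (25.0, 12.0)]
for gap, speed in samples:
    h = headway_seconds(gap, speed)
    flag = "UNSAFE" if h < UNSAFE_HEADWAY_S else "ok"
    print(f"gap={gap:5.1f} m  speed={speed:4.1f} m/s  headway={h:4.2f} s  [{flag}]")
```

A distance-only rule misses this: 40 m looks generous in town but is tight at highway speed, which is why evaluating in time rather than raw distance aligns much better with real driving behavior.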
In both cases, I was balancing the risk of over-alerting with the risk of missing genuinely unsafe behavior. That’s where my mindset differs from academic AI research. My goal is to build systems people trust in real operations, rather than optimize metrics in isolation.
EW: Across both Samsara and in your previous role at VMware, you’ve repeatedly designed rule-based and AI-assisted systems to detect fraud, risk, or abnormal behavior at scale. How do you decide when a deterministic, rules-first approach is superior to a machine-learning model – and when hybrid architectures are unavoidable?
SK: Earlier in my career at VMware, the problems I was working on were much more bounded. When I was building indicators to flag risky partner registrations, the patterns were fairly obvious once I looked at enough cases, such as unusually high volumes in a short time window, repeated submissions with slight variations, or activity that didn’t match the normal onboarding flow.
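Checks like those translate naturally into deterministic, auditable rules. A minimal sketch, where the field names, thresholds, and normalization are hypothetical rather than VMware’s actual logic:

```python
from datetime import datetime, timedelta

# Hypothetical thresholds; the limits are illustrative assumptions.
MAX_PER_WINDOW = 5
WINDOW = timedelta(hours=1)

def flag_risky(registrations: list) -> list:
    """Deterministic, explainable checks over partner registrations."""
    flags = []
    registrations = sorted(registrations, key=lambda r: r["submitted_at"])
    for i, reg in enumerate(registrations):
        # Rule 1: unusually high volume in a short time window.
        recent = [r for r in registrations[: i + 1]
                  if reg["submitted_at"] - r["submitted_at"] <= WINDOW]
        if len(recent) > MAX_PER_WINDOW:
            flags.append((reg["id"], "high_volume"))
        # Rule 2: repeated submissions with slight name variations.
        norm = reg["company"].lower().replace(" ", "")
        if any(norm == r["company"].lower().replace(" ", "")
               for r in registrations[:i]):
            flags.append((reg["id"], "near_duplicate"))
    return flags

regs = [
    {"id": 1, "company": "Acme Corp", "submitted_at": datetime(2024, 1, 1, 9, 0)},
    {"id": 2, "company": "ACME CORP", "submitted_at": datetime(2024, 1, 1, 9, 5)},
]
print(flag_risky(regs))  # -> [(2, 'near_duplicate')]
```

Every flag is traceable to an explicit rule, which is exactly the auditability that makes a rules-first approach attractive in bounded domains.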
Where rules start to break down is when human behavior or environmental context varies too much to capture cleanly in logic. A good example is drowsiness detection. Visual cues like eye closure or head movement can vary widely based on lighting conditions, camera angle, or individual differences. A rigid rule that works in one scenario can quickly generate false positives in another.
Because of that, I don’t approach these problems by picking rules or machine learning upfront. I start by asking which parts of the workflow need to behave consistently and which parts are likely to vary. Rules still work well for the predictable pieces, but once variability starts to dominate, that’s where machine learning becomes necessary.
EW: You’ve overseen global operations involving hundreds of people, multimillion-dollar budgets, and 24×7 coverage, while still engaging deeply with analytics, dashboards, and model performance. How do you personally stay close enough to the data to make good decisions without becoming a bottleneck?
SK: When I first started managing larger, global teams, I learned pretty quickly that trying to stay close to everything myself didn’t scale. Early on, especially in SER, I found myself owning too much of the quality and monitoring work because we didn’t have a dedicated Quality Assurance (QA) function. I built automations and dashboards to surface leading indicators, which helped, but acting on those signals still depended on me. That quickly became a bottleneck.
The fix wasn’t more visibility; it was redistributing ownership. I spearheaded the formation of a dedicated QA team and stayed closely involved in hiring, making sure we brought in people who could think critically and own outcomes. I did the same with workforce management and training, building those functions out so capacity planning, skill development, and day-to-day coaching didn’t roll up to me. We also gave supervisors more autonomy to act on data without waiting for escalation. Once those pieces were in place, the indicators I was watching became actionable across the organization. This has allowed me to stay close to the data while keeping decisions moving at scale.
EW: Your academic publications and book chapters explore how AI, IoT, and analytics can transform fleet operations at scale. In your view, what separates organizations that deploy AI from those that actually extract durable strategic value from it?
SK: What I’ve seen, especially with the recent AI surge, is that deploying AI has become relatively easy. Teams are quickly building tools to speed up work or reduce operational friction. In my work advising teams on AI adoption, the harder problems showed up after deployment. We started seeing duplicated AI efforts solving the same problem in slightly different ways, and it wasn’t always clear who owned those tools once they were live. This is something I’ve also explored in my academic work – how AI only creates lasting value when it is tied to governance and decision-making, not just deployment.
I believe this is where durable value starts to separate from experimentation. AI systems need ownership and lifecycle management just like any other part of the operating model. Someone has to decide which tools should be scaled, which should be integrated more deeply into workflows, and which should be retired when they’re no longer useful or are duplicative. In my experience, if no one owns AI after it goes live, it slowly stops being useful.
EW: Fleet safety, logistics, and connected operations increasingly operate under regulatory, ethical, and public scrutiny. Based on your experience designing safety systems and validation frameworks, what responsibilities do AI and operations leaders have that go beyond traditional KPIs?
SK: One thing I’ve learned is that when you’re building safety systems, you can’t look at success through KPIs alone. A decision doesn’t stop at a metric – it flows through an entire ecosystem. It affects drivers, customers, support teams, legal and compliance groups, and sometimes regulators. If you don’t understand how those pieces connect, it’s easy to optimize one part of the system while creating risk somewhere else.
That’s why I try to stay close to both internal and external signals. Internally, I focus on understanding downstream team impact and regularly checking how customers are actually experiencing the output, not just whether accuracy numbers look good. Externally, I make it a point to stay engaged with the broader industry by attending and speaking at summits, participating in roundtables, and being active in forums like Samsara’s safety summits. Those conversations are often where early signals show up – whether it’s shifting customer expectations, new regulatory direction, changes around PII, or emerging technologies that will soon matter.
EW: Your career started with large-scale physical infrastructure – power distribution, fault recovery, and real-time operational systems – long before “AI at scale” became fashionable. How has that hands-on experience with real-world constraints shaped the way you design digital and AI-enabled systems today?
SK: My early work in power distribution was very execution-heavy and put me directly in front of customers during high-stress situations. When there was an electrical fault, the expectation was often to restore power within a couple of hours – and if it was peak summer, the pressure escalated quickly. This was about more than fixing infrastructure: it meant managing frustrated communities in real time while coordinating teams to resolve the issue as fast as possible. Those environments were chaotic, but they forced you to make decisions under pressure and communicate clearly when emotions were running high.
That experience taught me two things that still shape how I work today. First, systems need to be designed for failure, not just normal operation. In the field, you learn very quickly that things will go wrong, and what matters most is how predictable and recoverable the system is when they do. Second, I learned the importance of stakeholder alignment. Whether it was restoring power or upgrading infrastructure – like replacing a transformer to handle higher summer demand – success depended as much on working through resistance and building local support as it did on the technical solution itself. Those lessons carry directly into how I design AI-enabled systems now. I think carefully about failure modes, escalation paths, and how people will react when the system is under stress, because that’s when design decisions matter most.
EW: The AI landscape is evolving faster than most careers can keep pace with. How do you stay relevant – not just aware, but genuinely fluent – in a field that reinvents itself every few months?
SK: I read and follow what’s happening in the field, but I’ve learned that reading alone doesn’t build judgment. What’s helped me most is actually building things and realizing pretty quickly where my thinking was wrong.
This is why, after receiving my MBA, I pursued post-graduate education in Generative AI for Business Applications at UT Austin’s McCombs School of Business. The focus is very applied, using tools like LLMs and retrieval-augmented generation (RAG) in real business scenarios rather than theoretical exercises. I also spend time learning from peers through the AI Advisory Council at Products That Count, where product and AI leaders regularly compare notes on what’s working, what isn’t, and where the hard problems still are. Between hands-on work and those conversations, I stay grounded in how the technology actually behaves in practice, not just how it’s described.
EW: Looking ahead, as generative AI, IoT, and autonomous decision support converge, what do you believe will be the defining operational advantage for fleet and logistics organizations over the next five years – and where are most leaders currently underestimating the complexity?
SK: I don’t think the defining advantage over the next few years will come from having better models. Access to AI, IoT data, and automation is becoming table stakes, which means the competitive gap is going to be much smaller than people expect. The real advantage will come from how well organizations understand their customers and industry pain points – even the small ones – and how quickly they can act on them.
Where I see leaders underestimating complexity is in everything that happens after deployment. Rolling tools out is easy. Getting people to actually use them consistently is much harder. That work shows up in workflow design, training, ownership, and governance – not just in the technology itself. Over time, that difference is what separates teams that sustain momentum from those that stall.