Designing for Accountability: How Traceable UX Is Redefining Trust in AI and Enterprise Systems

In intelligent systems, trust is currency. Whether an algorithm is optimising financial decisions or analysing biological samples, users must understand how and why it reaches its outcomes. The concept driving that clarity is traceable design, a discipline that unites human–computer interaction, product usability, and system accountability.

To unpack what that means in practice, we spoke with Jialun Sun, Project Manager at TechExcel and Co-Founder & Chief Product Officer at Labro Inc., a biotech company developing AI-powered tools that bring precision, speed, and transparency to laboratory workflows. He blends a background in Human–Computer Interaction with hands-on product design, using UX principles to make scientific automation both efficient and accountable. Jialun also serves as a Paper Reviewer at the SARC Journals, where he assesses research on explainable AI and user-centered system design, work that reinforces his conviction that every intelligent product must be traceable, interpretable, and trustworthy by design.

Jialun, you’ve described “traceable design” as a framework for both usability and accountability. How do you define it?

Traceable design is the idea that every decision a system makes should be visible, explainable, and verifiable by the user—not hidden inside a model or backend log. It goes beyond aesthetics or smooth interactions. In practical terms, it means that every workflow, from data capture to decision output, carries its own audit path. Users can trace what happened, when, and why, without relying on engineering teams to interpret the system’s logic for them.

In environments like laboratories, that level of clarity is crucial. A missed sample annotation or an unchecked segmentation step can alter results downstream. The design has to surface those dependencies before they become errors. That's why, at Labro, we embed traceability into the interface itself. Users see lineage in real time: where data originates, how it's transformed, and which algorithmic rules or parameters were applied.
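To make that concrete, here is a minimal sketch of what such a lineage record might look like; the structure and field names are illustrative assumptions, not Labro's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TransformStep:
    """One step in a sample's lineage: what was applied, with which parameters."""
    name: str         # e.g. "cell_segmentation" (illustrative)
    parameters: dict  # the algorithmic rules or settings used
    applied_at: datetime

@dataclass
class LineageRecord:
    """Traceable history of one piece of data, from origin to current state."""
    origin: str  # where the data came from
    steps: list[TransformStep] = field(default_factory=list)

    def add_step(self, name: str, parameters: dict) -> None:
        self.steps.append(TransformStep(name, parameters, datetime.now(timezone.utc)))

    def trace(self) -> str:
        """Render the lineage as the kind of plain-language trail a user could review."""
        lines = [f"origin: {self.origin}"]
        for step in self.steps:
            lines.append(f"{step.applied_at.isoformat()}  {step.name}  params={step.parameters}")
        return "\n".join(lines)

# Usage: record how a sample image was transformed before any result was reported.
record = LineageRecord(origin="slide A-102, scanner 3")
record.add_step("illumination_correction", {"method": "flat-field"})
record.add_step("cell_segmentation", {"model": "v2.1", "confidence_threshold": 0.85})
print(record.trace())
```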

It’s what I call human-verifiable intelligence: a form of transparency that restores human judgment in automated contexts. Traceable design ensures that AI doesn’t just make accurate predictions; it shows its reasoning clearly enough for humans to validate, challenge, or correct it. That’s how you close the confidence gap between automation and oversight, and that’s how trust becomes measurable.

Labro’s AI-Powered Automated Cell Counter has become an example of that philosophy in action. How does the product embody traceability?

We designed the counter to replace manual microscope-based cell counting, which typically requires around 15 minutes per sample. Our automated tool completes scanning, analysis, and result generation in under one minute, delivering more than 90% efficiency gains while maintaining a full audit trail. Every captured image, segmentation event, and flagged correction is stored with context so users can see exactly how the system arrived at its final count.
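A rough sketch of that idea, with event names and fields invented for illustration rather than taken from the product: each step of a count run is appended to a log that can later be replayed alongside the result.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    """One recorded step in a count run: what happened, when, and with what context."""
    kind: str    # e.g. "image_captured", "segmentation", "correction_flagged"
    detail: dict # contextual data: file names, parameters, who flagged what
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class CountRun:
    """A single automated count, with its full event history kept alongside the result."""
    sample_id: str
    events: list[AuditEvent] = field(default_factory=list)
    final_count: int | None = None

    def log(self, kind: str, **detail) -> None:
        self.events.append(AuditEvent(kind, detail))

    def summary(self) -> str:
        """Replay the run so a user can see how the final count was reached."""
        lines = [f"sample {self.sample_id}: {len(self.events)} recorded events"]
        lines += [f"  {e.at:%H:%M:%S} {e.kind} {e.detail}" for e in self.events]
        lines.append(f"  final count: {self.final_count}")
        return "\n".join(lines)

# Usage: the audit trail grows as the run proceeds, so nothing has to be reconstructed later.
run = CountRun(sample_id="A-102")
run.log("image_captured", path="A-102_field01.tiff")
run.log("segmentation", model="v2.1", cells_detected=412)
run.log("correction_flagged", region="field01/nw", reason="overlapping cells")
run.final_count = 409
print(run.summary())
```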

We also rethought the interaction flow. Instead of requiring technicians to calibrate manually, the device guides them step by step through loading slides, validating samples, and reviewing segmented outputs with clear overlays. In internal testing, this reduced onboarding time for novice lab staff by more than 50% and cut validation errors by roughly 30%.
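A minimal sketch of that kind of guided flow, with step names and checks as purely illustrative assumptions: each stage validates its inputs before the technician can move on.

```python
# A toy guided workflow: each step must pass its check before the next one unlocks,
# which is how calibration mistakes get caught at the step where they happen.
STEPS = [
    ("load_slide",      lambda state: state.get("slide_id") is not None),
    ("validate_sample", lambda state: state.get("focus_score", 0) >= 0.8),
    ("review_overlay",  lambda state: state.get("overlay_confirmed") is True),
]

def run_guided_flow(state: dict) -> str:
    """Walk the steps in order; stop at the first one whose check fails."""
    for name, check in STEPS:
        if not check(state):
            return f"blocked at '{name}': prompt the technician before continuing"
    return "all steps verified: results released with their audit trail"

# Usage: a sample that was loaded but never focus-checked gets stopped early.
print(run_guided_flow({"slide_id": "A-102"}))
print(run_guided_flow({"slide_id": "A-102", "focus_score": 0.92, "overlay_confirmed": True}))
```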

The financial implications are tangible. A mid-sized lab performing 100 counts per week can save roughly $8,000 per technician annually by reducing review time and eliminating repeat analyses. For small research teams, that difference isn’t just productivity; it’s operational survival.

How does this focus on traceability influence collaboration between teams within Labro?

It’s reshaped how we design and ship products. Traditionally, lab instruments are built sequentially: hardware first, software second, and user interface last. We reversed that order. Design drives the process because it defines how information is surfaced, verified, and acted upon.

Every iteration is documented in what we call a feedback map—a structured record of changes, reasons, and impact. That map travels through engineering, manufacturing, and QA. It’s not a static document; it’s a shared accountability system. By making rationale traceable, we’ve cut cross-team miscommunication by more than half and reduced prototype-to-production hand-off delays by 25%.
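In spirit, a feedback map entry is a change record that carries its own rationale. A hypothetical sketch, with field names chosen for illustration rather than taken from Labro's template:

```python
from dataclasses import dataclass

@dataclass
class FeedbackMapEntry:
    """One design iteration, recorded with its reasoning so downstream teams inherit context."""
    change: str        # what was altered
    reason: str        # why it was altered (the rationale that must be explainable)
    impact: str        # observed or expected effect
    owner: str         # who made the call
    affects: list[str] # which teams need to act on it

entry = FeedbackMapEntry(
    change="Move sample-validation prompt before slide scan",
    reason="Technicians skipped validation when it appeared after scanning",
    impact="Fewer validation errors in pilot testing",
    owner="product design",
    affects=["engineering", "QA", "manufacturing"],
)

# The same record travels from design through engineering and QA,
# so the question "why was this changed?" never depends on tribal memory.
print(f"{entry.change} -> {entry.reason} (for: {', '.join(entry.affects)})")
```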

The philosophy is simple: if a decision can’t be explained, it can’t be shipped.

In your recent scholarly paper, “Transforming Revenue Management with UX-Driven GenAI for Enterprise CPQ Systems,” you explored how generative AI and UX design jointly improve pricing accuracy and user adoption. What parallels do you see between that research and Labro’s work on traceable systems?

The connection lies in explainability. The research showed that GenAI-enabled CPQ systems improved deal win rates, pricing accuracy, margins, and quote cycle times. Still, those gains depended on how clearly the interface conveyed the reasoning behind each recommendation. GenAI features drove revenue outcomes, yet UX determined whether teams actually trusted and used the system. When the rationale was visible, adoption rose; when it was opaque, performance gains flattened.

At Labro, it’s the same concept applied to biological intelligence. Our AI segmentations must be explainable at the visual level: users see confidence overlays, uncertainty zones, and correction paths. This not only accelerates analysis but also teaches the AI in context. Traceable UX transforms opaque automation into a transparent partnership between human and machine.
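To illustrate the mechanism (the threshold and data structures here are assumptions, not the product's actual output), each segmented region can carry a confidence score, and anything below the cutoff is routed to a human correction path instead of being counted silently:

```python
# Toy confidence overlay: regions below the threshold become "uncertainty zones"
# that are drawn for the user and queued for manual review instead of auto-counted.
CONFIDENCE_THRESHOLD = 0.80  # illustrative cutoff

segmented_regions = [
    {"id": "r1", "cells": 118, "confidence": 0.97},
    {"id": "r2", "cells": 42,  "confidence": 0.63},  # overlapping cells, model unsure
    {"id": "r3", "cells": 230, "confidence": 0.91},
]

auto_counted = [r for r in segmented_regions if r["confidence"] >= CONFIDENCE_THRESHOLD]
needs_review = [r for r in segmented_regions if r["confidence"] < CONFIDENCE_THRESHOLD]

print("counted automatically:", sum(r["cells"] for r in auto_counted))
for region in needs_review:
    # In the interface this would be an overlay the user can accept or correct;
    # the correction is then logged and can feed back into model improvement.
    print(f"review {region['id']}: confidence {region['confidence']:.2f} below threshold")
```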

Both examples prove that trust scales only when visibility scales. Without that, even the most advanced AI becomes an expensive liability.

You’ve also judged the 2025 Business Intelligence Awards. How has that experience influenced your view on what defines innovation in AI-driven products?

What stood out to me was that innovation isn’t about novelty anymore; it’s about measurable accountability. The products that impressed me weren’t just automating tasks; they were quantifying reliability. They showed how a model’s behaviour could be tracked, explained, and corrected without breaking the user experience.

At Labro, that perspective reinforces our approach. Whether it’s audit logs embedded in the interface or version-controlled configurations for each AI model release, we want users to understand the lifecycle of every decision. When design leads transparency, compliance becomes inherent rather than enforced.
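A small sketch of what a version-controlled release configuration could capture; every field here is an assumption rather than Labro's real manifest. The point is that each model release ships with the parameters and references needed to trace any past decision back to it.

```python
import hashlib
import json

# Hypothetical release manifest: enough metadata to say, for any past result,
# exactly which model version and settings produced it.
release_config = {
    "model_name": "cell-segmenter",
    "model_version": "2.1.0",
    "training_data_ref": "dataset-2024-11",
    "confidence_threshold": 0.80,
    "released_by": "ML team",
}

# A content hash makes silent edits to a released configuration detectable.
canonical = json.dumps(release_config, sort_keys=True).encode()
release_config_id = hashlib.sha256(canonical).hexdigest()[:12]

print(f"release {release_config['model_version']} -> config id {release_config_id}")
# Results produced by the model would store this config id, so an audit-log entry
# in the interface can always be traced back to the exact release that generated it.
```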

Looking ahead, how do you see traceable UX shaping the next generation of intelligent systems?

I think we’re approaching a shift where design becomes the governance layer of AI. Regulations will evolve, models will change, but design will remain the interpreter, translating complexity into comprehension.

In the next few years, the most competitive systems won’t be those that act autonomously; they’ll be the ones that can explain their autonomy. Traceable UX will drive that. It’ll define standards for explainable interfaces, data lineage visualisation, and decision reproducibility, not as optional features but as product fundamentals.

This next phase of traceable design is about extending those principles into a broader framework for verifiable automation: systems that can learn, adapt, and still remain accountable to the people who rely on them. That, ultimately, is the real frontier of design integrity.
