Data is all around us. We increasingly see beautiful graphs and tables built on data analysis. Yet behind that outward clarity there often lie errors, inaccuracies, or simply an incomplete picture of what is happening.
Even researchers face this problem. An international study published in PLOS ONE in 2025 found that more than 80% of university professors in India and the United States are aware of problems with the reproducibility of scientific results. Yet ensuring transparency in their work remains difficult, because tools that record each step of an analysis are scarce.
To understand how analytics can mislead, where errors most often hide, and whether you can trust the visualization in front of you, we turned to talented software engineer Kiran Gadhave. While studying for his PhD at the University of Utah, he developed Trrack, a system that records every step of data analysis; his work on it has been presented in peer-reviewed publications at the Computer-Human Interaction (CHI) conference in 2022 and at the PacificVis conference in 2022 and 2024.
If you can’t see how a chart was made, don’t rush to believe it.
“Any visualization is not just a picture. There is a chain of actions behind it: what data was selected, how it was processed, what was hidden or highlighted,” says Kiran.
Many charts and dashboards show the finished result. However, how exactly it was obtained often remains unknown. Were filters applied? Were “inconvenient” values removed? Were axes, scales, or colors changed? All of this can significantly affect the output.
A good sign is a chart that comes with a history of actions, a link to the original dataset, or a brief description of the method. If none of these are present, you are left to take the author on trust, and that trust is not always justified.
Researchers from the Visualization Design Lab at the University of Utah (USA) encountered this problem when creating Sanguine, a medical system for analyzing blood transfusion data. The team integrated Kiran Gadhave’s Trrack library into the system to make the analysis process more transparent and reproducible. Every action a clinician takes – applying filters, switching between indicators, adding comments – is automatically saved as an interactive history. This makes it possible to explore alternative scenarios, undo steps, and share exact analysis sessions. Such an approach is essential in clinical practice, where accuracy and consistency matter most.
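To make the idea concrete, here is a minimal TypeScript sketch of an interaction history of the kind described above. It is not Sanguine’s or Trrack’s actual code; the state shape, the action names, and the ProvenanceLog class are illustrative assumptions. The point is the underlying idea: every user action is stored as data, so the current view can always be recomputed, undone, or shared.

```typescript
// Hypothetical sketch, not Sanguine's or Trrack's actual code: every user
// action is stored as data, so the current view can always be recomputed,
// undone, or shared.

type AnalysisState = {
  filters: string[];      // e.g. "surgery=cardiac"
  metric: string;         // which indicator is currently shown
  annotations: string[];  // free-text comments left by the analyst
};

type Action =
  | { kind: 'apply-filter'; filter: string }
  | { kind: 'switch-metric'; metric: string }
  | { kind: 'annotate'; note: string };

type HistoryEntry = { label: string; action: Action; timestamp: number };

// Apply one recorded action to a state; never mutates, always returns a new object.
function applyEntry(state: AnalysisState, entry: HistoryEntry): AnalysisState {
  const a = entry.action;
  switch (a.kind) {
    case 'apply-filter':
      return { ...state, filters: [...state.filters, a.filter] };
    case 'switch-metric':
      return { ...state, metric: a.metric };
    case 'annotate':
      return { ...state, annotations: [...state.annotations, a.note] };
    default:
      return state;
  }
}

class ProvenanceLog {
  private entries: HistoryEntry[] = [];

  constructor(private initial: AnalysisState) {}

  // Record an action instead of mutating state silently.
  apply(label: string, action: Action): void {
    this.entries.push({ label, action, timestamp: Date.now() });
  }

  // This simplified undo drops the latest entry; the state is recomputed from the log.
  undo(): void {
    this.entries.pop();
  }

  // The current state is always derived by replaying the log from the start,
  // so the view and its recorded history can never drift apart.
  currentState(): AnalysisState {
    return this.entries.reduce(applyEntry, this.initial);
  }

  // Serializing the log is what makes a session shareable and reproducible.
  serialize(): string {
    return JSON.stringify({ initial: this.initial, entries: this.entries });
  }
}

// Example session: two filters, a metric switch, one undone step.
const log = new ProvenanceLog({ filters: [], metric: 'units-transfused', annotations: [] });
log.apply('Restrict to cardiac surgery', { kind: 'apply-filter', filter: 'surgery=cardiac' });
log.apply('Switch to cost per case', { kind: 'switch-metric', metric: 'cost-per-case' });
log.apply('Restrict to 2023', { kind: 'apply-filter', filter: 'year=2023' });
log.undo(); // removes the 2023 filter; the earlier steps still apply

console.log(log.currentState());
console.log(log.serialize());
```

Trrack itself goes further than this flat list: it keeps the history as a graph, so even steps that were later undone remain visible and can be revisited or branched from.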
A graph can be beautiful and precise, but still misleading.
“Graphs don’t lie outright. But they can ‘nudge’ you towards a conclusion if certain data, scales, or visual emphasis are chosen,” explains Kiran.
Sometimes the author of a report or presentation is not trying to hide the truth, but simply presents the data in the most persuasive light. For example, only one time period may be shown, outliers may be removed, or an axis scale may be chosen that makes a minimal difference look dramatic.
Always ask: What’s not shown? Can the original data be seen? Can the filter or range be switched for comparison? Flexibility and transparency are hallmarks of a good visualization.
An example of this approach is the Ferret project, created in the Visualization Design Lab at the University of Utah (USA). It is a tool built for experts who check scientific articles for reliability. To make the evaluation procedure more objective, the team integrated the Trrack system, developed by Kiran Gadhave, into Ferret. It tracks the reviewer’s steps: which parts of the charts they examined, which hypotheses they tested, and what actions they took. This reduces the risk of biased or erroneous conclusions, because the review process becomes verifiable.
A reliable result can be reproduced.
“If the same analysis can be repeated and the same result can be obtained, then the process was clear and honest,” emphasizes Kiran.
This is especially important in science, medicine, and education. Research that is not reproducible cannot be verified.
If you can see that the result can be recalculated, compared with the original data, or verified in another way, you have high-quality analytics. The more traces the work leaves, the easier it is to trust it.
A striking example of this approach is the reVISit project, in which Kiran was one of the main developers. This system, created in collaboration between the University of Utah (USA), Worcester Polytechnic Institute, and the University of Toronto (Canada), is designed to conduct online experiments with visualizations.
Trrack, developed by Kiran, has become a key component of reVISit. It allows you to track every action of the experiment participant, from clicks to changes in parameters. This data is used to “replay” the user’s session, analyze behavior, and compare different interface options.
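A rough sketch of what such a replay might look like is below; the event format and field names are assumptions for illustration, not reVISit’s actual data model. Once every participant action is recorded with a timestamp, the session state after each step can be reconstructed and compared across interface variants.

```typescript
// Hypothetical event format, not reVISit's actual data model: once every
// participant action carries a timestamp, the session can be reconstructed
// step by step and compared across interface variants.

type RecordedEvent = {
  timestamp: number;                // when the participant acted
  kind: 'click' | 'set-parameter';  // what they did
  target: string;                   // which element or parameter was affected
  value?: string | number;          // the new value, if any
};

type SessionState = {
  selections: string[];
  parameters: Record<string, string | number>;
};

// Replay returns the state after every step, so an analyst can "scrub"
// through the session much like a video recording.
function replay(events: RecordedEvent[]): SessionState[] {
  const states: SessionState[] = [];
  let state: SessionState = { selections: [], parameters: {} };

  for (const e of [...events].sort((a, b) => a.timestamp - b.timestamp)) {
    state =
      e.kind === 'click'
        ? { ...state, selections: [...state.selections, e.target] }
        : { ...state, parameters: { ...state.parameters, [e.target]: e.value ?? '' } };
    states.push(state);
  }
  return states;
}

// Example: a short recorded session from a hypothetical experiment participant.
const session: RecordedEvent[] = [
  { timestamp: 1, kind: 'set-parameter', target: 'chart-type', value: 'bar' },
  { timestamp: 2, kind: 'click', target: 'legend:2023' },
  { timestamp: 3, kind: 'set-parameter', target: 'scale', value: 'log' },
];

const steps = replay(session);
console.log(steps[steps.length - 1]); // the fully reconstructed final state
```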
Trrack has been implemented in a similar way in the Loops project, developed at the Visual Data Science Lab at Johannes Kepler University Linz, Austria. This system helps scientists track changes to data and code when building machine learning models. With Trrack’s built-in history system, researchers can revert to previous versions of the analysis and accurately reproduce experimental steps.
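As a final illustrative sketch (again with hypothetical names, not the Loops codebase), reverting to an earlier version amounts to keeping every recorded snapshot and simply moving a pointer back, so an exact earlier analysis can be reproduced rather than reconstructed from memory.

```typescript
// Hypothetical versioned history: each recorded step keeps a label and a
// snapshot, and "reverting" just moves the current pointer to an older entry.
type Snapshot = { label: string; params: Record<string, number> };

class VersionedHistory {
  private versions: Snapshot[] = [];
  private current = -1;

  commit(label: string, params: Record<string, number>): number {
    this.versions.push({ label, params: { ...params } });
    this.current = this.versions.length - 1;
    return this.current;
  }

  // Jump back to any earlier version; nothing is deleted, so later versions
  // stay available for comparison.
  checkout(index: number): Snapshot {
    this.current = index;
    return this.versions[index];
  }
}

const history = new VersionedHistory();
history.commit('baseline model', { learningRate: 0.01, epochs: 10 });
const tuned = history.commit('lower learning rate', { learningRate: 0.001, epochs: 10 });
console.log(history.checkout(0)); // reproduce the baseline exactly
console.log(history.checkout(tuned));
```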
Trust in data starts with transparency.
Visualizations have become part of everyday life. They influence which medicines we receive, how budgets are allocated, and which news stories appear in our feeds. Yet even the most beautiful chart can hide imprecise steps, arbitrary filters, or sloppy logic. Understanding what stands behind a graph has become a new form of digital literacy. Whether you work with data every day or simply read a report, it is worth asking one simple question: “How did you come to this conclusion?”
At the University of Utah, a research institution recognized globally for its cutting-edge work in data science and visualization, Kiran Gadhave served as a Data Visualization Researcher. In this capacity, he played a critical role in advancing the university’s capabilities in transparent and reproducible data analysis. The TypeScript library he developed, which captures and visualizes user interaction histories, was integrated into multiple university-led projects, including clinical decision-support tools such as Sanguine. This technology not only enhanced the accuracy and reliability of data-driven medical analyses but also positioned the University of Utah at the forefront of research in data provenance and interactive visualization. By enabling detailed tracking of analytical workflows, Kiran’s work supported the institution’s commitment to scientific rigor and innovation, making complex data processes accessible and verifiable to researchers and clinicians alike.
While some chase a polished presentation, others, like Kiran Gadhave, work to let us look inside the process. Technologies that show not only the result but also the path to it point toward a future where data can truly be trusted.
