This interview on TechBullion discusses the importance of interpretability in machine learning models, particularly in critical areas such as healthcare and finance. Valentyn Krykun’s research focuses on using mathematical tools to enhance the interpretability of neural network models, and he collaborates with experts in fields such as medicine and finance to apply this approach in practical decision support systems. Challenges include balancing interpretability with accuracy and addressing ethical concerns through collaboration with domain experts. Valentyn Krykun advises young researchers to build a strong understanding of both machine learning and mathematics and to stay current with ongoing research.

Good afternoon, could you tell us which specific area of machine learning you are working on?
I focus on researching the interpretability of machine learning models, with an emphasis on neural network structures and methods that help explain their functioning. My goal is to create tools that enhance the understanding and analysis of complex algorithms, especially in critical areas such as healthcare and finance.
How did you come to pursue this topic?
The fields that most urgently need artificial intelligence, such as healthcare, have been slow to adopt it because model outputs are hard to predict and difficult to analyze. That is precisely why I decided to focus on this area.
Valentyn, what is your vision for the future of interpretable machine learning models?
The interpretability of machine learning models is becoming increasingly important as algorithms grow more complex and their influence on critical decisions in medicine, finance, and industry intensifies. In the future, I foresee the development of hybrid models that combine the high accuracy of sophisticated neural networks with the transparency of classical mathematical methods. The primary goal is to create systems capable of explaining their predictions in ways that are accessible to both technical specialists and end-users.
What exactly do you focus on in your scientific work?
My work is dedicated to enhancing the interpretability of neural network models through mathematical tools such as Volterra series and Maclaurin series. These methods allow for the approximation of complex nonlinear dependencies within neural networks, transforming them into more comprehensible forms. For example, Volterra series are useful for modeling dynamic systems, which is beneficial for time series analysis, while Maclaurin series simplify complex functions into polynomial expressions that are easier to interpret.
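To make the idea concrete, below is a minimal sketch of the Maclaurin approach in Python with NumPy. The toy one-dimensional network, its hand-picked weights, the helper maclaurin_coeffs, and the degree-3 expansion order are all assumptions made for illustration; they are not the actual models or tooling from the research.

```python
import numpy as np

# Toy one-dimensional network: f(x) = w2 . tanh(w1 * x + b1) + b2
# (illustrative, hand-picked weights -- not the models discussed in the interview)
w1 = np.array([0.8, -1.2, 0.5])
b1 = np.array([0.1, 0.0, -0.3])
w2 = np.array([1.5, 0.7, -0.9])
b2 = 0.2

def net(x):
    """Scalar-input, scalar-output network with one tanh hidden layer."""
    return float(w2 @ np.tanh(w1 * x + b1) + b2)

def maclaurin_coeffs(f, h=1e-2):
    """Estimate the degree-3 Maclaurin coefficients of f with central differences."""
    c0 = f(0.0)                                                  # f(0)
    c1 = (f(h) - f(-h)) / (2 * h)                                # f'(0)
    c2 = (f(h) - 2 * f(0.0) + f(-h)) / h**2 / 2                  # f''(0) / 2!
    c3 = (f(2*h) - 2*f(h) + 2*f(-h) - f(-2*h)) / (2 * h**3) / 6  # f'''(0) / 3!
    return [c0, c1, c2, c3]

coeffs = maclaurin_coeffs(net)
poly = lambda x: sum(c * x**k for k, c in enumerate(coeffs))

# Near x = 0 the polynomial surrogate tracks the network closely, and its
# coefficients are directly readable: constant level, slope, curvature, ...
print("Maclaurin coefficients:", np.round(coeffs, 4))
for x in (-0.5, -0.1, 0.0, 0.1, 0.5):
    print(f"x={x:+.1f}  net={net(x):+.4f}  poly={poly(x):+.4f}")
```

The same idea extends to multivariate inputs, where the low-order coefficients act as readable feature effects and interaction terms, and Volterra kernels play an analogous role when the input is a time series rather than a single point.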
What are the advantages of this approach?
It not only increases trust in the models but also helps uncover hidden patterns that might otherwise go unnoticed. This is particularly crucial in areas where human lives or significant financial risks are at stake. Moreover, interpretable models simplify debugging and algorithm improvement, making them more adaptive and resilient to changes in input data.
How do you see the practical applications of your research?
My developments can be valuable in decision support systems where a high degree of transparency is required, such as in medical diagnostics or financial forecasting. Additionally, this approach is important for explainable artificial intelligence (XAI), which is actively being integrated into government and industrial projects.
What challenges do you face in your research?
One of the main challenges is balancing interpretability with model accuracy. Simplifying a model to make it more interpretable can sometimes lead to a loss of predictive power. Finding methods that maintain high accuracy while providing clear explanations is a continuous challenge. Additionally, computational complexity can increase when applying mathematical transformations to large-scale neural networks.
Are there any specific projects or collaborations you are currently working on?
Yes, I am currently collaborating with healthcare institutions to develop interpretable AI models for early disease detection. These projects aim to assist medical professionals in understanding AI-generated predictions, thereby improving diagnostic accuracy and patient outcomes. I am also involved in a financial analytics project focused on creating transparent risk assessment tools.
How do you ensure the ethical use of AI in your research?
Ethical considerations are integral to my work. I prioritize fairness, transparency, and accountability in AI systems. This includes conducting bias audits, ensuring data privacy, and developing models that provide explanations for their decisions. Collaborating with ethicists and domain experts helps address potential ethical issues early in the development process.
What advice would you give to young researchers interested in the field of interpretable machine learning?
I would advise them to develop a strong foundation in both machine learning techniques and mathematical theory. Understanding the theoretical underpinnings of algorithms is crucial for making them interpretable. Additionally, staying updated with the latest research and actively participating in interdisciplinary collaborations can provide valuable insights and foster innovation.
Thank you for the interview, Valentyn. Best of luck with your research!
Thank you, it was a pleasure to share my ideas.
