
‘My Research Has Evolved into a Broader and More Encompassing Vision’

Seungmin Jin
Photo courtesy of Seungmin Jin

Seungmin Jin, from South Korea, is researching the field of Explainable AI and is planning to defend his PhD thesis, ‘A Visual Analytics System for Explaining and Improving Attention-Based Traffic Forecasting Models’, at HSE University this year. In September, he passed the pre-defence procedure at the School of Data Analysis and Artificial Intelligence of the HSE Faculty of Computer Science. In this interview for the HSE News Service, he talks about his academic path and his plans for the future.

I'm Seungmin Jin, originally from South Korea and currently based in Moscow. I'm a doctoral candidate and researcher specialising in Explainable AI (XAI), with a particular focus on enhancing the interpretability of deep learning models. My journey began in the field of traffic forecasting, where I applied XAI principles to improve model performance. However, my expertise extends beyond traffic, and I'm passionate about making AI more transparent and interpretable across various domains.

Pursuing my PhD research at HSE was a well-considered choice: I recognised HSE's strong reputation in data analytics and its commitment to advancing XAI.

This alignment with my research interests led me to explore collaboration possibilities. As I delved into my studies, my research evolved from traffic forecasting to a broader focus on XAI, driven by the desire to make AI models more interpretable and trustworthy. My work with HSE has been instrumental in this evolution, providing invaluable resources and an environment conducive to innovative research.

The primary focus of my research is the advancement of Explainable Artificial Intelligence (XAI), a critical field in modern AI research. While my initial work centred on enhancing the interpretability of deep learning models in the context of traffic forecasting, my research has since evolved into a broader and more encompassing vision.

At its core, my research is driven by the need to address the inherent black-box nature of complex AI models, particularly deep learning models. These models, while highly powerful and capable of making accurate predictions, often lack transparency. Stakeholders, including domain experts and end-users, are frequently left in the dark when it comes to understanding why these models make specific decisions or predictions.

To bridge this gap, I have developed a novel Visual Analytics system, known as AttnAnalyzer, which serves as a pioneering solution to unravel the inner workings of deep learning models. This system enables users to explore the decision-making processes of these models in a highly interactive and intuitive manner. By visualising attention distributions and uncovering the intricate dependencies captured by the model, AttnAnalyzer provides a comprehensive view of how AI decisions are made.
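AttnAnalyzer itself is described in the thesis rather than reproduced here, but the core idea can be illustrated with a minimal sketch. The Python snippet below (all data and names are invented for illustration, not taken from AttnAnalyzer) plots the kind of attention distribution such a system visualises: a matrix showing how strongly each traffic sensor attends to every other sensor when an attention-based model produces a forecast.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-in for a model's attention output: in a real
# attention-based forecasting model, these weights would come from
# the softmax inside an attention layer, not from random logits.
num_sensors = 8  # e.g. road sensors in a traffic network
rng = np.random.default_rng(seed=0)

# Each row is a probability distribution over sensors: how much one
# sensor's forecast depends on every other sensor's recent traffic.
logits = rng.normal(size=(num_sensors, num_sensors))
attention = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# A heatmap of the attention matrix is the simplest visual summary
# of the dependencies the model has captured.
fig, ax = plt.subplots(figsize=(5, 4))
im = ax.imshow(attention, cmap="viridis")
ax.set_xlabel("Attended sensor")
ax.set_ylabel("Query sensor")
ax.set_title("Attention distribution (illustrative)")
fig.colorbar(im, ax=ax, label="Attention weight")
plt.show()
```

An interactive system like the one described above layers exploration on top of such views, letting users select time windows or sensors and see how the attention, and hence the model's reasoning, shifts.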

My research not only involves the creation and refinement of this Visual Analytics system, but also its application across various domains.

I aim to empower stakeholders, whether they are in traffic management, healthcare, finance, or other sectors, to gain deeper insights into AI-driven decisions. My work ensures that these decisions are not only accurate, but also understandable and trustworthy.

In essence, my research in Explainable AI transcends the boundaries of specific applications, encompassing a broader mission to democratise AI understanding. It is about making AI more accessible to all stakeholders, regardless of their technical expertise, and fostering a sense of trust and accountability in AI systems across diverse domains.

As for my future professional plans, I intend to apply for a position at HSE to continue my research. I'm deeply committed to the field of Explainable AI (XAI). While I've made significant contributions in the context of traffic forecasting, I aspire to apply XAI principles across diverse domains.

My vision is to collaborate with both academia and industry to develop cutting-edge XAI solutions that address the black-box nature of deep learning models.

I see myself contributing to the broader adoption of transparent and interpretable AI in critical areas such as healthcare diagnostics, financial risk assessment, and beyond. Ultimately, my aim is to promote the responsible and ethical use of AI by making its decision-making processes more understandable and accessible to all stakeholders.

While my work has involved significant collaboration with my Korean colleagues, I must acknowledge that without the guidance and support of my HSE-based supervisor, Professor Attila Kertesz-Farkas, I wouldn't have been able to complete this research.

I wish to express my deep gratitude to Professor Attila Kertesz-Farkas for his invaluable guidance and insights throughout my research journey. Under his leadership, the AIC Lab has been at the forefront of developing cutting-edge deep learning technologies for mass spectrometry data analysis in the fields of life sciences and biomedical applications.