Explainability in AI and the Role of Visualizations

Problem

Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. 

As AI becomes more advanced, humans are challenged to comprehend and retrace how an algorithm arrived at a result. The whole computation turns into what is commonly referred to as a “black box” that is nearly impossible to interpret. These black-box models are created directly from the data, and often not even the engineers or data scientists who build them can explain what exactly is happening inside or how the algorithm arrived at a specific result.
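
To make this concrete, the following is a minimal sketch of the gap that XAI tries to close, assuming Python with scikit-learn (the dataset and model choices are illustrative only): an accurate but opaque model is trained, and a simple post-hoc explanation is then derived via permutation feature importance.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Train a typical "black box": accurate, but its internal decision
    # process (an ensemble of deep trees) is hard to retrace by hand.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Post-hoc explanation: permutation importance measures how much the
    # test score drops when the values of one feature are shuffled.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: -pair[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")

Scores like these are the raw material that visual or verbal explanations must then encode for the user.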

Aim

This project has multiple aims, and several students can work simultaneously, each on one of the following challenges.

  1. For (SE) Seminar Students: You should provide a comprehensive overview of the literature on this topic. Possible research questions include: 
    1. How can visualizations support explanations?
    2. What types of explanations are possible?
    3. How can we visualize explanations?
    4. What types of visual encodings can we use?
    5. How can the user interact with the explanations (e.g., to ask for more details)?
    6. How can the user interact with the AI model (e.g., to steer and influence the modeling process)?
  2. For (PR) Project Students: You should read and analyze the relevant literature to get familiar with the topic. Then you should provide a practical implementation or analysis of one of the following:
    1. Use existing models and ML/AI tools, then explore and implement different types of explanations/verbalizations
    2. Design and implement explanations (e.g., in the form of text or visualizations) for existing AI models; a minimal verbalization sketch follows this list
    3. Implement different types of explanations and evaluate which is best suited for different tasks
    4. Implement interactive prototypes that allow the user to interact with and explore different explanations (e.g., ask for an explanation, modify it, provide feedback on it, ask for more details)
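
As a starting point for aims 2.1 and 2.2 above, here is a minimal sketch of a verbalization step that turns model-derived importance scores into a textual explanation; it is plain Python, and the function name, wording, and example scores are hypothetical.

    # Hypothetical sketch: verbalize feature-importance scores as text.
    def verbalize(importances: dict[str, float], top_k: int = 3) -> str:
        """Render the top-k features as a plain-language explanation."""
        ranked = sorted(importances.items(), key=lambda item: -item[1])
        parts = [f"'{name}' (score {score:.2f})" for name, score in ranked[:top_k]]
        return "The prediction was driven mainly by " + ", ".join(parts) + "."

    # Example with made-up scores, e.g., taken from a permutation-importance run:
    scores = {"worst area": 0.20, "mean radius": 0.12, "mean texture": 0.03}
    print(verbalize(scores, top_k=2))
    # -> The prediction was driven mainly by 'worst area' (score 0.20),
    #    'mean radius' (score 0.12).

An interactive prototype (aim 2.4) could wrap such a function in a user interface that lets the user adjust top_k, switch to a visual encoding, or give feedback on the explanation.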

Other information

Please contact Davide Ceneda with a detailed description of your aims (what particular challenge do you want to research/analyze/explore?). You can select some of the aims listed above, but you can also propose your own ideas. For practical projects, you are also encouraged to bring your own/favorite datasets.

Please note that for practical projects, knowledge of a programming language may be necessary (e.g., to implement a prototype).

Contact

Davide Ceneda

Further information

Area
Information Visualization (IV)
Visual Analytics (VA)
Previous knowledge
Relevant Literature:

Hohman, Fred, et al. "Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers." IEEE Transactions on Visualization and Computer Graphics 25.8 (2018): 2674-2693.

Carter, Shan, and Michael Nielsen. "Using Artificial Intelligence to Augment Human Intelligence." Distill 2.12 (2017): e9.

La Rosa, Biagio, et al. "State of the Art of Visual Analytics for Explainable Deep Learning." Computer Graphics Forum. Vol. 42. No. 1. 2023.

Spinner, Thilo, et al. "explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning." IEEE Transactions on Visualization and Computer Graphics 26.1 (2019): 1064-1074.

Sperrle, Fabian, et al. "A Survey of Human-Centered Evaluations in Human-Centered Machine Learning." Computer Graphics Forum. Vol. 40. No. 3. 2021.

Sevastjanova, Rita, et al. "Going Beyond Visualization: Verbalization as Complementary Medium to Explain Machine Learning Models." Workshop on Visualization for AI Explainability at IEEE VIS. 2018.

Language
English
Scope
SE
PR
Status
open