Post-hoc Explanations for Integrating User Feedback in Dimensionality Reduction
Real-world data have become increasingly complex and difficult to understand. However, they can be visualized using dimensionality reduction (DR) methods such as PCA, MDS, and t-SNE, which summarize each object in two dimensions so that entire datasets can be displayed on a screen. DR can also be used to inspect convolutional neural networks (CNNs), whose decisions are notoriously hard to grasp. In both cases, however, the visualizations produced by DR remain hard to interpret and give users no way to provide feedback. For real-world data this matters, as the same data can be viewed from different vantage points; for CNNs, users want not only to understand the learned embeddings but also to control them.

The goal of this research project is to create new, user-friendly ways to understand and constrain DR processes through post-hoc explanation (PHE) techniques. PHEs are external tools that explain the behavior of a model, particularly when the model is not directly interpretable. The project develops new algorithms that automatically build PHEs revealing the structure of DR visualizations. These PHEs are then used to constrain the DR method itself and align it with user needs: the user can indicate what is wrong with the DR explanation, or pre-emptively specify how the PHE should look for specific objects, thereby steering the DR toward the aspects of a complex dataset that interest them.

Since preprocessing may occur before the DR itself (e.g., a linear transformation of the data), one aspect of the project considers how to adapt this preprocessing to improve the subsequent DR result. Finally, the project considers how to leverage the developed PHEs to adapt CNNs once they have been visually validated through DR. Because user feedback is costly to obtain, there is also a focus on exploring how to collect and use it efficiently.
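To make the PHE idea concrete, here is a minimal sketch of one plausible way to explain the structure of a DR visualization post hoc: fit an interpretable surrogate model that predicts the visual groups in the embedding from the original features. The specific choices below (PCA as the DR method, k-means to mimic the groups a user sees, a shallow decision tree as the surrogate) are illustrative assumptions, not the project's actual algorithms.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

X, _ = load_iris(return_X_y=True)

# 1. Project the data to 2-D with a standard DR method.
emb = PCA(n_components=2).fit_transform(X)

# 2. Identify visual groups in the embedding (what a user "sees" on screen).
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(emb)

# 3. Explain the groups in terms of the ORIGINAL features with a shallow,
#    readable surrogate: its decision rules act as the PHE.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, groups)
print(export_text(tree))      # human-readable rules explaining the layout
print(tree.score(X, groups))  # fidelity of the explanation to the embedding
```

The surrogate's fidelity score indicates how faithfully the simple rules account for the visual structure; a low score would itself be a signal that the DR layout is hard to explain in terms of the input features.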
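The preprocessing idea can likewise be sketched in a few lines: a linear transformation applied before DR changes which aspects of the data the embedding emphasizes, so user feedback could be encoded as such a transformation. Here a diagonal weighting matrix down-weights features a hypothetical user marks as irrelevant; the weights and the choice of PCA are illustrative assumptions, not the project's method.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize features

# Hypothetical user feedback: features 0 and 1 are uninteresting -> shrink them.
W = np.diag([0.1, 0.1, 1.0, 1.0])

emb_plain = PCA(n_components=2).fit_transform(X)       # unconstrained DR
emb_guided = PCA(n_components=2).fit_transform(X @ W)  # DR after preprocessing
```

Comparing `emb_plain` and `emb_guided` shows that the guided embedding is dominated by the features the user kept, illustrating how adapting the preprocessing step can steer the subsequent DR result.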