User-steering interpretable visualization with probabilistic principal components analysis

Abstract

A lack of interpretability is a common problem in machine learning in general and in visualization in particular. Integrating user feedback into the visualization process is a potential solution. This paper shows that user knowledge, expressed as the positions of fixed points in the visualization, can be transferred directly into a probabilistic principal components analysis (PPCA) model to help the user steer the visualization. Our proposed interactive PPCA model is evaluated on several datasets to demonstrate the feasibility of creating explainable axes for the visualization.
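As a rough illustration of the idea sketched in the abstract, the snippet below fits a PPCA model by EM and treats the latent coordinates of a few user-pinned points as observed during the E-step, so the embedding is pulled toward the positions the user chose. The function name `ppca_em_with_fixed_points`, the toy data, and the specific clamping scheme are assumptions made for illustration only, not the paper's actual algorithm.

```python
# Minimal, illustrative sketch (not the paper's exact formulation): PPCA fitted by EM,
# where a few "fixed" points keep user-specified 2-D embedding positions by treating
# their latent variables as observed in the E-step.
import numpy as np

def ppca_em_with_fixed_points(X, fixed_idx=None, fixed_pos=None,
                              n_components=2, n_iter=100, seed=0):
    """EM for PPCA; rows listed in `fixed_idx` keep the latent positions in `fixed_pos`."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    q = n_components
    mu = X.mean(axis=0)
    Xc = X - mu
    W = rng.normal(scale=0.1, size=(d, q))
    sigma2 = 1.0

    fixed_idx = np.asarray(fixed_idx if fixed_idx is not None else [], dtype=int)
    if fixed_idx.size:
        fixed_pos = np.asarray(fixed_pos, dtype=float)   # shape (k, q)

    for _ in range(n_iter):
        # E-step: posterior means and aggregated second moments of the latent positions.
        M = W.T @ W + sigma2 * np.eye(q)
        Minv = np.linalg.inv(M)
        Ez = Xc @ W @ Minv                          # (N, q) posterior means
        Ezz = N * sigma2 * Minv + Ez.T @ Ez         # sum_n E[z_n z_n^T]

        # Clamp user-fixed points: replace their posterior statistics with the
        # user-chosen coordinates, treated as observed (zero posterior variance).
        if fixed_idx.size:
            Ezz -= fixed_idx.size * sigma2 * Minv + Ez[fixed_idx].T @ Ez[fixed_idx]
            Ez[fixed_idx] = fixed_pos
            Ezz += fixed_pos.T @ fixed_pos

        # M-step: update the loading matrix and the isotropic noise variance.
        W = (Xc.T @ Ez) @ np.linalg.inv(Ezz)
        sigma2 = (np.sum(Xc ** 2)
                  - 2 * np.sum((Xc @ W) * Ez)
                  + np.trace(Ezz @ W.T @ W)) / (N * d)
        sigma2 = max(sigma2, 1e-9)

    return Ez, W, sigma2


if __name__ == "__main__":
    # Toy usage: pin two samples of random data to user-chosen 2-D positions.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    Z, W, s2 = ppca_em_with_fixed_points(
        X, fixed_idx=[0, 1], fixed_pos=np.array([[3.0, 0.0], [0.0, 3.0]]))
    print(Z[:2])  # the two pinned points sit at the requested coordinates
```

In this sketch the constraint simply overrides the posterior statistics of the pinned points; the paper itself transfers the user feedback into the probabilistic model, so the details of how the constraint enters may differ.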

Publication
ESANN
Date
Links
PDF