Our Research

Our research focuses on Human-centered Machine Learning, covering research topics that include:

  • Interpretability and explainability (for dimensionality reduction, recommender systems…)

  • Constraints for encouraging trustworthiness, fairness and interpretability

  • Deep learning models with embedded invariance (rotation, translation…)

  • Robustness in machine learning (label noise, outliers…)

  • Stability of feature selection and dimensionality reduction methods (a short stability sketch follows this list)
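
To make the stability question in the last bullet concrete, here is a minimal sketch of one common way to quantify it: run the same feature selector on several bootstrap resamples of the data and measure how much the selected feature subsets agree. The random-forest importance ranking, the synthetic dataset and the mean pairwise Jaccard similarity are illustrative assumptions, not the specific selectors or measures studied in our papers.

```python
import numpy as np
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def selection_stability(X, y, select_k=10, n_bootstraps=20, seed=0):
    """Mean pairwise Jaccard similarity between the feature subsets
    selected on bootstrap resamples (1.0 = perfectly stable)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    subsets = []
    for _ in range(n_bootstraps):
        idx = rng.integers(0, n, size=n)              # bootstrap resample
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X[idx], y[idx])
        # Keep the indices of the k most important features.
        top = np.argsort(model.feature_importances_)[-select_k:]
        subsets.append(set(top.tolist()))
    # Average Jaccard similarity over all pairs of selected subsets.
    sims = [len(a & b) / len(a | b) for a, b in combinations(subsets, 2)]
    return float(np.mean(sims))

X, y = make_classification(n_samples=500, n_features=50,
                           n_informative=8, random_state=0)
print(f"stability: {selection_stability(X, y):.3f}")
```

A value near 1.0 means the selector picks nearly the same features regardless of the resample; much lower values signal that the selected subset is largely an artefact of the particular sample.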

Several of our projects tackle dimensionality reduction and deep learning from different perspectives: safety (enforcing constraints, fairness and testing for decision trees and neural networks, as well as developing new rotation-invariant CNNs), interactivity (integrating users into dimensionality reduction for visualization and representation learning) and interpretability/explainability (user studies, extensions of LIME, saliency maps, metrics for interpretability and explanations for unsupervised learning). Our work on robustness centers on label noise, using probabilistic models to account for the influence of noisy labels and other abnormal data; a minimal sketch of one such noise model appears below. We use a variety of models, such as neural networks, probabilistic models and kernel machines, and we collaborate with colleagues in application contexts such as Industry 4.0, open data, sign language, law, software engineering and physics, among others.
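
As one illustration of the probabilistic treatment of label noise mentioned above, the following is a minimal sketch, not a method from our papers: it assumes a known noise transition matrix T, with T[i, j] the probability that a true label i is observed as label j, and scores a classifier's probabilities against the observed noisy labels after mixing them through T (an approach often called forward loss correction). The symmetric 20% noise rate and the toy inputs are assumptions made only for this example.

```python
import numpy as np

def forward_corrected_nll(probs_true, noisy_labels, T):
    """Negative log-likelihood of observed noisy labels when the model
    predicts *true*-class probabilities and labels are corrupted by a
    known transition matrix T, with T[i, j] = P(observed j | true i)."""
    probs_noisy = probs_true @ T                  # P(observed label | x)
    picked = probs_noisy[np.arange(len(noisy_labels)), noisy_labels]
    return -np.mean(np.log(picked + 1e-12))       # small epsilon for safety

# Toy setup: 3 classes with symmetric 20% label noise (assumed rate).
c = 3
T = 0.8 * np.eye(c) + (0.2 / (c - 1)) * (np.ones((c, c)) - np.eye(c))

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(c), size=5)         # stand-in model outputs
labels = rng.integers(0, c, size=5)               # stand-in observed labels
print(f"corrected NLL: {forward_corrected_nll(probs, labels, T):.3f}")
```

In practice T is rarely known and must itself be estimated from data, which is a large part of what makes learning under label noise difficult.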