Trust Mechanisms for AI
As AI becomes increasingly ubiquitous and enters domains where it was unknown until recently (human resources management, public administration, fintech, basic sciences, e-health, justice, Industry 4.0, etc.), it is becoming clear that a relationship of trust must be established between users and AI. Whether they are experts or not, people confronted with AI are entitled to expect certain guarantees (reliability, data confidentiality, stability and consistency of decisions, etc.). The ARIAC work package "trust mechanisms for AI" addresses this need from several angles:
- federated learning based on blockchain technologies (such as TCLearn and BFAs);
- inductive logic for predictive justice;
- guarantees such as stability, choice of good metaparameters (especially in deep learning), self-assessment, and certification;
- robustness to objective variations, to the quality of supervision, and to unseen data, including at the level of internal representations;
- model interpretability via hybridization, distillation, constraints, and complexity-optimality trade-offs;
- interaction, for example for information visualization (infovis), and multi-agent systems providing by-design guarantees for robots and swarms.
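As an illustration of the blockchain-based federated learning angle, one can picture a server that averages model updates from several clients (FedAvg-style) and records each aggregate in a hash-chained, append-only ledger, so that any later tampering with a recorded model is detectable. This is a minimal sketch under assumed names (`federated_average`, `ModelLedger` are hypothetical), not the actual TCLearn or BFA protocol:

```python
import hashlib
import json

def federated_average(client_weights):
    """Average model weights from several clients (FedAvg-style)."""
    n = len(client_weights)
    keys = client_weights[0].keys()
    return {k: sum(w[k] for w in client_weights) / n for k in keys}

class ModelLedger:
    """Append-only, hash-chained log of aggregated model updates.

    Each block stores the weights together with the hash of the
    previous block, so modifying any past block breaks the chain.
    """
    def __init__(self):
        self.blocks = []

    def append(self, weights):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = json.dumps({"weights": weights, "prev": prev_hash},
                             sort_keys=True)
        block = {"weights": weights, "prev": prev_hash,
                 "hash": hashlib.sha256(payload.encode()).hexdigest()}
        self.blocks.append(block)
        return block["hash"]

    def verify(self):
        """Recompute every hash and check the chain links."""
        prev_hash = "0" * 64
        for block in self.blocks:
            payload = json.dumps({"weights": block["weights"],
                                  "prev": block["prev"]}, sort_keys=True)
            if (block["prev"] != prev_hash or
                    hashlib.sha256(payload.encode()).hexdigest() != block["hash"]):
                return False
            prev_hash = block["hash"]
        return True

# One federated round: three clients submit weights, the server
# averages them and records the aggregate on the ledger.
clients = [{"w": 1.0, "b": 0.5}, {"w": 3.0, "b": 0.1}, {"w": 2.0, "b": 0.3}]
ledger = ModelLedger()
ledger.append(federated_average(clients))
print(ledger.verify())  # True while the chain is intact
```

A real deployment would replace the single in-process ledger with a distributed consensus mechanism, which is precisely what blockchain-based schemes contribute.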