Swiss AI Research Overview Platform
Bringing together experts in machine learning, data analysis, genomics and human-machine interfaces, this project aims to apply the power of machine learning to genomics under a constraint of explainability of the learned decisions. From the learning perspective, it targets better insight into the automatic decision process, by designing more interpretable learning architectures and by developing tools for analysing the inner workings of neural networks. It will identify the human-machine interactions best suited to fostering the explainability of decisions, both for experts and for the general public. From the genomics perspective, it will attempt to uncover some of the secrets of the genome and its interactions with intra- and extracellular components, with the ambition of enabling the creation of new drugs for certain genetic diseases such as diabetes.
GraphNEx will contribute a graph-based framework for developing inherently explainable AI. Unlike current AI systems that utilise complex networks to learn high-dimensional, abstract representations of data, GraphNEx embeds symbolic meaning within AI frameworks. We will combine semantic reasoning over knowledge bases with simple modular learning on new data observations, to adaptively evolve the graphical knowledge base. The core concept is to decompose the monolithic block of highly connected layers of deep learning architectures into many smaller, simpler units. Each unit performs a precise task with an interpretable output, associating a value or vector with each node. Nodes will be connected depending on their (learned) similarity, correlation in the data, or closeness in some space. We will employ concepts and tools from graph signal processing and graph machine learning (e.g. Graph Neural Networks) to extrapolate semantic concepts and meaningful relationships from sub-graphs (concepts) within the knowledge base that can be used for semantic reasoning. By enforcing sparsity and domain-specific priors between concepts, we will promote human interpretability. The integration of game-based user feedback will ensure that explanations (and therefore the core mechanisms of the AI system) are understandable and relevant to humans. We will validate the GraphNEx framework, including both model performance and the performance of the explanations, with two application scenarios in which transparency and trust are critical: System Genetics, to assist the discovery of clinically and biologically relevant concepts in large, multimodal genetic and genomic datasets; and Privacy Protection, to safeguard personal information from unwanted, non-essential inferences from multimedia data (images, video and audio) as well as to support informed consent.
These applications, for which understanding the decision process is paramount for acquisition of new knowledge and for trust by the users, will demonstrate the utility of the GraphNEx framework to adapt to contexts where data are heterogeneous, incomplete, noisy and time-varying, providing new inherently explainable AI models, and a means to explain existing AI systems via post-hoc analyses.
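To make the core decomposition idea concrete, the following is a minimal, hypothetical sketch (not the GraphNEx implementation): small, single-purpose "units" each produce one interpretable node value, nodes are connected when their outputs correlate across samples (with a threshold enforcing sparsity), and one neighbourhood-averaging step illustrates the basic message-passing operation underlying Graph Neural Networks. All unit names and the threshold value are illustrative assumptions.

```python
import numpy as np

# Hypothetical interpretable units: each maps raw observations to a single
# human-readable scalar (e.g. a gene-activity summary). Names are illustrative.
UNITS = {
    "mean": lambda x: float(np.mean(x)),        # average signal level
    "variability": lambda x: float(np.std(x)),  # spread of the signal
    "peak": lambda x: float(np.max(x)),         # maximum response
}

def build_graph(samples, threshold=0.9):
    """Evaluate each unit on every sample, then connect nodes whose outputs
    are strongly correlated across samples; the threshold enforces sparsity."""
    names = list(UNITS)
    # rows: nodes (units), columns: samples
    feats = np.array([[UNITS[n](s) for s in samples] for n in names])
    corr = np.corrcoef(feats)                       # node-to-node correlation
    adj = (np.abs(corr) >= threshold).astype(float)
    np.fill_diagonal(adj, 0.0)                      # no self-loops
    return names, feats, adj

def propagate(feats, adj):
    """One message-passing step: each node averages its neighbours' values —
    the elementary operation at the heart of Graph Neural Networks."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                             # isolated nodes unchanged
    return adj @ feats / deg

rng = np.random.default_rng(0)
samples = [rng.normal(size=50) for _ in range(8)]
names, feats, adj = build_graph(samples)
smoothed = propagate(feats, adj)
```

Because every node corresponds to a named unit with a defined meaning, an edge or a propagated value can be read back in domain terms, which is the property the monolithic deep network lacks.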
Last updated: 30.05.2022