Variance Tolerance Factors For Interpreting All Neural Networks

Sichao Li, Amanda Barnard

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    1 Citation (Scopus)


    Black box models provide results for deep learning tasks without informative details about how those results were obtained. Knowing how input variables are related to outputs, in addition to why they are related, can be critical to translating predictions into laboratory experiments, or defending a model prediction under scrutiny. In this paper, we propose a general theory that defines a variance tolerance factor (VTF), inspired by influence functions, to interpret features in the context of black box neural networks by ranking the importance of features, and construct a novel architecture consisting of a base model and a feature model to explore feature importance in a Rashomon set that contains all well-performing neural networks. Two feature importance ranking methods in the Rashomon set and a feature selection method based on the VTF are created and explored. A thorough evaluation on synthetic and benchmark datasets is provided, and the method is applied to two real-world examples: predicting the formation of noncrystalline gold nanoparticles, and predicting the chemical toxicity of 1793 aromatic compounds exposed to a protozoan ciliate for 40 hours.
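    The paper's VTF definition and base/feature-model architecture are not reproduced in this record. As an illustrative sketch of the underlying idea only — ranking features across a set of similarly well-performing models (a Rashomon set) rather than a single model — one might do something like the following; the ridge-regularised linear models and the mean-absolute-coefficient importance proxy are assumptions for illustration, not the authors' method:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: y depends strongly on feature 0, weakly on feature 1,
    # and not at all on feature 2.
    n = 500
    X = rng.normal(size=(n, 3))
    y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

    def fit_ridge(X, y, lam):
        """Closed-form ridge regression; each lam yields a different
        well-performing model with slightly different parameters."""
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    # Crude stand-in for a Rashomon set: several models with near-identical
    # loss but distinct coefficient vectors.
    models = [fit_ridge(X, y, lam) for lam in (0.01, 0.1, 1.0)]

    # Importance proxy (an assumption, not the VTF): mean absolute
    # coefficient across the set, ranked in descending order.
    importance = np.mean([np.abs(w) for w in models], axis=0)
    ranking = np.argsort(importance)[::-1]
    print(ranking)  # feature 0 should rank first, feature 2 last
    ```

    The point of aggregating over several models is that a feature's importance in one well-performing model can differ from its importance in another; ranking over the whole set gives a more robust picture.
    
    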

    Original language: English
    Title of host publication: IJCNN 2023 - International Joint Conference on Neural Networks, Proceedings
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    ISBN (Electronic): 9781665488679
    Publication status: Published - 2023
    Event: 2023 International Joint Conference on Neural Networks, IJCNN 2023 - Gold Coast, Australia
    Duration: 18 Jun 2023 - 23 Jun 2023

    Publication series

    Name: Proceedings of the International Joint Conference on Neural Networks


    Conference: 2023 International Joint Conference on Neural Networks, IJCNN 2023
    City: Gold Coast

