Critical Challenges for the Visual Representation of Deep Neural Networks

Kieran Browne, Ben Swift, Henry Gardner

    Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review


    Artificial neural networks have proved successful in a broad range of applications over the last decade. However, significant concerns remain about their interpretability. Visual representation is one way researchers are attempting to make sense of these models and their behaviour. The representation of neural networks raises questions that cross disciplinary boundaries. This chapter draws on a growing body of interdisciplinary scholarship regarding neural networks. We present six case studies in the visual representation of neural networks and examine the particular representational challenges posed by these algorithms. Finally, we summarise the ideas raised in the case studies as a set of takeaways for researchers working in this area.
    Original language: English
    Title of host publication: Human and Machine Learning: Visible, Explainable, Trustworthy and Transparent
    Editors: Jianlong Zhou and Fang Chen
    Place of Publication: Switzerland
    Publisher: Springer, Cham
    ISBN (Print): 978-3-319-90402-3
    Publication status: Published - 2018
