Abstract
Artificial neural networks have proved successful in a broad range of applications over the last decade. However, significant concerns remain about their interpretability. Visual representation is one way researchers are attempting to make sense of these models and their behaviour. The representation of neural networks raises questions that cross disciplinary boundaries. This chapter draws on a growing body of interdisciplinary scholarship on neural networks. We present six case studies in the visual representation of neural networks and examine the particular representational challenges posed by these algorithms. Finally, we summarise the ideas raised in the case studies as a set of takeaways for researchers engaging in this area.
| Original language | English |
| --- | --- |
| Title of host publication | Human and Machine Learning: Visible, Explainable, Trustworthy and Transparent |
| Editors | Jianlong Zhou and Fang Chen |
| Place of publication | Switzerland |
| Publisher | Springer Cham |
| Pages | 119–136 |
| Volume | 1 |
| Edition | 1 |
| ISBN (print) | 978-3-319-90402-3 |
| DOIs | |
| Publication status | Published - 2018 |