Explaining Neural Networks by Decoding Layer Activations
- AI4CE
To better understand classifiers such as those based on deep learning models, we propose a `CLAssifier-DECoder' architecture (\emph{ClaDec}). \emph{ClaDec} helps users comprehend the output of an arbitrary layer in a neural network. It uses a decoder that transforms the non-interpretable representation of the given layer into a representation from a domain humans are familiar with, such as the training data. For example, in an image recognition problem, one can recognize what information a layer retains by contrasting reconstructed images of \emph{ClaDec} with those of a conventional auto-encoder (AE) serving as a reference. An extended version of \emph{ClaDec} also allows trading off human interpretability against fidelity by customizing explanations to individual needs. We evaluate our approach for image classification using convolutional neural networks. The qualitative evaluation highlights that reconstructed images (of the network to be explained) tend to replace specific objects with more generic object templates and provide smoother reconstructions. We also show that visualizations reconstructed from the encodings of a classifier capture more classification-relevant information than those of conventional AEs. This holds despite the fact that AEs retain more information about the original input.
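The core idea of contrasting a decoder trained on a classifier's frozen layer activations against a reference auto-encoder can be sketched in a toy setting. The following is a minimal illustration, not the paper's implementation: a fixed random ReLU projection stands in for the classifier layer, a least-squares fit stands in for training the ClaDec decoder, and a PCA reconstruction stands in for the reference AE. All names and choices here are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for (flattened) images.
X = rng.normal(size=(200, 8))

# Hypothetical "classifier layer": a fixed random ReLU projection standing in
# for the frozen activations of the layer to be explained.
W_enc = rng.normal(size=(8, 4))
H = np.maximum(X @ W_enc, 0.0)

# ClaDec decoder: map the frozen activations back to the input domain
# (least squares stands in for training a decoder with reconstruction loss).
W_dec, *_ = np.linalg.lstsq(H, X, rcond=None)
X_cladec = H @ W_dec

# Reference auto-encoder: trained purely for reconstruction, with no
# classification objective (rank-4 PCA stands in for such an AE).
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_ae = X.mean(0) + Xc @ Vt[:4].T @ Vt[:4]

err_cladec = np.mean((X - X_cladec) ** 2)
err_ae = np.mean((X - X_ae) ** 2)
```

Comparing the two reconstructions per input, rather than the aggregate errors, is what yields the explanation: whatever the ClaDec reconstruction loses relative to the reference is information the classifier layer discarded, and whatever it preserves is information the layer kept for classification.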