
Semantic Bottlenecks: Quantifying and Improving Inspectability of Deep Representations


Part of the Lecture Notes in Computer Science book series (LNIP, volume 12544)

Abstract

Today’s deep learning systems deliver high performance based on end-to-end training but are notoriously hard to inspect. We argue that inspectability is challenging for at least two reasons: (i) representations are distributed across hundreds of channels, and (ii) a unifying metric quantifying inspectability is lacking. In this paper, we address both issues by proposing Semantic Bottlenecks (SBs), integrated into pretrained networks, to align channel outputs with individual visual concepts, and by introducing the model-agnostic AUiC metric to measure this alignment. In a case study on semantic segmentation, we demonstrate that SBs improve the AUiC up to four-fold over regular network outputs. We explore two types of SB-layers in this work: while concept-supervised SB-layers (SSBs) offer the greatest inspectability, we show that the second type, unsupervised SBs (USBs), can match the SSBs by producing one-hot encodings. Importantly, for both SB types we recover state-of-the-art segmentation performance despite a drastic dimensionality reduction from thousands of non-aligned channels to tens of semantics-aligned channels on which all downstream results are based.
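The abstract describes the core mechanism: a low-dimensional, concept-aligned layer inserted into a pretrained network, on which all downstream predictions are based. Below is a minimal, hypothetical sketch of how such an SB-style bottleneck could be realized as a 1x1 convolution over backbone features; it is not the authors' implementation, and the module name, channel counts, and the softmax used to encourage near one-hot concept codes are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of an SB-style bottleneck:
# a 1x1 convolution projecting many non-aligned backbone channels onto
# a few concept channels that the downstream head would consume.
import torch
import torch.nn as nn


class SemanticBottleneck(nn.Module):
    """Projects D backbone channels onto K concept-aligned channels."""

    def __init__(self, in_channels: int, num_concepts: int):
        super().__init__()
        # Each output channel can be trained, with concept supervision (SSB)
        # or an unsupervised one-hot objective (USB), to respond to one concept.
        self.project = nn.Conv2d(in_channels, num_concepts, kernel_size=1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Softmax over channels yields near one-hot codes per spatial location,
        # an assumed stand-in for the one-hot encodings mentioned above.
        return torch.softmax(self.project(features), dim=1)


if __name__ == "__main__":
    # Example: reduce 2048 non-aligned channels to 32 concept channels
    # on a downsampled feature map before the segmentation head.
    sb = SemanticBottleneck(in_channels=2048, num_concepts=32)
    feats = torch.randn(1, 2048, 64, 128)   # placeholder backbone output
    concept_maps = sb(feats)                # shape: (1, 32, 64, 128)
    print(concept_maps.shape)
```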


Notes

  1. For brevity, we call all types of concepts simply: concept.

Acknowledgements

This research was supported by the Bosch Computer Vision Research Lab Hildesheim, Germany. We thank Dimitrios Bariamis and Oliver Lange for the insightful discussions.

Author information

Corresponding author

Correspondence to Max Maria Losch.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 6809 KB)

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Losch, M.M., Fritz, M., Schiele, B. (2021). Semantic Bottlenecks: Quantifying and Improving Inspectability of Deep Representations. In: Akata, Z., Geiger, A., Sattler, T. (eds.) Pattern Recognition. DAGM GCPR 2020. Lecture Notes in Computer Science, vol. 12544. Springer, Cham. https://doi.org/10.1007/978-3-030-71278-5_2

  • DOI: https://doi.org/10.1007/978-3-030-71278-5_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-71277-8

  • Online ISBN: 978-3-030-71278-5

  • eBook Packages: Computer Science, Computer Science (R0)