Contrasting contrastive and supervised models interpretability
Author(s)
You, Yejin.
Download: 1251801865-MIT.pdf (60.38 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Antonio Torralba and David Bau.
Abstract
In this thesis, we compare the representations of an unsupervised contrastive model to those of an equivalent supervised model using several deep neural network interpretability methods: network dissection, sparsity experiments, and saliency maps. Network dissection of the self-supervised contrastive and supervised models shows that the neurons of the contrastive model tend to learn about parts of an object (e.g., the top half of a dog or the left half of a person), while the neurons of the supervised model tend to learn about the entire object (e.g., a dog or a person). Sparsity experiments show that the representations learned by the contrastive model are less sparse than those learned by the supervised counterpart model. Saliency maps show that the contrastive model focuses more on specific parts of the input image. Finally, we find that the contrastive model's representations transfer better to fine-grained classification tasks than the supervised model's representations.
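For readers unfamiliar with the probes named in the abstract, the sketch below illustrates two of them in PyTorch: a simple activation-sparsity measure and a vanilla-gradient saliency map. This is not code from the thesis; the ResNet-50 backbone, the input file name, the near-zero threshold used to define sparsity, and the gradient-magnitude definition of saliency are all assumptions made for illustration.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# --- Sparsity probe (illustrative) --------------------------------------
def activation_sparsity(features: torch.Tensor, eps: float = 1e-3) -> float:
    # Fraction of near-zero entries in an (N, D) matrix of representations.
    # The threshold eps is an illustrative choice, not the thesis's definition.
    return (features.abs() < eps).float().mean().item()

# --- Vanilla-gradient saliency map (illustrative) -----------------------
model = models.resnet50().eval()  # stand-in backbone, randomly initialized here

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# "example.jpg" is a hypothetical input image path.
img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

logits = model(img)
score = logits[0, logits.argmax()]  # logit of the predicted class
score.backward()

# Saliency = per-pixel maximum absolute gradient over the colour channels.
saliency = img.grad.abs().max(dim=1).values.squeeze(0)  # 224 x 224 map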
Description
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February 2021. Cataloged from the official PDF of the thesis. Includes bibliographical references (pages 61-62).
Date issued
2021
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.