
Item Details


Released

Research Report

Escaping Plato's Cave: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes

MPS-Authors
/persons/resource/persons290511

Pham, Nhi
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

/persons/resource/persons45383

Schiele, Bernt
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

/persons/resource/persons283728

Kortylewski, Adam
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

/persons/resource/persons229482

Fischer, Jonas
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

Fulltext (public)

arXiv:2503.13429.pdf
(preprint), 10MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Pham, N., Schiele, B., Kortylewski, A., & Fischer, J. (2025). Escaping Plato's Cave: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes. Retrieved from https://arxiv.org/abs/2503.13429.


Cite as: https://hdl.handle.net/21.11116/0000-0010-EB66-3
Abstract
With the rise of neural networks, especially in high-stakes applications, these networks need two properties, (i) robustness and (ii) interpretability, to ensure their safety. Recent advances in classifiers with 3D volumetric object representations have demonstrated greatly enhanced robustness on out-of-distribution data. However, these 3D-aware classifiers have not been studied from the perspective of interpretability. We introduce CAVE - Concept Aware Volumes for Explanations - a new direction that unifies interpretability and robustness in image classification. We design an inherently interpretable and robust classifier by extending existing 3D-aware classifiers with concepts extracted from their volumetric representations for classification. Across an array of quantitative interpretability metrics, we compare against concept-based approaches from the explainable AI literature and show that CAVE discovers well-grounded concepts that are used consistently across images, while achieving superior robustness.
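
The abstract describes the approach only at a high level. As a purely illustrative aid, the sketch below shows one common way a concept-based readout can be attached to a 3D-aware feature extractor: concept activations are computed as similarities to learned prototype vectors and a linear layer maps them to class logits, so each prediction can be attributed to concepts. All module names, shapes, and the cosine-similarity scoring are assumptions made for this sketch; they are not taken from the paper or its released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConceptAwareClassifier(nn.Module):
    """Illustrative concept-bottleneck head on top of a 3D-aware backbone.

    Hypothetical sketch: the backbone, feature dimension, and number of
    concepts are assumptions, not the architecture from the paper.
    """

    def __init__(self, backbone: nn.Module, feat_dim: int,
                 n_concepts: int, n_classes: int):
        super().__init__()
        self.backbone = backbone  # assumed to return pooled (B, feat_dim) features
        # Each concept is a learned prototype vector in feature space.
        self.concept_prototypes = nn.Parameter(torch.randn(n_concepts, feat_dim))
        # Class prediction is a linear readout of concept activations,
        # so decisions can be traced back to individual concepts.
        self.classifier = nn.Linear(n_concepts, n_classes)

    def forward(self, images: torch.Tensor):
        feats = F.normalize(self.backbone(images), dim=-1)          # (B, feat_dim)
        protos = F.normalize(self.concept_prototypes, dim=-1)       # (C, feat_dim)
        concept_scores = feats @ protos.T                           # (B, C) cosine similarities
        logits = self.classifier(concept_scores)                    # (B, n_classes)
        return logits, concept_scores  # the scores double as the per-image explanation
```

In such a design, inspecting `concept_scores` for a given image indicates which learned concepts drove the prediction; how CAVE actually extracts and grounds concepts in the volumetric representations is detailed in the paper itself.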