Slide 20
What can we do about spurious correlations?
• Benchmark datasets
• Agrawal et al., “Don’t just assume; look and answer: Overcoming priors for visual question answering,” CVPR 2018
• Hendrycks and Dietterich, “Benchmarking neural network robustness to common corruptions and perturbations,” ICLR 2019
• Hendrycks et al., “Natural adversarial examples,” CVPR 2021
• Out-of-distribution detection (see the MSP sketch after this list)
• Hendrycks and Gimpel, “A baseline for detecting misclassified and out-of-distribution examples in neural networks,” ICLR 2017
• Hein et al., “Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem,” CVPR 2019
• Detection of features with spurious correlations (see the sparse-linear-layer sketch after this list)
• Wong et al., “Leveraging sparse linear layers for debuggable deep networks,” ICML 2021
• Anders et al., “Finding and removing Clever Hans: Using explanation methods to debug and improve deep models,” Information Fusion, Vol. 77, 2022
• Neuhaus et al., “Spurious features everywhere – Large-scale detection of harmful spurious features in ImageNet,” ICCV 2023
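To make the out-of-distribution bullet concrete, here is a minimal sketch of the max-softmax-probability (MSP) baseline from Hendrycks and Gimpel (ICLR 2017): an input is flagged as out-of-distribution when the classifier's highest softmax probability falls below a threshold. The `model`, the two data loaders, and the threshold value are assumptions for illustration, not part of the slide.

```python
# Minimal MSP baseline sketch (assumed model/loaders, PyTorch).
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_scores(model, loader, device="cpu"):
    """Return the maximum softmax probability for every input in `loader`."""
    model.eval()
    scores = []
    for x, _ in loader:
        logits = model(x.to(device))
        scores.append(F.softmax(logits, dim=1).max(dim=1).values.cpu())
    return torch.cat(scores)

# Usage (hypothetical loaders): in-distribution inputs should score higher
# than OOD inputs; thresholding the score gives a simple detector.
# in_scores  = msp_scores(model, loader_in)
# out_scores = msp_scores(model, loader_out)
# is_ood     = out_scores < 0.5  # threshold chosen on held-out data
```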
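The feature-detection bullet can be illustrated with a sketch in the spirit of Wong et al. (ICML 2021): fit a sparse (L1-regularized) linear head on frozen deep features so that each class depends on only a few features, which can then be inspected by hand for spurious cues. The `features`/`labels` arrays and the scikit-learn setup are assumptions; the paper's actual pipeline differs in detail.

```python
# Sparse linear head over precomputed deep features (assumed arrays, sklearn).
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_sparse_head(features, labels, C=0.01):
    """Fit an L1-regularized linear classifier; smaller C means sparser weights."""
    clf = LogisticRegression(penalty="l1", solver="saga", C=C, max_iter=5000)
    clf.fit(features, labels)
    return clf

def influential_features(clf, class_idx, top_k=10):
    """Indices of the features with the largest absolute weight for one class."""
    w = np.abs(clf.coef_[class_idx])
    return np.argsort(w)[::-1][:top_k]

# Usage (hypothetical data): inspect the few features driving a class
# (e.g. by viewing the images that activate them most) to spot spurious
# cues such as backgrounds or watermarks.
# clf = fit_sparse_head(features, labels)
# print(influential_features(clf, class_idx=0))
```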