Probing Human Visual Robustness with Neurally-Guided Deep Neural Networks

4 May 2024
Zhenan Shao
Linjian Ma
Yiqing Zhou
Yibo Jacky Zhang
Sanmi Koyejo
Bo Li
Diane M. Beck
Abstract

Humans effortlessly navigate the dynamic visual world, yet deep neural networks (DNNs), despite excelling at many visual tasks, are surprisingly vulnerable to minor image perturbations. Past theories suggest that human visual robustness arises from a representational space that evolves along the ventral visual stream (VVS) of the brain to increasingly tolerate object transformations. To test whether robustness is supported by this progression, rather than being confined to specialized higher-order regions, we trained DNNs to align their representations with human neural responses from consecutive VVS regions while performing visual tasks. We demonstrate a hierarchical improvement in DNN robustness: alignment to higher-order VVS regions yields greater improvement. To investigate the mechanism behind these robustness gains, we test a prominent hypothesis that attributes human robustness to the unique geometry of neural category manifolds in the VVS. We first show that more desirable manifold properties, specifically smaller manifold extent and better linear separability, indeed emerge across the human VVS. These properties can be inherited by neurally aligned DNNs and predict their subsequent robustness gains. Furthermore, we show that supervision from neural manifolds alone, via manifold guidance, is sufficient to qualitatively reproduce the hierarchical robustness improvements. Together, these results highlight the critical role of the evolving representational space across the VVS in achieving robust visual inference, in part through the formation of more linearly separable category manifolds, which may in turn be leveraged to develop more robust AI systems.
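
The abstract does not spell out the training objective or the separability metric, but a common way to realize this kind of neural alignment is to add a representational-similarity term to the task loss, and a common proxy for linear separability is cross-validated linear-probe accuracy. The Python sketch below illustrates both under those assumptions; the names rsm, combined_loss, linear_separability, and the weight alpha are illustrative, not the paper's actual API.

import torch
import torch.nn.functional as F
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def rsm(x: torch.Tensor) -> torch.Tensor:
    """Representational similarity matrix: pairwise cosine similarity
    between centered feature vectors for a batch of stimuli."""
    x = x - x.mean(dim=1, keepdim=True)
    x = F.normalize(x, dim=1)
    return x @ x.T

def combined_loss(logits, labels, dnn_features, neural_responses, alpha=1.0):
    """Classification loss plus an RSA-style alignment term (an assumed
    stand-in for the paper's objective) that pulls the geometry of a DNN
    layer toward the geometry of recorded neural responses to the same
    stimuli."""
    task = F.cross_entropy(logits, labels)
    align = F.mse_loss(rsm(dnn_features), rsm(neural_responses))
    return task + alpha * align

def linear_separability(features, labels):
    """Mean cross-validated accuracy of a linear classifier: a simple
    proxy for how linearly separable the category manifolds are."""
    return cross_val_score(LinearSVC(), features, labels, cv=5).mean()

In this sketch, dnn_features would come from the DNN layer being aligned and neural_responses from the corresponding VVS region (e.g., V1 through IT), with alpha trading off task performance against alignment; the actual loss and manifold measures used in the paper may differ.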

@article{shao2025_2405.02564,
  title={Probing Human Visual Robustness with Neurally-Guided Deep Neural Networks},
  author={Zhenan Shao and Linjian Ma and Yiqing Zhou and Yibo Jacky Zhang and Sanmi Koyejo and Bo Li and Diane M. Beck},
  journal={arXiv preprint arXiv:2405.02564},
  year={2025}
}