Are Convolutional Neural Networks or Transformers more like human vision?

15 May 2021
Shikhar Tuli, Ishita Dasgupta, Erin Grant, Thomas Griffiths
ViT, FaML

Papers citing "Are Convolutional Neural Networks or Transformers more like human vision?"

27 citing papers:

Do computer vision foundation models learn the low-level characteristics of the human visual system?
Yancheng Cai, Fei Yin, Dounia Hammou, Rafal Mantiuk
VLM
13 Mar 2025

Accuracy Improvement of Cell Image Segmentation Using Feedback Former
Hinako Mitsuoka, Kazuhiro Hotta
ViT, MedIm
23 Aug 2024

Trapped in texture bias? A large scale comparison of deep instance segmentation
J. Theodoridis, Jessica Hofmann, J. Maucher, A. Schilling
SSeg
17 Jan 2024

PlaNet-S: Automatic Semantic Segmentation of Placenta
Shinnosuke Yamamoto, Isso Saito, Eichi Takaya, Ayaka Harigai, Tomomi Sato, Tomoya Kobayashi, Kei Takase, Takuya Ueda
18 Dec 2023

Automated Sperm Assessment Framework and Neural Network Specialized for Sperm Video Recognition
T. Fujii, Hayato Nakagawa, T. Takeshima, Y. Yumura, T. Hamagami
10 Nov 2023

Progressive Attention Guidance for Whole Slide Vulvovaginal Candidiasis Screening
Jiangdong Cai, Honglin Xiong, Mao-Hong Cao, Luyan Liu, Lichi Zhang, Qian Wang
06 Sep 2023

Large-kernel Attention for Efficient and Robust Brain Lesion Segmentation
Liam Chalcroft, Ruben Lourenço Pereira, Mikael Brudfors, Andrew S. Kayser, M. D’Esposito, Cathy J. Price, Ioannis Pappas, John Ashburner
ViT, 3DV, MedIm
14 Aug 2023

Two-Stream Regression Network for Dental Implant Position Prediction
Xinquan Yang, Xuguang Li, Xuechen Li, Wenting Chen, Linlin Shen, Xuzhao Li, Yongqiang Deng
17 May 2023

Self-attention in Vision Transformers Performs Perceptual Grouping, Not Attention
Paria Mehrani, John K. Tsotsos
02 Mar 2023

Transformadores: Fundamentos teóricos y Aplicaciones (Transformers: Theoretical Foundations and Applications)
J. D. L. Torre
18 Feb 2023

V1T: large-scale mouse V1 response prediction using a Vision Transformer
Bryan M. Li, I. M. Cornacchia, Nathalie L. Rochefort, A. Onken
06 Feb 2023

A Study on the Generality of Neural Network Structures for Monocular Depth Estimation
Ji-Hoon Bae, K. Hwang, Sunghoon Im
MDE
09 Jan 2023

Unveiling the Tapestry: the Interplay of Generalization and Forgetting in Continual Learning
Zenglin Shi, Jing Jie, Ying Sun, J. Lim, Mengmi Zhang
CLL
21 Nov 2022

ViT-CX: Causal Explanation of Vision Transformers
Weiyan Xie, Xiao-hui Li, Caleb Chen Cao, Nevin L. Zhang
ViT
06 Nov 2022

Delving into Masked Autoencoders for Multi-Label Thorax Disease Classification
Junfei Xiao, Yutong Bai, Alan Yuille, Zongwei Zhou
MedIm, ViT
23 Oct 2022

Scratching Visual Transformer's Back with Uniform Attention
Nam Hyeon-Woo, Kim Yu-Ji, Byeongho Heo, Dongyoon Han, Seong Joon Oh, Tae-Hyun Oh
16 Oct 2022

Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors
Federico Baldassarre, Quentin Debard, Gonzalo Fiz Pontiveros, Tri Kurniawan Wijaya
07 Oct 2022

Deep Digging into the Generalization of Self-Supervised Monocular Depth Estimation
Ji-Hoon Bae, Sungho Moon, Sunghoon Im
MDE
23 May 2022

Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs
Xiaohan Ding, Xinming Zhang, Yi Zhou, Jungong Han, Guiguang Ding, Jian Sun
VLM
13 Mar 2022

Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4
William Berrios, Arturo Deza
MedIm, ViT
08 Mar 2022

Arbitrary Shape Text Detection using Transformers
Z. Raisi, Georges Younes, John S. Zelek
ViT
22 Feb 2022

How Do Vision Transformers Work?
Namuk Park, Songkuk Kim
ViT
14 Feb 2022

MPViT: Multi-Path Vision Transformer for Dense Prediction
Youngwan Lee, Jonghee Kim, Jeffrey Willette, Sung Ju Hwang
ViT
21 Dec 2021

nnFormer: Interleaved Transformer for Volumetric Segmentation
Hong-Yu Zhou, J. Guo, Yinghao Zhang, Lequan Yu, Liansheng Wang, Yizhou Yu
ViT, MedIm
07 Sep 2021

Partial success in closing the gap between human and machine vision
Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix Wichmann, Wieland Brendel
VLM, AAML
14 Jun 2021

Intriguing Properties of Vision Transformers
Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang
ViT
21 May 2021

Vision Transformers are Robust Learners
Sayak Paul, Pin-Yu Chen
ViT
17 May 2021