ResearchTrend.AI

A Good Student is Cooperative and Reliable: CNN-Transformer Collaborative Learning for Semantic Segmentation

24 July 2023
Jinjing Zhu, Yuan Luo, Xueye Zheng, Hao Wang, Lin Wang

Papers citing "A Good Student is Cooperative and Reliable: CNN-Transformer Collaborative Learning for Semantic Segmentation"

9 of 9 citing papers shown
Rethinking Knowledge in Distillation: An In-context Sample Retrieval Perspective
Jinjing Zhu, Songze Li, Lin Wang
13 Jan 2025
Deep Mutual Learning among Partially Labeled Datasets for Multi-Organ Segmentation
Xiaoyu Liu, Linhao Qu, Ziyue Xie, Yonghong Shi, Zhijian Song
17 Jul 2024
Soft labelling for semantic segmentation: Bringing coherence to label down-sampling
Roberto Alcover-Couso, Marcos Escudero-Viñolo, Juan C. Sanmiguel, Jose M. Martínez
27 Feb 2023
Transformer-CNN Cohort: Semi-supervised Semantic Segmentation by the Best of Both Students
Xueye Zheng, Yuan Luo, Hao Wang, Chong Fu, Lin Wang
ViT
06 Sep 2022
Mobile-Former: Bridging MobileNet and Transformer
Yinpeng Chen, Xiyang Dai, Dongdong Chen, Mengchen Liu, Xiaoyi Dong, Lu Yuan, Zicheng Liu
ViT
12 Aug 2021
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT
24 Feb 2021
Learning Student-Friendly Teacher Networks for Knowledge Distillation
D. Park, Moonsu Cha, C. Jeong, Daesin Kim, Bohyung Han
12 Feb 2021
Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
12 Jun 2018
Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton
FedML
09 Apr 2018