Technical Report for ICCV 2021 Challenge SSLAD-Track3B: Transformers Are Better Continual Learners

13 January 2022
Duo Li, Guimei Cao, Yunlu Xu, Zhanzhan Cheng, Yi Niu
CLL

Papers citing "Technical Report for ICCV 2021 Challenge SSLAD-Track3B: Transformers Are Better Continual Learners"

18 / 18 papers shown
Sequential Learning in the Dense Associative Memory
Hayden McAlister, Anthony Robins, Lech Szymanski
CLL · 24 Sep 2024

Lifelong Person Search
Jae-Won Yang, Seungbin Hong, Jae-Young Sim
CLL · 31 Jul 2024

Adaptive Memory Replay for Continual Learning
James Seale Smith, Lazar Valkov, Shaunak Halbe, V. Gutta, Rogerio Feris, Z. Kira, Leonid Karlinsky
18 Apr 2024

Defending Against Unforeseen Failure Modes with Latent Adversarial Training
Stephen Casper, Lennart Schulze, Oam Patel, Dylan Hadfield-Menell
AAML · 08 Mar 2024

Eight Methods to Evaluate Robust Unlearning in LLMs
Aengus Lynch, Phillip Guo, Aidan Ewart, Stephen Casper, Dylan Hadfield-Menell
ELM · MU · 26 Feb 2024

Effects of Architectures on Continual Semantic Segmentation
Tobias Kalb, Niket Ahuja, Jingxing Zhou, Jürgen Beyerer
CLL · 21 Feb 2023

CoMFormer: Continual Learning in Semantic and Panoptic Segmentation
Fabio Cermelli, Matthieu Cord, Arthur Douillard
CLL · VLM · 25 Nov 2022

CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning
James Smith, Leonid Karlinsky, V. Gutta, Paola Cascante-Bonilla, Donghyun Kim, Assaf Arbelle, Yikang Shen, Rogerio Feris, Z. Kira
CLL · VPVLM · VLM · 23 Nov 2022

Delving into Transformer for Incremental Semantic Segmentation
Zekai Xu, Mingying Zhang, Jiayue Hou, Xing Gong, Chuan Wen, Chengjie Wang, Junge Zhang
CLL · 18 Nov 2022

S-Prompts Learning with Pre-trained Transformers: An Occam's Razor for Domain Incremental Learning
Yabin Wang, Zhiwu Huang, Xiaopeng Hong
CLL · VLM · 26 Jul 2022

E2-AEN: End-to-End Incremental Learning with Adaptively Expandable Network
Guimei Cao, Zhanzhan Cheng, Yunlu Xu, Duo Li, Shiliang Pu, Yi Niu, Fei Wu
CLL · 14 Jul 2022

Continual Learning with Transformers for Image Classification
Beyza Ermis, Giovanni Zappella, Martin Wistuba, Aditya Rawal, Cédric Archambeau
CLL · 28 Jun 2022

Continual Object Detection: A review of definitions, strategies, and challenges
Angelo G. Menezes, Gustavo de Moura, Cézanne Alves, André C. P. L. F. de Carvalho
ObjD · 30 May 2022

A Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials
Chuqiao Li, Zhiwu Huang, D. Paudel, Yabin Wang, Mohamad Shahbazi, Xiaopeng Hong, Luc Van Gool
11 May 2022

Simpler is Better: off-the-shelf Continual Learning Through Pretrained Backbones
Francesco Pelosin
VLM · 03 May 2022

Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization
Francesco Pelosin, Saurav Jha, A. Torsello, Bogdan Raducanu, Joost van de Weijer
CLL · 24 Mar 2022

Memory Efficient Continual Learning with Transformers
Beyza Ermis, Giovanni Zappella, Martin Wistuba, Aditya Rawal, Cédric Archambeau
CLL · 09 Mar 2022

Distilling Causal Effect of Data in Class-Incremental Learning
Xinting Hu, Kaihua Tang, Chunyan Miao, Xiansheng Hua, Hanwang Zhang
CML · 02 Mar 2021