
Online Continual Learning on a Contaminated Data Stream with Blurry Task Boundaries

arXiv: 2203.15355 · 29 March 2022
Jihwan Bang, Hyun-woo Koh, Seulki Park, Hwanjun Song, Jung-Woo Ha, Jonghyun Choi
CLL

Papers citing "Online Continual Learning on a Contaminated Data Stream with Blurry Task Boundaries"

24 / 24 papers shown
Reinforced Interactive Continual Learning via Real-time Noisy Human Feedback
Yutao Yang, Jie Zhou, Junsong Li, Qianjun Pan, Bihao Zhan, Qin Chen, Xipeng Qiu, Liang He
CLL · 26 · 0 · 0 · 15 May 2025

Self-Controlled Dynamic Expansion Model for Continual Learning
RunQing Wu, KaiHui Huang, HanYi Zhang, Fei Ye
CLL, VLM · 47 · 0 · 0 · 14 Apr 2025

Online Prototypes and Class-Wise Hypergradients for Online Continual Learning with Pre-Trained Models
Nicolas Michel, Maorong Wang, Jiangpeng He, Toshihiko Yamasaki
CLL · 54 · 0 · 0 · 26 Feb 2025

Towards Robust Incremental Learning under Ambiguous Supervision
Rui Wang, Mingxuan Xia, Chang Yao, Lei Feng, Junbo Zhao, Gang Chen, Haobo Wang
CLL · 138 · 0 · 0 · 23 Jan 2025

Incrementally Learning Multiple Diverse Data Domains via Multi-Source Dynamic Expansion Model
RunQing Wu, Fei Ye, QiHe Liu, Guoxi Huang, Jinyu Guo, Rongyao Hu
CLL · 175 · 0 · 0 · 15 Jan 2025

May the Forgetting Be with You: Alternate Replay for Learning with Noisy Labels
Monica Millunzi, Lorenzo Bonicelli, Angelo Porrello, Jacopo Credi, Petter N. Kolm, Simone Calderara
CLL · 40 · 2 · 0 · 26 Aug 2024

Bayesian Learning-driven Prototypical Contrastive Loss for Class-Incremental Learning
N. Raichur, Lucas Heublein, Tobias Feigl, A. Rügamer, Christopher Mutschler, Felix Ott
CLL, BDL · 73 · 7 · 0 · 17 May 2024

Data Stream Sampling with Fuzzy Task Boundaries and Noisy Labels
Yu-Hsi Chen
28 · 0 · 0 · 07 Apr 2024

Just Say the Name: Online Continual Learning with Category Names Only via Data Generation
Minhyuk Seo, Diganta Misra, Seongwon Cho, Minjae Lee, Jonghyun Choi
CLL · 35 · 7 · 0 · 16 Mar 2024

Coherent Temporal Synthesis for Incremental Action Segmentation
Guodong Ding, Hans Golong, Angela Yao
CLL · 29 · 4 · 0 · 10 Mar 2024

Class-incremental Learning for Time Series: Benchmark and Evaluation
Zhongzheng Qiao, Quang-Cuong Pham, Zhen Cao, Hoang H Le, Ponnuthurai Nagaratnam Suganthan, Xudong Jiang, Savitha Ramasamy
40 · 10 · 0 · 19 Feb 2024

BECoTTA: Input-dependent Online Blending of Experts for Continual Test-time Adaptation
Daeun Lee, Jaehong Yoon, Sung Ju Hwang
CLL, TTA · 62 · 5 · 0 · 13 Feb 2024

Label Delay in Online Continual Learning
Botos Csaba, Wenxuan Zhang, Matthias Muller, Ser-Nam Lim, Mohamed Elhoseiny, Philip H. S. Torr, Adel Bibi
CLL · 39 · 0 · 0 · 01 Dec 2023

Class-Incremental Learning Using Generative Experience Replay Based on Time-aware Regularization
Zizhao Hu, Mohammad Rostami
CLL, BDL · 20 · 3 · 0 · 05 Oct 2023

Chunking: Continual Learning is not just about Distribution Shift
Thomas L. Lee, Amos Storkey
25 · 1 · 0 · 03 Oct 2023

Class Incremental Learning via Likelihood Ratio Based Task Prediction
Haowei Lin, Yijia Shao, W. Qian, Ningxin Pan, Yiduo Guo, Bing-Quan Liu
CLL · 31 · 13 · 0 · 26 Sep 2023

Rethinking Momentum Knowledge Distillation in Online Continual Learning
Nicolas Michel, Maorong Wang, L. Xiao, T. Yamasaki
CLL · 27 · 8 · 0 · 06 Sep 2023

Online Continual Learning on Hierarchical Label Expansion
Byung Hyun Lee, Ok-Cheol Jung, Jonghyun Choi, S. Chun
CLL · 22 · 8 · 0 · 28 Aug 2023

Continual Learning in the Presence of Spurious Correlation
Donggyu Lee, Sangwon Jung, Taesup Moon
CLL · 22 · 2 · 0 · 21 Mar 2023

A Comprehensive Survey of Continual Learning: Theory, Method and Application
Liyuan Wang, Xingxing Zhang, Hang Su, Jun Zhu
KELM, CLL · 38 · 601 · 0 · 31 Jan 2023

Continual Learning for Predictive Maintenance: Overview and Challenges
J. Hurtado, Dario Salvati, Rudy Semola, Mattia Bosio, Vincenzo Lomonaco
21 · 32 · 0 · 29 Jan 2023

SimCS: Simulation for Domain Incremental Online Continual Segmentation
Motasem Alfarra, Zhipeng Cai, Adel Bibi, Guohao Li, Matthias Müller
CLL · 19 · 4 · 0 · 29 Nov 2022

In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning
Mamshad Nayeem Rizve, Kevin Duarte, Y. S. Rawat, M. Shah
229 · 509 · 0 · 15 Jan 2021

Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
Antti Tarvainen, Harri Valpola
OOD, MoMe · 261 · 1,275 · 0 · 06 Mar 2017