Data Poisoning Attack Aiming the Vulnerability of Continual Learning
arXiv: 2211.15875 (v2) · 29 November 2022
Gyojin Han, Jaehyun Choi, H. Hong, Junmo Kim
AAML
Papers citing "Data Poisoning Attack Aiming the Vulnerability of Continual Learning" (9 of 9 shown)

Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
Ji Feng, Qi-Zhi Cai, Zhi Zhou
AAML · 105 citations · 22 May 2019

Gradient Episodic Memory for Continual Learning
David Lopez-Paz, Marc'Aurelio Ranzato
VLM, CLL · 2,743 citations · 26 Jun 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD · 12,151 citations · 19 Jun 2017

PathNet: Evolution Channels Gradient Descent in Super Neural Networks
Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David R Ha, Andrei A. Rusu, Alexander Pritzel, Daan Wierstra
881 citations · 30 Jan 2017

Overcoming catastrophic forgetting in neural networks
J. Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, J. Veness, Guillaume Desjardins, ..., A. Grabska-Barwinska, Demis Hassabis, Claudia Clopath, D. Kumaran, R. Hadsell
CLL · 7,587 citations · 02 Dec 2016

Universal adversarial perturbations
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard
AAML · 2,534 citations · 26 Oct 2016

Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner
OOD, AAML · 8,587 citations · 16 Aug 2016

Progressive Neural Networks
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, J. Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, R. Hadsell
CLL, AI4CE · 2,465 citations · 15 Jun 2016

Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy
AAML, GAN · 19,145 citations · 20 Dec 2014