GFCL: A GRU-based Federated Continual Learning Framework against Data Poisoning Attacks in IoV

23 April 2022
Anum Talpur, M. Gurusamy
AAML
arXiv: 2204.11010

Papers citing "GFCL: A GRU-based Federated Continual Learning Framework against Data Poisoning Attacks in IoV"

8 papers shown

Adversarial Attacks Against Deep Reinforcement Learning Framework in Internet of Vehicles
Anum Talpur, G. Mohan · AAML · 02 Aug 2021

Machine Learning for Security in Vehicular Networks: A Comprehensive Survey
Anum Talpur, M. Gurusamy · 31 May 2021

Deep Reinforcement Learning for Autonomous Driving: A Survey
B. R. Kiran, Ibrahim Sobh, V. Talpaert, Patrick Mannion, A. A. Sallab, S. Yogamani, P. Pérez · 02 Feb 2020

Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning
Inaam Ilahi, Muhammad Usama, Junaid Qadir, M. Janjua, Ala I. Al-Fuqaha, D. Hoang, Dusit Niyato · AAML · 27 Jan 2020

MULDEF: Multi-model-based Defense Against Adversarial Examples for Neural Networks
Siwakorn Srisakaokul, Yuhao Zhang, Zexuan Zhong, Wei Yang, Tao Xie, Bo Li · AAML · 31 Aug 2018

On Detecting Adversarial Perturbations
J. H. Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff · AAML · 14 Feb 2017

Adversarial Attacks on Neural Network Policies
Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, Pieter Abbeel · MLAU, AAML · 08 Feb 2017

Universal adversarial perturbations
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard · AAML · 26 Oct 2016