Evading classifiers in discrete domains with provable optimality guarantees

arXiv:1810.10939 · 25 October 2018
B. Kulynych, Jamie Hayes, N. Samarin, Carmela Troncoso
AAML

Papers citing "Evading classifiers in discrete domains with provable optimality guarantees"

9 papers shown.

CaFA: Cost-aware, Feasible Attacks With Database Constraints Against Neural Tabular Classifiers
Matan Ben-Tov, Daniel Deutch, Nave Frost, Mahmood Sharif
AAML · 20 Jan 2025

Adversarial Robustness for Tabular Data through Cost and Utility Awareness
Klim Kireev, B. Kulynych, Carmela Troncoso
AAML · 27 Aug 2022

On The Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks
Salijona Dyrmishi, Salah Ghamizi, Thibault Simonetto, Yves Le Traon, Maxime Cordy
AAML · 07 Feb 2022

A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space
Thibault Simonetto, Salijona Dyrmishi, Salah Ghamizi, Maxime Cordy, Yves Le Traon
AAML · 02 Dec 2021

TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP
John X. Morris, Eli Lifland, Jin Yong Yoo, J. E. Grigsby, Di Jin, Yanjun Qi
SILM · 29 Apr 2020

Reevaluating Adversarial Examples in Natural Language
John X. Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, Yanjun Qi
SILM, AAML · 25 Apr 2020

Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
Maksym Andriushchenko, Matthias Hein
08 Jun 2019

Generating Natural Language Adversarial Examples
M. Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, Kai-Wei Chang
AAML · 21 Apr 2018

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer
AAML · 03 Feb 2017