longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks

29 June 2022
Venelin Kovatchev, Trina Chatterjee, Venkata S Govindarajan, Jifan Chen, Eunsol Choi, Gabriella Chronis, Anubrata Das, K. Erk, Matthew Lease, Junyi Jessy Li, Yating Wu, Kyle Mahowald
AAML, ELM
arXiv:2206.14729 (PDF, HTML)

Papers citing "longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks"

13 / 13 papers shown
  • Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective
    Mariya Hendriksen, Shuo Zhang, R. Reinanda, Mohamed Yahya, Edgar Meij, Maarten de Rijke
    21 Jul 2024
  • MirrorCheck: Efficient Adversarial Defense for Vision-Language Models
    Samar Fares, Klea Ziu, Toluwani Aremu, N. Durasov, Martin Takáč, Pascal Fua, Karthik Nandakumar, Ivan Laptev
    VLM, AAML
    13 Jun 2024
  • Benchmark Transparency: Measuring the Impact of Data on Evaluation
    Venelin Kovatchev, Matthew Lease
    31 Mar 2024
  • An Image Is Worth 1000 Lies: Adversarial Transferability across Prompts on Vision-Language Models
    Haochen Luo, Jindong Gu, Fengyuan Liu, Philip Torr
    VLM, VPVLM, AAML
    14 Mar 2024
  • On Evaluating Adversarial Robustness of Large Vision-Language Models
    Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Chongxuan Li, Ngai-man Cheung, Min Lin
    VLM, AAML, MLLM
    26 May 2023
  • Think Twice: Measuring the Efficiency of Eliminating Prediction Shortcuts of Question Answering Models
    Lukáš Mikula, Michal Štefánik, Marek Petrovič, Petr Sojka
    11 May 2023
  • The State of Human-centered NLP Technology for Fact-checking
    Anubrata Das, Houjiang Liu, Venelin Kovatchev, Matthew Lease
    HILM
    08 Jan 2023
  • DALL-E 2 Fails to Reliably Capture Common Syntactic Processes
    Evelina Leivada, Elliot Murphy, G. Marcus
    23 Oct 2022
  • InferES : A Natural Language Inference Corpus for Spanish Featuring Negation-Based Contrastive and Adversarial Examples
    Venelin Kovatchev, Mariona Taulé
    06 Oct 2022
  • Frequency Effects on Syntactic Rule Learning in Transformers
    Jason W. Wei, Dan Garrette, Tal Linzen, Ellie Pavlick
    14 Sep 2021
  • DynaSent: A Dynamic Benchmark for Sentiment Analysis
    Christopher Potts, Zhengxuan Wu, Atticus Geiger, Douwe Kiela
    30 Dec 2020
  • ANLIzing the Adversarial Natural Language Inference Dataset
    Adina Williams, Tristan Thrush, Douwe Kiela
    AAML
    24 Oct 2020
  • GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
    Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
    ELM
    20 Apr 2018