ResearchTrend.AI: papers citing arXiv:2101.03453
BERT & Family Eat Word Salad: Experiments with Text Understanding
Ashim Gupta, Giorgi Kvernadze, Vivek Srikumar
10 January 2021

Papers citing "BERT & Family Eat Word Salad: Experiments with Text Understanding"

15 / 15 papers shown

  1. Optimizing Estimators of Squared Calibration Errors in Classification. Sebastian G. Gruber, Francis Bach. 24 Feb 2025.
  2. Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems. Ashim Gupta, Amrith Krishna. 31 May 2023.
  3. Towards preserving word order importance through Forced Invalidation. Hadeel Al-Negheimish, Pranava Madhyastha, Alessandra Russo. 11 Apr 2023.
  4. Local Structure Matters Most in Most Languages. Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar. 09 Nov 2022.
  5. Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality. Anuj Diwan, Layne Berry, Eunsol Choi, David Harwath, Kyle Mahowald. 01 Nov 2022.
  6. Shaking Syntactic Trees on the Sesame Street: Multilingual Probing with Controllable Perturbations. Ekaterina Taktasheva, Vladislav Mikhailov, Ekaterina Artemova. 28 Sep 2021.
  7. Studying word order through iterative shuffling. Nikolay Malkin, Sameera Lanka, Pranav Goel, Nebojsa Jojic. 10 Sep 2021.
  8. Do Prompt-Based Models Really Understand the Meaning of their Prompts? Albert Webson, Ellie Pavlick. 02 Sep 2021.
  9. How Does Adversarial Fine-Tuning Benefit BERT? J. Ebrahimi, Hao Yang, Wei Zhang. 31 Aug 2021.
  10. Local Structure Matters Most: Perturbation Study in NLU. Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar. 29 Jul 2021.
  11. Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little. Koustuv Sinha, Robin Jia, Dieuwke Hupkes, J. Pineau, Adina Williams, Douwe Kiela. 14 Apr 2021.
  12. Calibration of Pre-trained Transformers. Shrey Desai, Greg Durrett. 17 Mar 2020.
  13. Hypothesis Only Baselines in Natural Language Inference. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme. 02 May 2018.
  14. Generating Natural Language Adversarial Examples. M. Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, Kai-Wei Chang. 21 Apr 2018.
  15. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman. 20 Apr 2018.