ResearchTrend.AI
Sorting through the noise: Testing robustness of information processing in pre-trained language models

25 September 2021
Lalchand Pandia, Allyson Ettinger

Papers citing "Sorting through the noise: Testing robustness of information processing in pre-trained language models"

9 / 9 papers shown
ACCORD: Closing the Commonsense Measurability Gap
François Roewer-Després, Jinyue Feng, Zining Zhu, Frank Rudzicz
LRM
48 · 0 · 0
04 Jun 2024
Trustworthy Formal Natural Language Specifications
Colin S. Gordon, Sergey Matskevich
HILM
27 · 3 · 0
05 Oct 2023
Large Language Models Can Be Easily Distracted by Irrelevant Context
Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H. Chi, Nathanael Schärli, Denny Zhou
ReLM, RALM, LRM
30 · 529 · 0
31 Jan 2023
Transparency Helps Reveal When Language Models Learn Meaning
Zhaofeng Wu, William Merrill, Hao Peng, Iz Beltagy, Noah A. Smith
19 · 9 · 0
14 Oct 2022
Natural Language Specifications in Proof Assistants
Colin S. Gordon, Sergey Matskevich
33 · 1 · 0
16 May 2022
When a sentence does not introduce a discourse entity, Transformer-based models still sometimes refer to it
Sebastian Schuster, Tal Linzen
13 · 25 · 0
06 May 2022
Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg
HILM
263 · 346 · 0
01 Feb 2021
Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words
Josef Klafka, Allyson Ettinger
51 · 42 · 0
04 May 2020
Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
KELM, AI4MH
415 · 2,586 · 0
03 Sep 2019