ResearchTrend.AI

arXiv:2305.16572
Counterfactual reasoning: Testing language models' understanding of hypothetical scenarios

26 May 2023
Jiaxuan Li, Lang-Chi Yu, Allyson Ettinger
LRM, ELM

Papers citing "Counterfactual reasoning: Testing language models' understanding of hypothetical scenarios"

7 papers shown
Controllable Context Sensitivity and the Knob Behind It
Julian Minder, Kevin Du, Niklas Stoehr, Giovanni Monea, Chris Wendler, Robert West, Ryan Cotterell
KELM · 78 · 5 · 0 · 11 Nov 2024

Reasoning Elicitation in Language Models via Counterfactual Feedback
Alihan Hüyük, Xinnuo Xu, Jacqueline R. M. A. Maasch, Aditya V. Nori, Javier González
ReLM, LRM · 353 · 1 · 0 · 02 Oct 2024

ACCORD: Closing the Commonsense Measurability Gap
François Roewer-Després, Jinyue Feng, Zining Zhu, Frank Rudzicz
LRM · 68 · 0 · 0 · 04 Jun 2024

MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset
Weiqi Wang, Yangqiu Song
LRM · 84 · 10 · 0 · 04 Jun 2024

CRASS: A Novel Data Set and Benchmark to Test Counterfactual Reasoning of Large Language Models
Jorg Frohberg, Frank Binder
SLR · 38 · 28 · 0 · 22 Dec 2021

MPNet: Masked and Permuted Pre-training for Language Understanding
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu
94 · 1,105 · 0 · 20 Apr 2020

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
AIMat · 408 · 24,160 · 0 · 26 Jul 2019