Calibrate your listeners! Robust communication-based training for pragmatic speakers (arXiv:2110.05422)

11 October 2021
Rose E. Wang
Julia White
Jesse Mu
Noah D. Goodman

Papers citing "Calibrate your listeners! Robust communication-based training for pragmatic speakers"

9 citing papers shown:

  • GLIMPSE: Pragmatically Informative Multi-Document Summarization for Scholarly Reviews. Maxime Darrin, Ines Arous, Pablo Piantanida, Jackie CK Cheung. 11 Jun 2024.
  • LACIE: Listener-Aware Finetuning for Confidence Calibration in Large Language Models. Elias Stengel-Eskin, Peter Hase, Mohit Bansal. 31 May 2024.
  • Expanding the Set of Pragmatic Considerations in Conversational AI. S. M. Seals, V. Shalin. 27 Oct 2023.
  • Towards More Human-like AI Communication: A Review of Emergent Communication Research. Nicolò Brandizzi. 01 Aug 2023.
  • Language Models are Bounded Pragmatic Speakers: Understanding RLHF from a Bayesian Cognitive Modeling Perspective. Khanh Nguyen. 28 May 2023.
  • Discourse over Discourse: The Need for an Expanded Pragmatic Focus in Conversational AI. S. M. Seals, V. Shalin. 27 Apr 2023.
  • Color Overmodification Emerges from Data-Driven Learning and Pragmatic Reasoning. Fei Fang, Kunal Sinha, Noah D. Goodman, Christopher Potts, Elisa Kreiss. 18 May 2022.
  • Fine-Tuning Language Models from Human Preferences. Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving. 18 Sep 2019.
  • Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. Y. Gal, Zoubin Ghahramani. 06 Jun 2015.