ResearchTrend.AI

Using In-Context Learning to Improve Dialogue Safety
arXiv:2302.00871 · 2 February 2023
Nicholas Meade, Spandana Gella, Devamanyu Hazarika, Prakhar Gupta, Di Jin, Siva Reddy, Yang Liu, Dilek Z. Hakkani-Tür

Papers citing "Using In-Context Learning to Improve Dialogue Safety"

35 papers
FairSteer: Inference Time Debiasing for LLMs with Dynamic Activation Steering
Heng Chang, Zhiting Fan, Ruizhe Chen, Xiaotang Gai, Luqi Gong, Yan Zhang, Zuozhu Liu · LLMSV · 20 Apr 2025
Understanding the Generalization of In-Context Learning in Transformers: An Empirical Study
Xingxuan Zhang, Haoran Wang, Jiansheng Li, Yuan Xue, Shikai Guan, Renzhe Xu, Hao Zou, Han Yu, Peng Cui · 19 Mar 2025
In-Context Learning with Iterative Demonstration Selection
Chengwei Qin, Aston Zhang, Cheng Chen, Anirudh Dagar, Wenming Ye · LRM · 31 Dec 2024
SAIL: Sample-Centric In-Context Learning for Document Information Extraction
Jinyu Zhang, Zhiyuan You, Jize Wang, Xinyi Le · 22 Dec 2024
On LLM Wizards: Identifying Large Language Models' Behaviors for Wizard of Oz Experiments
Jingchao Fang, Nikos Aréchiga, Keiichi Namaoshi, N. Bravo, Candice L Hogan, David A. Shamma · 10 Jul 2024
AI Safety in Generative AI Large Language Models: A Survey
Jaymari Chua, Yun Yvonna Li, Shiyi Yang, Chen Wang, Lina Yao · LM&MA · 06 Jul 2024
Should We Fine-Tune or RAG? Evaluating Different Techniques to Adapt LLMs for Dialogue
Simone Alghisi, Massimo Rizzoli, Gabriel Roccabruna, Seyed Mahed Mousavi, Giuseppe Riccardi · OffRL · 10 Jun 2024
Deconstructing The Ethics of Large Language Models from Long-standing Issues to New-emerging Dilemmas
Chengyuan Deng, Yiqun Duan, Xin Jin, Heng Chang, Yijun Tian, …, Kuofeng Gao, Sihong He, Jun Zhuang, Lu Cheng, Haohan Wang · AILaw · 08 Jun 2024
Synergizing In-context Learning with Hints for End-to-end Task-oriented Dialog Systems
Vishal Vivek Saley, Rocktim Jyoti Das, Dinesh Raghu, Mausam · 24 May 2024
Unifying Bias and Unfairness in Information Retrieval: A Survey of Challenges and Opportunities with Large Language Models
Sunhao Dai, Chen Xu, Shicheng Xu, Liang Pang, Zhenhua Dong, Jun Xu · 17 Apr 2024
High-Dimension Human Value Representation in Large Language Models
Samuel Cahyawijaya, Delong Chen, Yejin Bang, Leila Khalatbari, Bryan Wilie, Ziwei Ji, Etsuko Ishii, Pascale Fung · 11 Apr 2024
Detoxifying Large Language Models via Knowledge Editing
Meng Wang, Ningyu Zhang, Ziwen Xu, Zekun Xi, Shumin Deng, Yunzhi Yao, Qishen Zhang, Linyi Yang, Jindong Wang, Huajun Chen · KELM · 21 Mar 2024
Potential and Challenges of Model Editing for Social Debiasing
Jianhao Yan, Futing Wang, Yafu Li, Yue Zhang · KELM · 21 Feb 2024
GrounDial: Human-norm Grounded Safe Dialog Response Generation
Siwon Kim, Shuyang Dai, Mohammad Kachuee, Shayan Ray, Tara Taghavi, Sungroh Yoon · 14 Feb 2024
Self-Debiasing Large Language Models: Zero-Shot Recognition and Reduction of Stereotypes
Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Tong Yu, Hanieh Deilamsalehy, Ruiyi Zhang, Sungchul Kim, Franck Dernoncourt · 03 Feb 2024
Self-Supervised Position Debiasing for Large Language Models
Zhongkun Liu, Zheng Chen, Mengqi Zhang, Zhaochun Ren, Pengjie Ren, Zhumin Chen · 02 Jan 2024
Improving In-context Learning via Bidirectional Alignment
Chengwei Qin, Wenhan Xia, Fangkai Jiao, Chen Chen, Yuchen Hu, Bosheng Ding, Shafiq R. Joty · 28 Dec 2023
Building Trustworthy NeuroSymbolic AI Systems: Consistency, Reliability, Explainability, and Safety
Manas Gaur, Amit P. Sheth · 05 Dec 2023
Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies
Vithya Yogarajan, Gillian Dobbie, Te Taka Keegan, R. Neuwirth · ALM · 03 Dec 2023
JAB: Joint Adversarial Prompting and Belief Augmentation
Ninareh Mehrabi, Palash Goyal, Anil Ramakrishna, Jwala Dhamala, Shalini Ghosh, Richard Zemel, Kai-Wei Chang, Aram Galstyan, Rahul Gupta · AAML · 16 Nov 2023
In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering
Sheng Liu, Haotian Ye, Lei Xing, James Y. Zou · 11 Nov 2023
Fine-tune Language Models to Approximate Unbiased In-context Learning
Timothy Chu, Zhao-quan Song, Chiwun Yang · 05 Oct 2023
Beyond Task Performance: Evaluating and Reducing the Flaws of Large Multimodal Models with In-Context Learning
Mustafa Shukor, Alexandre Ramé, Corentin Dancette, Matthieu Cord · LRM, MLLM · 01 Oct 2023
Bias and Fairness in Large Language Models: A Survey
Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Nesreen Ahmed · AILaw · 02 Sep 2023
Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey
Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, …, Chris White, Quanquan Gu, Jian Pei, Carl Yang, Liang Zhao · ALM · 30 May 2023
PlugMed: Improving Specificity in Patient-Centered Medical Dialogue Generation using In-Context Learning
Chengfeng Dou, Zhi Jin, Wenpin Jiao, Haiyan Zhao, Zhengwei Tao, Yongqiang Zhao · LM&MA, MedIm · 19 May 2023
StarCoder: may the source be with you!
Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, …, Sean M. Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, H. D. Vries · 09 May 2023
Learn What NOT to Learn: Towards Generative Safety in Chatbots
Leila Khalatbari, Yejin Bang, Dan Su, Willy Chung, Saeedeh Ghadimi, Hossein Sameti, Pascale Fung · 21 Apr 2023
A Survey on In-context Learning
Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Jingyuan Ma, …, Zhiyong Wu, Baobao Chang, Xu Sun, Lei Li, Zhifang Sui · ReLM, AIMat · 31 Dec 2022
Improving alignment of dialogue agents via targeted human judgements
Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, …, John F. J. Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, G. Irving · ALM, AAML · 28 Sep 2022
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, …, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe · OSLM, ALM · 04 Mar 2022
Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp · AILaw, LRM · 18 Apr 2021
Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP
Timo Schick, Sahana Udupa, Hinrich Schütze · 28 Feb 2021
What Makes Good In-Context Examples for GPT-3?
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen · AAML, RALM · 17 Jan 2021
Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving · ALM · 18 Sep 2019