Self-Alignment: Improving Alignment of Cultural Values in LLMs via In-Context Learning

29 August 2024
Rochelle Choenni, Ekaterina Shutova

Papers citing "Self-Alignment: Improving Alignment of Cultural Values in LLMs via In-Context Learning"

8 / 8 papers shown

NurValues: Real-World Nursing Values Evaluation for Large Language Models in Clinical Context
Ben Yao, Qiuchi Li, Yazhou Zhang, Siyu Yang, Bohan Zhang, Prayag Tiwari, Jing Qin
13 May 2025

CAReDiO: Cultural Alignment of LLM via Representativeness and Distinctiveness Guided Data Optimization
Jing Yao, Xiaoyuan Yi, Jindong Wang, Zhicheng Dou, Xing Xie
09 Apr 2025

Can Large Language Models Predict Associations Among Human Attitudes?
Ana Ma, Derek Powell
26 Mar 2025

An Investigation into Value Misalignment in LLM-Generated Texts for Cultural Heritage
Fan Bu, Zheng Wang, Siyi Wang, Ziyao Liu
03 Jan 2025

ValuesRAG: Enhancing Cultural Alignment Through Retrieval-Augmented Contextual Learning
Wonduk Seo, Zonghao Yuan, Yi Bu
VLM
02 Jan 2025

Self-Pluralising Culture Alignment for Large Language Models
Shaoyang Xu, Yongqi Leng, Linhao Yu, Deyi Xiong
16 Oct 2024

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
04 Mar 2022

Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp
AILaw, LRM
18 Apr 2021