
Not All Metrics Are Guilty: Improving NLG Evaluation by Diversifying References

24 May 2023

Tianyi Tang, Hongyuan Lu, Yuchen Eleanor Jiang, Haoyang Huang, Dongdong Zhang, Wayne Xin Zhao, Tom Kocmi, Furu Wei

Papers citing "Not All Metrics Are Guilty: Improving NLG Evaluation by Diversifying References"

3 / 3 papers shown
RevisEval: Improving LLM-as-a-Judge via Response-Adapted References
Qiyuan Zhang, Yufei Wang, Tiezheng Yu, Yuxin Jiang, Chuhan Wu, ..., Xin Jiang, Lifeng Shang, Ruiming Tang, Fuyuan Lyu, Chen Ma
07 Oct 2024
Chain-of-Dictionary Prompting Elicits Translation in Large Language Models
Hongyuan Lu, Haoran Yang, Haoyang Huang, Dongdong Zhang, Wai Lam, Furu Wei
LRM, AI4CE
11 May 2023
Can Large Language Models Be an Alternative to Human Evaluations?
Cheng-Han Chiang, Hung-yi Lee
ALM, LM&MA
03 May 2023