LLM Hallucinations in Practical Code Generation: Phenomena, Mechanism, and Mitigation
arXiv: 2409.20550
20 January 2025
Authors: Ziyao Zhang, Yanlin Wang, Chong Wang, Jiachi Chen, Zibin Zheng
Papers citing "LLM Hallucinations in Practical Code Generation: Phenomena, Mechanism, and Mitigation" (6 of 6 papers shown)
An AI-Powered Research Assistant in the Lab: A Practical Guide for Text Analysis Through Iterative Collaboration with LLMs
Gino Carmona-Díaz, William Jiménez-Leal, María Alejandra Grisales, Chandra Sripada, Santiago Amaya, Michael Inzlicht, Juan Pablo Bermúdez
14 May 2025
Hallucination by Code Generation LLMs: Taxonomy, Benchmarks, Mitigation, and Challenges
Yunseo Lee, John Youngeun Song, Dongsun Kim, Jindae Kim, Mijung Kim, Jaechang Nam
Tags: HILM, LRM
29 April 2025
Automated Factual Benchmarking for In-Car Conversational Systems using Large Language Models
Rafael Giebisch, Ken E. Friedl, Lev Sorokin, Andrea Stocco
Tags: HILM
1 April 2025
Isolating Language-Coding from Problem-Solving: Benchmarking LLMs with PseudoEval
Jiarong Wu, Songqiang Chen, Jialun Cao, Hau Ching Lo, S. Cheung
26 February 2025
SoK: Exploring Hallucinations and Security Risks in AI-Assisted Software Development with Insights for LLM Deployment
Ariful Haque, Sunzida Siddique, M. Rahman, Ahmed Rafi Hasan, Laxmi Rani Das, Marufa Kamal, Tasnim Masura, Kishor Datta Gupta
31 January 2025
A Deep Dive Into Large Language Model Code Generation Mistakes: What and Why?
QiHong Chen, Jiawei Li, Jiecheng Deng, Jiachen Yu, Justin Tian Jin Chen, Iftekhar Ahmed
3 November 2024