ResearchTrend.AI
Characterizing Photorealism and Artifacts in Diffusion Model-Generated Images

17 February 2025
Negar Kamali
Karyn Nakamura
Aakriti Kumar
Angelos Chatzimparmpas
Jessica Hullman
Matthew Groh
Abstract

Diffusion model-generated images can appear indistinguishable from authentic photographs, but they often contain artifacts and implausibilities that reveal their AI-generated provenance. Given the challenge that photorealistic AI-generated images pose to public trust in media, we conducted a large-scale experiment measuring human detection accuracy on 450 diffusion model-generated images and 149 real images. Based on 749,828 observations and 34,675 comments from 50,444 participants, we find that an image's scene complexity, its artifact types, its display time, and human curation of AI-generated images all play significant roles in how accurately people distinguish real from AI-generated images. Additionally, we propose a taxonomy characterizing artifacts that often appear in images generated by diffusion models. Our empirical observations and taxonomy offer nuanced insights into the capabilities and limitations of diffusion models for generating photorealistic images in 2024.

View on arXiv
@article{kamali2025_2502.11989,
  title={Characterizing Photorealism and Artifacts in Diffusion Model-Generated Images},
  author={Negar Kamali and Karyn Nakamura and Aakriti Kumar and Angelos Chatzimparmpas and Jessica Hullman and Matthew Groh},
  journal={arXiv preprint arXiv:2502.11989},
  year={2025}
}