
arXiv:2206.01653 (v8, latest)

Metrics reloaded: Recommendations for image analysis validation

3 June 2022
Lena Maier-Hein
Annika Reinke
Patrick Godau
M. Tizabi
Florian Buettner
E. Christodoulou
Ben Glocker
Fabian Isensee
Jens Kleesiek
Michal Kozubek
M. Reyes
Michael A. Riegler
Manuel Wiesenfarth
A. Emre Kavur
Carole H. Sudre
Michael Baumgartner
Matthias Eisenmann
Doreen Heckmann-Nötzel
Tim Rädsch
Laura Acion
Michela Antonelli
Tal Arbel
Spyridon Bakas
Arriel Benis
P. Bankhead
M. Jorge Cardoso
Veronika Cheplygina
Beth A. Cimini
Gary S. Collins
Keyvan Farahani
Luciana Ferrer
Adrian Galdran
Bram van Ginneken
Robert Haase
Daniel A. Hashimoto
Michael M. Hoffman
M. Huisman
Pierre Jannin
Charles E. Kahn
Dagmar Kainmueller
Bernhard Kainz
Alexandros Karargyris
Alan Karthikesalingam
H. Kenngott
D. Moher
A. Kopp-Schneider
Anna Kreshuk
Tahsin M. Kurc
Bennett A. Landman
G. Litjens
Amin Madani
Klaus Maier-Hein
Anne L. Martel
Peter Mattson
Erik H. W. Meijering
Bjoern Menze
Karel G. M. Moons
Henning Müller
Brennan Nichyporuk
Felix Nickel
Jens Petersen
Nasir M. Rajpoot
Nicola Rieke
Julio Saez-Rodriguez
Clarisa Sánchez Gutiérrez
S. Shetty
Maarten van Smeden
Ronald M. Summers
A. Taha
Aleksei Tiulpin
Sotirios A. Tsaftaris
Ben Van Calster
Gaël Varoquaux
Paul F. Jäger
Abstract

Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. Particularly in automatic biomedical image analysis, chosen performance metrics often do not reflect the domain interest, thus failing to adequately measure scientific progress and hindering translation of ML techniques into practice. To overcome this, our large international expert consortium created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. The framework was developed in a multi-stage Delphi process and is based on the novel concept of a problem fingerprint - a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), data set and algorithm output. Based on the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as a classification task at image, object or pixel level, namely image-level classification, object detection, semantic segmentation, and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool, which also provides a point of access to explore weaknesses, strengths and specific recommendations for the most common validation metrics. The broad applicability of our framework across domains is demonstrated by an instantiation for various biological and medical image analysis use cases.
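
To make the idea of a problem fingerprint concrete, the following is a minimal Python sketch of how a structured fingerprint could drive rule-based metric suggestions. It is a hypothetical illustration only: the field names (task_type, high_class_imbalance, boundary_accuracy_matters) and the three toy rules are simplified assumptions for demonstration and do not reproduce the actual Metrics Reloaded decision guides or online tool.

from dataclasses import dataclass

# Hypothetical, simplified "problem fingerprint". The real framework captures
# many more properties (domain interest, target structure properties,
# data set and algorithm output characteristics).
@dataclass
class ProblemFingerprint:
    task_type: str                  # e.g. "image_classification", "object_detection",
                                    # "semantic_segmentation", "instance_segmentation"
    high_class_imbalance: bool
    boundary_accuracy_matters: bool

def suggest_metrics(fp: ProblemFingerprint) -> list[str]:
    """Toy rule set mapping a fingerprint to candidate metrics (illustrative only)."""
    if fp.task_type == "image_classification":
        # Plain accuracy is misleading under class imbalance; prefer robust alternatives.
        return ["balanced accuracy", "AUROC"] if fp.high_class_imbalance else ["accuracy", "F1 score"]
    if fp.task_type == "object_detection":
        return ["average precision", "free-response ROC"]
    if fp.task_type in ("semantic_segmentation", "instance_segmentation"):
        metrics = ["Dice similarity coefficient"]
        if fp.boundary_accuracy_matters:
            # Overlap metrics alone ignore contour quality; add a boundary-aware metric.
            metrics.append("normalized surface distance")
        return metrics
    raise ValueError(f"unknown task type: {fp.task_type}")

if __name__ == "__main__":
    fp = ProblemFingerprint(
        task_type="semantic_segmentation",
        high_class_imbalance=True,
        boundary_accuracy_matters=True,
    )
    print(suggest_metrics(fp))
    # ['Dice similarity coefficient', 'normalized surface distance']

The point of the sketch is the separation the abstract describes: problem characterization (the fingerprint) is recorded once, and metric selection follows from it, rather than being chosen ad hoc per paper.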
