Scientific expertise often requires recognizing subtle visual differences that remain challenging to articulate, even for domain experts. We present a system that leverages generative models to automatically discover and visualize minimal discriminative features between categories while preserving instance identity. Our method generates counterfactual visualizations that apply subtle, targeted transformations between classes, and it performs well even in domains where data is sparse, examples are unpaired, and category boundaries resist verbal description. Experiments across six domains, including black hole simulations, butterfly taxonomy, and medical imaging, demonstrate accurate class transitions with limited training data, highlighting both established discriminative features and novel, subtle distinctions that measurably improve category differentiation. User studies confirm that our generated counterfactuals significantly outperform traditional approaches in teaching humans to correctly differentiate between fine-grained classes, showing the potential of generative models to advance visual learning and scientific research.
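The abstract does not spell out the pipeline, but the core operation it describes, editing an image toward another class while keeping the instance largely intact, can be approximated with an off-the-shelf image-to-image diffusion pipeline. The sketch below is a minimal illustration of that general idea, not the authors' DIFFusion method; the model checkpoint, file names, prompt, and strength value are all assumptions chosen for the butterfly-taxonomy example.

# Hypothetical sketch: counterfactual-style class transfer via diffusion
# image-to-image editing. NOT the paper's method; the checkpoint, prompt,
# file names, and hyperparameters are illustrative assumptions.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base model
    torch_dtype=torch.float16,
).to("cuda")

# Source instance from one fine-grained class (hypothetical file).
source = Image.open("monarch.jpg").convert("RGB").resize((512, 512))

# A low strength adds little noise, so most of the source image (the
# instance identity) is preserved while the target-class prompt nudges
# only class-relevant features.
counterfactual = pipe(
    prompt="a viceroy butterfly",  # target class description (assumed)
    image=source,
    strength=0.35,                 # small edit -> minimal, targeted change
    guidance_scale=7.5,
    num_inference_steps=50,
).images[0]

counterfactual.save("monarch_to_viceroy.png")

In this toy setup the strength parameter plays the role of a knob between identity preservation and class transfer; the paper's contribution, by contrast, is producing such transitions reliably from sparse, unpaired data, which a generic prompt-driven pipeline like this does not guarantee.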
@article{chiquier2025_2504.08046,
  title={Teaching Humans Subtle Differences with DIFFusion},
  author={Mia Chiquier and Orr Avrech and Yossi Gandelsman and Berthy Feng and Katherine Bouman and Carl Vondrick},
  journal={arXiv preprint arXiv:2504.08046},
  year={2025}
}