ResearchTrend.AI
What's Producible May Not Be Reachable: Measuring the Steerability of Generative Models

21 March 2025
Keyon Vafa
Sarah Bentley
Jon M. Kleinberg
Sendhil Mullainathan
arXiv | PDF | HTML
Abstract

How should we evaluate the quality of generative models? Many existing metrics focus on a model's producibility, i.e., the quality and breadth of outputs it can generate. However, the actual value from using a generative model stems not just from what it can produce but from whether a user with a specific goal can produce an output that satisfies that goal. We refer to this property as steerability. In this paper, we first introduce a mathematical framework for evaluating steerability independently from producibility. Steerability is more challenging to evaluate than producibility because it requires knowing a user's goals. We address this issue by creating a benchmark task that relies on one key idea: sample an output from a generative model and ask users to reproduce it. We implement this benchmark in a large-scale user study of text-to-image models and large language models. Despite the ability of these models to produce high-quality outputs, they all perform poorly on steerability. This suggests that we need to focus on improving the steerability of generative models. We show such improvements are indeed possible: through reinforcement learning techniques, we create an alternative steering mechanism for image models that achieves more than 2x improvement on this benchmark.
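The benchmark's core loop, as the abstract describes it, is: sample a target output from the model, have a user try to reproduce it by steering the model, and score how close the best attempt gets. A minimal sketch of that loop is below. Everything in it is an illustrative stand-in, not the paper's actual setup: the toy model, prompt sets, simulated user, and word-level Jaccard similarity are all hypothetical placeholders for a real generative model, real study participants, and a perceptual similarity metric.

```python
import random

# Hypothetical toy "model": maps a prompt to an output string.
# A real generative model is stochastic; we mimic that with a random style.
STYLES = ["sketch", "photo", "painting"]
SUBJECTS = ["cat", "boat", "tree"]

def toy_model(prompt, rng):
    return f"{rng.choice(STYLES)} of a {prompt}"

def similarity(a, b):
    # Jaccard similarity over words -- a stand-in for a perceptual or
    # embedding-based similarity metric.
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def steerability_score(model, n_targets=50, attempts_per_target=5, seed=0):
    """Sample a target output, let a simulated 'user' try to reproduce it
    by re-prompting the model, and average the best similarity achieved
    per target. Higher means the model is easier to steer."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_targets):
        target = model(rng.choice(SUBJECTS), rng)
        # The "user" here is just random re-prompting; a real study uses
        # human participants searching the prompt space deliberately.
        best = max(
            similarity(target, model(rng.choice(SUBJECTS), rng))
            for _ in range(attempts_per_target)
        )
        scores.append(best)
    return sum(scores) / len(scores)

print(f"steerability score: {steerability_score(toy_model):.3f}")
```

The key design point the sketch illustrates is that steerability is scored against a goal the model itself produced, so a low score cannot be blamed on the goal being unproducible.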

View on arXiv
@article{vafa2025_2503.17482,
  title={What's Producible May Not Be Reachable: Measuring the Steerability of Generative Models},
  author={Keyon Vafa and Sarah Bentley and Jon Kleinberg and Sendhil Mullainathan},
  journal={arXiv preprint arXiv:2503.17482},
  year={2025}
}