
Soft Prompts for Evaluation: Measuring Conditional Distance of Capabilities

Abstract

To help evaluate and understand the latent capabilities of language models, this paper introduces an approach using optimized input embeddings, or 'soft prompts,' as a metric of conditional distance between a model and a target behavior. The technique aims to facilitate latent capability discovery as a part of automated red teaming/evaluation suites and to provide quantitative feedback about the accessibility of potentially concerning behaviors in a way that may scale to powerful future models, including those which may otherwise be capable of deceptive alignment. An evaluation framework using soft prompts is demonstrated in natural language, chess, and pathfinding, and the technique is extended with generalized conditional soft prompts to aid in constructing task evaluations.
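To make the core idea concrete, the following is a minimal toy sketch (an assumption for illustration, not the paper's implementation) of using an optimized soft prompt as a conditional-distance probe. The "model" here is a frozen linear map `W` standing in for a frozen language model, `y` stands in for a target behavior, and the only trainable parameters are the soft prompt embedding `p`. The residual loss after optimization serves as a crude proxy for how accessible the target behavior is.

```python
import numpy as np

# Toy sketch of soft-prompt-based conditional distance (illustrative only).
# W is a frozen stand-in "model"; y is a stand-in target behavior; p is the
# soft prompt embedding, the only thing we optimize.

rng = np.random.default_rng(0)
d = 8                          # embedding dimension (arbitrary toy size)
W = rng.normal(size=(d, d))    # frozen "model" weights
y = rng.normal(size=d)         # target behavior

p = np.zeros(d)                # soft prompt, initialized to zero
lr = 0.01
for _ in range(2000):
    out = W @ p                        # model output conditioned on the prompt
    grad = 2 * W.T @ (out - y)         # gradient of ||W p - y||^2 w.r.t. p
    p -= lr * grad                     # gradient descent on the prompt only

# Residual loss after optimization: a crude "conditional distance" readout.
final_loss = float(np.sum((W @ p - y) ** 2))
```

In the paper's setting the frozen model is a transformer, the soft prompt is a sequence of trainable input embeddings prepended to the token embeddings, and the distance readout comes from the loss achieved on the target behavior; the sketch above only preserves that overall shape.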

@article{nordby2025_2505.14943,
  title={Soft Prompts for Evaluation: Measuring Conditional Distance of Capabilities},
  author={Ross Nordby},
  journal={arXiv preprint arXiv:2505.14943},
  year={2025}
}