
Anticipating Gaming to Incentivize Improvement: Guiding Agents in (Fair) Strategic Classification

27 pages (main) + 4 pages (bibliography), 12 figures, 3 tables
Abstract

As machine learning algorithms increasingly influence critical decision-making across application areas, understanding how humans strategically respond to these systems becomes vital. We study individuals' choice between genuinely improving their qualifications (``improvement'') and attempting to deceive the algorithm by manipulating their features (``manipulation'') in response to an algorithmic decision system. We further investigate the algorithm designer's ability to shape these strategic responses, and the fairness implications of doing so. Specifically, we formulate these interactions as a Stackelberg game in which a firm deploys a (fair) classifier and individuals strategically respond. Our model incorporates both differing costs and stochastic efficacy for manipulation and improvement. The analysis reveals several potential classes of agent responses and characterizes the optimal classifier for each. Based on these results, we highlight the impact of the firm's anticipation of strategic behavior, identifying when and why a (fair) strategic policy can not only prevent manipulation but also incentivize agents to opt for improvement.
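To make the Stackelberg structure concrete, the following is a minimal sketch, not the paper's actual model: a firm (leader) commits to an acceptance threshold, and each agent (follower) below the threshold best-responds by doing nothing, manipulating (cheaper, stochastic, no true qualification gain), or improving (costlier, stochastic, genuine gain). All parameter values (costs, success probabilities, payoffs) are illustrative assumptions.

```python
def best_response(x, theta, c_improve=0.5, c_manip=0.3,
                  p_improve=0.9, p_manip=0.6, benefit=1.0):
    """Agent's utility-maximizing action given the firm's threshold theta.

    All costs/probabilities are hypothetical; they stand in for the
    heterogeneous costs and stochastic efficacy discussed in the abstract.
    """
    if x >= theta:                  # already accepted: no action needed
        return "none", benefit
    options = {
        "none": 0.0,
        # genuine improvement: higher cost, but raises true qualification
        "improve": p_improve * benefit - c_improve,
        # manipulation ("gaming"): lower cost, no true qualification gain
        "manipulate": p_manip * benefit - c_manip,
    }
    action = max(options, key=options.get)
    return action, options[action]

def firm_utility(theta, agents, **kw):
    """Firm anticipates agents' best responses when evaluating a threshold:
    it gains from truly qualified (or genuinely improved) acceptances and
    loses in expectation from admitting manipulators."""
    total = 0.0
    for x in agents:
        action, _ = best_response(x, theta, **kw)
        if x >= theta or action == "improve":
            total += 1.0            # qualified or genuinely improved
        elif action == "manipulate":
            total -= 0.6            # expected loss from a gamed acceptance
    return total
```

Under these illustrative defaults, improvement dominates manipulation for below-threshold agents; raising the cost of improvement (or lowering its efficacy) flips the best response to manipulation, which is the kind of regime change the paper's characterization of agent-response classes formalizes. A firm anticipating this would search over thresholds using `firm_utility` rather than treating agent features as fixed.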

@article{alhanouti2025_2505.05594,
  title={Anticipating Gaming to Incentivize Improvement: Guiding Agents in (Fair) Strategic Classification},
  author={Sura Alhanouti and Parinaz Naghizadeh},
  journal={arXiv preprint arXiv:2505.05594},
  year={2025}
}