Predictive risk scores, which estimate the probability of a binary outcome on the basis of observed covariates, are common across the sciences. They are frequently developed with the intent of avoiding the outcome in question by intervening in response to estimated risks. Since risk scores are usually developed in complex systems, interventions typically take the form of expert agents responding to estimated risks as they best see fit. In such settings, interventions may be complex and their effects difficult to observe or infer, meaning that explicit specification of interventions in response to risk scores is impractical. This limits the scope to tune the aggregate model-intervention scheme so as to optimise an objective. We propose an algorithm by which a model-intervention scheme can be developed by 'stacking' possibly unknown intervention effects. We show that, by repeatedly observing outcomes and updating the model and intervention, this scheme drives eventual outcome risk to converge, or almost converge, to an equivocal value for any initial covariate values. Our approach deploys a series of risk scores to expert agents, with instructions to act on them in succession according to their best judgement. The algorithm uses only observations of pre-intervention covariates and the eventual outcome as input. It is not necessary to know or infer the effect of the intervention, other than making a general assumption that it is 'well-intentioned'. The algorithm can also be used to safely update risk scores in the presence of unknown interventions and concept drift. We demonstrate convergence of the expected outcome in a range of settings and show robustness to errors in risk estimation and to concept drift. We suggest several practical applications and demonstrate a potential implementation by simulation, showing that the algorithm leads to a fair distribution of outcome risk across a population.
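The sketch below illustrates, under assumed specifics, the kind of stacked model-intervention loop the abstract describes: a risk score is fitted to pre-intervention covariates and the outcomes observed so far, agents intervene on high-risk individuals, and the process repeats with each intervention layered on the previous ones. The logistic data-generating model, the form of the "well-intentioned" intervention (a log-odds reduction proportional to estimated excess risk above an equivocal value of 0.5), and the gradient-descent fit are all illustrative assumptions, not the authors' construction.

```python
# Minimal sketch of a stacked model-intervention loop (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n, p, n_epochs = 20_000, 3, 8
X = rng.normal(size=(n, p))            # pre-intervention covariates (never change)
beta_true = np.array([1.0, -0.8, 0.5])
logit_risk = X @ beta_true             # current log-odds of the outcome

for epoch in range(n_epochs):
    # 1. Observe eventual outcomes under all interventions applied so far.
    y = rng.binomial(1, sigmoid(logit_risk))

    # 2. Fit a fresh risk score on pre-intervention covariates and observed
    #    outcomes (plain logistic regression by gradient descent).
    beta_hat = np.zeros(p)
    for _ in range(300):
        grad = X.T @ (sigmoid(X @ beta_hat) - y) / n
        beta_hat -= 2.0 * grad
    score = sigmoid(X @ beta_hat)

    # 3. Deploy the score; expert agents intervene on individuals flagged as
    #    high risk. The intervention effect is unknown to the algorithm; here it
    #    is assumed "well-intentioned" (it reduces log-odds in proportion to
    #    estimated excess risk) and stacks on top of all previous epochs.
    logit_risk = logit_risk - 1.5 * np.maximum(score - 0.5, 0.0)

    print(f"epoch {epoch}: mean outcome risk = {sigmoid(logit_risk).mean():.3f}")
```

Under these assumptions, the risk of initially high-risk individuals drifts towards the equivocal value over successive epochs while low-risk individuals are left untouched, and the updating procedure only ever sees the pre-intervention covariates X and the eventual outcomes y, in line with the abstract's description.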