Quantifying disparities, that is, differences in outcomes among population groups, is an important task in public health, economics, and, increasingly, machine learning. In this work, we study how to collect data to measure disparities. The field of survey statistics provides extensive guidance on the sample sizes needed to accurately estimate quantities such as averages; however, there is limited guidance for estimating disparities. We consider a broad class of disparity metrics, including those used in machine learning to measure the fairness of model outputs. For each metric, we derive the number of samples to collect per group that maximizes the precision of disparity estimates under a fixed data collection budget. We also provide sample size calculations for hypothesis tests that check for significant disparities. Our methods can be used to determine sample sizes for fairness evaluations. We validate the methods on two nationwide surveys, used to understand population-level attributes such as employment and health, and on a prediction model. Absent a priori information about the groups, we find that sampling the groups equally typically performs well.
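For intuition on the kind of allocation studied here, a minimal sketch (not the paper's exact derivation): if the disparity is a difference of group means, the variance of the plug-in estimate is s_a^2/n_a + s_b^2/n_b, and minimizing it under a fixed total budget gives a Neyman-style rule with n_g proportional to the group standard deviation s_g. The group standard deviations below are illustrative.

```python
import numpy as np

def allocate_budget(group_sigmas, total_n):
    """Neyman-style allocation for estimating a difference of group means.

    Var(mean_a - mean_b) = s_a^2/n_a + s_b^2/n_b; minimizing it subject to
    n_a + n_b = total_n gives n_g proportional to s_g. With no prior
    information (equal s_g), this reduces to sampling the groups equally.
    """
    sigmas = np.asarray(group_sigmas, dtype=float)
    return np.round(total_n * sigmas / sigmas.sum()).astype(int)

# Illustrative group standard deviations and a budget of 1,000 samples.
print(allocate_budget([1.0, 2.0], 1000))  # -> [333 667]
```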
ICML
When do Minimax-fair Learning and Empirical Risk Minimization Coincide?
Harvineet Singh, Matthäus Kleindessner, Volkan Cevher, and 2 more authors
In Proceedings of the 40th International Conference on Machine Learning, 23–29 Jul 2023
Minimax-fair machine learning minimizes the error for the worst-off group. However, empirical evidence suggests that when sophisticated models are trained with standard empirical risk minimization (ERM), they often have the same performance on the worst-off group as a minimax-trained model. Our work makes this counter-intuitive observation concrete. We prove that if the hypothesis class is sufficiently expressive and the group information is recoverable from the features, ERM and minimax-fair learning formulations indeed have the same performance on the worst-off group. We provide additional empirical evidence that this observation holds on a wide range of datasets and hypothesis classes. Since ERM is fundamentally easier than minimax optimization, our findings have implications for the practice of fair machine learning.
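As a minimal illustration of the quantity being compared (the worst-group error that minimax-fair training targets), here is a hedged sketch; the labels, predictions, and group memberships are toy placeholders, not the paper's experimental setup.

```python
import numpy as np

def worst_group_error(y_true, y_pred, groups):
    """Largest per-group 0-1 error: the objective minimax-fair training
    minimizes, and the metric on which ERM is compared against it."""
    per_group = {g: np.mean(y_true[groups == g] != y_pred[groups == g])
                 for g in np.unique(groups)}
    return max(per_group.values()), per_group

# Toy labels, predictions, and group memberships (placeholders).
y_true = np.array([0, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 0, 1, 1])
groups = np.array(["a", "a", "a", "b", "b", "b"])
print(worst_group_error(y_true, y_pred, groups))
```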
ICML
"Why did the Model Fail?": Attributing Model Performance Changes to Distribution Shifts
Haoran Zhang, Harvineet Singh, Marzyeh Ghassemi, and 1 more author
In Proceedings of the 40th International Conference on Machine Learning, 23–29 Jul 2023
Machine learning models frequently experience performance drops under distribution shifts. Such shifts may arise from multiple simultaneous factors, such as changes in data quality, differences in specific covariate distributions, or changes in the relationship between label and features. When a model does fail during deployment, attributing the performance change to these factors is critical for the model developer to identify the root cause and take mitigating actions. In this work, we introduce the problem of attributing performance differences between environments to distribution shifts in the underlying data-generating mechanisms. We formulate the problem as a cooperative game whose players are distributions. We define the value of a set of distributions to be the change in model performance when only this set of distributions has changed between environments, and we derive an importance weighting method for computing the value of an arbitrary set of distributions. The contribution of each distribution to the total performance change is then quantified as its Shapley value. We demonstrate the correctness and utility of our method on synthetic, semi-synthetic, and real-world case studies, showing its effectiveness in attributing performance changes to a wide range of distribution shifts.
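A minimal sketch of the Shapley attribution described above; the `value` function here is a hypothetical stand-in for the paper's importance-weighting estimate of the performance change when only a subset of distributions shifts, and the numbers are illustrative.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a small set of players (here, distributions).

    value(S) should return the change in model performance when only the
    distributions in frozenset S shift between environments (value of the
    empty set is 0).
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                S = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += weight * (value(S | {p}) - value(S))
    return phi

# Illustrative value function over two shifted distributions (numbers are
# made up): together they explain a 0.10 drop in performance.
drop = {frozenset(): 0.0, frozenset({"P(X)"}): 0.03,
        frozenset({"P(Y|X)"}): 0.05, frozenset({"P(X)", "P(Y|X)"}): 0.10}
print(shapley_values(["P(X)", "P(Y|X)"], lambda S: drop[S]))  # sums to 0.10
```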
2022
preprint
Active Linear Regression in the Online Setting via Leverage Score Sampling
Harvineet Singh, Christopher Musco, and Rumi Chunara
Modern predictive models require large amounts of data for training and evaluation; in the absence of such data, models may become specific to particular locations, the populations in them, and local clinical practices. Yet best practices for clinical risk prediction models have not considered such challenges to generalizability. Here we ask whether the population- and group-level performance of mortality prediction models varies significantly when they are applied to hospitals or geographies different from the ones in which they were developed. Further, what characteristics of the datasets explain the performance variation? In this multi-center cross-sectional study, we analyzed electronic health records from 179 hospitals across the US, covering 70,126 hospitalizations from 2014 to 2015. The generalization gap, defined as the difference in model performance metrics across hospitals, is computed for the area under the receiver operating characteristic curve (AUC) and the calibration slope. To assess model performance by the race variable, we report differences in false negative rates across groups. Data were also analyzed using the causal discovery algorithm "Fast Causal Inference", which infers paths of causal influence while identifying potential influences associated with unmeasured variables. When transferring models across hospitals, AUC at the test hospital ranged from 0.777 to 0.832 (1st–3rd quartile, IQR; median 0.801); calibration slope from 0.725 to 0.983 (IQR; median 0.853); and the disparity in false negative rates from 0.046 to 0.168 (IQR; median 0.092). The distributions of all variable types (demographics, vitals, and labs) differed significantly across hospitals and regions. The race variable also mediated differences in the relationship between clinical variables and mortality across hospitals and regions. In conclusion, group-level performance should be assessed during generalizability checks to identify potential harms to the groups. Moreover, to develop methods that improve model performance in new environments, a better understanding and documentation of the provenance of data and health processes is needed to identify and mitigate sources of variation.
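A minimal sketch of the transfer metrics reported above (AUC and the false-negative-rate gap across groups); the threshold, variable names, and commented usage are assumptions, not the study's code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def transfer_metrics(y_true, y_score, groups, threshold=0.5):
    """AUC plus per-group false negative rates at a fixed decision threshold."""
    y_pred = (y_score >= threshold).astype(int)
    fnr = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        fnr[g] = float(np.mean(y_pred[positives] == 0)) if positives.any() else float("nan")
    return {"auc": roc_auc_score(y_true, y_score),
            "fnr_gap": max(fnr.values()) - min(fnr.values()),
            "fnr_by_group": fnr}

# Generalization gap (illustrative): evaluate a model fit at hospital A on
# hospital B and difference the AUCs. `model`, `X_a`, `y_a`, `groups_a`, and
# their hospital-B counterparts are assumed to exist.
# gap = transfer_metrics(y_a, model.predict_proba(X_a)[:, 1], groups_a)["auc"] \
#     - transfer_metrics(y_b, model.predict_proba(X_b)[:, 1], groups_b)["auc"]
```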
AIES
Towards Robust Off-Policy Evaluation via Human Inputs
Harvineet Singh, Shalmali Joshi, Finale Doshi-Velez, and 1 more author
AAAI/ACM Conference on AI, Ethics, and Society, 2022
We study the problem of learning fair prediction models for unseen test sets distributed differently from the training set. Stability against changes in data distribution is an important mandate for the responsible deployment of models. The domain adaptation literature addresses this concern, albeit with the notion of stability limited to that of prediction accuracy. We identify sufficient conditions under which stable models, in terms of both prediction accuracy and fairness, can be learned. Using the causal graph describing the data and the anticipated shifts, we specify an approach based on feature selection that exploits conditional independencies in the data to estimate accuracy and fairness metrics for the test set. We show that for specific fairness definitions, the resulting model satisfies a form of worst-case optimality. In the context of a healthcare task, we illustrate the advantages of the approach in making more equitable decisions.
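A minimal sketch of the graph-based selection idea, assuming the anticipated shift is represented by an environment node C in the causal graph; the example graph and the use of networkx's d-separation helper are illustrative, not the paper's implementation.

```python
from itertools import combinations
import networkx as nx

# Hypothetical causal graph: C indexes the environment (the anticipated shift),
# Y is the outcome, and X1/X2/X3 are candidate features.
G = nx.DiGraph([("C", "X1"), ("X1", "Y"), ("X2", "Y"), ("C", "X3"), ("Y", "X3")])

def stable_feature_sets(G, outcome="Y", env="C"):
    """Feature subsets Z with Y d-separated from C given Z, i.e. subsets for
    which P(Y | Z) is invariant to the anticipated shift."""
    feats = [n for n in G.nodes if n not in (outcome, env)]
    stable = []
    for k in range(len(feats) + 1):
        for Z in combinations(feats, k):
            # nx.d_separated is renamed nx.is_d_separator in networkx >= 3.3.
            if nx.d_separated(G, {outcome}, {env}, set(Z)):
                stable.append(set(Z))
    return stable

print(stable_feature_sets(G))  # e.g. [{'X1'}, {'X1', 'X2'}]
```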
2019
UAI
Cascading Linear Submodular Bandits: Accounting for Position Bias and Diversity in Online Learning to Rank
Gaurush Hiranandani, Harvineet Singh, Prakhar Gupta, and 3 more authors