How racial bias infected a major health-care algorithm
By Jeff Cockrell

As data science has developed in recent decades, algorithms have come to play a role in assisting decision-making in a wide variety of contexts, making predictions that in some cases have enormous human consequences. Algorithms may help decide who is admitted to an elite school, approved for a mortgage, or allowed to await trial from home rather than behind bars.
But there are well-publicized concerns that algorithms may perpetuate or systematize biases. And research by the University of California at Berkeley’s Ziad Obermeyer, Brian Powers of Boston’s Brigham and Women’s Hospital, Christine Vogeli of Partners HealthCare, and Chicago Booth’s Sendhil Mullainathan finds that one algorithm, used to make an important health-care determination for millions of patients in the United States, produces racially biased results.
The algorithm in question is used to help identify candidates for enrollment in “high-risk care management” programs, which provide additional resources and attention to patients with complex health needs. Such programs, which can improve patient outcomes and reduce costs, are employed by many large US health systems, so the decision of whom to enroll affects tens of millions of people. The algorithm assigns each patient a risk score that guides enrollment decisions: a patient with a risk score at or above the 97th percentile is automatically identified for enrollment, while one with a score in the 55th through 96th percentiles is flagged for possible enrollment depending on input from the patient’s doctor.
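The triage rule described above can be sketched as a simple threshold function. This is an illustrative reconstruction of the decision logic as the article describes it, not the health system's actual code; the function name and labels are assumptions.

```python
def triage(risk_percentile: float) -> str:
    """Map a patient's risk-score percentile to an enrollment action,
    per the two thresholds described in the article (illustrative only)."""
    if risk_percentile >= 97:
        return "auto-enroll"       # automatically identified for enrollment
    elif risk_percentile >= 55:
        return "refer-to-doctor"   # flagged; doctor input decides
    else:
        return "no-action"
```

The key point for what follows is that the same score cutoffs are applied to all patients, so any bias in the score itself flows directly into who gets enrolled.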
Obermeyer, Powers, Vogeli, and Mullainathan find that black patients are on average far less healthy than white patients assigned the same score. For instance, among patients with risk scores in the 97th percentile of the researchers’ sample, black patients had on average 26 percent more chronic illnesses than white patients did. The result of this bias: black patients were significantly less likely to be identified for program enrollment than they would have been absent the bias. With the algorithm as is, 17.7 percent of patients automatically identified for enrollment were black; without the bias, the researchers calculate, 46.5 percent would have been black.
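The researchers’ core check can be mimicked in a few lines: hold the risk score fixed and compare average health by race. A minimal sketch, using made-up records (the data, field layout, and band are all assumptions, not the study’s dataset):

```python
# Each record: (risk_percentile, race, number_of_chronic_conditions) — synthetic.
patients = [
    (97, "black", 5), (97, "white", 4), (98, "black", 6),
    (98, "white", 3), (99, "black", 5), (99, "white", 4),
]

def mean_conditions(records, race, band=(97, 100)):
    """Average chronic-condition count for one race within a score band."""
    vals = [c for pct, r, c in records if r == race and band[0] <= pct <= band[1]]
    return sum(vals) / len(vals)

# Relative gap in illness burden at the same risk score.
# A positive value means black patients at matched scores carry more illness —
# the signature of bias the researchers documented.
gap = mean_conditions(patients, "black") / mean_conditions(patients, "white") - 1
```

Auditing the score this way requires an independent measure of health (here, chronic-condition counts) against which to compare patients who received identical scores.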
The bias stems from what the algorithm is being asked to predict.