Updates from Chicago Booth

Think you’re not racist?
Research uncovers our secret prejudices, and ways to overcome them
By Alice G. Walton – It has been 50 years since the Civil Rights Act outlawed discrimination based on race, color, religion, sex, or national origin. The landmark legislation marked the end of the era of legalized racism. Now some affirmative action programs, created to promote diversity and increase the representation of underrepresented minorities, are being rolled back.

However, while overt racism may be on the wane in the US, research suggests it persists just below the surface. Few people would admit to being biased, yet there is strong evidence that biases endure, often below the level of conscious expression and awareness.

Ten years ago Marianne Bertrand, Chris P. Dialynas Distinguished Service Professor of Economics at Chicago Booth, and Sendhil Mullainathan, then at MIT, published a famous study titled “Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination,” in which 5,000 fictitious resumes were sent in response to 1,300 job postings in Chicago and Boston. The resumes were either “high quality” or “low quality,” varying in the typical things that set resumes apart—job and internship experience, academic institutions, and languages spoken. The team then randomly assigned either a “white-sounding” name, such as Emily Walsh, or an “African American–sounding” name, such as Lakisha Washington, to each resume.

The results were unambiguous. White-sounding applicants got 50% more callbacks than African American–sounding candidates. This didn’t seem to be a matter of how common the names were or the apparent social status of the applicant, but simply a function of what the names suggested about the race of the fictional applicants.

Even more disturbingly, a higher-quality resume boosted callbacks for white applicants far more than it did for their African American peers. The authors suggest that this weakens the incentive for African Americans to invest in building high-quality resumes, making hiring discrimination part of a self-reinforcing cycle.

How racial bias infected a major health-care algorithm
By Jeff Cockrell – As data science has developed in recent decades, algorithms have come to play a role in assisting decision-making in a wide variety of contexts, making predictions that in some cases have enormous human consequences. Algorithms may help decide who is admitted to an elite school, approved for a mortgage, or allowed to await trial from home rather than behind bars.

But there are well-publicized concerns that algorithms may perpetuate or systematize biases. And research by University of California at Berkeley’s Ziad Obermeyer, Brian Powers of Boston’s Brigham and Women’s Hospital, Christine Vogeli of Partners HealthCare, and Chicago Booth’s Sendhil Mullainathan finds that one algorithm, used to make an important health-care determination for millions of patients in the United States, produces racially biased results.

The algorithm in question is used to help identify candidates for enrollment in “high-risk care management” programs, which provide additional resources and attention to patients with complex health needs. Such programs, which can improve patient outcomes and reduce costs, are employed by many large US health systems, so the decision of whom to enroll affects tens of millions of people. The algorithm assigns each patient a risk score that guides enrollment decisions: a patient whose score is at or above the 97th percentile is automatically identified for enrollment, while one whose score falls between the 55th and 96th percentiles is flagged for possible enrollment, depending on input from the patient’s doctor.
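The triage rule described above can be sketched as a simple threshold function. This is a minimal illustration, assuming the percentile cutoffs stated in the article; the function name, labels, and input convention are hypothetical, not drawn from the actual system.

```python
def triage(risk_percentile: int) -> str:
    """Map a patient's risk-score percentile to an enrollment decision,
    using the cutoffs described in the article (hypothetical sketch)."""
    if risk_percentile >= 97:
        # 97th percentile and above: automatically identified for enrollment
        return "auto-enroll"
    elif risk_percentile >= 55:
        # 55th through 96th percentiles: flagged for doctor review
        return "refer-to-doctor"
    else:
        # below the 55th percentile: no action taken
        return "no-action"
```

The researchers’ point is that the rule itself is race-blind; the bias enters through the risk score fed into it, so two patients with very different health needs can land on the same side of a cutoff.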

Obermeyer, Powers, Vogeli, and Mullainathan find that black patients are on average far less healthy than white patients assigned the same score. For instance, among patients with risk scores in the 97th percentile of the researchers’ sample, black patients had on average 26 percent more chronic illnesses than white patients did. As a result of this bias, black patients were significantly less likely to be identified for program enrollment than an unbiased score would have identified them: with the bias, 17.7 percent of patients automatically identified for enrollment were black; without it, the researchers calculate, 46.5 percent would have been.

The bias stems from what the algorithm is being asked to predict.