Updates from Chicago Booth

Can regulation rein in algorithmic bias?
By Sendhil Mullainathan – Last year, you published a paper documenting how an algorithm used by health-care organizations generated racially biased results. What takeaways did that offer in terms of how algorithmic bias differs from human bias?

That paper might be, by some measures, among the strangest papers I’ve ever worked on. It’s a reminder of the sheer scale that algorithms can reach.

Exact numbers are hard to get, but about 80 million Americans are evaluated through this algorithm. And it’s not for some inconsequential thing: it is an algorithm used by many health-care systems to decide which patients should get put into what are called care-management programs. Care-management programs are for people who are going to be at the hospital a lot. If you have many conditions, you’re going to be in the system frequently, so you shouldn’t have to go through the normal front door, and maybe you should have a concierge who works just with you. You get additional resources to manage this complex care.

It costs a lot of money to put somebody in a care-management program. You really want to target these programs. So the question is, who should be in them?

Over the past five years, algorithms have been developed that use people’s health records to figure out who is at highest risk of using health care a lot. These algorithms produce a risk score, and my co-researchers and I wanted to know if there was any racial bias in these scores.

The way we looked for it was to take two people given the same score by the algorithm—one white and one Black. Then we looked at those two people and asked whether, on average, the white person had the same level of sickness as the Black person. What we found is that they didn’t: when the algorithm gives two people the same score, the white person tends to be much healthier than the Black person. And I mean much healthier, extremely so. If you said, “How many white people would I have to remove from the program, and how many Black people would I have to put in, until their sickness levels were roughly equalized?” you would have to double the number of Black patients. It is an enormous gap.
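That audit can be sketched in a few lines. The following is a minimal illustration of the idea, not the paper’s actual code: it assumes a hypothetical patient table with columns risk_score, race, and n_chronic_conditions (one common proxy for sickness), and it compares average sickness across races among patients who received the same score.

```python
import pandas as pd

def sickness_at_equal_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Compare average sickness of patients of different races who
    received the same (binned) risk score from the algorithm."""
    df = df.copy()
    # Bin scores into percentiles so "same score" means "same percentile".
    df["score_bin"] = pd.qcut(df["risk_score"], q=100, duplicates="drop")
    # Within each bin, average sickness by race. An unbiased score would
    # give roughly equal average sickness across races in every bin.
    return (
        df.groupby(["score_bin", "race"], observed=True)["n_chronic_conditions"]
        .mean()
        .unstack("race")
    )
```

Rows of the resulting table where the Black column exceeds the white column show exactly the pattern described above: at the same score, the Black patient is, on average, sicker.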

I say it’s one of the craziest projects I’ve worked on in part because of the sheer scale of this thing. But there are a lot of social injustices that happen at a large scale. What made it really weird was when we said, “Let’s figure out what’s causing it.” In the literature on algorithmic bias, everyone acts like algorithms are people, like they’re biased [in the sense that people are]. But an algorithm is just a little piece of code. So what went wrong in the code?

What we found is something that we’re finding again and again in all of our A.I. work: every time you see that an algorithm has done something really bad, there’s no engineering error. That’s very, very different from the traditional bugs in code that you’re used to: when your computer crashes, some engineering bug has shown up. I’ve never seen an engineering bug in A.I. The bug is in what people asked the algorithm to do. They just made a mistake in how they asked the question.
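That distinction can be made concrete with a toy sketch. The following is a generic, hypothetical illustration, not the system from the paper: two models are built through an identical code path and differ only in the target label they are asked to predict. The synthetic data and the column names future_cost and future_sickness are stand-ins invented for this example.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical synthetic records; the point is the pipeline, not the data.
rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    "age": rng.integers(20, 90, n),
    "n_visits": rng.poisson(3, n),
})
df["future_sickness"] = 0.05 * df["age"] + 0.5 * df["n_visits"] + rng.normal(0, 1, n)
df["future_cost"] = 100 * df["future_sickness"] + rng.normal(0, 50, n)

def fit_risk_model(data: pd.DataFrame, label: str) -> LinearRegression:
    """Identical features, learner, and code; the only design choice
    being made here is which label the model is asked to predict."""
    return LinearRegression().fit(data[["age", "n_visits"]], data[label])

# Same engineering, two different questions. If the chosen label is a
# biased proxy for what you actually care about, the model learns that
# bias faithfully, with no bug anywhere in the code itself.
cost_model = fit_risk_model(df, "future_cost")
need_model = fit_risk_model(df, "future_sickness")
```

There is nothing to debug in either model; any failure lives entirely in the choice between the two questions.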
