Tag Archives: Algorithm

Updates from Chicago Booth

How racial bias infected a major health-care algorithm
By Jeff Cockrell – As data science has developed in recent decades, algorithms have come to play a role in assisting decision-making in a wide variety of contexts, making predictions that in some cases have enormous human consequences. Algorithms may help decide who is admitted to an elite school, approved for a mortgage, or allowed to await trial from home rather than behind bars.

But there are well-publicized concerns that algorithms may perpetuate or systematize biases. And research by University of California at Berkeley’s Ziad Obermeyer, Brian Powers of Boston’s Brigham and Women’s Hospital, Christine Vogeli of Partners HealthCare, and Chicago Booth’s Sendhil Mullainathan finds that one algorithm, used to make an important health-care determination for millions of patients in the United States, produces racially biased results.

The algorithm in question is used to help identify candidates for enrollment in “high-risk care management” programs, which provide additional resources and attention to patients with complex health needs. Such programs, which can improve patient outcomes and reduce costs, are employed by many large US health systems, and therefore the decision of whom to enroll affects tens of millions of people. The algorithm assigns each patient a risk score that guides enrollment decisions: a patient with a risk score at or above the 97th percentile is automatically identified for enrollment, while one with a score between the 55th and 96th percentiles is flagged for possible enrollment, depending on input from the patient’s doctor.
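In code, that thresholding amounts to a few lines. Here is a minimal sketch with synthetic scores; the function and variable names are hypothetical, since the production system’s internals are not public.

```python
import numpy as np

def enrollment_flags(risk_scores):
    """Map risk scores to the two enrollment tiers described above."""
    n = len(risk_scores)
    # Percentile rank (0-100) of each patient within the population
    ranks = risk_scores.argsort().argsort() / (n - 1) * 100
    auto_enroll = ranks >= 97                        # identified automatically
    flag_for_doctor = (ranks >= 55) & (ranks < 97)   # needs doctor input
    return auto_enroll, flag_for_doctor

rng = np.random.default_rng(0)
scores = rng.gamma(shape=2.0, scale=1.0, size=10_000)  # toy risk scores
auto, flagged = enrollment_flags(scores)
print(auto.sum(), flagged.sum())  # roughly 300 and 4,200 of 10,000
```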

Obermeyer, Powers, Vogeli, and Mullainathan find that black patients are on average far less healthy than white patients assigned the same score. For instance, among patients with risk scores in the 97th percentile of the researchers’ sample, black patients had on average 26 percent more chronic illnesses than white patients did. The result of this bias: black patients were significantly less likely to be identified for program enrollment than equally sick white patients. With the bias, 17.7 percent of patients automatically identified for enrollment were black; without it, the researchers calculate, 46.5 percent would have been.
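To make the finding concrete, here is a toy audit in the same spirit, on entirely synthetic data: check whether patients of different races are equally sick at the same score, then recompute the auto-enrolled share if ranking followed health need. The numbers and the bias mechanism are invented for illustration, not drawn from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
is_black = rng.random(n) < 0.12
chronic = rng.poisson(lam=np.where(is_black, 4.8, 3.8))   # health need
# Toy assumption: the score understates need for black patients
risk_score = chronic * np.where(is_black, 0.8, 1.0) + rng.normal(0, 0.5, n)

top_by_score = np.argsort(risk_score)[-int(0.03 * n):]   # top 3% by score
top_by_need = np.argsort(chronic)[-int(0.03 * n):]       # top 3% by need
print("black share, score-ranked:", round(is_black[top_by_score].mean(), 3))
print("black share, need-ranked:", round(is_black[top_by_need].mean(), 3))
```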

The bias stems from what the algorithm is being asked to predict. more>

Related>

‘Hate Is Way More Interesting Than That’: Why Algorithms Can’t Stop Toxic Speech Online

Researchers have recently discovered that anyone can trick hate speech detectors with simple changes to their language—and typos are just one way that neo-Nazis are foiling the algorithms.
By Morgan Meaker – Erin Schrode didn’t know much about the extreme right before she ran for Congress. “I’m not going to tell you I thought anti-Semitism was dead, but I had never personally been the subject of it,” she says.

That changed when The Daily Stormer, a prominent neo-Nazi website, posted an article about her 2016 campaign.

For years, social media companies have struggled to contain the sort of hate speech Schrode describes. When Facebook founder Mark Zuckerberg spoke before the Senate in April of 2018, he acknowledged that human moderators were not enough to remove toxic content from Facebook; in addition, he said, they needed help from technology.

“Over time, we’re going to shift increasingly to a method where more of this content is flagged up front by [artificial intelligence] tools that we develop,” Zuckerberg said.

Zuckerberg estimated that A.I. could master the nuances of hate speech in five to 10 years. “But today, we’re just not there,” he told senators.

He’s right: Researchers have recently discovered anyone can trick hate speech detectors with simple changes to their language—removing spaces in sentences, changing “S” to “$,” or changing vowels to numbers. more>
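The tactics listed above are enough to defeat a detector that matches words exactly. Below is a minimal sketch of that failure mode; the blocklist and both functions are hypothetical stand-ins, not any platform’s real system.

```python
import re

BLOCKLIST = {"hate", "toxic"}  # placeholder terms

def naive_detector(text: str) -> bool:
    """Flag text containing an exact blocklisted word."""
    return any(word in BLOCKLIST for word in re.findall(r"[a-z]+", text.lower()))

def obfuscate(text: str) -> str:
    """Apply the simple edits described above: s->$, vowels->digits, drop spaces."""
    swaps = str.maketrans({"s": "$", "a": "4", "e": "3", "o": "0"})
    return text.translate(swaps).replace(" ", "")

msg = "so much hate here"
print(naive_detector(msg))             # True: caught
print(naive_detector(obfuscate(msg)))  # False: same message slips through
```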

The body is the missing link for truly intelligent machines

BOOK REVIEW

Basin and Range, Author: John McPhee.
Descartes’ Error, Author: Antonio Damasio.

By Ben Medlock – Things took a wrong turn at the beginning of modern AI, back in the 1950s. Computer scientists decided to try to imitate conscious reasoning by building logical systems based on symbols. The method involved associating real-world entities with digital codes to create virtual models of the environment, which could then be projected back onto the world itself.

In later decades, as computing power grew, researchers switched to using statistics to extract patterns from massive quantities of data. These methods are often referred to as ‘machine learning’. Rather than trying to encode high-level knowledge and logical reasoning, machine learning employs a bottom-up approach in which algorithms discern relationships by repeating tasks, such as classifying the visual objects in images or transcribing recorded speech into text.
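The contrast between hand-coded rules and bottom-up learning fits in a few lines. The sketch below, on invented data, fits a model that recovers a hidden rule from labelled examples rather than having it programmed in; the essay names no specific method, so this is purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hidden rule the model must discover
model = LogisticRegression().fit(X, y)   # learned from examples, not programmed
print(model.score(X, y))                 # close to 1.0: pattern recovered
```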

But algorithms are a long way from being able to think like us. The biggest distinction lies in our evolved biology, and how that biology processes information. Humans are made up of trillions of eukaryotic cells, which first appeared in the fossil record around 2.5 billion years ago. A human cell is a remarkable piece of networked machinery that has about the same number of components as a modern jumbo jet – all of which arose out of a longstanding, embedded encounter with the natural world.

We only have the world as it is revealed to us, which is rooted in our evolved, embodied needs as an organism. As the neuroscientist Antonio Damasio puts it, nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’.

In other words, we think with our whole body, not just with the brain. more> https://goo.gl/oBgkRF

Algorithms: Based on your preferences, you may also enjoy this column

By Neal Gabler – Facebook has an algorithm that predicts the news a user is likely to want. Dating sites like Match.com and OkCupid use algorithms to predict with whom we would fall in love. Google, with the most famous algorithm of all, predicts what we want when we type a search term.

As one scientist put it, “Algorithms rule the world.”

But there is a problem with that rule.

Because algorithms are based on the past, they only satisfy preconditioned responses. They cannot give us what is new, surprising, challenging or different. Difference is what they are designed to dismiss.
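As a toy illustration of that claim, the sketch below builds a user profile from past choices and scores items by similarity; by construction it can only recommend things near what the user already consumed. All item names and vectors are invented.

```python
import numpy as np

items = {"news_politics": [1.0, 0.0], "news_sports": [0.9, 0.1],
         "poetry": [0.0, 1.0], "avant_garde_film": [0.1, 0.9]}
user_history = ["news_politics"]  # everything the system knows about you

profile = np.mean([items[i] for i in user_history], axis=0)
scores = {name: float(np.dot(vec, profile)) for name, vec in items.items()
          if name not in user_history}
print(max(scores, key=scores.get))  # news_sports: more of the same
```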

In effect, they hollow out life. more> http://tinyurl.com/otfjmkt

Uncovering hidden structures in massive data collections

R&D – Finding patterns in the connections among points of data can be critical for many applications. For example, checking citations to scientific papers can provide insights into the development of new fields of study or show overlap between different academic disciplines. Links between patents can map out groups that indicate new technological developments. And analysis of social networks can provide information about communities and allow predictions of future interests.

In very basic terms, the researchers approached the problem by dividing the analysis into two broad tasks. In one, they created an algorithm that quickly analyzes a subset of a large database. The algorithm calculates the likelihood that nodes belong to various groups in the database. In the second broad task, the researchers created an adjustable matrix that accepts the analysis of the subset and assigns “weights” to each data point reflecting the likelihood that it belongs to different groups. more> http://tinyurl.com/kjazhc4
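The two-stage description can be sketched concretely. The toy below estimates group centers from a small subset, then builds a weight matrix giving every node a normalized score per group; the model and names are illustrative guesses, since the piece does not specify the researchers’ actual formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 1_000, 3
true_group = rng.integers(0, k, size=n)
# Toy feature for each node that weakly reflects its group
features = rng.normal(loc=true_group[:, None], scale=0.8, size=(n, 2))

# Stage 1: analyze a small subset to estimate per-group centers
subset = rng.choice(n, size=100, replace=False)
centers = np.array([features[subset][true_group[subset] == g].mean(axis=0)
                    for g in range(k)])

# Stage 2: weight matrix giving each node a likelihood-like score per group
dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
weights = np.exp(-dists)                       # higher weight = closer fit
weights /= weights.sum(axis=1, keepdims=True)  # normalize to memberships
print((weights.argmax(axis=1) == true_group).mean())  # recovery accuracy
```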