Updates from Chicago Booth

Want to be happier? Give more to others
By Alice G. Walton – There’s scientific evidence, it turns out, to back up centuries-old religious teachings that it’s better to give than to receive. Chicago Booth’s Ed O’Brien and Northwestern PhD candidate Samantha Kassirer find that the joy of giving to others has staying power, whereas the joy of receiving fades quickly.

In an experiment, the researchers gave college students $5 for five days in a row and told them to spend it the same way each day. Half of the students were instructed to spend the money on themselves, say by buying a coffee daily. The other half were told to spend the money on others, for example by donating to a single charity or leaving a tip in the same coffee shop each day. Every night, the participants completed a survey in which they reported how they felt overall that day.

The happiness level of the students who spent money on themselves declined over the five days, the researchers find. But for those who donated the money, happiness on the fifth day remained nearly as high as on the first.


How racial bias infected a major health-care algorithm
By Jeff Cockrell – As data science has developed in recent decades, algorithms have come to play a role in assisting decision-making in a wide variety of contexts, making predictions that in some cases have enormous human consequences. Algorithms may help decide who is admitted to an elite school, approved for a mortgage, or allowed to await trial from home rather than behind bars.

But there are well-publicized concerns that algorithms may perpetuate or systematize biases. And research by University of California at Berkeley’s Ziad Obermeyer, Brian Powers of Boston’s Brigham and Women’s Hospital, Christine Vogeli of Partners HealthCare, and Chicago Booth’s Sendhil Mullainathan finds that one algorithm, used to make an important health-care determination for millions of patients in the United States, produces racially biased results.

The algorithm in question is used to help identify candidates for enrollment in “high-risk care management” programs, which provide additional resources and attention to patients with complex health needs. Such programs, which can improve patient outcomes and reduce costs, are employed by many large US health systems, and therefore the decision of whom to enroll affects tens of millions of people. The algorithm assigns each patient a risk score that is used to guide enrollment decisions: a patient with a risk score at or above the 97th percentile is automatically identified for enrollment, while one with a score from the 55th to 96th percentiles is flagged for possible enrollment depending on input from the patient’s doctor.
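The two-threshold rule can be sketched in a few lines (an illustrative sketch only; the function name and labels are invented, and the real system feeds the score alongside doctors’ input):

```python
def enrollment_action(risk_percentile: float) -> str:
    """Map a patient's risk-score percentile to an enrollment action,
    per the thresholds described above (illustrative only)."""
    if risk_percentile >= 97:
        return "automatically identified for enrollment"
    elif risk_percentile >= 55:
        return "flagged for possible enrollment, pending doctor input"
    else:
        return "not identified"
```

A rule like this is mechanically race-blind; as the researchers show, the bias enters through what the risk score itself is asked to predict, not through the thresholds.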

Obermeyer, Powers, Vogeli, and Mullainathan find that black patients are on average far less healthy than white patients assigned the same score. For instance, for patients with risk scores in the 97th percentile of the researchers’ sample, black patients had on average 26 percent more chronic illnesses than white patients did. The result of this bias: black patients were significantly less likely to be identified for program enrollment than they would have been otherwise. Due to algorithmic bias, 17.7 percent of patients automatically identified for enrollment were black; without it, the researchers calculate, 46.5 percent would have been black.

The bias stems from what the algorithm is being asked to predict.


Startups, forget about the technology
New ventures should focus all their efforts on problem-solving
By Michael D. Alter – Soon after Brian Chesky graduated in 2007 with a degree in industrial design, he moved from Rhode Island to San Francisco. He was shocked by the cost of living, at one point owing $1,200 for his share of the rent for an apartment, but with only $1,000 in his bank account. Chesky saw an ad for an international design conference being held in the city, which mentioned that all the nearby hotels were completely booked up. He immediately saw an opportunity: designers needed a place to stay, and he needed rent money. So he set up a website and advertised that his roommates had space to accommodate three visitors, if they would sleep on inflatable air beds. What would later become Airbnb was born.

The following year, Chesky was reading an article about the Democratic National Convention, which was due to be held in Denver. How, the article wondered, would Denver, which had some 28,000 hotel rooms at the time, accommodate about 80,000 convention goers? The entrepreneur immediately recognized that this could be a big break for his fledgling startup. “Obama supporters [could] host other Obama supporters from all over the world,” Chesky recalled three years later. “All we did was become part of the story.”

As well as being a memorable origin story that explains their name (air mattresses were the air in Airbnb), this is an instructive lesson in entrepreneurship. Chesky and his cofounders identified a twofold problem (visitors lacked affordable accommodations, and residents faced sky-high rents) and thought creatively about how they might solve it and make some money at the same time.

In the startup world, it isn’t necessarily the best product that ultimately wins out. Rather, it’s the best way to solve the problem. Once you do that, you can figure out how to scale it.


Why do analysts low-ball earnings forecasts?
By Martin Daks – The market-research company FactSet reports that for each quarter over the past five years, an average of 72 percent of companies in the S&P 500 beat earnings estimates. Past research, including by University of Pennsylvania’s Scott Richardson, University of California at Irvine’s Siew Hong Teoh, and Boston University’s Peter D. Wysocki, has found that analysts’ forecasts become more pessimistic and thus beatable as the quarter end approaches, but an unaddressed question is how this walk-down affects clients. If analysts revise their forecasts downward each quarter to placate managers, wouldn’t this confuse the investors who ultimately pay for their services?

According to Chicago Booth’s Philip G. Berger and Washington University’s Charles G. Ham and Zachary R. Kaplan, analysts walk down forecasts by suppressing positive news from quarterly forecasts, not by issuing misleading negative revisions. When analysts have positive news, they will often revise the share price target upward or state explicitly that they expect companies to beat earnings estimates, while leaving the quarterly forecast unrevised. Suppressing positive news leads to beatable forecasts—behavior that benefits corporate executives but carries important implications for both the individual investors who rely on these predictions and researchers studying investor expectations.

When securities analysts receive updated information after issuing a quarterly forecast, they have three options: revise the current-quarter earnings forecast; issue an alternative forecast signal, such as a revision to the share price target or future-quarter earnings; or issue no additional forecast.

By not disseminating all information through the current-quarter earnings forecast, which is widely available through commercial databases, analysts provide an advantage to investment clients who have paid for access to the full breadth of their research product.

“Analysts convey information in ways that enable them to be of service to clients, who they care about, and, at the same time, to avoid displeasing corporate managers, who they also care about,” Berger says. “Non-clients, who rely on earnings forecasts because they do not have access to the whole of an analyst’s work product, end up with skewed information, but this is not a primary concern for the analysts’ business.”

The researchers demonstrate that a simple strategy based on buying companies expected to beat earnings, using share price target revisions and the text of reports, yields significant abnormal returns, suggesting the market does not see through the analysts’ strategy for conveying information selectively.


How multinational companies help spread recessions
By Bob Simison – The Great Recession a decade ago was one example of how economic cycles across the world can move in parallel, a phenomenon that economists don’t fully understand. It could be that a common event, such as a surge in oil prices, affects many economies at the same time—or perhaps linkages between countries transmit economic shocks from one country to the world economy.

One such linkage is multinational corporations, according to Marcus Biermann, a postdoctoral scholar at the Catholic University of Louvain, and Chicago Booth’s Kilian Huber, who explore the role of multinationals in spreading the global recession by analyzing the ripple effects of one German bank’s struggles during the 2008–09 financial crisis.

Commerzbank was Germany’s second-biggest commercial lender behind Deutsche Bank. Losses on trading and investments abroad hammered the bank, especially after Lehman Brothers collapsed in September 2008. Commerzbank’s capital fell by 68 percent between December 2007 and December 2009, which forced the bank to reduce its aggregate lending stock by 17 percent. Biermann and Huber find that this pullback in credit available to German parent companies affected subsidiaries in other countries, thus helping to transmit the economic contraction.


How Norway reduced the rich-poor earnings gap
By Dwyer Gunn – In the United States, vocational and technical education at the high-school level has long been controversial. Critics argue that vocational schools serve as warehouses for disadvantaged students, depriving them of the opportunity to attend college. Advocates maintain that vocational schools provide valuable labor-market skills and may better serve students who struggle with traditional academics or who can’t or don’t wish to attend college.

In recent years, however, a new vision has emerged, one that emphasizes increasing access to alternative educational models while ensuring that students who choose these pathways can still ultimately pursue higher education. Many states are exploring or have launched high-school apprenticeship programs, and there’s been renewed interest in the Career Academies education model, a 35-year-old approach aimed at restructuring high schools to create alternative pathways that lead to higher education or the workplace.

American reformers may find further inspiration in the results of a 25-year-old overhaul of vocational education in Norway. Research by Chicago Booth’s Marianne Bertrand and Jack Mountjoy, along with University of Chicago’s Magne Mogstad, suggests the reforms helped reduce the eventual earnings gap experienced by poor students, particularly boys, although not without some unintended consequences.

The sweeping changes, known as Reform 94, increased access to apprenticeships and altered the country’s vocational-track high-school degrees to allow graduates to attend college after a semester of supplemental academic courses. Before the changes, students in Norway who obtained vocational-track degrees had to restart high school and secure an academic diploma if they wanted to attend college.


The evolution of economics and Homo economicus
By Richard H. Thaler – Early in my teaching career I managed to inadvertently get most of the students in my microeconomics class mad at me, and for once, it had nothing to do with anything I said in class. The problem was caused by a midterm exam.

I had composed an exam that was designed to distinguish among three broad groups of students: the stars who really mastered the material, the middle group who grasped the basic concepts, and the bottom group who just didn’t get it. To successfully accomplish this task, the exam had to have some questions that only the top students would get right, which meant that the exam was hard.

The exam succeeded in my goal—there was a wide dispersion of scores—but when the students got their results they were in an uproar. Their principal complaint was that the average score was only 72 points out of a possible 100.

What was odd about this reaction was that the average numerical score on the exam had absolutely no effect on the distribution of grades. The norm at the school where I was teaching was to use a grading curve in which the average grade was a B or B+, and only a tiny number of students received grades below a C. I had anticipated the possibility that a low average numerical score might cause some confusion on this front, so I had reported how the numerical scores would be translated into actual grades in the class.

Anything over 80 would get an A or A-, scores above 65 would get some kind of B, and only scores below 50 were in danger of getting a grade below C. The resulting distribution of grades was not different from normal, but this announcement had no apparent effect on the students’ mood. They still hated my exam, and they were none too happy with me either. As a young professor worried about keeping my job, I was determined to do something about this, but I did not want to make my exams any easier. What to do?

Finally, an idea occurred to me. On the next exam, I made the total number of points available 137 instead of 100. This exam turned out to be slightly harder than the first, with students getting only 70 percent of the answers right, but the average numerical score was a cheery 96 points. The students were delighted! No one’s actual grade was affected by this change, but everyone was happy. From that point on, whenever I was teaching this course, I always gave exams a point total of 137, a number I chose for two reasons.

First, it produced an average score well into the 90s, with some students even getting scores above 100, generating a reaction approaching ecstasy.
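The arithmetic behind the 137-point trick is worth making explicit: the same 70 percent performance maps to very different-looking raw scores depending on the point total (a minimal sketch; the function is invented for illustration):

```python
def raw_score(fraction_correct: float, total_points: int) -> float:
    """Raw exam score for a given fraction of answers correct."""
    return fraction_correct * total_points

# 70 percent correct on a 100-point exam vs. a 137-point exam
print(round(raw_score(0.70, 100)))  # 70
print(round(raw_score(0.70, 137)))  # 96
```

Letter grades were unaffected because the curve mapped relative performance, not raw point totals, to grades.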


The real cost of discrimination: A case study from Nazi Germany
By Robin I. Mordfin – Policies such as the Trump administration’s ban on visitors from a string of majority-Muslim countries are likely to harm American companies, research suggests.

Chicago Booth’s Kilian Huber and University of Munich’s Volker Lindenthal and Fabian Waldinger draw their conclusion from a study of companies in Nazi Germany. Purging Jewish managers from German companies reduced the aggregate market valuation of all companies listed on the Berlin Stock Exchange by approximately 5 percent between 1933 and 1943, or nearly 2 percent of the German gross national product, they find.

The researchers collected data on 30,000 managerial positions at German companies that had been listed on the Berlin Stock Exchange in 1932, when Hitler was on the path to becoming the leader of the country. At the time, Jews held about 15 percent of senior management positions in these companies.

After the Nazis took power in 1933, those managers either left or were forced out of their positions. The share prices of companies that lost Jewish managers began falling in 1933 and remained persistently 10 percent lower than those of peer companies that had never had Jews in senior positions.


How to develop a superstar strategy
By Ram Shivakumar – We live in an age of growing corporate inequality, with a few dominant companies and many underperformers.

The superstar archetype is Google, established in 1998 with the aim of rank-ordering web pages in what was then the nascent industry of search. By the beginning of the 21st century, Google had no revenues and no established business model. Fast-forward 18 years and a few hundred acquisitions, and Alphabet, Google’s parent company, has a market value in excess of US$750 billion.

In almost every industry, a small number of companies are capturing the lion’s share of profits. The top 10 percent of companies worldwide with more than $1 billion in revenues (when ranked by profit) earned 80 percent of all economic profits from 2014 to 2016, according to a recent study by the McKinsey Global Institute. The 40 biggest companies in the Fortune 500 captured 52 percent of the total profit earned by all the corporations on that list, according to an analysis of the 2019 ranking by Fortune.

This leaves less and less for the smaller fish to feed on. The middle 60 percent of businesses earned close to zero economic profit from 2014 to 2016, according to McKinsey, while those in the bottom 10 percent recorded economic losses averaging $1.5 billion each.

Why do some companies succeed so categorically while the majority struggle? This question drives much of the management-consulting industry. It has also inspired a library’s worth of management books with varying explanations. Is it because successful companies have visionary and disciplined leaders, as management consultant Jim Collins argues in his best seller Good to Great? Is it because successful companies have superior management systems and organizational cultures? Is it because of positional advantages, as Harvard’s Michael Porter might argue? Or is it all down to timing and luck?

Concluding that luck is a big factor would be unlikely to sell many paperbacks in an airport bookstore; yet, undoubtedly, chance events have played an important role in many successes and failures.


Want to pay less tax? Improve your firm’s internal reporting
By Marty Daks – When companies engage in the great American pastime known as tax avoidance, many parse the Internal Revenue Code for loopholes to reduce their effective tax rate. But research suggests they should also scrutinize the quality of their internal reporting.

Internal information quality (IIQ), a term coined by Chicago Booth’s John Gallemore and University of North Carolina’s Eva Labro, encompasses computer reporting systems and any other resources that a company devotes to ensuring the quality and ease of access of information within a firm. The elements that constitute IIQ have been largely overlooked in tax-avoidance literature—perhaps because they are usually not observable, and are difficult for academics to measure.

Gallemore and Labro argue companies should pay more attention to these issues, which they define in terms of the accessibility, usefulness, reliability, accuracy, quantity, and signal-to-noise ratio of the data and knowledge within an organization. Their findings suggest that firms with high IIQ tend to enjoy lower effective tax rates and, all else being equal, a smaller tax bite.

Gallemore and Labro employed four publicly available variables, using data from 1994 to 2010, to rate firms’ IIQ: the speed at which management released an earnings announcement after its fiscal year closed, the accuracy of management’s earnings forecasts, the absence of material weaknesses in internal controls, and the lack of restatements due to errors.

The researchers used these measures to identify companies that released earnings more rapidly and forecasted them more accurately, and that had fewer Sarbanes-Oxley Section 404 citations for material weaknesses and fewer restatements due to errors. They assigned these firms higher IIQ ratings.
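One simple way to picture such a composite rating is as a count of favorable indicators (a hypothetical sketch, not the authors’ actual methodology; the variable names and thresholds are invented for illustration):

```python
def iiq_rating(days_to_announce: int,
               forecast_error: float,
               material_weaknesses: int,
               error_restatements: int) -> int:
    """Count how many of the four favorable IIQ indicators a firm exhibits.
    Thresholds are arbitrary placeholders, not the study's actual cutoffs."""
    rating = 0
    if days_to_announce <= 30:     # earnings released quickly after year end
        rating += 1
    if forecast_error <= 0.05:     # management forecasts were accurate
        rating += 1
    if material_weaknesses == 0:   # no internal-control weaknesses cited
        rating += 1
    if error_restatements == 0:    # no restatements due to errors
        rating += 1
    return rating
```

A firm scoring well on all four dimensions would receive the highest rating under this toy scheme.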

High-IIQ firms, they find, tend to exhibit some positive traits, including centralized and standardized business transaction processing, more-efficient reporting practices, and the ability to share data across business units and geographical locations.
