A.I. is only human
By Jeff Cockrell

If you applied for a mortgage, would you be comfortable with a computer using a collection of data about you to assess how likely you are to default on the loan?
If you applied for a job, would you be comfortable with the company’s human-resources department running your information through software that predicts how likely you are to, say, steal from the company or leave the job within two years?
If you were arrested for a crime, would you be comfortable with the court plugging your personal data into an algorithm-based tool, which will then advise your judge on whether you should await trial in jail or at home? If you were convicted, would you be comfortable with the same tool weighing in on your sentencing?
Much of the hand-wringing about advances in artificial intelligence has been concerned with AI’s effects on the labor market. “AI will gradually invade almost all employment sectors, requiring a shift away from human labor that computers are able to take over,” reads a report of the 2015 study panel of Stanford’s One Hundred Year Study on Artificial Intelligence. But whether AI ultimately creates massive unemployment or inspires new, as-yet-unknown professional fields, its perils and promises extend beyond the job market. By replacing human decision-making with automated processes, we can make businesses and public institutions more effective and efficient—or further entrench systemic biases, institutionalize discrimination, and exacerbate inequalities.
It’s an axiom of computing that results are dependent on inputs: garbage in, garbage out.
What if companies’ machine-learning projects come up with analyses that, while logical and algorithmically based, are premised on faulty assumptions or mismeasured data?
What if these analyses lead to bad or ethically questionable decisions—either among business leaders or among policy makers and public authorities?
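The "garbage in, garbage out" problem can be made concrete with a toy sketch. The example below is entirely hypothetical: it invents a small synthetic dataset of past loan decisions in which one group was approved less often at the same income level, then "trains" a trivial model that learns per-group approval rates. The point is only to show the mechanism by which biased labels in the input reappear as biased predictions in the output.

```python
# Toy illustration of "garbage in, garbage out": a model fit to biased
# historical decisions reproduces that bias. All data is synthetic and
# hypothetical -- this is a sketch, not a real credit model.
from collections import defaultdict

# Historical loan decisions as (group, income_band, approved).
# Suppose past officers approved group "B" applicants less often than
# group "A" applicants at the same income band: the bias is in the labels.
history = [
    ("A", "high", 1), ("A", "high", 1), ("A", "low", 1), ("A", "low", 0),
    ("B", "high", 1), ("B", "high", 0), ("B", "low", 0), ("B", "low", 0),
]

def train(rows):
    """'Learn' the historical approval rate for each (group, income) cell."""
    counts = defaultdict(lambda: [0, 0])  # cell -> [approvals, total]
    for group, income, approved in rows:
        counts[(group, income)][0] += approved
        counts[(group, income)][1] += 1
    return {cell: approvals / total
            for cell, (approvals, total) in counts.items()}

model = train(history)

# Two applicants identical except for group membership get different scores,
# because the model has faithfully learned the bias in its training data.
print(model[("A", "high")])  # 1.0
print(model[("B", "high")])  # 0.5
```

Nothing in the algorithm is malicious; it is "logical and algorithmically based" in exactly the sense the passage describes. The flaw enters through the data it was handed.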