By S. A. Applin – The systems we require for sustaining our lives increasingly rely upon algorithms to function. Governance, energy grids, food distribution, supply chains, healthcare, fuel, and global banking, among much else, are being automated in ways that affect all of us.
Yet the people developing the machine learning, data collection, and analysis that drive much of this automation do not represent all of us, and are not considering all of our needs equally. We are in deep.
Most of us do not have an equal voice or representation in this new world order. Leading the way instead are scientists and engineers who do not seem to understand how to represent the ways we live, work, cooperate, and exist together as individuals and in groups, nor how to incorporate our ethnic, cultural, gender, age, geographic, or economic diversity into their models.
The result is that AI will benefit some of us far more than others, depending upon who we are, our gender and ethnic identities, how much income or power we have, where we are in the world, and what we want to do.
This isn’t new. The power structures that developed the world’s complex civic and corporate systems were not initially concerned with diversity or equality, and as these systems migrate to becoming automated, untangling and teasing out the meaning for the rest of us becomes much more complicated. In the process, there is a risk that we will become further dependent on systems that don’t represent us.
Furthermore, there is an increasing likelihood that we must forfeit our agency in order for these complex automated systems to function. This could leave most of us serving the needs of these algorithms, rather than the other way around.