How to govern AI to make it a force for good

In the interview, Gasser identifies three things policymakers and regulators should consider when developing strategies for dealing with emerging technologies like AI.
“Everyone is talking about artificial intelligence and its many different applications, whether it’s self-driving cars, personal assistants on the cell phone, or AI in health,” Gasser says. “It raises all sorts of governance questions: questions about how these technologies should be regulated to mitigate some of the risks but also, of course, to embrace the opportunities.”

One of the largest challenges to AI is its complexity, which results in a divide between the knowledge of technologists and that of the policymakers and regulators tasked to address it, Gasser says.

“There is actually a relatively small group of people who understand the technology, and there are potentially a very large population affected by the technology,” he says.

This information asymmetry requires a concerted effort to increase education and awareness, he says.

“How do we train the next generation of leaders who are fluent enough to speak both languages, who understand engineering as well as policy, law and, importantly, ethics, to make these decisions about the governance of AI?”

Another challenge is to ensure that new technologies benefit all people equally, Gasser says.

Increasing inclusivity requires efforts on the infrastructural level to expand connectivity and also on the data level to provide a “data commons” that is representative of all people, he says.

