Raising good robots

We already have a way to teach morals to alien intelligences: it’s called parenting. Can we apply the same methods to robots?
By Regina Rini

Philosophers and computer scientists alike tend to focus on the difficulty of implementing subtle human morality in literal-minded machines. But there’s another problem, one that really ought to come first. It’s the question of whether we ought to try to impose our own morality on intelligent machines at all. In fact, I’d argue that doing so is likely to be counterproductive, and even unethical. The real problem of robot morality is not the robots, but us.

Can we handle sharing the world with a new type of moral creature?

We like to imagine that artificial intelligence (AI) will be similar to humans, because we are the only advanced intelligence we know. But we are probably wrong. If and when AI appears, it will probably be quite unlike us. It might not reason the way we do, and we could have difficulty understanding its choices.

Plato held that morality is something outside us, an ideal to which humans must conform. His student Aristotle disagreed. He thought that each sort of thing in the world – squirrels, musical instruments, humans – has a distinct nature, and that the best way for each thing to be is a reflection of its own particular nature.

‘Morality’ is a way of describing the best way for humans to be, and it grows out of our human nature. For Aristotle, unlike Plato, morality is something about us, not something outside us to which we must conform. Moral education, then, is about training children to develop abilities already in their nature. more> https://goo.gl/cVSt0W
