Explaining Ethics to Robots

Robots have always been fascinating to me. Whether it’s Johnny 5, or the Terminator, or Data from Star Trek: The Next Generation, I have always loved stories about robots.

And many tales of robots share a theme with an old tale about our friend Pinocchio and his quest to be "a real boy."

Robots in fiction often have this quest to be more human: to understand our emotions and why we do the things we do. Will real robots also seek humanity?

In a few months I am giving a lecture on Artificial Intelligence and Ethics (we just launched a new A.I. program). Explaining things to a robot is no easy task. We humans have a difficult time describing things even to each other. What is love? What is art? Why do you feel the way you do?

Why is this RIGHT and that is WRONG?

To a robot, even a highly sophisticated one with artificial intelligence, understanding something to be true is easy, but understanding why something is true is very difficult.

We know as humans that it is universally wrong to do certain things. It is wrong to: kill, lie, cheat, steal, and otherwise hurt living beings. We know this from a deep feeling we have inside. We feel bad when we violate these universal ethical codes.

How does a robot feel about these ethical codes?
Can a robot really feel, if it can really think?
Is it wrong to harm a robot if it is a sentient being?
How does all this fit into our own ethical codes?

These are difficult questions. When humans face ethical dilemmas, the correct choice is often not black and white; the circumstances create different contexts.

So, how can we teach ethics to robots? The answer eludes us at this point in time, but MIT is working on a potential path.

It may take us a long time to solve this problem, but the pursuit may very well teach us something about ourselves, and that is a noble pursuit.

If you are interested in learning more about A.I., please check out the works of Ray Kurzweil: The Age of Spiritual Machines, How to Create a Mind, and The Singularity Is Near.

A question to leave you with: If humans are fallible, how can we create a system, an intelligence, that is infallible?

“There are still many human emotions I do not fully comprehend – anger, hatred, revenge. But I am not mystified by the desire to be loved – or the need for friendship. These are things I do understand.” – Lt. Cmdr. Data

Landscape (1909-1914), Carl Newman
