GPS Navigators, morality and the human utility function

GPS navigators used to be simple. You plug in the desired address, and they calculate the quickest route. At some point, many started getting a feed of traffic information so they could route around big jams. Still pretty simple.

Enter Waze. Waze hyper-micro-optimizes for traffic conditions, using other users as roaming sensors and, probably, complicated AI on the back-end to predict how long each route will take. This means Waze learns from experience how fast people drive where, and under what conditions, and, as far as I can tell, takes all of that into account. So if a residential street is constantly being sped through, Waze will learn that it is a fast path and start directing drivers onto it. Drivers who are using Waze, which means they want to get to their destination as fast as possible.
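
To make the dynamic concrete, here is a minimal sketch of that kind of optimization: a toy shortest-path search where each road segment is weighted by a travel time inferred from how fast drivers have actually been observed to go on it. Everything here is an illustrative assumption, not Waze's real data model or algorithm.

```python
# Toy model of "fastest path" routing: edge weights are expected travel times
# learned from observed driver speeds, not from posted speed limits.
import heapq
from collections import defaultdict

def expected_time(length_km, observed_speeds_kmh):
    """Estimate traversal time (minutes) from what drivers actually do on this segment."""
    avg_speed = sum(observed_speeds_kmh) / len(observed_speeds_kmh)
    return length_km / avg_speed * 60

def fastest_route(edges, start, goal):
    """Plain Dijkstra over expected travel time."""
    graph = defaultdict(list)
    for a, b, minutes in edges:
        graph[a].append((b, minutes))
    best = {start: 0.0}
    queue = [(0.0, start, [start])]
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        for nxt, w in graph[node]:
            new_cost = cost + w
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(queue, (new_cost, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical network: a residential shortcut where drivers routinely speed
# looks "fast" to the model, so it beats the arterial road.
edges = [
    ("home", "arterial", expected_time(2.0, [50, 55, 52])),
    ("arterial", "office", expected_time(2.0, [48, 50])),
    ("home", "residential", expected_time(1.5, [60, 65, 58])),  # speeding drivers
    ("residential", "office", expected_time(1.5, [62, 60])),
]
print(fastest_route(edges, "home", "office"))
```

The residential shortcut wins precisely because drivers speed on it; the optimizer has no concept of why it is fast.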

There are a lot of valid reasons to want to get somewhere fast. Maybe you scheduled a meeting. Maybe someone is having an emergency. Maybe you just want to cuddle your kid before she goes to bed. Waze does not care. It tries to get users to their destination as fast as possible, no matter which residential streets they speed through. It has no concern for safety, either the driver's or pedestrians', neatly absolving itself of all responsibility by "just suggesting". What if a quiet street starts ranking higher on Waze's routes, and property values there drop as a result? Waze has no concern for that either.

I’m not picking on Waze especially. Their only fault is that they took the optimization criterion all GPS navigators use (find the fastest path) and implemented it better, with superior sensors and superior algorithms. Whoever won the “navigator wars” would have had to do at least this well, so in that sense there is nothing special about Waze beyond hiring smart people who made good decisions.

But this does show how AI moves forward: smart engineers optimize the hell out of whatever target function they are given. At some point that optimization starts having real costs, costs a human would understand but the AI does not care about, because all the AI cares about is the target function. The only way to solve this is to figure out what humans care about, in terms a computer can understand, and make sure any AI takes those concerns into account. Even an AI that is not smart enough to self-modify can do plenty of damage without it.
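
As a hedged sketch of what “taking human concerns into account” might mean in this setting: the same shortest-path search as above, but with extra penalty terms in the edge cost for the things we actually care about. The specific terms and weights below are made up for illustration; choosing them is the hard, human part.

```python
# Illustrative only: a richer edge cost that taxes routes which are "fast"
# for reasons we do not want to reward.
def edge_cost(minutes, is_residential, observed_speed_kmh, speed_limit_kmh,
              school_zone=False, w_speeding=2.0, w_residential=1.5, w_school=5.0):
    cost = minutes
    speeding = max(0.0, observed_speed_kmh - speed_limit_kmh)
    cost += w_speeding * (speeding / speed_limit_kmh) * minutes  # fast only because drivers speed
    if is_residential:
        cost += w_residential * minutes                          # cut-through traffic on quiet streets
    if school_zone:
        cost += w_school * minutes                               # pedestrian-heavy segments
    return cost

# e.g. a residential shortcut where drivers do 60 in a 30 zone:
print(edge_cost(minutes=1.5, is_residential=True,
                observed_speed_kmh=60, speed_limit_kmh=30))
```

Feeding these costs into the earlier search changes which street wins without making the router any smarter; all the work is in deciding what belongs in the objective.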

In other words, the Friendliness apocalypse, if we include not just existential risk but also a general raising of everyday human risk, does not lie in some nebulous future that begins once machines can self-modify. We are in the middle of it, and we should take that into account when building AI, even AI that is limited to “just suggesting things”, because humans are suggestible and have learned to trust computers’ suggestions.


One Response to GPS Navigators, morality and the human utility function

  1. Miki Tebeka says:

    Another side of this issue is the “filter bubble” (http://en.wikipedia.org/wiki/Filter_bubble), where, for example, the Google News algorithm has no idea what ethical journalism is.
