This need to understand human actions and decisions applies to physical and
nonphysical robots alike. If either sort bases its decision about how to act on the
assumption that a human will do one thing, and the human instead does another, the
resulting mismatch could be catastrophic. For cars, it can mean collisions. For an AI
with, say, a financial or economic role, the mismatch between what it expects us to do
and what we actually do could have even worse consequences.
One alternative is for the robot not to predict human actions but instead just
protect against the worst-case human action. Often when robots do that, though, they
stop being all that useful. With cars, this results in being stuck, because it makes every
move too risky.
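To see that failure mode concretely, consider a toy Python sketch of worst-case planning at a four-way stop. The action names and payoffs below are made-up assumptions for illustration, not anything from the text:

    ACTIONS = ["go", "yield"]

    # Hypothetical robot utilities for each (robot, human) action pair.
    ROBOT_PAYOFF = {
        ("go", "go"):       -10.0,  # both enter the intersection: near-collision
        ("go", "yield"):      2.0,  # robot proceeds while the human waits
        ("yield", "go"):      1.0,  # robot waits while the human proceeds
        ("yield", "yield"):  -1.0,  # both wait: no one makes progress
    }

    def worst_case_plan():
        """Pick the action with the best worst-case outcome, assuming the
        human might do anything at all (no predictive model of people)."""
        return max(ACTIONS,
                   key=lambda r: min(ROBOT_PAYOFF[(r, h)] for h in ACTIONS))

    print(worst_case_plan())  # "yield": the robot never goes, so it stays stuck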
All this puts us, the AI community, into a bind. It suggests that robots will need
accurate (or at least reasonable) predictive models of whatever people might decide to do.
Our state definition can’t just include the physical position of humans in the world.
Instead, we’ll also need to estimate something internal to people. We’ll need to design
robots that account for this human internal state, and that’s a tall order. Luckily, people
tend to give robots hints as to what their internal state is: Their ongoing actions give the
robot observations (in the Bayesian inference sense) about their intentions. If we start
walking toward the right side of the hallway, we’re probably going to enter the next room
on the right.
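To make that inference concrete, here is a minimal Python sketch of Bayesian goal prediction for the hallway example. The two goals, the noisy-rational observation model, and the rationality constant are illustrative assumptions, not something the text specifies:

    import math

    # The human is heading to one of two doors; goals are x-positions across
    # the hallway's width, and observations are the human's sideways steps.
    GOALS = {"left_room": -1.0, "right_room": +1.0}
    RATIONALITY = 2.0  # assumed noisy-rationality: higher = more goal-directed

    def step_likelihood(step, position, goal_x):
        """Likelihood (up to a constant) of a step given a goal: steps that
        make progress toward the goal are exponentially more likely."""
        progress = abs(goal_x - position) - abs(goal_x - (position + step))
        return math.exp(RATIONALITY * progress)

    def update_belief(belief, step, position):
        """Bayes' rule: posterior(goal) is proportional to prior(goal)
        times likelihood(step | goal)."""
        posterior = {g: belief[g] * step_likelihood(step, position, x)
                     for g, x in GOALS.items()}
        total = sum(posterior.values())
        return {g: p / total for g, p in posterior.items()}

    # Watching someone drift toward the right side of the hallway.
    belief = {"left_room": 0.5, "right_room": 0.5}  # uniform prior
    position = 0.0
    for step in [0.1, 0.15, 0.2]:
        belief = update_belief(belief, step, position)
        position += step
    print(belief)  # belief in "right_room" grows with every rightward step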
What makes the problem more complicated is the fact that people don’t make
decisions in isolation. It would be one thing if robots could predict the actions a person
intends to take and simply figure out what to do in response. But unfortunately this can
lead to ultra-defensive robots that confuse the heck out of people. (Think of human
drivers stuck at four-way stops, for instance.) What the intent-prediction approach misses
is that the moment the robot acts, it influences what actions the human starts taking.
There is a mutual influence between robots and people, one that robots will need
to learn to navigate. It is not always just about the robot planning around people; people
plan around the robot, too. It is important for robots to account for this when deciding
which actions to take, be it on the road, in the kitchen, or even in virtual spaces, where
an action might be making a purchase or adopting a new strategy. Doing so should endow
robots with coordination strategies, enabling them to take part in the negotiations people
seamlessly carry out day to day—from who goes first at an intersection or through a
narrow door, to what role we each take when we collaborate on preparing breakfast, to
coming to consensus on what next step to take on a project.
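One minimal way to fold that mutual influence into planning, reusing the toy four-way-stop payoffs from the earlier sketch (the numbers are still made up): rather than treating the human’s action as fixed, the robot models the human as responding to whatever it does, and plans over the responses it induces.

    ACTIONS = ["go", "yield"]

    # Hypothetical (robot_utility, human_utility) for each joint action.
    PAYOFF = {
        ("go", "go"):       (-10.0, -10.0),  # both enter: near-collision
        ("go", "yield"):    (  2.0,   1.0),  # robot proceeds, human briefly waits
        ("yield", "go"):    (  1.0,   2.0),  # human proceeds, robot briefly waits
        ("yield", "yield"): ( -1.0,  -1.0),  # both wait: the awkward standoff
    }

    def human_response(robot_action):
        """Assume the human picks their best response to the robot's action."""
        return max(ACTIONS, key=lambda h: PAYOFF[(robot_action, h)][1])

    def plan():
        """Pick the robot action whose *induced* human response works out
        best for the robot, instead of assuming the worst-case human."""
        return max(ACTIONS, key=lambda r: PAYOFF[(r, human_response(r))][0])

    print(plan())  # "go": committing resolves the standoff that froze the
                   # worst-case planner above

Real negotiation is of course far richer than this one-shot, leader-follower toy, but it captures the key shift: the human’s action is a function of the robot’s, not a fixed prediction.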
Finally, just as robots need to anticipate what people will do next, people need to
do the same with robots. This is why transparency is important. Not only will robots
need good mental models of people, but people will need good mental models of robots.
The model that a person has of the robot has to go into our state definition as well, and
the robot has to be aware of how its actions are changing that model. Much like the robot
treating human actions as clues to human internal states, people will change their beliefs
about the robot as they observe its actions. Unfortunately, giving such clues doesn’t
come as naturally to robots as it does to us humans, who have had a lot of practice
communicating implicitly with one another. But enabling robots to account for the change
that their actions are causing to the person’s mental model of the robot can lead to more
carefully chosen actions that do give the right clues—that clearly communicate to people
about the robot’s intentions, its reward function, its limitations.
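A sketch of what such communicative action selection might look like, reusing the hypothetical noisy-rational observer model from the hallway sketch above (the goals, constants, and candidate motions are again assumptions for illustration): the robot scores each candidate action by how strongly it would shift an observer’s belief toward its true goal, and picks the clearest clue.

    import math

    GOALS = {"left": -1.0, "right": +1.0}
    TRUE_GOAL = "right"
    RATIONALITY = 2.0  # assumed observer model, as in the earlier sketch

    def observer_posterior(step, position):
        """What an observer watching this single step would come to believe
        about the goal, starting from a uniform prior."""
        scores = {}
        for g, x in GOALS.items():
            progress = abs(x - position) - abs(x - (position + step))
            scores[g] = math.exp(RATIONALITY * progress)
        total = sum(scores.values())
        return {g: s / total for g, s in scores.items()}

    def most_communicative_step(candidates, position):
        """Among candidate motions, pick the one that maximizes the
        observer's belief in the robot's true goal -- the clearest clue."""
        return max(candidates,
                   key=lambda s: observer_posterior(s, position)[TRUE_GOAL])

    # Both candidate steps head right, but the exaggerated one communicates more.
    print(most_communicative_step([0.05, 0.3], position=0.0))  # 0.3

For instance, a robot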