Case File
d-35120 · House Oversight · Other

Generic discussion on AI robot human‑interaction modeling


Date: November 11, 2025
Source: House Oversight
Reference: House Oversight #016903
Pages: 1
Persons: 0
Integrity: No Hash Available

Summary

The text contains no specific individuals, organizations, transactions, or allegations. It is a theoretical overview of predictive modeling for robots and does not provide actionable leads. It emphasizes the need for robots to predict human actions and internal states, notes the mutual influence between humans and robots, and advocates transparency and mental-model alignment.

Tags

predictive-modeling, human-robot-interaction, ai-safety, house-oversight


Extracted Text (OCR)

EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
This need to understand human actions and decisions applies to physical and nonphysical robots alike. If either sort bases its decision about how to act on the assumption that a human will do one thing but the human does something else, the resulting mismatch could be catastrophic. For cars, it can mean collisions. For an AI with, say, a financial or economic role, the mismatch between what it expects us to do and what we actually do could have even worse consequences. One alternative is for the robot not to predict human actions but instead just protect against the worst-case human action. Often when robots do that, though, they stop being all that useful. With cars, this results in being stuck, because it makes every move too risky.

All this puts us, the AI community, into a bind. It suggests that robots will need accurate (or at least reasonable) predictive models of whatever people might decide to do. Our state definition can't just include the physical position of humans in the world. Instead, we'll also need to estimate something internal to people. We'll need to design robots that account for this human internal state, and that's a tall order. Luckily, people tend to give robots hints as to what their internal state is: their ongoing actions give the robot observations (in the Bayesian inference sense) about their intentions. If we start walking toward the right side of the hallway, we're probably going to enter the next room on the right.

What makes the problem more complicated is the fact that people don't make decisions in isolation. It would be one thing if robots could predict the actions a person intends to take and simply figure out what to do in response. But unfortunately this can lead to ultra-defensive robots that confuse the heck out of people. (Think of human drivers stuck at four-way stops, for instance.) What the intent-prediction approach misses is that the moment the robot acts, that influences what actions the human starts taking. There is a mutual influence between robots and people, one that robots will need to learn to navigate. It is not always just about the robot planning around people; people plan around the robot, too. It is important for robots to account for this when deciding which actions to take, be it on the road, in the kitchen, or even in virtual spaces, where actions might be making a purchase or adopting a new strategy. Doing so should endow robots with coordination strategies, enabling them to take part in the negotiations people seamlessly carry out day to day—from who goes first at an intersection or through a narrow door, to what role we each take when we collaborate on preparing breakfast, to coming to consensus on what next step to take on a project.

Finally, just as robots need to anticipate what people will do next, people need to do the same with robots. This is why transparency is important. Not only will robots need good mental models of people, but people will need good mental models of robots. The model that a person has of the robot has to go into our state definition as well, and the robot has to be aware of how its actions are changing that model. Much like the robot treating human actions as clues to human internal states, people will change their beliefs about the robot as they observe its actions. Unfortunately, the giving of clues doesn't come as naturally to robots as it does to humans; we've had a lot of practice communicating implicitly with people.
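Both directions of this inference (robot about human, human about robot) follow the same Bayesian pattern the text invokes. Below is a minimal sketch, not from the source document, of the robot-about-human direction in the hallway example: the robot maintains a belief over which door the person is heading toward and updates it from observed headings. The goal positions, the rationality constant beta, and the noisy-rational likelihood form are all illustrative assumptions.

```python
import math

# Hypothetical goal positions in the hallway example: two doors ahead,
# one to the left and one to the right. All values are illustrative.
GOALS = {"door_left": (-1.0, 5.0), "door_right": (1.0, 5.0)}

def likelihood(heading, position, goal, beta=3.0):
    """P(observed heading | goal) under an assumed noisy-rational model:
    headings pointing more directly at the goal are exponentially more likely."""
    gx, gy = goal
    px, py = position
    ideal = math.atan2(gy - py, gx - px)                     # heading straight at the goal
    diff = heading - ideal
    error = abs(math.atan2(math.sin(diff), math.cos(diff)))  # wrap to [-pi, pi]
    return math.exp(-beta * error)

def update_belief(belief, heading, position):
    """One Bayesian update: posterior is proportional to likelihood times prior."""
    posterior = {name: belief[name] * likelihood(heading, position, goal)
                 for name, goal in GOALS.items()}
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

# Uniform prior; the person drifts toward the right side of the hallway
# (straight ahead is pi/2, about 1.57 rad, so these headings veer right).
belief = {name: 1.0 / len(GOALS) for name in GOALS}
for heading in (1.50, 1.45, 1.40):
    belief = update_belief(belief, heading, (0.0, 0.0))
print(belief)  # probability mass shifts toward "door_right"
```

Each observed step tilts the belief further toward the right-hand door, which is exactly the "actions as observations about intentions" point the text makes.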
But enabling robots to account for the change that their actions are causing to the person's mental model of the robot can lead to more carefully chosen actions that do give the right clues—that clearly communicate to people about the robot's intentions, its reward function, its limitations. For instance, a robot …
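This "choose actions that give the right clues" idea can be sketched the same way. Continuing the Bayesian example above (and again as an assumption, not the source's method), the robot models the person as a Bayesian observer of its motion, reusing update_belief and GOALS from the previous sketch, and picks the heading that most increases the observer's belief in its true goal:

```python
def most_legible_heading(belief, position, true_goal, candidates):
    """Model the human as a Bayesian observer (update_belief from above) and
    pick the candidate heading that puts the most posterior probability
    on the robot's actual goal."""
    return max(candidates,
               key=lambda h: update_belief(belief, h, position)[true_goal])

# Heading straight down the hallway (~1.57 rad) is ambiguous between the two
# doors; aiming at the right door (~1.37 rad) communicates the goal clearly.
belief = {name: 1.0 / len(GOALS) for name in GOALS}
print(most_legible_heading(belief, (0.0, 0.0), "door_right",
                           candidates=[1.57, 1.37]))  # -> 1.37
```

The design point mirrors the text: the human's model of the robot becomes part of the state the robot plans over, so action selection trades off task efficiency against how informative each action is to the observer.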
