Case File d-30532 (House Oversight, Other)

Generic discussion of human‑AI ecosystems and algorithmic governance


Date: November 11, 2025
Source: House Oversight
Reference: House Oversight #016356
Pages: 1
Persons: 0
Integrity: No Hash Available

Summary

The passage is a conceptual essay on AI development and governance with no specific names, transactions, dates, or actionable allegations involving powerful actors. It offers no investigative leads. It describes a theoretical human-AI ecosystem and the potential risks of algorithmic control, mentions technical aspects of credit-assignment in neural networks, and speculates on future AI architectures embedding real-world knowledge.

Tags: algorithmic-governance, ai-ethics, technology-policy, house-oversight



Extracted Text (OCR)

EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
Development of human-AI ecosystems is perhaps inevitable for a social species such as ourselves. We became social early in our evolution, millions of years ago. We began exchanging information with one another to stay alive, to increase our fitness. We developed writing to share abstract and complex ideas, and most recently we've developed computers to enhance our communication abilities. Now we're developing AI and machine-learning models of ecosystems and sharing the predictions of those models to jointly shape our world through new laws and international agreements.

We live in an unprecedented historic moment, in which the availability of vast amounts of human behavioral data and advances in machine learning enable us to tackle complex social problems through algorithmic decision making. The opportunities for such a human-AI ecology to have positive social impact through fairer and more transparent decisions are obvious. But there are also risks of a "tyranny of algorithms," where unelected data experts are running the world. The choices we make now are perhaps even more momentous than those we faced in the 1950s, when AI and cybernetics were created. The issues look similar, but they're not. We have moved down the road, and now the scope is larger. It's not just AI robots versus individuals. It's AI guiding entire ecologies. How can we make a good human-artificial ecosystem, something that's not a machine society but a cyberculture in which we can all live as humans, a culture with a human feel to it?

We don't want to think small, for example, to talk only of robots and self-driving cars. We want this to be a global ecology. Think Skynet-size. But how would you make Skynet something that's about the human fabric?

The first thing to ask is: What's the magic that makes the current AI work? Where is it wrong and where is it right? The good magic is that it has something called the credit-assignment function. What that lets you do is take "stupid neurons," little linear functions, and figure out, in a big network, which ones are doing the work and strengthen them. It's a way of taking a random bunch of switches all hooked together in a network and making them smart by giving them feedback about what works and what doesn't. This sounds simple, but there's some complicated math around it. That's the magic that makes current AI work.

The bad part of it is, because those little neurons are stupid, the things they learn don't generalize very well. If an AI sees something it hasn't seen before, or if the world changes a little bit, the AI is likely to make a horrible mistake. It has absolutely no sense of context. In some ways, it's as far from Norbert Wiener's original notion of cybernetics as you can get, because it isn't contextualized; it's a little idiot savant.

But imagine that you took away those limitations: Imagine that instead of using dumb neurons, you used neurons in which real-world knowledge was embedded. Maybe instead of linear neurons, you used neurons that were functions in physics, and then you tried to fit physics data. Or maybe you put in a lot of knowledge about humans and how they interact with one another, the statistics and characteristics of humans. When you add this background knowledge and surround it with a good credit-assignment function, then you can take observational data and use the credit-assignment function to reinforce the functions that are producing good answers. The result is an AI that works extremely well and can generalize. For instance, in solving physical
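The credit-assignment idea described above, and the suggestion of replacing dumb linear neurons with units that embed real-world knowledge, can be sketched in a few lines. This is a hypothetical toy example, not anything from the document: a small pool of candidate basis functions (some encoding real structure, some pure noise) is combined linearly, and gradient descent on the squared error plays the role of the credit-assignment function, strengthening the functions that produce good answers and suppressing the rest.

```python
import numpy as np

# Toy sketch (assumptions: synthetic data, squared-error loss, plain
# gradient descent standing in for the "credit-assignment function").
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5 * x**2            # the "world" the network must fit

# Candidate "neurons": two embed real structure (x, x**2),
# two are irrelevant (an oscillation and pure noise).
features = np.stack([x, x**2, np.sin(7 * x), rng.normal(size=x.shape)], axis=1)
w = np.zeros(4)

for _ in range(2000):
    pred = features @ w
    grad = features.T @ (pred - y) / len(x)   # credit: which units caused the error?
    w -= 0.5 * grad                           # strengthen the useful ones

print(np.round(w, 2))  # weights on x and x**2 dominate; the others shrink toward zero
```

After training, the feedback loop has assigned nearly all the weight to the two functions that actually describe the data, which is the essay's point in miniature: good credit assignment plus units that embed relevant knowledge yields a model that fits and generalizes.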


