Case File
d-14808 · House Oversight · Other

Philosophical essay on AI rights and legal status, no concrete allegations


Date
November 11, 2025
Source
House Oversight
Reference
House Oversight #016267
Pages
1
Persons
0
Integrity
No Hash Available

Summary

The passage consists of abstract commentary on artificial intelligence ethics and legal theory, without naming any individuals, institutions, transactions, or actionable allegations. It offers no investigative leads. It discusses AI as tools vs. conscious agents, mentions a seminar at Tufts University, and cites Joanna J. Bryson's work on robot slavery.

Tags

ai-ethics, legal-theory, house-oversight, philosophy


Extracted Text (OCR)

EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
is an ugly talent, reeking of racism or species-ism. Many people would find the cultivation of such a ruthlessly skeptical approach morally repugnant, and we can anticipate that even the most proficient system-users would occasionally succumb to the temptation to “befriend” their tools, if only to assuage their discomfort with the execution of their duties. No matter how scrupulously the AI designers launder the phony “human” touches out of their wares, we can expect novel habits of thought, conversational gambits and ruses, traps and bluffs to arise in this novel setting for human action. The comically long lists of known side effects of new drugs advertised on television will be dwarfed by the obligatory revelations of the sorts of questions that cannot be responsibly answered by particular systems, with heavy penalties for those who “overlook” flaws in their products.

It is widely noted that a considerable part of the growing economic inequality in today’s world is due to the wealth accumulated by digital entrepreneurs; we should enact legislation that puts their deep pockets in escrow for the public good. Some of the deepest pockets are voluntarily out in front of these obligations to serve society first and make money secondarily, but we shouldn’t rely on good will alone.

We don’t need artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools.
Tools do not have rights, and should not have feelings that could be hurt, or be able to respond with resentment to “abuses” rained on them by inept users.[12] One of the reasons for not making artificial conscious agents is that however autonomous they might become (and in principle, they can be as autonomous, as self-enhancing or self-creating, as any person), they would not—without special provision, which might be waived—share with us natural conscious agents our vulnerability or our mortality.

I once posed a challenge to students in a seminar at Tufts I co-taught with Matthias Scheutz on artificial agents and autonomy: Give me the specs for a robot that could sign a binding contract with you—not as a surrogate for some human owner but on its own. This isn’t a question of getting it to understand the clauses or manipulate a pen on a piece of paper but of having and deserving legal status as a morally responsible agent. Small children can’t sign such contracts, nor can those disabled people whose legal status requires them to be under the care and responsibility of guardians of one sort or another.

The problem for robots who might want to attain such an exalted status is that, like Superman, they are too invulnerable to be able to make a credible promise. If they were to renege, what would happen? What would be the penalty for promise-breaking? Being locked in a cell or, more plausibly, dismantled? Being locked up is barely an inconvenience for an AI unless we first install artificial wanderlust that cannot be ignored or disabled by the AI on its own (and it would be systematically difficult to make this a foolproof solution, given the presumed cunning and self-knowledge of the AI); and dismantling an AI (either a robot or a bedridden agent like Watson) is not killing it, if the information stored in its design and software is preserved. The very ease of digital recording and transmitting—the breakthrough that permits software and data to be,

12. Joanna J. Bryson, “Robots Should Be Slaves,” in Close Engagements with Artificial Companions, Yorick Wilks, ed. (Amsterdam, The Netherlands: John Benjamins, 2010), pp. 63-74; http://www.cs.bath.ac.uk/~jb/ftp/Bryson-Slaves-Book09.html; “Patiency Is Not a Virtue: AI and the Design of Ethical Systems,” https://www.cs.bath.ac.uk/~jjb/ftp/Bryson-Patiency-AAAISS16.pdf.

