
Liability When Nobody Decided Anything

Sudheesh Nair · Dec 13, 2025 · 5 min read

A hospital procurement system, operating autonomously, selects a new supplier for surgical gloves based on cost, compliance certifications, and delivery reliability. The gloves pass inspection. Three months later, a manufacturing defect causes reactions in patients. The hospital faces lawsuits.

Who decided to use that supplier?

The procurement agent selected from a list of compliant vendors. It weighted criteria according to policy set by the procurement team. The policy was approved by hospital administration. The vendor met all stated requirements. No human reviewed the specific selection because the system was designed to handle routine procurement without human review. That was the point.

The plaintiff's attorney will want to depose someone who made the decision. There is no such person.

The Human in the Loop Was the Point

Liability frameworks assume decision-makers. Product liability assumes a manufacturer made choices about design and production. Professional liability assumes a professional exercised judgment. Negligence assumes someone failed to exercise reasonable care. The frameworks locate responsibility in humans who could have acted differently.

Corporate structures already diffuse responsibility across organizations, but legal doctrine developed tools to handle that diffusion. Respondeat superior holds employers liable for employee actions. Corporate officer liability pierces organizational shields in cases of sufficient culpability. The tools exist because the law evolved alongside organizations that were, ultimately, composed of people making decisions.

Agent-mediated decisions don't diffuse responsibility across humans. They remove the human decision entirely. The procurement agent didn't consult anyone about the glove supplier. It executed policy. The humans involved set policy, approved criteria, and deployed the system, but the specific selection was made by software operating within parameters.

Candidates for Responsibility

The deploying organization. The hospital chose to use autonomous procurement. It set the policies the system followed. Under current doctrine, the hospital likely bears responsibility for outcomes of systems it deployed, regardless of whether humans reviewed specific decisions. This is the simplest resolution but raises the stakes of deployment decisions significantly.

The system vendor. The company that built the procurement agent designed how it evaluates options. If the weighting was flawed, or if the system should have caught signals that the supplier was risky, then the vendor's design choices contributed to the harm. Product liability doctrine could extend to software design, but the fit is awkward when the "defect" is a judgment call about how to weight criteria.

The model provider. If the procurement agent uses a foundation model for some decisions, is the model provider liable for its outputs? Model providers' terms of service disclaim liability, and some argue Section 230 shields model outputs, but neither the disclaimers nor that reading of the statute has been tested in cases where autonomous systems cause physical harm.

The policy setters. The humans who defined the criteria the system used. But they didn't know this specific decision would be made. They set general parameters. Holding them liable for every outcome within those parameters makes policy-setting personally risky in ways that could discourage delegation to autonomous systems.

Nobody. The harm happened. The supplier was negligent in manufacturing, but the hospital's system correctly identified that supplier as compliant based on the information available to it. Within the decision chain that selected the supplier, there may be no actionable negligence at all. This is unsatisfying but may be accurate in some cases.

The Documentation Problem

Legal discovery assumes records of decisions. Emails, meeting notes, memos, the paper trail of human deliberation. Lawyers reconstruct what people knew, when they knew it, and why they chose as they did.

Agent decisions produce different records. Logs of inputs and outputs. Model weights that determined selections. Training data that shaped behavior. The records are voluminous and illegible to non-specialists. A procurement agent might log that it selected Supplier A over Supplier B because A scored 0.847 on the weighted criteria versus B's 0.831. What that means in human terms, what drove the 0.016 difference, requires expertise to interpret.

Explainability becomes a legal requirement, not just an engineering preference. Systems that cannot explain why they made a decision create liability exposure by making defense difficult. Organizations will demand audit trails that translate agent reasoning into human-understandable accounts.
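
What such a trail might look like, in a minimal Python sketch: assume the agent scores suppliers on weighted criteria and logs each criterion's contribution, not just the total. The criterion names, weights, and per-supplier scores here are invented to reproduce the 0.847 and 0.831 totals above; they don't describe any real procurement system.

from dataclasses import dataclass

# Illustrative criteria and weights; assumptions for this sketch,
# not a real procurement policy.
WEIGHTS = {"cost": 0.40, "compliance": 0.35, "delivery": 0.25}

@dataclass
class Supplier:
    name: str
    scores: dict  # criterion -> normalized score in [0, 1]

def score_with_breakdown(supplier):
    """Return the weighted total plus each criterion's contribution,
    so a reviewer can see what actually drove the final number."""
    contributions = {c: WEIGHTS[c] * supplier.scores[c] for c in WEIGHTS}
    return sum(contributions.values()), contributions

# Invented inputs, chosen so the totals match the article's example.
a = Supplier("Supplier A", {"cost": 0.90, "compliance": 0.82, "delivery": 0.800})
b = Supplier("Supplier B", {"cost": 0.85, "compliance": 0.84, "delivery": 0.788})

for s in (a, b):
    total, parts = score_with_breakdown(s)
    print(f"{s.name}: {total:.3f}", {c: round(v, 3) for c, v in parts.items()})

Logged this way, the 0.016 gap stops being opaque: in this invented example it traces almost entirely to the cost criterion (a 0.360 contribution versus 0.340), which is the kind of account a lawyer, regulator, or expert witness can actually work with.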

The Insurance Question

Insurance pricing assumes actuarial models of human behavior. Professional liability insurance prices risk based on what professionals in a field typically do wrong. Product liability insurance prices risk based on historical defect rates and harm patterns.

Agent-mediated decisions have no actuarial history. Insurers don't know how often procurement agents select suppliers that later cause harm. They don't know the base rate of autonomous fleet decisions that lead to accidents. The absence of historical data makes pricing difficult.

Early deployments will face either expensive coverage or coverage gaps. Organizations may self-insure, accepting risk their insurers won't underwrite. The resulting exposure will shape deployment decisions and create pressure for industry data-sharing that allows actuarial modeling.

The Regulatory Lag

Regulators move slowly. Liability doctrine moves slower. The gap between deployment of autonomous systems and legal clarity about responsibility will be measured in years. During that gap, organizations will make decisions under uncertainty.

Some will wait for clarity, ceding competitive ground to earlier movers. Some will deploy and accept ambiguous risk, hoping case law develops favorably. Some will deploy in jurisdictions with more permissive or clearer rules, creating regulatory arbitrage.

The cases that establish precedent will be ugly. Serious harm, sympathetic plaintiffs, and defendants arguing that nobody actually made the decision that caused injury. Courts will fashion rules that fit the cases before them, which may or may not fit the broader universe of agent-mediated decisions.

What Organizations Should Expect

Deployers will likely bear primary liability in early cases. The organization that chooses to remove humans from the loop accepts responsibility for what happens in their absence. This is consistent with existing vicarious liability doctrine and requires the least legal innovation.

System vendors will face increasing pressure to warrant their systems' behavior. Contracts will shift risk allocation. Indemnification clauses will become contentious. The market for autonomous systems will stratify between vendors willing to stand behind their systems and vendors who disclaim all liability.

Documentation and explainability will become compliance requirements. Systems that cannot produce audit trails acceptable to regulators and courts will become undeployable in regulated industries. The cost of that documentation will be factored into the economics of automation.

The organizations navigating this uncertainty well will be those that treat liability as a design requirement from the start, not an afterthought to resolve when something goes wrong.

This is part of a series on the robotic web from TinyFish, which builds infrastructure for machine operation of the web.
