If AI Replaces Radiologists, Who Owns The Outcome?
The idea that AI will replace radiologists is no longer fringe. It made headlines when the CEO of America’s largest municipal public health system suggested that regulation is the only thing keeping AI from replacing radiologists. The debate that followed focused on jobs. That is the wrong question. The right one is not whether radiologists will still be employed (they will) but who owns the outcome when the system gets it wrong. That question does not have an answer yet, and the gap it leaves is not a technical problem. It is a structural one.
Radiology is not simply image reading. It is ownership of the finding and everything that follows. Remove that layer and the system does not get simpler; it gets redistributed. Costs shift. Risk follows. In 2023, hospital administrative spending reached $687 billion, nearly twice what hospitals spent on direct patient care. The problem has never been physician compensation. It is a system that absorbs more complexity without building the infrastructure to manage it. Automating interpretation without redesigning that infrastructure does not solve the cost problem; it adds to it.
In practice, the system is already showing where that complexity lands. Patients now receive results the moment they are finalized. During an emergency visit for my child, I saw that firsthand when results appeared in the patient portal before the care team had reviewed them. Findings no longer sit quietly in reports; they trigger questions and anxiety in real time, often before the ordering clinician has had a chance to review the result or explain it to the patient. The expectation has changed faster than the system designed to support it.
For the patient, that moment is not abstract. It is immediate, and often confusing. What the patient sees in that moment is not raw data or an AI output but a radiologist’s judgment, arriving before it can be integrated into a broader clinical plan. That is not a failure of radiologists; it is a structural gap. The system was built for a world where interpretation and action were separated in time. That separation no longer exists.
The question, then, is not who reads the image but who makes sure the read turns into action. AI may accelerate detection and even highlight what matters. But it does not assume responsibility for what happens next. Radiology sits at that inflection point where interpretation meets consequence. That is the work that is expanding, not disappearing.
This shift does not reduce the need for responsibility; it increases it. Radiologists are accountable for communicating critical findings, ensuring urgent results reach the right clinician, and documenting recommendations that shape what happens next. A report is not the endpoint; it is the handoff into a chain of decisions that carry clinical and legal weight. That chain is where liability lives. Malpractice does not attach to isolated tasks; it attaches to failures across that sequence: a missed finding, a delayed communication, a breakdown in follow-up. Not to the read alone, but to what happens next.
This is where the replacement argument breaks down. The shift is not from human to machine but from static interpretation to a system that must absorb more information, faster, without losing ownership of the outcome.
Responsibility Does Not Break Into Tasks
Curtis Langlotz, MD, PhD, has put it simply: “radiologists who use AI will replace radiologists who don’t.” The core responsibilities in radiology do not break cleanly into tasks, which is why the replacement argument has persisted for years without materializing. Interpretation can be assisted, but responsibility does not fragment so easily. Radiology has held together as a field not because the tasks are complex but because the responsibility is continuous. Interpretation, context, communication and follow-up are not independent steps; they form a single chain.
Work is automated when it can be broken into discrete tasks, while jobs persist when those tasks remain tightly bundled, as Luis Garicano describes. Radiology is a strong bundle. The output is not the read; it is what happens next because of the read. That structure explains why so many AI deployments feel misaligned in practice. Ainsley MacLean, MD, FACR, a practicing neuroradiologist, advisor and investor who previously served as the first Chief AI Officer for Kaiser Permanente’s Washington, D.C. region, has seen this firsthand: “Health systems today are navigating a patchwork of narrow AI tools, each addressing a single finding, each requiring separate integration, validation, contracting and training. That fragmentation is fundamentally misaligned with how radiologists actually read studies, holistically, across many potential findings at once.” AI improves parts of the process, but the responsibility still has to be carried across the entire chain.
Efficiency Increases the Burden of Ownership
A recent preprint from Dr. Langlotz estimates that AI could reduce radiologist workload by roughly one third. That is not elimination; it is compression. And compression increases throughput, not redundancy. When work is compressible but not separable, efficiency gains do not remove the role. They increase the number of decisions that must be made within it. Imaging volumes continue to grow, more findings are detected, and more follow-up is required. What AI changes is not the existence of responsibility, but the scale at which it can and must operate.
The Risk Is Separation, Not Automation
The real risk in healthcare AI lies not in automation but in the separation of responsibility from the work. Detecting a cancer is not the same as diagnosing it, and diagnosing it is not the same as making sure it is treated. At the end of that chain is a patient waiting for something to happen next. AI can accelerate pieces of that process and surface findings earlier and arguably more consistently, but it does not carry the consequence that follows. When a finding is missed, delayed, or misinterpreted, accountability does not disappear; it concentrates.
The question is not whether AI can generate an answer but whether the system can absorb, verify, and act on it, and who is responsible when it does not.
Policy Must Define Ownership
This is the policy problem. Separating the read from the responsibility does not remove the risk; it redistributes it. Policy cannot stop at model performance; it has to define who owns the outcome when the workflow is no longer in one person’s, or even one human’s, hands. Radiology has already shown that AI works. But AI interventions have not reduced the need for clinical judgment. Instead, AI tools have shifted where that judgment is applied and increased the number of moments where it is essential.
Right now, when something goes wrong, responsibility is distributed across a chain that no single actor fully owns. In a functioning system, that distribution is a safety feature. When automation removes actors from that chain without replacing their accountability, it becomes a liability. The radiologist is not the inefficiency in that chain. The radiologist is the last point of accountable ownership in it.
AI can generate the signal. It cannot carry the consequence. Any system that treats those as the same thing is not efficient. It is fragile.