Here’s Why Doctors Are Needed Now In The Era Of AI
Artificial intelligence is rapidly transforming medicine. Multiple new studies suggest AI systems are outperforming emergency physicians in diagnosing diseases and outperforming radiologists in detecting cancers on imaging studies.
As an example, a study published in Science found an AI reasoning model outperformed two experienced emergency medicine doctors at diagnosing patients and making management decisions about their care. The AI model was able to do this strictly by examining electronic health records and the limited information that had been available to the physicians at the time.
In another study published in Gut, an AI model was able to detect pancreatic cancers on routine CT scans up to three years before the cancers were diagnosed clinically. The model also outperformed experienced radiologists two- to threefold at correctly identifying pancreatic tumors before clinical diagnosis.
These breakthroughs could revolutionize healthcare in a powerful way. For instance, being able to diagnose pancreatic cancer earlier could dramatically increase survival: currently, 85% of patients with pancreatic cancer receive a diagnosis only after the tumor has spread beyond the pancreas to other parts of the body, with 5-year survival rates below 15%. Earlier diagnosis would mean being able to treat the tumor aggressively at a stage when survival could be significantly improved.
Although promising, these advances in AI reveal something deeper and more important—doctors are needed now more than ever. Here’s why.
AI tools are advancing rapidly and improving over time, but they are far from perfect. One of the greatest risks in healthcare AI is overreliance. Algorithms can produce false-positive results, which means diagnosing an abnormality when none exists. As an example, in the aforementioned study in Gut on pancreatic cancer, the specificity of AI in detecting pancreatic cancer before clinical diagnosis was 81%. This means that nearly 1 in 5 patients without pancreatic cancer would receive a false-positive result, which can lead to unnecessary follow-up testing, patient anxiety and extra healthcare costs.
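To make that arithmetic concrete, here is a minimal sketch of how specificity translates into false positives. The 81% specificity comes from the Gut study; the 1,000-patient cancer-free cohort is an assumption chosen purely for illustration.

```python
# Illustrative only: expected false positives among patients who do NOT have cancer.
# Specificity is the fraction of disease-free patients correctly called negative,
# so (1 - specificity) is the false-positive rate among that group.

def expected_false_positives(n_without_disease: int, specificity: float) -> int:
    """Expected number of false-positive results among disease-free patients."""
    return round(n_without_disease * (1 - specificity))

# Assumption: screening 1,000 cancer-free patients at the reported 81% specificity.
print(expected_false_positives(1000, 0.81))  # → 190, i.e. nearly 1 in 5
```

Note that this counts errors only among patients without the disease; the overall number of false positives in practice also depends on how rare pancreatic cancer is in the screened population.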
Since AI tools are not perfect, physicians are needed to critically evaluate these outputs, recognize their limitations and prevent harmful errors from reaching patients. Without physician oversight, AI errors and false positives can harm patients through unnecessary invasive tests such as biopsies, as well as needless anxiety that can take a toll on mental health.
All AI software and algorithms are trained on datasets, which are inherently biased because all datasets have flaws. For example, some may not include representative numbers of study participants of all races, which would bias results against underrepresented groups.
Even in the study published in Gut, the authors write, “the study was not designed to evaluate performance across different racial and ethnic groups, a critical consideration for future validation given known disparities in pancreatic ductal adenocarcinoma risk”.
In other words, if diagnostic decisions and treatment regimens from AI tools are applied to populations that are not represented in the data the AI was trained on, the results could deepen and amplify healthcare disparities for certain ethnic and racial groups. Treatments validated in one group may not generalize to other groups and could cause real harm if not tested appropriately. This is where physician oversight is critical: to ensure the best available evidence is applied to the appropriate patients. Every individual is different, and the art of medicine allows physicians to tailor the most appropriate care to each individual patient.
Medicine is not simply about pattern recognition, making diagnoses or offering the appropriate treatment for patients. Medicine ultimately is about connecting with human beings who need to be heard about their ailments.
AI may be able to identify abnormalities on imaging scans with astonishing precision, but patients are not CT scans, lab values or data points. A patient with abdominal pain is not just a probability calculation for pancreatic cancer. They are often frightened human beings with a family, emotions, financial concerns, cultural beliefs and unique medical circumstances that cannot be fully understood by an algorithm.
A radiologist does far more than detect lesions. They synthesize imaging findings with patient history, prior studies, clinical context and subtle nuances that often fall outside clean datasets. An emergency physician similarly balances competing diagnoses, social dynamics, limited information, patient preferences and rapidly evolving situations in real time. AI can assist with these decisions, but it cannot fully replicate the human judgment developed through years of clinical training and bedside experience.
The more advanced AI becomes, the greater the need for physicians capable of overseeing it responsibly. Ultimately, patients want to be heard and feel seen, and it is the rapport at the center of the patient-doctor relationship that will allow this to occur, not machines or chatbots.