Leah Goodridge and Oni Blackstock 

We must not let AI ‘pull the doctor out of the visit’ for low-income patients

Generative AI is being pushed into healthcare – and diagnostic risks may deepen the class divide
  
  

Stethoscope next to a hand on a computer keyboard. ‘Given the barriers that people who are unhoused and have low incomes face, it is crucial they receive patient-centered care ... ’ Photograph: Chris Rout/Alamy

In southern California, where rates of homelessness are among the highest in the nation, a private company, Akido Labs, is running clinics for unhoused patients and others with low incomes. The caveat? Patients are seen by medical assistants who rely on artificial intelligence (AI) to listen to the conversation and spit out potential diagnoses and treatment plans, which a doctor then reviews. The company’s goal, its chief technology officer told the MIT Technology Review, is to “pull the doctor out of the visit”.

This is dangerous. Yet it is part of a larger trend of generative AI being pushed into healthcare for medical professionals. In 2025, a survey by the American Medical Association reported that two out of three physicians used AI to assist with their daily work, including diagnosing patients. One AI startup raised $200m to provide medical professionals with an app dubbed “ChatGPT for doctors”. US lawmakers are considering a bill that would allow AI to prescribe medication. While this trend affects almost all patients, it has a deeper impact on people with low incomes, who already face substantial barriers to care and higher rates of mistreatment in healthcare settings. People who are unhoused and have low incomes should not be testing grounds for AI in healthcare. Instead, their voices and priorities should determine whether, how and when AI is implemented in their care.

The rise of AI in healthcare didn’t happen in a vacuum. Overcrowded hospitals, overworked clinicians and relentless pressure on medical offices to run seamlessly, shuttling patients in and out of a large for-profit healthcare system, set the conditions. Those demands are compounded in economically disadvantaged communities, where healthcare settings are under-resourced, many patients are uninsured, and the burden of chronic health conditions is greater due to racism and poverty.

Here is where someone might ask, “Isn’t something better than nothing?” Well, actually, no. Studies show that AI-enabled tools generate inaccurate diagnoses. A 2021 study in Nature Medicine examined AI algorithms trained on large chest X-ray datasets for medical imaging research and found that these algorithms systematically under-diagnosed Black and Latinx patients, patients recorded as female and patients with Medicaid insurance. This systematic bias risks deepening health inequities for patients already facing barriers to care. Another study, published in 2024, found that AI misread breast cancer screenings among Black patients – the odds of a false positive were greater for Black patients than for their white counterparts. Due to algorithmic bias, some clinical AI tools have notoriously performed worse on Black patients and other people of color. That’s because AI isn’t independently “thinking”; it relies on probabilities and pattern recognition, which can reinforce bias against already marginalized patients.

Some patients aren’t even informed that their health provider or healthcare system is using AI. A medical assistant told the MIT Technology Review that his patients know an AI system is listening, but he does not tell them that it makes diagnostic recommendations. This harkens back to an era of exploitative medical racism in which Black people were experimented on without informed consent and often against their will. Can AI help health providers by speedily giving them information that allows them to move on to the next patient? Possibly. But the problem is that this speed may come at the expense of diagnostic accuracy and may worsen health inequities.

And the potential impact goes beyond diagnostic accuracy. TechTonic Justice, an advocacy group working to protect economically marginalized communities from the harms of AI, published a groundbreaking report estimating that 92 million Americans with low incomes “have some basic aspect of their lives decided by AI”. Those decisions range from how much they receive from Medicaid to whether they are eligible for the Social Security Administration’s disability insurance.

A real-life example is playing out in federal courts right now. In 2023, a group of Medicare Advantage customers sued UnitedHealthcare in Minnesota, alleging they were denied coverage because the company’s AI system, nH Predict, mistakenly deemed them ineligible. Some of the plaintiffs are the estates of Medicare Advantage customers; these patients allegedly died as a result of being denied medically necessary care. UnitedHealth sought to dismiss the case, but in 2025 a judge ruled that the plaintiffs could move forward with some of the claims. A similar case was filed in federal court in Kentucky against Humana, where Medicare Advantage customers alleged that Humana’s use of nH Predict “spits out generic recommendations based on incomplete and inadequate medical records”. That case is also ongoing: a judge ruled that the plaintiffs’ legal arguments were enough to survive the insurance company’s motion to dismiss. While final decisions in both cases remain pending, they point to a growing trend of AI being used to decide the health coverage of people with low incomes – and to its pitfalls. If you have financial resources, you can get quality healthcare. But if you are unhoused or have a low income, AI may bar you from accessing healthcare at all. That’s medical classism.

We should not experiment on patients who are unhoused or have low incomes in order to roll out AI. The documented harms outweigh the potential, unproven benefits promised by startups and other tech ventures. Given the barriers that people who are unhoused and have low incomes face, it is crucial they receive patient-centered care from a human healthcare provider who listens to their health-related needs and priorities. We cannot create a standard in which health practitioners take a backseat while AI – run by private companies – takes the lead. An AI system that “listens in” and is developed without rigorous evaluation by the communities themselves disempowers patients, stripping them of the authority to decide which technologies, including AI, are used in their healthcare.

  • Leah Goodridge is a lawyer who worked in homelessness prevention litigation for 12 years

  • Oni Blackstock, MD, MHS, is a physician, founder and executive director of Health Justice, and a Public Voices Fellow on technology in the public interest with The OpEd Project
