Hospitals Are Deploying AI Like Crazy. We Still Don’t Know If It Helps.

I don’t need to tell you AI is everywhere. You’ve heard it a thousand times. But it’s also increasingly inside hospitals. Doctors use AI to take notes during appointments. Algorithms trawl through patient records, flagging people who might need certain treatments. Other tools interpret X-rays and lab results. There’s a growing stack of studies showing many of these tools are accurate. But that’s not the same thing as them being useful.

The bigger question is: do they actually make patients healthier? And we don’t have a good answer yet.

That’s the core argument Jenna Wiens (University of Michigan) and Anna Goldenberg
(University of Toronto) make in a new paper published this week in Nature Medicine.

Wiens has spent years trying to convince clinicians to try AI. For the first decade, it was an uphill battle. Then, she says, something flipped. Suddenly health-care providers are not just interested; they’re deploying these tools at speed. The problem is that they’re not rigorously checking whether the tools actually work in practice.

Take ambient AI scribes. These tools listen to doctor-patient conversations, then transcribe and summarize them. Multiple companies sell them, and hospitals are adopting them fast. A few months ago, a staffer at a major New York medical center told me doctors are “overjoyed.” The tech lets them focus on the patient during appointments instead of staring at a keyboard. Early studies back that up: it seems to reduce burnout. Great.

But what about patient outcomes? “Researchers have evaluated provider or clinician and patient satisfaction, but not really how these tools are affecting clinical decision-making,” Wiens says. “We just don’t know.”

The same goes for predictive tools that forecast patient trajectories or recommend treatments. Even an accurate tool won’t necessarily improve health. Say an AI speeds up chest X-ray interpretation. How much does the doctor actually rely on its analysis? How does it change the way they talk to the patient or decide on next steps? And what does that mean for the patient in the end? The answers probably vary by hospital, by department, and even by how experienced the doctor is.

There’s also a cognitive angle. Some research on AI in education suggests that relying on these tools changes how people process information. Could AI scribes change the way a medical student thinks about patient data? Wiens argues we need to explore that question. “We like things that save us time, but we have to think about the unintended consequences.”

A January 2025 study by Paige Nong at the University of Minnesota found that about 65% of US hospitals used AI-assisted predictive tools. Only two-thirds of those hospitals evaluated the tools’ accuracy, and even fewer checked them for bias. The number of hospitals using these tools has probably gone up since then. Wiens thinks it’s unlikely that the tools are making patients worse off, but it’s very possible they’re not as beneficial as providers assume.

“I do believe in the potential of AI to really improve clinical care,” Wiens says. She’s not arguing that we should stop deploying AI. She just wants more data on what’s actually happening to people. “I have to believe that in the future it’s not all AI or no AI. It’s somewhere in between.”
