The Algorithm Will See You Now: Your Doctor’s AI Assistant Can’t Read Handwriting But Might Save Your Life

The first drug whose target and molecule were both discovered by artificial intelligence is working its way through human trials as I write this (November 2025), a molecule dreamed up by silicon that may soon flow through human veins. Its name is rentosertib—a word that sounds like something out of a Philip K. Dick novel, which seems fitting.¹ The AI that designed it examined millions of molecular combinations in eighteen months, a task that would have taken human researchers four years and several thousand failed experiments. The drug treats idiopathic pulmonary fibrosis, a disease in which the lungs slowly turn to scar tissue, suffocating you from the inside out. In the trial, patients who took the AI’s creation gained 98.4 milliliters of breathing capacity. The placebo group lost 20.3 milliliters.

Such precise measurements of breath gained and lost. As if we could quantify suffering in milliliters.

Meanwhile, in a gleaming hospital in Michigan, an algorithm that claims to predict sepsis—the runaway infection response that kills more Americans than breast cancer—missed two out of every three cases, according to Andrew Wong et al.² The system, embedded in software used by more than half of American hospitals, performed little better than a coin flip when it mattered most. Epic Systems, the company behind it, marketed it as up to 83 percent accurate. Independent researchers found it caught just 33 percent of sepsis cases. The gap between those numbers translates, by some estimates, into roughly 180,000 preventable deaths per year.
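The arithmetic behind that contradiction is worth a moment. When a condition is rare, a model can post a high overall accuracy while missing most real cases, because the easy negatives dominate the score. A toy confusion matrix (the numbers below are invented for illustration; they are not Epic’s actual figures) makes the point:

```python
# Hypothetical screening results for a rare condition (illustration only;
# these are invented numbers, not the Epic sepsis model's real performance).
# 10,000 patients, 300 of whom develop sepsis (3% prevalence).
true_positives = 100      # sepsis cases the alert caught
false_negatives = 200     # sepsis cases the alert missed
true_negatives = 9_100    # healthy patients correctly left alone
false_positives = 600     # false alarms

total = true_positives + false_negatives + true_negatives + false_positives

# Overall accuracy looks respectable, carried by the easy negatives...
accuracy = (true_positives + true_negatives) / total

# ...but sensitivity tells the story that matters at the bedside:
sensitivity = true_positives / (true_positives + false_negatives)

print(f"accuracy:    {accuracy:.0%}")     # prints 92%
print(f"sensitivity: {sensitivity:.0%}")  # prints 33%
```

Sensitivity—the share of true cases caught—is the number a bedside nurse cares about; overall accuracy is the number that looks good on a slide.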

This is the paradox of our moment: we live in an age where machines can discover new medicines but can’t reliably tell when someone is dying.

The Thirty-Billion-Dollar Contradiction

Last year, venture capitalists poured $5.6 billion into healthcare AI, nearly triple the previous year’s investment.³ By summer 2025, AI startups were capturing 62 percent of all healthcare venture dollars, commanding an 83 percent premium over their non-AI competitors. One company, Hippocratic AI—yes, they really named it that—achieved a $3.5 billion valuation after conducting 115 million “clinical patient interactions.”⁴ Whatever those are.

A few years earlier, IBM had sold the remains of Watson Health for roughly a billion dollars, having allegedly burned through four to five billion trying to teach it to cure cancer, according to reporting by Casey Ross and Ike Swetlitz.⁵ Watson, you may recall, was the computer that beat Ken Jennings at Jeopardy! Its creators thought that if it could master wordplay, surely it could master cancer treatment. They were wrong. Internal documents revealed Watson was recommending “unsafe and incorrect” treatments. When tested internationally, it suggested chemotherapy regimens available in Manhattan but not in Manila, radiation protocols feasible in Boston but not in Bangladesh.

The fundamental error was almost poetic in its hubris: they trained Watson on hypothetical cancers, fictional patients dreamed up by doctors at Memorial Sloan Kettering. Like teaching someone to swim by describing water.

Here’s what the money bought: a machine that couldn’t read doctors’ handwriting (still can’t), couldn’t understand clinical notes (the actual record of what happened to actual patients), and agreed with human oncologists roughly one-third of the time.⁶ For certain cancers, you’d get better advice from a Magic 8-Ball.

The Mechanical Turk in the Emergency Room

The NHS, bless them, is running the world’s largest healthcare AI trial: thirty thousand workers across ninety hospitals, saving four hundred thousand hours monthly.⁷ That’s forty-three minutes per staff member per day no longer spent typing. The British, ever pragmatic, have deployed AI for what it does best: paperwork.

Contrast this with American ambitions. We want our machines to diagnose rare diseases, to catch cancers humans miss, to predict who will die and when. The British want theirs to take notes during meetings. Guess who’s getting better results?

At Kaiser Permanente, an AI scribe named Abridge—which sounds like a dating app but is actually a documentation assistant—represents the fastest technology deployment in the health system’s twenty-year history.⁸ It doesn’t pretend to practice medicine. It simply writes down what humans say, freeing doctors to look patients in the eye instead of staring at screens. Revolutionary, apparently, in 2025.

The irony is delicious: the most successful medical AI applications are the ones that make no medical decisions whatsoever.

Garbage In, Gospel Out

Here’s something they don’t tell you at the venture capital pitch meetings: over half of all clinical AI models are trained exclusively on data from the United States or China.⁹ Seventy-five percent of AI researchers are male. The result? Algorithms that work beautifully for people who look like their creators and fail catastrophically for everyone else.

A widely used healthcare algorithm systematically underestimated how sick Black patients were because it used insurance claims as a proxy for health needs.¹⁰ The logic was impeccable: sick people generate medical costs, therefore medical costs equal sickness. Except Black Americans, for reasons having nothing to do with health and everything to do with history, receive less medical care even when equally ill. The algorithm encoded centuries of discrimination into its silicon synapses, then applied it at scale.
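The mechanism is simple enough to simulate. In this toy sketch (every number here is invented; it is not the actual algorithm Obermeyer et al. studied), two groups are equally sick, but one incurs only 70 percent of the medical costs for the same illness. An algorithm that ranks patients by cost then quietly under-selects that group for extra care:

```python
import random

random.seed(0)

# Toy population: identical illness distribution in both groups, but group B
# incurs less cost for the same illness (a stand-in for unequal access to
# care). All parameters are invented for illustration.
patients = []
for group in ("A", "B"):
    for _ in range(5_000):
        illness = random.gauss(50, 15)            # true health need
        access = 1.0 if group == "A" else 0.7     # group B receives less care
        cost = illness * access + random.gauss(0, 5)
        patients.append((group, illness, cost))

# The "algorithm": flag the top 10% of patients by COST for a care program.
patients.sort(key=lambda p: p[2], reverse=True)
flagged = patients[:1_000]

share_b = sum(1 for g, _, _ in flagged if g == "B") / len(flagged)
print(f"Group B is 50% of patients but {share_b:.0%} of those flagged.")
```

The labels were never “wrong”—the costs were real. The bias lives in the choice of proxy.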

One algorithm. Millions affected. Inequality, now with machine learning.

The NHS discovered that one in ten patients in their system lacks ethnicity data entirely.¹¹ Twelve percent have conflicting ethnicity codes—marked as white in cardiology, Asian in orthopedics, declined-to-state in oncology. Any AI trained on this data will perform medical jazz improvisation, making it up as it goes along.

The Connectedness Problem

In Nigeria, doctor density drops from 0.14 per thousand in cities to 0.01 in rural areas.¹² AI-powered telemedicine could, theoretically, bridge this gap. Except half of Africa lacks reliable electricity. Just charging a phone becomes a medical access issue.

A study in Ghana found it costs $1,060 to train a single health worker to use clinical decision support software.¹³ That’s more than the annual per-capita health expenditure for the entire country. The software itself requires smartphones, data plans, and literacy in both English and Microsoft Windows. In villages where the nearest computer is a day’s walk away, we’re proposing treatment by algorithm.

Seventy to ninety percent of medical equipment donated to developing countries fails within months.¹⁴ It breaks, no one can fix it, spare parts cost more than the original machine. Now we want to send them artificial intelligence. One imagines warehouses full of defunct diagnostic algorithms, gathering dust next to broken X-ray machines.

The solution isn’t to stop sending technology. It’s to stop sending technology designed in Palo Alto for use in Palo Alto.

The Elephant in the Operating Room

Who exactly is responsible when an AI makes a fatal error?

The doctor, say the courts, invoking the “learned intermediary doctrine”—a legal principle that sounds like something from medieval theology.¹⁵ Physicians must exercise independent judgment regardless of algorithmic advice. But how can you exercise judgment about a black box that won’t explain its reasoning?

The manufacturer, say the lawyers, citing product liability. But these products learn and change. The AI that killed your patient might be mathematically different from the one the FDA approved.

The hospital, say the insurers. After all, they chose to deploy it.

Round and round we go, while patients wait for someone to accept responsibility for decisions no human actually made.

Digital Diagnostics took a radical approach: they carry malpractice insurance for their diabetic retinopathy AI.¹⁶ If their algorithm blinds you, they’ll pay. It’s either supreme confidence or actuarial genius. Time will tell.

The Poets of Pharma

Something remarkable is happening in drug discovery. The team that built AlphaFold—an AI that predicts protein structures—won the 2024 Nobel Prize in Chemistry.¹⁷ Not the Turing Award, not some tech prize. The Nobel. In Chemistry. For teaching machines to see the shape of life itself.

Their system predicted two hundred million protein structures, many at accuracy rivaling laboratory experiment. Each one used to take months or years of benchwork to solve. Now it takes minutes. Every biologist on Earth, from Boston to Botswana, can access this knowledge for free.

This is what we should be doing: building public goods, not proprietary platforms.

Multiple AI-designed drugs will enter human trials this year.¹⁸ Development time could shrink from a decade to five years. Costs might fall from a billion dollars to half that. These aren’t incremental improvements—they’re paradigm shifts.

Yet even here, irony intrudes. We’re getting better at designing molecules faster than we’re getting better at distributing them. We can dream up cures in silicon but can’t get basic antibiotics to places that need them.

Digital Shamans and Silicon Snake Oil

By 2030, the healthcare AI market will be worth somewhere between $110 billion and $187 billion, depending on whose PowerPoint you believe.¹⁹ North America will control half of it. Asia-Pacific shows the highest growth rate, though from a smaller base—the capitalist equivalent of “most improved player.”

What are we buying with all this money?

Robot surgeons that cost a million dollars to install and five thousand per operation.²⁰ Chatbots that dispense medical advice with the confidence of a first-year medical student. Predictive algorithms that tell us which patients will get sick, though not what to do about it.

But also: systems that catch strokes forty-eight minutes faster, turning paralysis into recovery.²¹ Algorithms that read mammograms in seconds, finding cancers humans miss. AI that helps pathologists identify liver disease with 97 percent accuracy.²²

The successes are real. So are the failures. The tragedy is that we seem incapable of distinguishing between them until after deployment.

An Ethics Lesson from the Machines

WHO published six principles for healthcare AI: protect autonomy, promote well-being, ensure transparency, foster accountability, ensure inclusiveness, promote sustainability.²³ They read like the Boy Scout oath for robots.

The American Medical Association added their own framework: “Trustworthy Augmented Intelligence”—because calling it artificial would be admitting something uncomfortable.²⁴

Everyone agrees on the principles. No one agrees on implementation. It’s like having a recipe where everyone agrees on the ingredients but not the proportions, temperature, or cooking time. The result is either soufflé or shoe leather, and we won’t know until someone takes a bite.

The Future, Unevenly Distributed

Foundation models—AI that can tackle any medical task with minimal training—are coming.²⁵ They’ll read X-rays while listening to your heartbeat while reviewing your genome while checking drug interactions. One system, infinite applications.

Eric Topol, medicine’s digital prophet, calls this the “big shift.”²⁶ Narrow, single-purpose AI will give way to generalist systems that think—if we can call it thinking—more like doctors do: holistically, contextually, creatively.

Hippocratic AI claims its agents have conducted a hundred and fifteen million patient interactions with zero safety issues.²⁷ This seems statistically impossible, like claiming you’ve driven cross-country a thousand times without hitting a single pothole. But let’s take them at their word. Their bots handle appointment reminders, medication compliance, follow-up care—the unglamorous work that actually keeps people healthy.

Oracle’s Clinical AI Agent reduces documentation time by forty-one percent.²⁸ Microsoft’s Healthcare Agent Service builds custom bots for every conceivable medical task. Soon, you’ll speak to more algorithms than humans during a hospital stay.

The optimists say truly autonomous medical AI is seven to ten years away.²⁹ The pessimists say the same thing, but with different inflection.

The Wisdom of Limits

Writing in Harvard Business Review, Dr. Caroline Carney observes that most AI projects in healthcare “never really get off the ground, consume massive amounts of resources before being shelved.”³⁰ Luis Taveras at Lehigh Valley Health Network estimates that “maybe 5 to 10 percent of solutions will have real, measurable value.”³¹

These aren’t Luddites. They’re practitioners who’ve watched the promises pile up like unread medical journals.

The path forward isn’t through more ambitious AI but through more modest applications. Not moon shots but measured steps. The British approach, not the American one.

We need AI that admits uncertainty instead of feigning omniscience. Systems that explain their reasoning, even if that reasoning is “I matched this pattern to a million similar patterns and this is what usually happens.” Algorithms that know what they don’t know.

We need what Rosalind McDougall calls “value-flexibility”—AI that adapts to different ethical frameworks rather than imposing Silicon Valley’s.³² Because what works in Swedish socialism won’t work in American capitalism won’t work in Chinese communism won’t work in Indian democracy.

The Absurdist’s Guide to Digital Health

Here’s the ultimate irony: we’re building artificial intelligence to solve problems created by human intelligence. We need AI to read medical records because we made medical records unreadable. We need algorithms to catch drug interactions because we prescribe too many drugs. We need machines to predict disease because we’ve built societies that manufacture it.

The most successful medical AI might be the one that does nothing medical at all—just handles the paperwork while humans handle the humans.

Perhaps that’s the solution: radical specificity. Instead of artificial general intelligence, artificial narrow competence. Instead of digital doctors, digital scribes. Instead of Silicon Valley messiahs, NHS pragmatists.

Let the machines discover new drugs—they’re good at that. Let them read radiographs—they don’t get tired. Let them transcribe conversations—they don’t judge accents. But let’s stop pretending they understand suffering, fear, hope, or any of the other currencies in which medicine actually trades.

The Breathing Room

That drug I mentioned at the beginning—rentosertib, the AI’s creation—gave patients back 98.4 milliliters of breathing capacity.³³ Not much, you might think. About the volume of a shot glass.

But imagine drowning in slow motion, your lungs turning to stone, and someone hands you a shot glass of air.

Suddenly, it’s everything.

This is what we should ask of our machines: not miracles but margins. Not revolution but room to breathe. The future of AI in medicine isn’t about replacing doctors or curing death. It’s about buying time, creating space, finding edges where a little help matters enormously.

In the end, medicine remains what it’s always been: one human trying to help another human suffer less. The machines, no matter how intelligent, are just tools in that ancient exchange. Very expensive, occasionally brilliant, frequently wrong tools.

The algorithm will see you now. But thankfully, hopefully, mercifully—a human still decides what to do about what it sees.


Endnotes

¹ Insilico Medicine, “Insilico Medicine Reports Positive Phase 2a Clinical Trial Results for INS018_055,” Press Release, November 7, 2024, https://insilico.com/news/ins018_055-phase-2a-results.

² Andrew Wong et al., “External Validation of a Widely Implemented Proprietary Sepsis Prediction Model,” NEJM AI 1, no. 2 (February 2024): AIoa2300032.

³ Rock Health, “2024 Year-End Digital Health Funding Report,” Rock Health Insights, January 2025.

⁴ “Hippocratic AI Raises $126M Series C at $3.5B Valuation,” Forbes, March 2025.

⁵ Casey Ross and Ike Swetlitz, “IBM’s Watson supercomputer recommended ‘unsafe and incorrect’ cancer treatments,” STAT, July 25, 2018.

⁶ S. P. Somashekhar et al., “Watson for Oncology and breast cancer treatment recommendations,” Annals of Oncology 29, no. 2 (2018): 418-423.

⁷ NHS Digital, “Microsoft 365 Copilot AI Trial Evaluation Report,” NHS England Digital Transformation, September 2024.

⁸ “Kaiser Permanente Completes Fastest Ever Technology Deployment with Abridge,” Kaiser Permanente News Center, September 2024.

⁹ Xiaoxuan Liu et al., “Reporting guidelines for clinical trial reports for interventions involving artificial intelligence,” Nature Medicine 26, no. 9 (2020): 1364-1374.

¹⁰ Ziad Obermeyer et al., “Dissecting racial bias in an algorithm used to manage the health of populations,” Science 366, no. 6464 (2019): 447-453.

¹¹ NHS Digital, “Ethnicity Data Quality in the NHS: A Systematic Review,” NHS Race and Health Observatory, March 2024.

¹² WHO Regional Office for Africa, “The State of Health in the WHO African Region,” World Health Organization, 2023.

¹³ Isaac Holeman et al., “Digital health implementation costs in low-resource settings,” Implementation Science 19, Article 15 (2024).

¹⁴ WHO, “Medical Device Donations: Considerations for Solicitation and Provision,” World Health Organization Technical Report Series, 2024.

¹⁵ W. Nicholson Price II, “Medical Malpractice and Black-Box Medicine,” in Big Data, Health Law, and Bioethics (Cambridge University Press, 2024), 295-316.

¹⁶ “Digital Diagnostics Pioneers Medical Malpractice Insurance for Autonomous AI,” MedTech Dive, June 2024.

¹⁷ “The Nobel Prize in Chemistry 2024,” NobelPrize.org, Nobel Prize Outreach, 2024.

¹⁸ Bessemer Venture Partners, “Bio+Health Predictions 2025,” BVP Healthcare Report, January 2025.

¹⁹ MarketsandMarkets, “Artificial Intelligence in Healthcare Market – Global Forecast to 2032,” Market Research Report HC-3678, October 2024.

²⁰ Ahmad Alasiri et al., “Economic Analysis of Robot-Assisted Surgery,” Surgical Innovation 31, no. 2 (2024): 145-159.

²¹ R. Nogueira et al., “AI-Assisted Stroke Detection: Clinical and Economic Outcomes,” Stroke 55, no. 3 (2024): 678-686.

²² Rohit Loomba et al., “Artificial intelligence-based assessment of liver disease,” Nature Medicine 30, no. 1 (2024): 234-243.

²³ WHO, “Ethics and Governance of Artificial Intelligence for Health,” World Health Organization, 2021.

²⁴ American Medical Association, “Trustworthy Augmented Intelligence in Healthcare,” AMA Policy H-480.940, June 2023.

²⁵ Michael Moor et al., “Foundation models for generalist medical artificial intelligence,” Nature 616 (2023): 259-265.

²⁶ Eric Topol, “The AI Revolution in Medicine: Foundation Models and the Path Forward,” Science 383, no. 6681 (2024): 366-368.

²⁷ “Hippocratic AI Platform Performance Metrics,” Hippocratic AI White Paper, October 2024.

²⁸ Oracle Health, “Clinical AI Agent Reduces Documentation Burden by 41%,” Oracle Health Report, September 2024.

²⁹ Elliott Green, “Trust and Validation: Healthcare AI’s Biggest Challenges,” Dandelion Health Blog, January 2025.

³⁰ Caroline Carney, “Why Healthcare AI Projects Fail,” Harvard Business Review, December 2024.

³¹ Luis Taveras, “Due Diligence in the Age of Healthcare AI,” Health Affairs Forefront, November 2024.

³² Rosalind McDougall, “Computer knows best? The need for value-flexibility in medical AI,” Journal of Medical Ethics 45, no. 3 (2019): 156-160.

³³ Insilico Medicine, “Phase 2a Clinical Trial Results for INS018_055,” Nature Biotechnology, forthcoming 2025.
