What Doctors Are For: Responding to The New Yorker on A.I. and Medicine
Diagnosis is only the beginning. The real work of medicine comes after, and A.I. can help patients and doctors face it together.
Dhruv Khullar’s recent essay in The New Yorker asked a question that will define the next era of medicine: if A.I. can diagnose patients, what are doctors for? The piece described a live demonstration at Harvard in which an experimental system called CaBot went head-to-head with an expert physician. CaBot reviewed the records, synthesized the data, and presented a crisp argument for a rare diagnosis. It did in minutes what often takes humans days or weeks. The comparison to Kasparov versus Deep Blue was obvious. For many in the room, the medical frontier had shifted.
I know that feeling firsthand. Earlier this year, after months of confusion and conflicting opinions, I was diagnosed with a rare blood cancer. At one point I ran my pathology reports and clinical history through an ensemble of A.I. agents. Each agent was trained on a different perspective: hematology, oncology, immunology. They did not all agree on the underlying diagnosis, but they converged on one point: I urgently needed a Free Light Chains test. That test became the turning point. It settled the debate and revealed the true nature of my disease. A machine had not only noticed something my doctors had missed, it had orchestrated a debate that pointed the way forward.
Diagnosis as Wayfinding
The conversation around A.I. in medicine often treats diagnosis like a courtroom verdict: you walk in with symptoms, the doctor or the machine pronounces the disease, and the case is closed. That is not how diagnosis works. Real doctors build a differential. They list possibilities and then order tests that separate one from another. Diagnosis is wayfinding, not a final answer.
Large language models are powerful here not because they “scan” live case reports but because they have been trained on vast libraries of medical knowledge: journals, textbooks, case studies, clinical guidelines, and research from decades past. They can synthesize what medicine collectively knows and surface possibilities no single human could hold in mind at once. The key is not that they provide a definitive answer, but that they suggest the test that will illuminate the path forward. My Free Light Chains test is one example. It was not a magic bullet. It was a directional arrow, the right next question in a maze of uncertainty.
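To make the wayfinding idea concrete, here is a minimal sketch of the kind of query a patient or clinician might send to a general-purpose model: asking not for a verdict, but for a differential and the single most informative next test. This is illustrative only; the model name, prompt wording, and the suggest_next_test helper are assumptions for the example, not a description of any specific product or of the workflow I used.

```python
# Illustrative sketch: ask a general-purpose LLM for a differential diagnosis
# and the one test that would best discriminate between the candidates.
# Assumes the `openai` package is installed and an API key is set in the
# environment; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

def suggest_next_test(case_summary: str) -> str:
    """Return a ranked differential plus the single most informative next test."""
    prompt = (
        "You are assisting with differential diagnosis.\n"
        "Given the case below, list the top candidate diagnoses, then name the "
        "single test that would best separate them and explain why.\n\n"
        f"Case summary:\n{case_summary}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(suggest_next_test("Months of fatigue, anemia, mildly elevated total protein..."))
```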
The Work Beyond Diagnosis
Even so, diagnosis is only the beginning. The harder work comes after. Once a patient knows the disease they face, they are quickly ushered into a series of treatment decisions. Should they choose an aggressive option with punishing side effects, or a conservative one that may leave the disease unchecked? Should therapy be sequenced one way or another? Should a clinical trial be considered?
Here is the secret many patients do not realize until they are living it: doctors often hand these choices back to you. They outline two or three options, sketch the trade-offs, and say, “You decide.” That is not because doctors are indecisive. It is because medicine at this frontier involves uncertainty and value judgments that only the patient can make. You are not just choosing a treatment. You are choosing what kind of life you want to preserve, what risks you are willing to accept, and what suffering you are willing to endure for the chance of more time.
That is why it is essential for patients to become experts in their own disease. Not to replace their doctors, not to order dangerous drugs off the internet, but to have better conversations in the clinic. Patients who understand their cytogenetics or their proteomic profiles can ask sharper questions. They can weigh the logic behind each option. They can partner in the decision rather than nod through jargon and sign whatever paperwork is handed over. A.I. is the tool that makes this possible. It translates the hieroglyphics of medicine into sentences, sentences into stories, and stories into choices.
The Limits of DIY Medicine
Khullar’s article also highlighted a worrying trend: patients turning directly to A.I. for treatment advice, sometimes with dangerous consequences. One man, concerned about his salt intake, asked ChatGPT for substitutes. The system recommended bromide, a compound once used in early anti-seizure medications but now known to cause profound neurological harm. He ordered it online. Within months, he was in the emergency room hallucinating, paranoid, and gravely ill. He had taken advice from a chatbot as if it were a doctor, and he nearly lost his life.
These stories are cautionary. They show the risk of treating A.I. as a replacement for doctors. The lesson is not that patients should avoid A.I. The lesson is that patients must use it wisely. A.I. is not a shortcut to safe medical practice. What it can do, when used properly, is equip patients to become experts in their own disease so they can walk into the clinic prepared. That means understanding why one drug is chosen over another, why one therapy is sequenced before another, and what trade-offs are on the table.
Doctors care about their patients, but each physician is responsible for hundreds of cases, and each day has only so many hours. Your doctors also want to get home to their families. You have just one case: yours. That math makes you the logical candidate to be the leading expert on your own disease. A.I. is the tool that makes it possible.
From Population Medicine to Precision Medicine
Modern medicine is still anchored in averages. Guidelines are designed for the median patient, but cancer and chronic disease splinter into thousands of subtypes. What matters is not what works for most, but what works for one specific case.
A.I. allows us to move from population medicine to precision medicine. It can compare a patient’s molecular profile to millions of records and identify which treatments have worked best for patients with similar signatures. It can forecast likely resistance paths and suggest interventions before resistance emerges. It can reveal when the standard of care is wrong for the outlier sitting in front of you. I lived this myself. A single translocation in my pathology report shifted me from a generic regimen to a targeted therapy that matched my biology. Without that clue, my treatment would have been standard. With it, it became precise.
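As a toy illustration of the matching idea, the sketch below scores a patient's molecular profile against a small cohort and averages outcomes per treatment among the closest matches. The feature vectors, treatment names, and response numbers are all invented for the example; a real system would draw on curated clinical and molecular data at far larger scale.

```python
# Toy sketch: compare a patient's molecular profile to a cohort and tally
# which treatments did best among the most similar cases.
# All data below is fabricated for illustration.
from collections import defaultdict
import math

def cosine(a, b):
    """Cosine similarity between two numeric feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_treatments(patient_profile, cohort, top_k=50):
    """Find the most similar records, then average the response per treatment."""
    nearest = sorted(
        cohort,
        key=lambda record: cosine(patient_profile, record["profile"]),
        reverse=True,
    )[:top_k]
    outcomes = defaultdict(list)
    for record in nearest:
        outcomes[record["treatment"]].append(record["response"])  # e.g. months of remission
    return sorted(
        ((treatment, sum(values) / len(values)) for treatment, values in outcomes.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Example: profiles are simplified numeric marker vectors (invented).
cohort = [
    {"profile": [1, 0, 1, 0.8], "treatment": "regimen A", "response": 14},
    {"profile": [1, 0, 1, 0.7], "treatment": "targeted therapy B", "response": 26},
    {"profile": [0, 1, 0, 0.2], "treatment": "regimen A", "response": 9},
]
print(rank_treatments([1, 0, 1, 0.9], cohort, top_k=2))
```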
The CureWise Model
This is the principle behind CureWise. Instead of relying on one model that pretends to know everything, CureWise runs multiple specialized agents and models. Each one is tuned to a different domain: genomics, hematology, immunology, oncology, pharmacology. They process the same case from different angles. Where they converge, confidence grows. Where they disagree, the disagreement becomes a map of the frontier.
The system does not stop at generating answers. It creates a chain of debate. Agents argue their reasoning, challenge one another, and refine their views before converging. This process mirrors the way the best clinicians think: not in isolation, but in dialogue. CureWise makes that dialogue visible to the patient. Instead of a black-box answer, you see the spectrum of reasoning, the trade-offs, and the gaps. The patient gains not just a recommendation but a map of uncertainty, a way to frame better questions for their doctors, and the tools to participate as an informed partner.
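In spirit, the orchestration reduces to a loop like the one below: each domain agent states a view, reads its peers' reasoning, and revises until positions stop moving. This is a simplified sketch of the pattern, not CureWise's actual code; the Agent class, its placeholder ask method, and the convergence check are assumptions made for illustration, and a real agent would call a domain-tuned model rather than return a canned string.

```python
# Simplified sketch of a multi-agent debate: each domain agent gives an opinion,
# sees the others' reasoning, and may revise, until views stop changing.
# Illustrative structure only, not CureWise's implementation.
from dataclasses import dataclass

@dataclass
class Agent:
    domain: str  # e.g. "hematology", "genomics", "pharmacology"

    def ask(self, case: str, peer_views: dict[str, str]) -> str:
        # Placeholder: a real agent would send the case plus the other agents'
        # current reasoning to a domain-tuned model and return its revised view.
        return f"[{self.domain}] assessment after weighing {len(peer_views)} peer opinion(s)"

def debate(case: str, agents: list[Agent], max_rounds: int = 5) -> dict[str, str]:
    """Run rounds of debate until no agent changes its position, or rounds run out."""
    views: dict[str, str] = {}
    for _ in range(max_rounds):
        new_views = {
            agent.domain: agent.ask(case, {d: v for d, v in views.items() if d != agent.domain})
            for agent in agents
        }
        if new_views == views:  # convergence: nobody revised their view this round
            break
        views = new_views
    return views  # agreement builds confidence; disagreement maps the frontier

agents = [Agent("hematology"), Agent("oncology"), Agent("immunology")]
for domain, view in debate("anonymized case summary ...", agents).items():
    print(domain, "->", view)
```

The point of the loop is less the final answer than the transcript it leaves behind: where the agents agree, where they diverge, and why.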
The Myth of Cognitive De-skilling
One of Khullar’s concerns is that A.I. may weaken doctors’ own diagnostic skills, a phenomenon known as cognitive de-skilling. The fear is that doctors who lean on machines will lose their edge. There is a kernel of truth here, but as an argument against A.I. it proves less than it seems.
Every great tool in human history has replaced a set of skills. When calculators arrived, people lost fluency in long division, but they were freed to tackle higher-order problems. When spreadsheets arrived, accountants stopped doing arithmetic by hand, but they became far more capable of modeling businesses and economies. When backhoes replaced shovels, humans lost the knack for digging perfect trenches, but they gained the ability to build cities.
The question is not whether a doctor could still do as well without A.I. The question is whether patients live longer, healthier lives when doctors and patients use A.I. together. That is the only benchmark that matters. There is no going back to a pre-A.I. world, and pretending otherwise only distracts from the real work.
Patients as Partners
The real danger is not that doctors will grow dependent on machines. It is that patients will remain dependent on a system designed for averages. Too often, shared decision making means a patient listening passively while a doctor sketches a plan. Real partnership looks different. It requires the patient to understand the disease well enough to question assumptions, to recognize the logic behind competing strategies, and to insist on exploring the edges of knowledge when the stakes demand it. A.I. is what makes this level of partnership possible.
What Doctors Are For and The Future We Need
So what are doctors for, if A.I. can diagnose patients? They are for judgment, for empathy, for holding the ethical line, for helping patients weigh risks and values. They are for knowing when less treatment is better than more, when comfort matters more than intervention, when the human being in front of them outweighs the data. A.I. can sharpen their thinking and extend their reach, but it cannot replace the human relationship at the heart of medicine.
I have lived both sides: the missed diagnosis by skilled physicians, and the life-saving insight surfaced by A.I. What I built for myself, a swarm of agents debating my case, has become CureWise, so no patient has to fight for their life without access to that intelligence.
The real question is not what doctors are for. The real question is how we combine human wisdom and machine intelligence so that every patient receives the care their biology demands. Diagnosis is the start of the story. The future of medicine is what comes after.


Save another life?
Hi Steve, I've become a big fan of your work, and, as you say, "time is of the essence" when it comes to cancer. So I'm reaching out in hopes you'll consider using your model to look at my case, even before it's offered publicly. Worst case, it's more data for your development effort. Best case, you save a life: mine.
I’ve been living with a (relatively) rare and aggressive cancer for over 2.5 years. Standard-of-care therapies have now failed, which is unfortunately the most common outcome with this diagnosis. So, with the full support of my medical team, I have chosen to step away from chemotherapy to preserve quality of life while exploring innovative approaches.
A few months into this process, I’ve deepened my acceptance of my own mortality while managing to create a vigorous and buoyant sense of health and wellness, and I have used this time to pursue the relationships and experiences that matter most. Luckily, that path has led me to you.
I recently retired from a decades-long career in software development, so I completely understand the logic and value of your approach: using a series of coordinated, specialized AI agents to synthesize global medical knowledge could be transformative in cases like yours and mine, where conventional paths are exhausted but actionable insights may still exist in scattered data.
I’m asking you to consider using my complete case history—including EPIC MyChart records, ctDNA quantitative and genetic-profile results, and my personal input—to test and advance your platform.
My oncology team at the University of Miami Sylvester Cancer Center fully supports my efforts to pursue alternatives and will be available, and of course I’m ready to facilitate seamless access.
This is an opportunity to greatly enhance the quality of my remaining life, perhaps even extend it, while helping to refine and operationalize CureWise and, I hope, demonstrate its real-world impact.
Thank you for your time and attention. I look forward to hearing from you.
—Fred Cooper
Innovation that will save lives!