Will AI Cure Cancer?
“Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer.” (Sam Altman)
When Sam Altman suggested that with “ten gigawatts of compute” AI could figure out how to cure cancer, he wasn’t hedging. He was stating what abundant intelligence will make possible. Build enough compute, and AI figures it out.
That’s a bold claim because curing cancer has defied every breakthrough before it. Antibiotics conquered infection. Vaccines eliminated polio. Statins reined in heart disease. Cancer has shrugged them all off. It adapts, evolves, hides. It’s not one disease but millions, each with its own molecular fingerprint, each requiring its own strategy.
The question isn’t whether Altman is being provocative. He is. The question is: Could he be right? And if so, what would it take to get there?
I think he’s right because I’ve seen what happens when you point the most advanced AI at the exact problem oncologists face every day: too much data, too little time, and lives hanging on patterns humans cannot connect fast enough. The path from here to there isn’t mysterious. It’s hard, expensive, and demands infrastructure at a scale we’ve never built.
Here’s what that path looks like.
Why Oncology Breaks
Cancer hides millions of diseases under a single label. Three patients with “breast cancer” on their charts can have entirely different molecular drivers, vulnerabilities, and destinies. One might carry a BRCA mutation, a second HER2 amplification, a third a TP53 deletion with chromosomal instability. And there are countless more variants we haven’t even found. Same organ, same code, different biology.
This is where the twentieth-century model of medicine is hitting a wall. We built our evidence system for uniform diseases: strep throat, hypertension, diabetes. Randomize thousands of patients, average the outcomes, write guidelines. That logic works when the disease is fundamentally the same across patients. Cancer breaks that logic.
The more you divide by subtype, the smaller each group becomes, until no trial has enough power to mean anything. So we compromise. We force patients into broad categories and treat them based on what worked for the average. Some respond. Many don’t. And we call it the standard of care.
A patient sits in the exam room, asking what will happen. The oncologist points to statistics. “This drug works in thirty percent of patients with your diagnosis.” But to the person in the chair, that number means nothing. For them, it’s binary: it works or it doesn’t. They don’t need a percentage; they need a plan for their cancer, not someone else’s.
The system isn’t cruel. It’s overwhelmed. Last year, 1.3 million papers were indexed in PubMed, over 3,500 a day. No human can keep up. No tumor board can synthesize that flood of knowledge. The information exists. The intelligence to deploy it does not.
What AI Sees That We Can’t
Feed a frontier model thousands of patients’ genomic data, molecular profiles, imaging scans, treatment histories, and outcomes, and it begins to connect faint signals—mutations, translocations, protein-level correlations invisible to humans—across scales beyond human comprehension.
This isn’t hypothetical. Multi-agent AI systems already match cytogenetic profiles to treatment responses with a precision population guidelines can’t touch. They can infer from patterns in case studies that were once dismissed as anecdotal but that, taken en masse, form a new kind of knowledge: Others with your exact fingerprint were treated this way, and here’s what happened.
The transformation isn’t about AI making diagnoses, though it can catch what humans miss. It’s about turning overwhelming complexity into something navigable. What looks like noise to a clinician working from memory and heuristics resolves into pathways, vulnerabilities, and strategic options.
The Architecture Of Precision
The key lesson I learned building AI for oncology is simple: no single model can capture cancer inside one large black box. Its complexity overwhelms any one agent or prompt. It takes a mixture of agents and models working together, each with a different view of the problem, to surface the pathways hidden at every layer.
A genomics agent parses mutations, a pathology agent reads tumor cells, an immunology agent profiles immune evasion, a pharmacology agent maps drug interactions, and a clinical trials agent finds experimental therapies.
Then comes a synthesis layer, an orchestrator that fuses specialized insights into one coherent strategy. It doesn’t average their outputs; it reasons across them. It allows agents to prompt each other. If the genomics agent spots a driver mutation, the immunology agent sees an immune phenotype, and the pharmacology agent flags a metabolic weakness, the system asks: Which combination gives the best odds?
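The division of labor above can be sketched in a few lines. Everything here is hypothetical: the agent rules, markers, and drug classes are illustrative stand-ins for what would, in a real system, be frontier models reasoning over each modality. The point is the shape, specialized agents emitting findings and an orchestrator fusing them into one strategy rather than averaging them.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str
    observation: str
    suggests: str  # therapeutic angle this finding points toward

# Toy "agents": each reduces one data modality to a finding.
def genomics_agent(mutations):
    if "EGFR_L858R" in mutations:
        return Finding("genomics", "EGFR driver mutation", "EGFR inhibitor")
    return Finding("genomics", "no known driver", "no targeted signal")

def immunology_agent(profile):
    if profile.get("pd_l1_pct", 0) >= 50:
        return Finding("immunology", "high PD-L1 expression", "checkpoint inhibitor")
    return Finding("immunology", "low immune visibility", "no immunotherapy signal")

def orchestrate(findings):
    """Fuse findings into one joint strategy, not an average of outputs."""
    angles = [f.suggests for f in findings if "no " not in f.suggests]
    return " + ".join(angles) if angles else "standard of care"

patient = {"mutations": ["EGFR_L858R"], "immune": {"pd_l1_pct": 60}}
plan = orchestrate([
    genomics_agent(patient["mutations"]),
    immunology_agent(patient["immune"]),
])
print(plan)  # EGFR inhibitor + checkpoint inhibitor
```

In a production system each agent would be a model with its own context and tools, and the orchestrator would let agents query one another; the hard part is that synthesis step, not the individual specialists.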
This is architecture, not science fiction. The models are improving. The compute is coming. What doesn’t exist yet is the application layer to deploy this intelligence to patients and clinicians, learn from each response, and feed that back to improve the next prediction.
Anticipating Evolution
Even when treatments work initially, tumors evolve. They develop resistance. Patients know the rhythm: shrinking scans, a season of hope, then the follow-up that shows growth again. It feels like betrayal.
But tumor evolution isn’t random. Block one pathway and backups activate. Target one driver mutation and resistant subclones emerge. Apply immune pressure and the tumor adapts.
AI will learn to model escape routes before they appear. By studying thousands of cancers with similar molecular profiles—how they responded, relapsed, and resisted—the system will figure out which defense mechanisms are most likely for any given starting point.
Strategy shifts from reaction to anticipation. Treatment will no longer be a sequence of guesses but a plan mapped several moves ahead: an initial drug paired with one that blocks the likeliest escape route, every step modeled before it’s needed.
Patients won’t just endure treatment and hope. They’ll receive adaptive plans that evolve as their cancer evolves, drawn from the experience of thousands who came before.
Evidence That Learns
The current evidence system treats individual cases as noise: an N-of-1 patient doesn’t count until they’re blended into a cohort of thousands, years later, after a formal trial. That makes no sense when AI can synthesize individual outcomes at scale.
Every patient becomes a learning event. Their molecular profile, treatments, responses, and resistance mechanisms all feed back into the system. Not as an anecdote, but as structured data to refine the model’s predictions for the next patient with a similar fingerprint.
This inverts the traditional relationship between individual and population. Population trials told us what works on average, and individuals had to accept that their response might differ. AI-driven evidence says: Based on everyone who came before you with cancers like yours, here’s what’s most likely to work for you.
The more patients the system sees, the more precise it will become. Not because it finds simpler patterns, but because it learns to navigate higher-dimensional complexity. It’s not reducing diversity. It’s thriving on it.
For patients, the experience changes completely. You won’t be alone with a rare mutation that no one understands. You become part of a global learning system. Your case matters immediately, not years from now when the trial results finally publish. Your lived experience of what worked, what didn’t, and what you endured will shape treatment for others.
Why This Requires Planetary Scale
None of this works without compute at planetary scale. It demands multimodal integration across genomics, imaging, clinical notes, labs, and trial data; hypothesis generation for new drug combinations; evolutionary simulation across thousands of possible resistance pathways; and continuous learning from millions of patient trajectories.
Training frontier models already costs tens of millions of dollars. Running evolutionary simulations demands infrastructure most institutions lack. Maintaining a global learning system that updates with every new outcome isn’t just a new model of research; it’s critical infrastructure.
This is what Altman meant by ten gigawatts. Curing cancer isn’t just biology. It’s energy, compute, system architecture, and intelligent applications. It won’t be solved by a clever model on a desktop. It demands an intelligence infrastructure built for the scale of the problem: millions of unique diseases, billions of molecular interactions, trillions of possible treatment combinations.
The infrastructure exists in fragments. Cloud compute, genomic databases, clinical registries, drug interaction databases, trial networks. What’s missing is the intelligence layer that connects, reasons, learns, and delivers that knowledge to every patient and clinician.
Building that layer is the project. That’s our mission at CureWise.
What Stands In The Way
The barriers are real but not insurmountable. Data is fragmented and siloed, locked in proprietary databases and incompatible formats. Incentives are misaligned: pharmaceutical companies, health systems, and researchers don’t share by default. Regulation lags behind systems that learn and adapt in real time. Even when predictions prove accurate retrospectively, clinical validation moves glacially.
And biology is genuinely hard. Tumors evolve in ways no model perfectly predicts. Immune responses vary with genetics and environmental exposures we don’t fully understand. Drug interactions cascade through metabolic pathways still being mapped.
These are design problems, not physical limits. Policy and incentives can break data silos. Regulation will evolve, as it always does when forced. Validation can accelerate when intelligent systems synthesize real outcomes in real time, not years later.
Biological complexity is the only immovable constraint. With AI, that’s no longer a reason to slow down. It’s the reason to build intelligence systems sophisticated enough to handle it.
Precision At Scale
Will AI cure cancer? Not with one discovery, one drug, or one insight. AI is the tool that will turn millions of unique diseases into millions of precise treatments.
It sees patterns across dimensions no human can. It can match molecular fingerprints to therapeutic strategies with accuracy population guidelines will never approach. It can anticipate resistance before it emerges. It can synthesize evidence from every patient journey into predictions. It can finally operate at the scale of biology itself.
Altman’s claim wasn’t hype. He recognized that cancer won’t fall to averages but to precision delivered at planetary scale. AI is the first tool that can meet that challenge.
The cure will not come in a moment but in millions of precise moves: patient by patient, mutation by mutation, insight by insight. For the first time, the path forward looks clear. What’s needed is the will to build the infrastructure, break the silos, align the incentives, and deploy the intelligence where it matters most: at the bedside, for the person in the chair, whose life depends on whoever or whatever connects the dots faster than cancer can evolve.
That future is ours to build. The question is how fast we choose to begin.

