unholy alliance

New Nuclear Plants Won’t Solve A.I.’s Energy Problem

Jennifer Granholm has floated the idea of giving A.I. data centers their own nuclear power plants.

Thomas Kronsteiner/Getty Images
Jennifer M. Granholm, U.S. secretary of energy, speaks at the 67th Regular Session of the General Conference of the International Atomic Energy Agency on September 25, 2023, in Vienna.

The Biden administration seems to want nuclear energy to solve some of A.I.’s thorniest problems while A.I. solves some of nuclear energy’s thorniest problems, catapulting both technologies into a glorious techno future. Speaking to Axios on Monday, Energy Secretary Jennifer Granholm explained that the Department of Energy is looking into building small-scale nuclear power plants at data centers, even though constructing an all-new nuclear power plant is so difficult that it hasn’t happened in the United States since 1996.

Evaluated in the context of both tech companies’ current initiatives and the Biden administration’s broader push toward nuclear energy, Granholm’s interview suggests she may see A.I. as a potential force multiplier for a nuclear energy resurgence. But optimism about this high-tech pairing requires the public to ignore the ways the two technologies could also intensify each other’s flaws, with potentially disastrous results.

A.I. and nuclear energy are seen by certain tech gazillionaires, notably OpenAI founder and CEO Sam Altman, as a sort of high-tech chocolate and peanut butter—just the flavor combo humanity needs to kick off some kind of golden age. Altman is chairman of the board of a nuclear energy company called Oklo, and told CNBC last year, “My whole view of the world is the future can be radically better and the two things that we really need for that are to lower the cost of energy and lower the cost of intelligence.”

In fairness, a crop of small nuclear power plants humming along, providing climate-friendly energy to the A.I. sector, would be greener than powering ChatGPT and its ilk with fossil fuels. Training A.I. systems on vast cloud computing arrays and running them on similar hardware are notoriously energy-intensive operations, with potentially devastating climate consequences if the technology is adopted widely in the coming years.

But “A.I. itself isn’t a problem,” Granholm said, “because A.I. could help to solve the problem.” The Axios story is frustratingly light on detail about what Granholm means here, but the clear implication is that A.I. will apply its infinitude of concentrated computing power to the issues plaguing nuclear power, and perhaps the U.S. electrical grid and power generation in general, and then everything will be hunky-dory. It’s unlikely Granholm came to believe this on her own. According to a report in The Wall Street Journal, last year a team of Microsoft researchers spent at least six months training a large language model on U.S. nuclear regulations and bureaucracy, in an attempt to design an A.I. system that can zip through the nuclear approvals process.

Using A.I. to fast-track nuclear power plants sounds almost like something A.I. critics would make up to smear the technology as dangerous, but Microsoft is actually doing it, and apparently the Biden administration wants more.

“The nuclear regulatory process is not bureaucratic for the sake of it, as nuclear power plants are highly complex cyber-physical systems that indeed take years to not only design, but to construct and to verify and validate,” Heidy Khlaaf, an expert in A.I. safety and safety-critical systems, told me. “Even the most minute of failures in a plant can cascade into a catastrophic or high-risk event. Claiming that there is an A.I. magic wand is a misunderstanding of both nuclear safety engineering and how A.I. systems fundamentally behave,” Khlaaf continued.

Biden campaigned on the idea that he would “identify the future of nuclear energy,” but thus far this president has mostly just poured money into it without moving the needle much on actual energy production. (For what it’s worth, a new reactor at an existing plant went online last year after about 14 years of construction.)

Granholm’s comments came, however, amid a sort of nuclear power victory lap. She was speaking in Michigan days after the administration announced a $1.52 billion loan guarantee aimed at putting a shuttered nuclear power plant in that state back online.

Viewed as one small step toward a nuclear-powered future, this probably sounds promising. More likely, however, it will prove a costly detour on the way to the collapse of nuclear energy, a category being desperately propped up by the wishful thinking of a small group that happens to include Altman, the current president, and Bill Gates. (A Gates-founded company called TerraPower claims that a plant it’s constructing in Wyoming will go online in 2028.)

“Reopening closed nuclear plants with subsidies, like what is occurring in Michigan, or keeping existing plants going with subsidies, like what is occurring in California and New York and elsewhere, is a waste of taxpayer money and increases CO2, air pollution, and costs to consumers,” Mark Jacobson, a Stanford University civil and environmental engineer, told me. Past research from Jacobson has clearly demonstrated that at least one use of tax subsidies for nuclear energy was economically and environmentally disastrous compared to simply subsidizing renewable energy. For all its concentrated, reliable, round-the-clock power generation, a nuclear plant is orders of magnitude more complex and controversial than even the ugliest and most expansive solar or wind project—thus the wind and solar construction boom we’re seeing right now.

So if the plan is to not just subsidize nuclear power in pursuit of more A.I. but also to encourage energy companies to use A.I. to slash red tape in pursuit of more nuclear power, what might that look like exactly? Terra Praxis co-CEO Eric Ingersoll did—sort of—explain this to the Journal: “What we’re doing here is training a [large language model] on very specific highly structured documents to produce another highly structured document almost identical to previous documents.”

If you’ve spent much time tinkering with ChatGPT, you’ve probably figured out that even when it produces a nice-looking document, it isn’t doing so through “reasoning or factual evidence,” as Khlaaf explained. Instead, LLMs use probabilities to fill each gap with whatever seems most likely to belong there, without accounting for the possibility that nothing should go there at all. This tendency to make things up is what an A.I. “hallucination” is.
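To make that mechanism concrete, here is a minimal sketch in Python of the gap-filling behavior described above. The vocabulary and probabilities are invented for illustration; a real LLM computes its distribution with a neural network conditioned on everything that came before, but the structural point is the same: the model always emits the likeliest-looking word, whether or not any candidate is actually correct.

    import random

    def next_token(context: str) -> str:
        # Pick the next word the way an LLM does: by sampling from a
        # probability distribution over candidate tokens. These numbers
        # are made up for illustration; a real model derives them from
        # the context with a trained neural network.
        candidates = {"approved": 0.45, "pending": 0.30,
                      "denied": 0.20, "withdrawn": 0.05}
        tokens, weights = zip(*candidates.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Note what is missing: "no such license exists" is not among the
    # candidates. The model fills the gap with something plausible-
    # sounding no matter what, which is the root of hallucinations.
    print("The reactor license was", next_token("The reactor license was"))

Nothing in that procedure checks the output against reality, which is why confident-sounding fabrication is a structural feature of these systems rather than an occasional glitch.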

“This is precisely why A.I. algorithms are notoriously flawed, with high error rates observed across applications that require precision, accuracy, and safety-criticality,” Khlaaf told me. You’ve probably seen what these systems can do and, more to the point, what their limits are. There aren’t secret A.I. systems out there that can be trusted with nuclear safety, “even if an A.I. were to only be specifically trained on nuclear documentation,” Khlaaf explained. “Producing highly structured documents for safety-critical systems is not in fact a box-ticking exercise. It is actually a safety process within itself.”