How America Lost the Atomic Age

The High Flux Isotope Reactor at Oak Ridge National Laboratory. Photo: Genevieve Martin.

To understand the possibilities that nuclear fission has opened for industrial civilization, we must first understand how industrialization became possible to begin with: the burning of vast amounts of coal for energy physically enabled nineteenth-century steam technology and economies of mass production. The reason for this is simple. Coal is around twice as energy-dense as wood, and in 1770 a typical coal mine could produce as much energy as a 5,500-hectare forest every year.

By the mid-nineteenth century, annual English coal consumption had grown to the energy equivalent of a forest covering 96 percent of English farmland. To this day, coal remains critical to the world’s energy infrastructure, accounting for nearly 28 percent of global primary energy as of 2021. Meanwhile, the total energy generated by coal in 2021 was 42-fold greater than in 1860. But if access to more energy-dense fuels is the driving force behind industrial progress, we should view the prevalence of coal as evidence that something has gone very wrong.

Ever since December 20th, 1951, when nuclear fission was first used to generate electricity, humanity has had the technical capacity to harness uranium. The quality of uranium ore varies, but typical ores are over fifty times more energy dense than coal.
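
A hedged back-of-envelope makes the comparison concrete. Every input in the sketch below is a rough illustrative assumption, not a figure from this article:

```python
# Back-of-envelope comparison of uranium ore and coal energy density.
# Every input is a rough illustrative assumption.

COAL_MJ_PER_KG = 24          # typical bituminous coal
U_NAT_MJ_PER_KG = 500_000    # ~500 GJ of heat per kg of natural uranium
                             # in a once-through light-water reactor
ORE_GRADE = 0.005            # a 0.5% uranium ore, by mass

ore_mj_per_kg = U_NAT_MJ_PER_KG * ORE_GRADE
print(f"Energy per kg of ore: {ore_mj_per_kg:,.0f} MJ")
print(f"Ratio to coal:        {ore_mj_per_kg / COAL_MJ_PER_KG:.0f}x")
# With these assumptions a 0.5% ore comes out near 100x coal; the exact
# ratio scales directly with the ore grade.
```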

Seventy years later, nuclear energy accounts for 19 percent of U.S. electricity generation. We have barely begun to utilize its enormous potential. Only three new nuclear plants have been built since 1993 while many more have been decommissioned, and global nuclear power output peaked in 2006. Today, it can be difficult to imagine that the fate of nuclear energy in America once looked much brighter.

The Nuclear Industry Before Three Mile Island

America’s first nuclear power plant at Shippingport, Pennsylvania, achieved criticality and started producing electricity in 1957. At the time, Admiral Rickover, known as the “Father of the Nuclear Navy,” remarked that the construction of nuclear power plants was “an art, not a science. We are trying to make it a science.” As far as experimental art projects go, Shippingport was fairly expensive—the relatively small 60-megawatt power plant cost a little over $750 million in inflation-adjusted 2022 dollars.

Pricey demonstration projects quickly gave way to a period of declining costs in nuclear plant construction. By the late 1960s, overnight construction costs (which exclude interest) were down to around $1,000 per kilowatt of capacity (kWe) for the less expensive reactors, which took three to five years to build. This is slightly cheaper than building a combined-cycle natural gas plant today. Moreover, fuel costs for a nuclear plant are only a few percent of its total cost, compared to coal and gas plants, where fuel accounts for at least half of total cost.
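
To see how differently the two technologies’ costs are structured, here is a minimal sketch; every number in it is an illustrative assumption rather than a figure from this article:

```python
# Rough illustration of fuel-cost share for nuclear versus gas generation.
# All numbers are placeholder assumptions chosen for the comparison.

def fuel_share(fuel_cost_per_mwh, total_cost_per_mwh):
    """Fraction of the total cost of a megawatt-hour that goes to fuel."""
    return fuel_cost_per_mwh / total_cost_per_mwh

# Nuclear: mining, enrichment, and fabrication on the order of $6/MWh,
# against a total cost dominated by recovering the plant's capital.
nuclear_share = fuel_share(fuel_cost_per_mwh=6, total_cost_per_mwh=90)

# Combined-cycle gas: ~6.5 GJ of gas per MWh at ~55% efficiency; at $4/GJ
# that is ~$26/MWh of fuel against a comparatively small capital charge.
gas_share = fuel_share(fuel_cost_per_mwh=26, total_cost_per_mwh=45)

print(f"Nuclear fuel share: {nuclear_share:.0%}")  # roughly 7%
print(f"Gas fuel share:     {gas_share:.0%}")      # roughly 58%
```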

But in the early 1970s, something strange began to happen. In defiance of all principles of industrial organization, the price of a nuclear power plant began to rise. New plants became 102 percent more expensive in real terms with each doubling of total installed capacity, and construction costs for reactors completed in the last few years before the Three Mile Island meltdown in 1979 were closer to $2,500/kWe. If anything, this figure understates the true escalation, since overnight costs exclude financing and interest rates were hitting record highs in 1973 and ’74.
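
A 102 percent increase per doubling amounts to a steep “negative learning curve.” The sketch below simply assumes the $1,000/kWe starting point and the 2.02x multiplier mentioned above to show how quickly such a trend compounds:

```python
import math

# Sketch of the "negative learning curve" implied by costs rising 102%
# with each doubling of cumulative capacity. Illustrative only.

ESCALATION_PER_DOUBLING = 2.02          # costs multiply by 2.02 per doubling
b = math.log2(ESCALATION_PER_DOUBLING)  # exponent in C(Q) = C0 * (Q/Q0)**b

def overnight_cost(initial_cost_per_kw, capacity_ratio):
    """Cost per kW after cumulative capacity grows by capacity_ratio."""
    return initial_cost_per_kw * capacity_ratio ** b

for doublings in range(4):
    cost = overnight_cost(1000, 2 ** doublings)
    print(f"{doublings} doublings: ${cost:,.0f}/kWe")
# 0 doublings: $1,000/kWe ... 3 doublings: roughly $8,200/kWe
```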

Meanwhile, lengthy construction delays became commonplace. Although none of the nuclear accidents we are all now familiar with had yet happened, orders for new reactors in the U.S. fell to zero by 1978. While the years after the Three Mile Island meltdown were characterized by stratospheric construction costs and even longer construction delays, we cannot reasonably point to the accident as the ultimate source of nuclear power’s economic plight.

What remains to be explained, then, is why nuclear power plants stopped getting cheaper and have now spent the last half century getting more expensive. The answer comes in three parts: bad science, bad ideology, and bad incentives.

A Closer Look at the Tail Risk

While it is increasingly common knowledge that nuclear is among the safest methods of generating electricity, it is at least as common to assert that nuclear comes with significant tail risk. The spiraling cost of construction is then rationalized as a necessary evil to mitigate this tail risk down to an acceptable level, in terms of either the probability or the severity of failure. This proposition contains two errors.

The first error is one of bad policy. Utilitarian life-year calculation is limited in general, but it has some very good applications: in a rationally regulated energy marketplace, a statistical life-year should be valued the same regardless of how it is threatened. There is no good justification for regarding a year of life lost due to thyroid cancer from the Chernobyl accident differently than a year lost due to coal-driven air pollution in northern India, dangerous conditions at a polysilicon factory in Xinjiang, or the rupture of a hydroelectric dam. But our regulatory frameworks treat each of these differently. If we take this principle seriously, it must be that either fossil fuels, and in particular coal, are very underregulated, that nuclear is very overregulated, or some combination of both.

We might conclude that coal is simply underregulated, but we should also consider that per-capita energy use is strongly and positively correlated with life expectancy and other measures of prosperity. The Industrial Revolution could not have happened if health boards in England had treated the risks from coal the way we treat perceived nuclear risks today.

The second error in rationalizing construction cost increases is that the reasoning stems from bad health physics; the way we perceive the risks of nuclear power is largely incorrect. Radiation is everywhere. Background radiation dose rates vary from place to place but tend to fall between 1.5 and 3.5 millisieverts (mSv) per year. A few localities feature annual background doses of dozens of millisieverts, while some residents of Ramsar, Iran, receive as much as 260 mSv per year. For comparison, the most radioactive places in the Chernobyl exclusion zone receive about 2,600 millisieverts per year, but most of the exclusion zone is below 6 millisieverts per year. Even at the upper end of the dose spectrum, studies continue to find no convincing link between background radiation and cancer incidence. And while it remains theoretically plausible, no association between any level of radiation exposure and hereditary defects has ever been observed in humans.

The linear no-threshold (LNT) model of radiation dose-response, which is used by regulators the world over to assess the health risks of ionizing radiation, has never actually been substantiated. The model holds that any amount of radiation exposure whatsoever increases the lifetime risk of cancer, with the risk being a linear function of the dose received. Because a key assumption of the model is that all damage is linear and cumulative, doses are summed over a lifetime, and the length of time over which a given dose is received is considered irrelevant. LNT was, however, conceived before the discovery of DNA repair mechanisms, which are precisely what make the rate of exposure matter.
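
As a minimal sketch of what the model assumes (the risk coefficient below is a placeholder chosen for illustration, not a value taken from this article):

```python
import math

# Minimal sketch of the linear no-threshold (LNT) assumption.
# The risk coefficient is an illustrative placeholder.

LNT_RISK_PER_SV = 0.05  # assumed excess lifetime cancer risk per sievert

def lnt_excess_risk(doses_sv):
    """Under LNT only the lifetime sum matters; dose rate is ignored."""
    return LNT_RISK_PER_SV * sum(doses_sv)

chronic = [0.003] * 70   # seventy years at a 3 mSv/yr background rate
acute = [0.210]          # one acute 210 mSv exposure

# LNT assigns these two histories essentially the same excess risk,
# because only the cumulative dose enters the calculation.
print(math.isclose(lnt_excess_risk(chronic), lnt_excess_risk(acute)))  # True
print(f"Assigned excess risk: {lnt_excess_risk(chronic):.2%}")         # ~1.05%
```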

Initially conceived as a description of how radiation induces mutations, the model found its first supposed experimental support in 1927, when the geneticist Hermann Muller bombarded fruit flies with radiation. He then measured transgenerational phenotypic changes at various dose rates, the lowest of which was nearly a hundred million times higher than the background rate. Unfortunately, his data was not included in his 1927 Science publication, and even the 1928 Genetics Conference proceedings paper did not include a control group or discuss his methods. More consequentially, Muller linearly extrapolated the dose response from the lowest dose given down to zero. Despite the thin justification for his findings, he was awarded a Nobel Prize.

After the Second World War, the shock of Hiroshima and Nagasaki generated considerable impetus to eliminate nuclear weapons, and many people previously involved in nuclear research turned against the technology. The Rockefeller Foundation, which had been intimately involved in the development of the bomb, had a public relations problem on its hands. Following the death and several injuries that resulted from the Castle Bravo hydrogen bomb test in 1954, the public mood was ripe for someone to step in and provide a solid rationale to ban atmospheric testing.

In 1956, the Foundation played a critical role in funding and convening the National Academy of Sciences’ Biological Effects of Atomic Radiation (BEAR) committee, and the committee’s chairman was a vice president of the Foundation’s scientific branch. Several other committee members were recipients of Rockefeller grants.

According to transcripts forgotten for decades, there was no debate at the event. The committee immediately coalesced around LNT, even though no human data was used to substantiate the committee’s conclusions. The day after the committee released its findings, the headline “Scientists Term Radiation a Peril to Future of Man” appeared on the front page of The New York Times. As it happens, the publisher of the NYT in 1956, Arthur Hays Sulzberger, was also a trustee of the Rockefeller Foundation. 

The Foundation’s exact motivation in pursuing the entrenchment of LNT is debated. Jack Devanney, author of Why Nuclear Power Has Been a Flop, has pointed out that the documentation suggests the Foundation was attempting to absolve itself of guilt for its involvement in the Manhattan Project. Prominent nuclear advocate Rod Adams argues that since the vast majority of the Rockefeller Foundation’s funding came from oil dividends, it would have had strong incentives to quash any potential threat to this revenue stream. Direct evidence of its motivations, however, remains elusive.

A year after the BEAR committee’s article, an American geneticist named Edward Lewis continued the LNT tradition. His paper “Leukemia and Ionizing Radiation” examined the epidemiological data from the survivors of the Hiroshima bombing. While the dose of radiation received from the A-bomb tapered off gradually with distance from ground zero, Lewis lumped all the nearby “Not-in-the-City” subjects, who received varying low doses, into a single control group. Since these subjects actually displayed lower rates of cancer than residents of other unaffected areas of Japan, the control set an artificially low baseline risk and thus inflated the apparent risk for the rest of the subjects.
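
A toy example makes the distortion concrete. The rates below are invented purely to illustrate the mechanism, not taken from Lewis’s data:

```python
# Toy illustration of how an artificially low control baseline inflates
# apparent relative risk. All rates below are invented for the example.

exposed_rate = 12e-5       # leukemia rate observed in an exposed group
true_baseline = 10e-5      # rate in a representative unexposed population
deflated_baseline = 6e-5   # rate in a control group that happens to be
                           # healthier than the population at large

print(f"Relative risk vs. a representative control: {exposed_rate / true_baseline:.2f}")
print(f"Relative risk vs. the deflated control:     {exposed_rate / deflated_baseline:.2f}")
# A modest 1.2x elevation looks like a 2.0x elevation when measured
# against a control whose baseline has been set too low.
```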

The National Council on Radiation Protection and Measurements adopted the LNT model in 1958, with the EPA following suit in 1975, citing another study that was found, nearly a quarter-century later, to suffer from the same problem of an errant control. Stripped of this error, the paper does not support LNT. No changes were made to radiation risk assessments in response to the discovery of this error.

Assessing the risks of nuclear power using more rigorous science yields some unexpected conclusions. The partial meltdown at Three Mile Island exposed no one to more than a millisievert of radiation. This is about one-hundredth of the smallest acute dose for which even a tentative link to any increase in cancer incidence has been found. Most people in the surrounding area received closer to one-hundredth of a millisievert. Chernobyl, on the other hand, was one of the few high-power channel-type (RBMK) reactors ever built, all of which were in the Soviet Union. The disaster was also not a meltdown but an explosion, one that could only have happened because of that less stable reactor design.

Even so, using LNT to assess the damage done by Chernobyl leads to an overestimation of the harm. According to the director of the Chernobyl Tissue Bank, Gerry Thomas, the Chernobyl accident is likely to have resulted in no more than 160 deaths once all latent thyroid cancers are accounted for. Estimates using LNT routinely conjure numbers in the thousands or tens of thousands. 
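
Those larger figures come from a simple piece of arithmetic: multiply an assumed population-wide “collective dose” by a fixed LNT risk coefficient. Both inputs in the sketch below are illustrative placeholders rather than figures from this article:

```python
# How LNT-style death tolls are generated: a population-wide collective
# dose times a fixed risk coefficient. Both inputs are placeholders.

collective_dose_person_sv = 400_000  # assumed collective dose across Europe
fatal_risk_per_person_sv = 0.05      # assumed fatal-cancer risk per sievert

projected_deaths = collective_dose_person_sv * fatal_risk_per_person_sv
print(f"LNT-projected deaths: {projected_deaths:,.0f}")  # 20,000

# Spread over hundreds of millions of people, the same collective dose
# implies individual doses so small that no excess cancer has actually
# been observed at them.
```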

The discussion around Fukushima looks similarly bizarre when set against the actual damage inflicted. Only 167 people received doses over 100 mSv, and only nine of them received more than 200 mSv. All of them were plant workers or contractors. One worker who received among the highest doses also suffered a beta burn, which is like a sunburn but caused by electron radiation. While these workers may have had their lifetime risk of cancer increased marginally, the real human impact of the disaster lay in the chaotic evacuations of the hospitalized and the elderly, which caused the deaths of around 1,600 people. Though high-energy processes like nuclear fission can certainly be dangerous in the wrong circumstances, the very energy density that creates this danger also makes it easier to confine.

The erroneous use of LNT has been the source of many misunderstandings since the model gained wide adoption. Analogous models are used to perform risk assessments for other carcinogens, but they do not tend to evoke the same visceral sense of fear. To understand this psychological asymmetry, it’s necessary to examine the history of the anti-nuclear movement.

Twentieth-Century Malthusianism and Its Corollaries

The intellectual currents of the mid-twentieth century were shaped not only by the unprecedented scale of death wrought by the Second World War, but also by the unprecedented scale of life made possible by the technological advances of the industrial age. To some, the latter was just as alarming as the former. The first indications of this concern entering the zeitgeist can be found in the 1948 best-sellers Our Plundered Planet by Fairfield Osborn and Road to Survival by William Vogt; these works portrayed apocalyptic visions of resource depletion and environmental devastation inflicted by unsustainably large human populations. Both books often made their way onto mandatory reading lists at institutions of higher learning and left their mark on a large cohort of budding post-war intellectuals.

In 1968, butterfly population biologist Paul Ehrlich published his seminal book The Population Bomb, which had an even greater impact on public discourse. The book argued that within a few years, the world’s rapid population growth would outstrip food supplies, leading to shortages that would threaten the very existence of civilization. Though this hypothesis had been around—and had failed to materialize—since Thomas Malthus first proposed it in 1798, Ehrlich and his fellow travelers thought this time was different. 

Much more so than in Malthus’s time, the new movement placed a strong emphasis on the environmental consequences of an unsustainably growing population as it overshot its limits and collapsed. Nuclear power was seen as enabling this capacity to overshoot, and this perception served as the intellectual foundation of the environmental movement’s hostility to it. Environmentalists at the time were not yet aware of the “demographic transition” through which developed countries move toward lower fertility rates. Though the benefits of nuclear power, which were obvious to the public of the 1950s and ‘60s, appealed to some segments of the environmentalist community, the generation of the movement educated after the war generally gravitated towards the new anti-nuclear position rooted in neo-Malthusianism.

Though environmentalists saw combating the expansion of nuclear power as a way of containing runaway industrialization and population growth, there seems to have been an understanding that the general public would not adopt this cause on its own terms. Instead, they would need to use concerns about reactor safety, nuclear waste, and weapons proliferation as a public motivation.

We know this because activists had a proclivity for saying so when they believed they were speaking to a sympathetic audience. “Our campaign stressing the hazards of nuclear power will supply a rationale for increasing regulation…and add to the cost of the industry,” declared then-Sierra Club Executive Director Michael McCloskey in 1974. In an even more candid admission about the nature of the group’s strategy, another prominent Sierra Club figure, Martin Litton, opined: “I really didn’t care [about possible nuclear accidents] because there are too many people anyway…I think that playing dirty if you have a noble end is fine.”

Throughout the 1970s, lawsuits, protests, and an increasingly onerous regulatory environment dogged efforts to build nuclear plants across the country. The National Environmental Policy Act (NEPA), passed in 1969, made lawsuits easy to deploy as a weapon against large infrastructure projects, turning nuclear power plants into prime targets. In 1970, section 309 of the Clean Air Act looped the EPA into the nuclear plant licensing process via review of the environmental impact statement. The 1971 Calvert Cliffs’ Coordinating Committee, Inc. v. Atomic Energy Commission decision, which held that the Atomic Energy Commission (AEC) was obliged by NEPA to consider the environmental impacts of a nuclear plant irrespective of whether a challenge had been issued to it, led the AEC to suspend all licensing of new plants while it revised the process to comply with the new ruling.

The ruling left the success of nuclear power vastly more dependent on the cooperation of the EPA, which counted many anti-nuclear personnel among its ranks. Less than a year after leaving his post as head of the EPA, which he led from 1973 to 1977, Russell E. Train announced his support for “the phasing out and eventual elimination of nuclear power,” a view which had “not been expressed while he served in former President Ford’s cabinet.” Given the intellectual climate that prevailed in the environmental movement, it would be naïve to assume that Train’s views arose spontaneously upon leaving the EPA or that they were exceptional within the institution.

The Nuclear Industry Against Nuclear Power

Like nuclear power, many industries have public relations problems. Weapons, tobacco, fossil fuel, clothing, alcohol, and chemical companies all reckon with activists, regulators, or both to maintain their businesses. Nonetheless, they have generally avoided the problems of the nuclear industry. It would seem strange, then, that nuclear power has failed so spectacularly to fulfill its potential.

Unlike most of these industries, however, nuclear power is part of the electricity sector. In the 1970s, this meant plants were operated and often built by a regulated utility monopoly. Since the cost to generate each kilowatt-hour of electricity had been falling for decades, and inflation had generally been modest, public utilities had ample room to increase profits by decreasing costs. After all, decreasing costs with constant prices generates increasing profits. 

Unfortunately, the high inflation of the early 1970s would eliminate and then reverse this incentive. As utility companies’ costs began to increase irrespective of improvements in efficiency, rate hikes became necessary to maintain profitability. At this point, utilities engaged in gold plating: because a regulated utility earns its allowed return on the capital in its rate base, padding that base and the associated service charges became the only route to greater profits. The timing of this sea change in incentives aligns closely with the increase in construction costs for nuclear power plants.
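
A toy model shows the arithmetic behind the gold-plating incentive. The allowed return and plant costs below are illustrative assumptions, not figures from this article:

```python
# Toy model of the "gold plating" incentive under rate-of-return regulation.
# The allowed return and plant costs are illustrative assumptions.

ALLOWED_RETURN = 0.10  # regulator permits a 10% return on the rate base

def annual_profit(capital_in_rate_base):
    """Prudent capital spending enters the rate base, and the utility
    earns its allowed return on every dollar of it."""
    return ALLOWED_RETURN * capital_in_rate_base

lean_plant = 1_000_000_000         # a plant built for $1 billion
gold_plated_plant = 3_000_000_000  # the same plant built for $3 billion

print(f"Profit on the lean plant:        ${annual_profit(lean_plant):,.0f}/yr")
print(f"Profit on the gold-plated plant: ${annual_profit(gold_plated_plant):,.0f}/yr")
# Once efficiency gains can no longer outrun inflation, a bigger rate base
# becomes the most reliable lever left for growing profits.
```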

The regulatory environment also included the Nuclear Regulatory Commission (NRC) by this time, established under the Energy Reorganization Act of 1974. Its predecessor organization, the Atomic Energy Commission (AEC), had been tasked with promoting and regulating the use of nuclear energy in the United States. The NRC, on the other hand, was tasked solely with regulating the technology, as the dual mandate of the AEC was thought to be a conflict of interest. By 1978, nuclear developers had little prospect of selling any more plants to utility companies. Years of increasing costs had slowed down consumer demand significantly, dampening the utilities’ enthusiasm for expanding asset bases. As such, orders for new nuclear plants dropped to zero in that year.

The new regulatory framework still created a very profitable niche into which nuclear power plant developers could step. This consisted of selling equipment and services, generally mandated by new regulations, to utilities operating existing nuclear plants. Without the countervailing mandate to promote nuclear power, the NRC would prove indifferent to whether new regulations were conducive to the flourishing of the technology. Without the prospect of selling new plants, the developers could grow their bottom line only by maximizing the expenditure necessary to maintain existing plants. This cocktail of incentives made it impossible for nuclear power to regain momentum, even once inflation, so important in governing utility behavior, had been tamed. By the time of the 1979 Three Mile Island accident, there were no serious commercial or political interests defending the expansion of nuclear power—including the industry’s own developers.

However, not all bad incentives contributing to the deceleration of nuclear power development operated from within the nuclear industry. The fossil fuel industry has been intimately involved in campaigns to suppress its atomic rival for at least half a century. While the Rockefeller Foundation’s dogmatic support for LNT could have been motivated by other concerns, some of the industry’s later acts appear more transparently mendacious.

For instance, when Sierra Club executive director David Brower resigned from the organization in 1969 over its willingness to compromise on the construction of the Diablo Canyon nuclear power plant, oil tycoon Robert Orville Anderson stepped in with half a million dollars to fund his anti-nuclear alternative, Friends of the Earth. The Brown family, too, which had financial interests in Indonesian oil, not only quashed Chevron’s plan to refine Alaskan oil in California but also vigorously opposed plans to build nuclear plants in the state.

Where Are We Now?

With several interest groups all propagating similar narratives about the dangers of nuclear power for their own reasons, fear of nuclear power has become an easy product to sell. As nuclear construction projects grow ever more expensive, much of the cost is taken up by anything other than the construction itself. Some of the money goes to creditors or lawyers. The nuclear services industry also continues to benefit from the steady introduction of new regulations. After the Fukushima incident, the Nuclear Energy Institute (NEI), an industry trade group, lobbied Congress to hand the NRC a fresh mandate to regulate. Exelon, a generation company operating around a fifth of the American nuclear fleet, estimated that it would cost about $400 million to comply with the new requirements.

The linear no-threshold model is still considered the gold standard by regulators. As recently as 2015, a group of scientists filed a petition with the NRC requesting that the LNT model be replaced with a more data-informed one. It was rejected. In its response, the NRC tacitly admits that it has little concern for the possibility of overregulation, claiming that the model meets its mandate of “adequate protection.” Also disconcerting is that the NRC’s response repeatedly leans on the recommendations of other regulatory bodies and NGOs to deflect responsibility for using an unsubstantiated model. The NRC asserts that it should use LNT because the International Commission on Radiological Protection (ICRP) uses and recommends it. The ICRP, in turn, has the fortune of passing the buck: its website informs us that its recommendations stem from the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) and Biological Effects of Atomic Radiation (BEAR) publications. UNSCEAR, being part of the U.N., is composed of representatives from dozens of countries. The U.S. delegate is an employee of the NRC, and the pattern is similar for other countries. UNSCEAR is thus largely composed of representatives from the bodies it advises, or the advisees of those bodies in turn. Tracking the chain of recommendations back to the source reveals that its links form a circle.

Perhaps this isn’t an issue. It’s not unreasonable to suppose that, if the staff composing these organizations are generally competent and acting in good faith, the regulations arising from this process should be sound. However, assuming radiation is harmful at very low doses and that doses can accumulate over a lifetime naturally results in larger budgets and greater power for regulators. Once again, perverse incentives may be preventing us from considering other assumptions.

In addition to regulatory ratcheting, any attempt to build nuclear power plants in the U.S. must reckon with the loss of tacit knowledge that has resulted from decades of neglect. Everyone who participated in building an inexpensive nuclear plant in the United States is now either retired or deceased. Surely, important knowledge has been accumulated by the team just now finishing the two over-time, over-budget AP1000 reactors at the Vogtle plant in Georgia. However, moving down the learning curve of construction costs can take many iterations. Even after having its design approved, a company seeking to build a nuclear plant must then obtain NRC approval for each individual plant, first for construction, then to commence operation once built. Hence, in one recent example, half a billion dollars is still well shy of the full cost the nuclear power company NuScale will have to pay to realize its ambition of building a fully operational power plant.

These problems are compounded, in many states, by electricity markets that do not value reliability and have no capacity for long-term planning. While incentives were often perverse under the cost-of-service regulated utility model, the deregulated frameworks currently employed come with problems of their own. Intermittent wind and solar, initially proposed as a solution to our supposed Malthusian trap, can often bid in at negative prices due to technology-specific subsidies. This can become a serious problem for a nuclear plant as it makes little economic sense to produce at less than full capacity, and turning a plant off and on again takes time. All the while, wind and sunlight can come and go in an instant. Hence, artificially unprofitable nuclear plants can be pushed into retirement, leaving the grid with an unstable foundation. 
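
A simplified dispatch example illustrates the squeeze. The prices, marginal cost, and output below are invented for the sketch; the point is only that a plant which cannot ramp with the weather must sit through the negatively priced hours:

```python
# Sketch of how subsidy-driven negative prices squeeze an always-on plant.
# Prices, costs, and output are illustrative placeholders.

hourly_prices = [35, 30, -15, -20, 25, 40]  # $/MWh over a windy afternoon
NUCLEAR_MARGINAL_COST = 8                   # $/MWh of fuel and variable O&M
OUTPUT_MW = 1000

# The reactor cannot switch off and on with the wind, so it runs through
# every hour, including the negatively priced ones.
margin = sum((p - NUCLEAR_MARGINAL_COST) * OUTPUT_MW for p in hourly_prices)
negative_hours = [p for p in hourly_prices if p < 0]
lost = sum((p - NUCLEAR_MARGINAL_COST) * OUTPUT_MW for p in negative_hours)

print(f"Margin over the whole period:        ${margin:,.0f}")
print(f"Margin lost in negative-price hours: ${lost:,.0f}")
# A subsidized wind farm can still profit while bidding below zero;
# the nuclear plant simply absorbs the loss.
```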

There is also no single party that is firmly responsible for keeping the lights on. In the event of a blackout, the generation company, the utility, and the regional transmission organization can all point their fingers at one another. These market characteristics have serious effects on investment decisions. Nuclear engineer and consultant Mark Nelson has described the current state of electricity markets as possibly a more serious obstacle to a nuclear renaissance than the regulatory environment that developed in the 1970s.

Recovering the Atomic Age

For America to enter the atomic age in earnest, serious changes to the regulatory environment would have to be made, likely including radically reforming or replacing the NRC. In some states, reforms to electricity markets or a return to some version of the vertically integrated utility model would also be needed. While a few politicians have recently signaled that they understand some of the issues at hand, the path of least resistance likely begins outside the U.S.

Some of the most promising American founders in the nuclear space have concluded that it is easier to build better institutions from scratch than to reform dysfunctional ones, and have begun focusing the development of their businesses abroad. One of these businesses is Last Energy. Its founder, Bret Kugelmass, left his job working on autonomous flight technology and opened the Energy Impact Center (EIC), a research institute dedicated to generating innovative solutions to climate change. EIC quickly recognized the unique potential of nuclear power and created OPEN100, an open-source platform providing reference plant schematics and compiling ongoing research and design work. Its for-profit project, Last Energy, is developing OPEN100 power plants in Poland, Romania, and the U.K.

Thorcon is another example of a nuclear startup focused on inexpensive nuclear power that has been deterred from attempting to develop its technology in the U.S. Instead, it is working with the Indonesian government to develop and site its plant model, which is a straightforward scale-up of Oak Ridge National Laboratory’s Molten-Salt Reactor Experiment from the late 1960s. Its plan to reduce the industry’s costs hinges on using the economies of shipbuilding, an industry in which its founders have a background, to produce these plants in large numbers. As such, it intends to source construction in South Korea, where both the shipbuilding and nuclear industries have a history of success.

As in South Korea, state-driven finance and development of nuclear power have enjoyed some success even in the current age. In this arena, China and Russia are the largest players, with their state nuclear corporations’ projects accounting for over half of all the reactors currently under construction in the world. However, this model is unlikely to work in Western countries for now, as they largely lack the state capacity to match these achievements. Even if they did, the nuclear plants being built by China and Russia are unlikely to be the cheapest possible iteration of the technology, as they incorporate many of the design decisions made by U.S. developers during the period of regulatory ratcheting and spiraling costs.

The stakes of getting nuclear power right are high—not because of the consequences of a meltdown, but because of the opportunity cost of non-use. The status quo entails not only deadly air pollution but also economic stagnation. The Henry Adams curve documents per capita power use in the U.S.; its flatlining at the same moment that growth in real median household income slowed in the early 1970s is unlikely to be a coincidence. Much of the growth in living standards lost over the past half-century is the result of energy-intensive technologies never being invented, and that is not for a lack of innovative engineers. In rediscovering that the first nuclear era’s promise of “too cheap to meter” was not so frivolous after all, perhaps we will make similar discoveries about the other promises of that more optimistic age.

Benjamin Leopardo studies economics at the Stockholm School of Economics. He writes about energy, economics, and history.