Thursday, February 28, 2008

Jesse H. Ausubel: Renewable and Nuclear Heresies

Introduction: Jesse Ausubel presents us with a grand nuclear plan. Ausubel understands what is wrong with the grand plans for renewables. His case rests on land use issues: in terms of land use, renewables are anything but green. What Ausubel understands is the process of electrification. His weakness is his belief that a hydrogen economy can be made to work. The technological challenges to developing a hydrogen economy are formidable. The problem is not our ability to generate hydrogen; it is the problem of storing it. Unless and until that problem is solved, realistic visions of the future cannot count on it.

But Ausubel has heard the voice of Alvin Weinberg, and Weinberg spoke words of truth. The future belongs to nuclear power.

Plenary Address
Canadian Nuclear Association
10 March 2005, Ottawa, Ontario CA
Renewable and Nuclear Heresies
Jesse H. Ausubel
Director, Program for the Human Environment
The Rockefeller University
1230 York Avenue, New York NY 10021
http://phe.rockefeller.edu
(Figure 1)

Heretics maintain opinions at variance with those generally received. Putting heretics to death, hereticide, is common throughout history. In 1531 the Swiss Protestant heretic Huldreich Zwingli, soldiering anonymously in battle against the Catholic cantons, was speared in the thigh and then clubbed on the head. Mortally wounded, he was offered the services of a priest. His declination caused him to be recognized, whereupon he was killed and quartered, and his body parts mixed with dung and ceremonially burned. Recall that the first heresy against the Roman Church in Switzerland in 1522 was the eating of sausages during Lent, and the signal heresy was opposition to the baptism of children. As nuclear experts know deeply, humans are not rational in their beliefs, actions, or reactions.

I will offer both renewable and nuclear heresies. I trust you will not commit hereticide. Because culture defines heresies, something easy for Canadians – French, Anglo, and other – to appreciate, you, mostly coming from a nuclear tribe, will probably applaud my renewable heresies and grumble about the nuclear. You will have to decide whether my heresies rival favoring polygamy or sharing all worldly goods. My main heresies are that renewables are not green and that the nuclear industry should make a product besides electricity.

Decarbonization
The dogma that gives me conviction to uphold heresies is decarbonization, which I accept as the central measure of energy evolution. Consider our hydrocarbon fuels as blends of carbon and hydrogen, both of which burn to release energy. Molecules of the main so-called fossil fuels, coal, oil, and natural gas, each have a typical ratio of carbon to hydrogen atoms. Methane, CH4, is obviously 1 to 4. An oil such as kerosene is 1 to 2. A typical coal’s ratio of C:H is about 2 to 1. Importantly, coal’s precursor, wood, has an even more primitive C:H ratio, 10:1, once the moisture is removed. Carbon blackens miners’ lungs, endangers urban air, and threatens climate change. Hydrogen is as innocent as an element can be, ending combustion as water.
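The C:H bookkeeping above can be sketched numerically. Only the atomic H:C ratios below come from the text; the fuel shares are hypothetical round numbers for illustration, not historical data.

```python
# Decarbonization as a blended H:C ratio of the hydrocarbon fuel mix.
# H:C atomic ratios from the text: wood ~10:1 C:H, coal ~2:1, oil ~1:2, methane 1:4.
H_TO_C = {"wood": 0.1, "coal": 0.5, "oil": 2.0, "methane": 4.0}

def blended_h_to_c(carbon_shares):
    """Average H atoms per C atom, weighting each fuel by its share of carbon in the mix."""
    return sum(share * H_TO_C[fuel] for fuel, share in carbon_shares.items())

# Hypothetical mixes: a coal-dominated 19th-century blend vs. a gas-rich modern one.
early = {"wood": 0.3, "coal": 0.6, "oil": 0.1, "methane": 0.0}
modern = {"wood": 0.0, "coal": 0.3, "oil": 0.4, "methane": 0.3}

print(blended_h_to_c(early))   # low H per C
print(blended_h_to_c(modern))  # higher H per C: decarbonization
```

Any plausible shift of shares from wood and coal toward oil and methane raises the blended ratio, which is the trend Figure 2 plots.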

Suppose we placed all the hydrocarbon fuels humanity used each year since about 1800, when British colliers first mined thousands of tons of coal, in a blender, mixed them, and plotted the yearly ratio of carbon to hydrogen (Figure 2). While the trend may waver for a decade or two, over the long term H gains in the mix at the expense of C, like cars replacing horses or color tv substituting for black-and-white. The consequent decarbonization is the single most important fact from 30 years of energy studies.

When my colleagues Cesare Marchetti, Nebojsa Nakicenovic, Arnulf Grubler, and I discovered decarbonization in the 1980s, we were pleasantly surprised. When we first spoke of decarbonization, few believed and many ridiculed the word. Everyone “knew” the opposite to be true. Now prime ministers and presidents speak of decarbonization. Neither Queen Victoria nor Abraham Lincoln decreed a policy of decarbonization. Yet, the energy system pursued it. Human societies had been pursuing decarbonization for 170+ years before anyone noticed.

By the way, another of my heresies, a dangerous one here within view from Canada’s Parliament Hill, is that I believe for the most part politicians are pulling on disconnected levers.

Returning to carbon, if world economic production or all energy rather than all hydrocarbons form the denominator, the world is also decarbonizing, that is, using less carbon per dollar of output or per kilowatt-hour (Figure 3). Moreover, China and India as well as France and Japan decarbonize. The slopes are quite similar though China and India lag by several decades, as they do in diffusion of other technologies beside energy. Economically and technically, carbon seems fated to fade gradually over this century. By 2100 we will feel nostalgia for carbon as some do now for steam locomotives. Londoners have mythologized their great fogs, induced by coal as late as the 1950s, and Berliners already reminisce about the "East Smell" of burnt lignite whose use collapsed after the fall of The Wall in 1989.

The explanation for the persistence of decarbonization is simple and profound. The overall evolution of the energy system is driven by the increasing spatial density of energy consumption at the level of the end user, that is, the energy consumed per square meter, for example, in a city. Finally, fuels must conform to what the end user will accept, and constraints become more stringent as spatial density of consumption rises. Rich, dense cities accept happily only electricity and gases, now methane and later hydrogen. These are the fuels that reach consumers easily through pervasive infrastructure grids, right to the burner tip in your kitchen.

My city, New York, by the way, already consumes in electricity alone on a July day about 15 watts per square meter averaged over its entire 820 square kilometers of land, including Central Park.

A few decades ago, some visionaries dreamed of an all-electric society. Today people convert about 35-40% of all primary fuel to electricity. The fraction will rise, but now even electricity enthusiasts (as I am) accept that ultimately not much more than half of all energy is likely to be electrified. Reasons include the impracticality of a generating system geared entirely to the instant consumption of energy and the poor suitability of many vehicles to run on electricity. Surrendering the vision of an all-electric society is a minor nuclear heresy.

So it goes. Ultimately the behavior of the end user drives the system. Happily, this does mean the system can be rational even when individuals are not. When the end user wants electricity and hydrogen, over time the primary energy sources that can produce on the needed scale while meeting the ever more stringent constraints that attend growth in turn will win. Economies of scale are a juggernaut over the long run. Think, for better or worse, of Walmart.

Appropriately, the historical growth of world primary energy consumption over the past 150 years shows rises in long waves of 50-60 years, each time formed around the development of a more desirable source of energy that scaled up readily. Coal lifted the first wave and oil the second. A new growth wave is underway, lifted by methane, now almost everyone's favorite fuel and a subject to which I will return later.

According to the historical trend in decarbonization, large-scale production of carbon-free hydrogen should begin about the year 2020. So how will we keep lifting electricity production while also introducing more H2 into the system to lift the average above the norm of methane? The obvious competitors are nuclear and the so-called renewables, the false and minor, yet popular, idols.

Renewable heresies
Let's consider the renewable idols: hydro, biomass, wind, and solar. As a Green, I care intensely about land-sparing, about leaving land for Nature. In fact, a Green credo is “No new structures.” Or, in milder form, “New structures or infrastructures should fit within the footprint of the old structures or infrastructures.” So, I will examine renewables primarily by their use of land.

In the US and much of the rest of the world, including Canada, renewables mean dammed rivers. Almost 80% of so-called US renewable energy is hydro, and hydro generates about 60% of all Canada’s electricity.

For the USA as a whole, the capacity of all existing hydropower plants is about 97,500 MWe, and their average production is about 37,500 MWe. The average power intensity – the watts divided by the land area of the USA – is 0.005 watts per square meter, that is, the approximate power that can be obtained from a huge tract of land that drains into a reservoir for a power station.

Imagine the entire province of Ontario, about 900,000 square km, collecting its entire 680,000 billion liters of rain, an average annual rainfall of about 0.8 m. Imagine collecting all that water, every drop, behind a dam of about 60 meter height. Doing so might inundate half the Province, and thus win the support of the majority of Canadians. This comprehensive “Ontario Hydro” would produce about 11,000 MW or about 4/5ths the output of Canada’s 25 nuclear power stations, or about 0.012 watts per square meter or more than twice the USA average. In my flood Ontario scenario, a square kilometer would provide the electricity for about 12 Canadians.
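The "flood Ontario" thought experiment is straightforward hydropower arithmetic: the potential energy of a year's rain dropped through the dam height. The 85% turbine-and-generator efficiency below is my assumption; the text gives only the result.

```python
# Sketch of the "Ontario Hydro" thought experiment from the text.
RHO = 1000.0              # kg/m^3, density of water
G = 9.8                   # m/s^2
SECONDS_PER_YEAR = 3.156e7

rain_volume = 6.8e11      # m^3/year (680,000 billion liters)
head = 60.0               # m, dam height
efficiency = 0.85         # assumed turbine/generator efficiency (my assumption)

energy_per_year = RHO * G * head * rain_volume * efficiency   # joules/year
average_power_mw = energy_per_year / SECONDS_PER_YEAR / 1e6

ontario_area = 9.0e11     # m^2 (900,000 square kilometers)
watts_per_m2 = average_power_mw * 1e6 / ontario_area

print(round(average_power_mw))   # ~11,000 MW, as the text states
print(round(watts_per_m2, 3))    # ~0.012 W/m^2
```

Even capturing every drop of rain in the province at a 60 meter head yields only about a hundredth of a watt per square meter, which is the heart of the land-use argument.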

At such a low density, we easily understand why the trend has already shifted from dam building to dam removal for ecological and other reasons. About 40% of Canada’s immense total land area is effectively dammed for electrons already. The World Commission on Dams issued a report in November 2000 that essentially signaled the end of hydropower development globally. While the Chinese are constructing more dams, few foresee even ten thousand megawatts further growth from hydropower.

Though electricity and hydrogen from hydro would decarbonize, the idol of hydro is itself dammed. Hydro is not green.

In the US, after hydro's 80% comes biomass' 17% of renewables. Surprisingly, most of this biomass comes not from backyard woodsmen or community paper drives but from black liquor that pulp mills burn to economize on their own heat and power. In terms of decarbonization, biomass of course retrogresses, with 10 Cs or more per H.

If one argues that biomass is carbon-neutral because photosynthesis in plants recycles the carbon, one must consider its other attributes, beginning with the productivity of photosynthesis. Although farmers usually express this productivity in tons per hectare, in the energy industry the heat content of the trees, corn, and hay instead quantifies the energy productivity of the land. For example, the abundant and untended New England or New Brunswick forests produce firewood at the renewable rate of about 1200 watts (thermal) per hectare averaged around the year. This 0.12 watts per square meter of biomass is about ten times more powerful than rain, and excellent management can multiply the figure another ten times.

Imagine, as energy analyst Howard Hayden has suggested, that farmers use ample water, fertilizer, and pesticides to achieve 12,000 watts thermal per hectare. Imagine replacing a 1,000 MWe nuclear power plant with a 90% capacity factor. During a year, the nuke will produce about 7.9 billion kWh. To obtain the same electricity from a power plant that burns biomass at 30% heat-to-electricity efficiency, farmers would need about 250,000 hectares or 2,500 square kilometers of land with very high productivity. Harvesting and collecting the biomass are not 100% efficient; some gets left in fields or otherwise lost.

Such losses mean that in round numbers a 1,000 MWe nuclear plant equates to more than 2500 square kilometers of prime land. A typical Iowa county spans about 1000 square kilometers, so it would take at least two and a half counties to fire a station. A nuclear power plant consumes about 10 hectares per unit or 40 hectares for a power park. Shifting entirely from baconburgers to kilowatts, Iowa's 55,000 square miles (some 145,000 square kilometers) might yield 50,000 MWe. Prince Edward Island might produce about 2000 MWe.
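The biomass arithmetic checks directly; the 12,000 W(thermal)/ha intensive-farming figure and 30% heat-to-electricity efficiency are the numbers given above.

```python
# Sketch of the biomass land-requirement arithmetic from the text.
nuke_mwe = 1000.0
capacity_factor = 0.9
hours_per_year = 8760

electricity_kwh = nuke_mwe * 1000 * capacity_factor * hours_per_year  # kWh/year
print(electricity_kwh / 1e9)       # ~7.9 billion kWh, matching the text

avg_electric_w = nuke_mwe * 1e6 * capacity_factor   # average electric watts
thermal_w = avg_electric_w / 0.30                   # thermal input at 30% efficiency
hectares = thermal_w / 12000.0                      # at Hayden's 12,000 W(th)/ha
print(hectares)                    # ~250,000 ha
print(hectares / 100.0)            # ~2,500 km^2, about 2.5 typical Iowa counties
```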

The US already consumes about 10 and the world about 40 times the kilowatt hours that Iowa's biomass could generate. Prime land has better uses, like feeding the hungry. Plowing marginal lands will require ten or twenty times the expanse and increase erosion. One hundred twenty square meters of New Brunswick or Manitoba might electrify one square meter of New York City.

Note also that pumping water and making fertilizer and pesticides also consume energy. If processors concentrate the corn or other biomass into alcohol or diesel, another step erodes efficiency. Ethanol production yields a tiny net of 0.05 watts per square meter.

As in hydro, in biomass the lack of economies of scale looms large. Because more biomass quickly hits the ceiling of watts per square meter, it can become more extensive but not cheaper. If not false, the idol of biomass is not sustainable on the scale needed and will not contribute to decarbonization. Biomass may photosynthesize but it is not green.

Although or because wind provides only 0.2% of US electricity, the idol of wind evokes much worship. The basic fact of wind is that it provides about 1.2 watts per square meter or 12,000 watts per hectare of year-round average electric power. Consider, for example, the $212 million wind farm about 30 kilometers south of Lamar, CO, where 108 1.5 MWe wind turbines stand 80 meters tall, their blades sweeping to 115 meters. The wind farm spreads over 4,800 hectares. At a 30% capacity factor, the average power density is the typical 1.2 watts per square meter.

One problem is that two of the four wind speed regimes produce no power at all. Calm air means no power of course, and gales faster than 25 meters per second (about 90 kilometers per hour) mean shutting down lest the turbine blow apart. Perhaps 3-10 times more compact than biomass, a wind farm occupying about 770 square kilometers could produce as much energy as one 1,000 MWe nuke. To meet 2005 US electricity demand of about 4 billion MWh with around-the-clock wind would have required wind farms covering over 780,000 square kilometers, about Texas plus Louisiana, or about 1.2 times the area of Alberta. Canada’s demand is about 10% of the USA’s and corresponds to about the area of New Brunswick.
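The wind figures check out the same way; both the Lamar numbers and the 1.2 W/m² density come from the text.

```python
# Sketch of the wind land arithmetic at ~1.2 W/m^2 of year-round average electric power.
power_density = 1.2                       # W/m^2, from the text

# Lamar farm check: 108 turbines x 1.5 MWe at 30% capacity factor over 4,800 ha.
lamar_avg_w = 108 * 1.5e6 * 0.30
lamar_area_m2 = 4800 * 1e4
print(lamar_avg_w / lamar_area_m2)        # ~1.0 W/m^2, near the typical 1.2

# Land to match a 1,000 MWe nuke at a 90% capacity factor.
nuke_avg_w = 1000e6 * 0.9
print(nuke_avg_w / power_density / 1e6)   # ~750 km^2, the text's ~770
```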

For linear thinkers, a single file line of windmills has a power density of about 5 kilowatts per meter. If Christo could string windmills single file along Rocky Mountain ridges half way from Vancouver to Calgary, about 200 km, the output would be about the same as one of the four Darlington CANDU units.

Rapidly exhausted economies of scale stop wind. One hundred windy square meters, a good size for a Manhattan apartment, can power a lamp or two, but not the clothes washer and dryer, microwave oven, TVs, computers, or dozens of other devices in the apartment, or the apartments above or below it. New York City would require every square meter of Connecticut to become a windfarm if the wind blew in Hartford as in Lamar. The idol of wind would decarbonize but will be minor.

Although negligible as a source of electric power today, photovoltaics also earn a traditional bow. Sadly, PVs remain stuck at about 10% efficiency, with no breakthroughs in 30 years. Today performance reaches about 5-6 watts per square meter. But no economies of scale inhere in PV systems. A 1,000 MWe PV plant would require about 150 square kilometers plus land for storage and retrieval. Present US electric consumption would require 150,000 square kilometers or a square almost 400 kilometers on each side. The PV industry now makes about 600 meters by 600 meters of cells per year; roughly 600,000 times that annual output would be needed to supply present US electric consumption, yet only a few square kilometers have ever been manufactured in total.
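A quick check of the PV figures. The 6 W/m² is the upper end of the text's delivered range, and the 150,000 square kilometers is the text's own US figure (which includes land for storage and retrieval).

```python
# Sketch of the PV land figures in the text.
pv_w_per_m2 = 6.0                          # delivered W/m^2, upper end of the text's range

plant_area_km2 = 1000e6 / pv_w_per_m2 / 1e6
print(round(plant_area_km2))               # ~167 km^2 for 1,000 MWe, near the text's ~150

us_area_km2 = 150_000                      # the text's US figure, storage land included
print(round(us_area_km2 ** 0.5))           # ~387 km on a side: "almost 400 kilometers"
```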

Viewed another way, to produce with solar cells the amount of energy generated in one liter of the core of a nuclear reactor requires one hectare of solar cells. To compete at making the millions of megawatts for the baseload of the world energy market, the cost and complication of solar collectors still need to shrink by orders of magnitude while efficiency soars.

Extrapolating the progress (or lack) in recent decades does not carry the solar and renewable system to market victory. Electrical batteries, crucial to many applications, weigh almost zero in the global energy market. Similarly, solar and renewable energy may attain marvelous niches, but seem puny for providing the base power for 8-10 billion people later this century.

While I have denominated power with land so far, solar and renewables, despite their sacrosanct status, cost the environment in other ways as well. The appropriate description for PVs comes from the song of the Rolling Stones, "Paint It Black." Painting large areas with efficient, thus black absorbers evokes dark 19th century visions of the land. I prefer colorful desert to 150,000 square kilometers painted black. Some of the efficient PVs contain nasty elements, such as cadmium. Wind farms irritate with low-frequency noise and thumps, blight landscapes, interfere with TV reception, and chop birds and bats. At the Altamont windfarm in California, the mills kill 40-60 golden eagles per year. Dams kill rivers.

Moreover, solar and renewables in every form require large and complex machinery to produce many megawatts. Berkeley engineer Per Peterson reports that for an average MWe a typical wind-energy system operating with a 6.5 meters-per-second average wind speed requires construction inputs of 460 metric tons of steel and 870 cubic meters of concrete. For comparison, the construction of existing 1970-vintage US nuclear power plants required 40 metric tons of steel and 190 cubic meters of concrete per average megawatt of electricity generating capacity. Wind’s infrastructure takes 5-10 times the steel and concrete as nuclear’s. Bridging the cloudy and dark as well as calm and gusty weather takes storage batteries and their heavy metals. Without vastly improved storage, the windmills and PVs are supernumeraries for the coal, methane, and uranium plants that operate reliably round the clock day after day.
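Peterson's materials comparison reduces to two ratios; the tonnages are those quoted above. The steel ratio comes out nearer 11 and the concrete ratio nearer 5, bracketing the "5-10 times" summary.

```python
# Peterson's materials-intensity comparison, per average MWe.
wind = {"steel_t": 460, "concrete_m3": 870}      # typical wind system, 6.5 m/s site
nuclear = {"steel_t": 40, "concrete_m3": 190}    # 1970-vintage US nuclear plants

print(wind["steel_t"] / nuclear["steel_t"])          # ~11.5x the steel
print(wind["concrete_m3"] / nuclear["concrete_m3"])  # ~4.6x the concrete
```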

Since 1980 the US DOE alone has spent about $6 billion on solar, $2 billion on geothermal, $1 billion on wind, and $3 billion on other renewables. The nonhydro renewable energy remains about 2% of US capacity, much of that the wood byproducts used to fuel the wood products industry. Cheerful self-delusion about new solar and renewables since 1970 has yet to produce a single quad of the more than 90 quadrillion Btu of the total energy the US now yearly consumes. In the 21 years from 1979 to 2000 the percentage of US energy from renewables actually fell from 8.5% to 7.3%. Environmentally harmless increments of solar and renewable megawatts look puny in a 20 or 30 million megawatt world, and even in today's 10 million megawatt world. If we want to scale up, then hydro, biomass, wind, and solar all gobble land from Nature. Let's stop sanctifying false and minor gods and heretically chant Renewables are not Green.

Nuclear heresies
How then can we meet more stringent consumer demands and stay on course for decarbonization? The inevitable reply is nuclear energy. I should mention that I am not naïve about nuclear. Privileged to work with Soviet colleagues who participated in the Chernobyl clean-up, I saw the Dead Zone in 1990 with my own eyes. In Figure 4 I stand in front of the concrete sarcophagus encasing the blasted reactor with employees of the site management enterprise and the contaminated car of then Soviet prime minister Ryzhkov. But I know that the members of the Canadian Nuclear Association know more than I about safety, waste disposal, and proliferation, and I will not offer heresies about them, though important heresies exist, particularly about waste disposal.

Rather, my first nuclear heresy is that nuclear must ally with methane. Electric utilities that operate nuclear plants often embrace another source of power, for example, coal or hydro. Yet, importantly, it will be under the wing of methane that nuclear grows again. The biggest fact of the energy system over the next twenty to thirty years will be massive expansion of the gas system, methane for the present. Many people may feel more comfortable with the addition of nuclear power plants if they know that methane, a very attractive fuel in many ways, is taking the overall energy lead. To stay on track in decarbonization, methane must and will prevail. Were I a businessman, I would want to ally with a winner, and methane will prosper in the market.

Here I will offer a pair of heresies. One is that the popular specter of resource exhaustion has played little or no role in the long-run evolution of the energy system. Plenty of wood and hay remained to be exploited when the world shifted to coal. Coal abounded when oil rose. Oil abounds now as methane rises. Advocates of nuclear energy and so-called renewables foolishly point to depletion of oil and natural gas as reasons for their own fuels to win. Oil and natural gas use may peak in coming decades but not because Earth is running out of them.

Not only do I reject the doctrine of resource exhaustion, I also reject the very notion of fossil fuels. The prevailing theory among Western scientists is that petroleum derives from the buried and chemically transformed remains of once-living cells. This theory relies on the long unquestioned belief that life can exist only at the surface of Earth. In fact, as the late Thomas Gold of Cornell University showed, a huge, deep, hot biosphere of microbes flourishes within Earth’s crust, down to the deepest levels we drill.

Consider instead an upwelling theory. Primordial, abiogenic carbon which we know abounds on other planetary bodies enters the crust from below as a carbon-bearing fluid such as methane, butane, or propane. Continual loss of hydrogen brings it closer to what we call petroleum or coal. Oil is very desirable to microbes, and the deep hot biosphere adds bioproducts to the hydrocarbons. These have caused us to uphold the false belief that the so-called fossil fuels are the stored energy of the Sun. They are not the stored energy of the Sun but primordial hydrocarbons from deep in Earth. And they keep refilling oil and gas reservoirs from below. The alternate theory of the origins of gas, oil, and coal will revolutionize Earth sciences over the next 2-3 decades, lift estimates of resource abundance, and reveal resources in unexpected places.

By the way rejection of the fossil dogma explains why by far the greatest human contribution to radioactive pollution is not leakage from the wastes and cooling water of nuclear power plants but uranium-rich plumes from the smokestacks of coal-fired power stations. Terrestrial plants do not concentrate uranium, but an underground charcoal filter for upwelling gases carrying trace elements would.

Anyway, for business to continue as usual, by 2020 the reference point for the world's energy will be CH4, methane. Still, energy’s evolution should not end with methane. The completion of decarbonization ultimately depends on the production and use of pure hydrogen. In the 1970s journalists called hydrogen the Tomorrow Fuel, and critics have worried that hydrogen will remain forever on the horizon, like fusion. For hydrogen, tomorrow is now today. Hydrogen is a thriving young industry. World commercial production in 2002 exceeded 40 billion standard cubic feet per day, equal to 75,000 MW if converted to electricity, and US production, which is about 1/3 of the world, multiplied tenfold between 1970 and 2003 (Figure 5). Over 16,000 kilometers of pipeline transport H2 gas for big users, with pipes at 100 atmospheres as long as 400 kilometers from Antwerp to Normandy. High pressure containers such as tube trailers distribute the liquid product to small and moderate users throughout the world. With production experience, the hydrogen price is falling (Figure 6).

The fundamental question then becomes, from where will large quantities of cheap hydrogen come? Methane and water will compete to provide the hydrogen feedstock, while methane and nuclear will compete to provide the energy needed to transform the feedstock.

Steam reforming of methane to produce hydrogen is already a venerable chemical process. Because methane abounds, in the near term steam reforming of methane, using heat from methane, will remain the preferred way to produce hydrogen. Moreover, because much of the demand for hydrogen is within the petrochemical industry, nepotism gives methane an edge. Increasingly, as new applications such as fuel cells demand hydrogen, nuclear's chance to compete as the transformer improves.

My next heresy is that the production of hydrogen will revolutionize the economics of nuclear power much more than standardizing plants or building them more quickly. First, hydrogen manufacture allows nukes to address the half of energy demand that will not be electricity. Second, it gives nuclear power plants the chance to make a valuable product 24 hours per day. Recall that a great problem the electric power industry faces is that, notwithstanding the talk of the “24/7 society,” electric power demand remains asymmetrical. Users demand most electricity during the day. So, immense capital sits on its hands between about 9 o’clock at night and 6 or 7 o’clock in the morning. Turning that capital into an asset is incredibly valuable. Like the hotel and airline industries, the power industry would rather operate at 90% capacity than 60%. The nuclear industry is limited to providing baseload electric power unless it reaches out to hydrogen to store and distribute its tireless energy.

While I stated earlier that methane and nuclear compete, they can also cooperate in the hydrogen market. Let's accept that in the near term steam reforming of methane will dominate hydrogen making. Nuclear power as well as methane can provide the energy for the reforming. Here let me share a big technological idea, methane-nuclear-hydrogen (MNH) complexes, first sketched by Cesare Marchetti. An enormous amount of methane travels through a few giant pipeline clusters, for example, from Russia through Slovakia. These methane trunk routes are attractive places to assemble MNH industrial complexes. Here, if one builds a few nuclear power plants and siphons off some of the methane, the nuclear plants could profitably manufacture large amounts of hydrogen that could be re-introduced into the pipelines, say up to 20% of the composition of the gas. This decarbonization enhances the value of the gas. Meanwhile, the carbon separated from the methane becomes CO2 to be injected into depleted oil and gas fields and profitably help with tertiary recovery. The hydrogen mixture could be distributed around Europe, or the world, getting users accustomed to the new level of decarbonization.

Over the next 10-15 years, I will keep my eye on the places where much gas flows and see whether these regions initiate this next-generation energy system. Alberta is an obvious locale, especially when methane from the Mackenzie Delta flows through it. The experience of working with hydrogen from methane will benefit the nuclear industry as it puts nukes at the nodes of the webs of hydrogen distribution, anticipating the shift from CH4 to H2O as a feedstock. The methane-nuclear-hydrogen complexes can be the nurseries for the next generation of the energy system.

The surprising longevity of nuclear power plants, observed by Alvin Weinberg, spurs me to look beyond the imminent methane era to complete decarbonization. Nuclear energy’s long-range potential is unique as an abundant, scalable source of electricity and for water-splitting while the cities sleep.

It may no longer qualify as a heresy, but I am convinced that thermochemical processes hold more promise than electrolysis for producing hydrogen. The power density of the machinery, and thus the large plant area required, makes electrolysis problematic for large-scale hydrogen production, especially if the electricity comes from sources of very low power density, like photovoltaics. Economies of scale again.

At about 950°C core outlet temperature, a high temperature reactor could successfully drive, for example, a sulfur-iodine thermochemical process. High-temperature reactors with coated-particle or graphite-matrix fuels promise a particularly high-efficiency and scalable route to combined power and hydrogen production. A consortium of Chinese companies led by Huaneng proposes to have the first commercially operated pebble bed reactor producing electricity within five years. Thermochemically, such nuclear plants could nightly make H2 on the scale needed to meet the demand of billions of consumers. In Canada questioning CANDU reactors is heresy, but I wonder whether they can reach temperatures good for hydrogen production.

With appropriate reactors, hydrogen production can draw the nuclear industry to a scale of operation an order of magnitude larger than today, meeting future demand for hydrogen and electricity in immense dense cities.

Here let me introduce a big technological concept, the continental SuperGrid to deliver electricity and hydrogen in an integrated energy pipeline. Championed by Chauncey Starr of EPRI, the Supergrid is doubly super: first because it is the apex, and second because it employs superconductivity. Specifically, the SuperGrid would use a high-capacity, superconducting power transmission cable cooled with liquid hydrogen produced by advanced nuclear plants. The fundamental design is for liquid hydrogen to be pumped through the center of an evacuated energy pipe (Figure 7). Thus, the SuperGrid would not only transmit electricity but also store and distribute the bulk of the hydrogen ultimately used in fuel cell vehicles and generators or refreshed internal combustion engines.

By continental, I mean coast-to-coast, indeed all of North America, making one market for electricity. SuperGrids should thrive on other continents, of course, but as an American I hope North America builds first and dominates the market for these systems, which in rough terms might cost $1 trillion, or $10 billion per year for 100 years. The continental scale makes the electric power system much more efficient by flattening the electricity load curve which still follows the sun. Superconductivity solves the problem of power line losses. By high capacity, I mean 40,000-80,000 MW. The latent hydrogen storage capacity of the SuperGrid, combined with fuel cells, may allow electricity networks to shift to a delivery system more like oil and gas, away from the present, costly, instant matching of supply to demand.

Technical choices and challenges abound, about cryogenics and vacuums, about dielectric materials under simultaneous stress from low temperature and high fields, about power control and cable design. Engineers need to improve Supercable design and demonstrate performance of high temperature superconducting wire at commercial electrical current levels. The next step, achievable over 2-3 years, might be a flexible 100 meter Supercable, 10 centimeters overall diameter, 5000 volts, 2000 amperes, 10 MW direct current, with a 3 centimeter diameter pipe for 1 meter per second H2 flow, using magnesium diboride or other wire demonstrating constant current under variable load and low ripple factor. Looking forward, joints and splices are tough problems, emblematic of the general problem of making parts into a system that works, a problem that challenges engineers to their greatest achievements.
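Two sanity checks on the proposed demonstration Supercable. The electrical rating follows from P = V·I; for the hydrogen pipe, the liquid-hydrogen density (~71 kg/m³) and lower heating value (~120 MJ/kg) are my assumptions, not figures from the text.

```python
import math

# Electrical rating of the 100 m demonstration Supercable: P = V * I.
power_mw = 5000.0 * 2000.0 / 1e6
print(power_mw)                            # 10 MW DC, as specified

# Chemical energy carried by the 3 cm pipe at 1 m/s, assuming liquid H2
# (density ~71 kg/m^3, LHV ~120 MJ/kg -- my assumptions).
area = math.pi * (0.03 / 2) ** 2           # pipe cross-section, m^2
mass_flow = area * 1.0 * 71.0              # kg/s at 1 m/s
chemical_mw = mass_flow * 120e6 / 1e6
print(round(chemical_mw, 1))               # ~6 MW of chemical energy alongside the electricity
```

Under these assumptions, the hydrogen flowing through the demonstration cable would carry chemical energy of the same order as the electricity it transmits, which is the point of the integrated energy pipe.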

For ultimate safety, security, and aesthetics, let’s put the Supergrid, including its cables and power plants, underground. The decision to build underground critically determines the cost of the SuperGrid. But, benefits include reduced vulnerability to attack by chance or sabotage, fewer right-of-way disputes, reduced surface congestion, and real and perceived reduced exposure to accidents and fallout. US Department of Energy laboratories including Fermi have profound experience with tunneling from building particle colliders. Since 1958 Russia has operated underground nuclear reactors near Zheleznogorsk in Central Siberia. Wes Myers and Ned Elkins of Los Alamos National Lab have suggested that the region near Carlsbad, New Mexico, which has enormous caverns from potash mining, and thus a rail and highway system, water supply network, and electrical power distribution might be well-suited for the first US underground nuclear park. The SuperGrid multiplies the chances to site reactors that produce hydrogen far from population concentrations and pipe their products to consumers. One could imagine a region like Idaho, where the US may build its first high temperature reactor, becoming the Kuwait of hydrogen.

Magic words for the SuperGrid are hydrogen, superconductivity, zero emissions, and small ecological footprint, to which we add high temperature reactors, energy storage, security, reliability, and scalability. The long road to the continental SuperGrid begins with the first 10 to 20 km segment addressing an actual transmission bottleneck, and I hope members of the Canadian Nuclear Association will help to build it.

By now, I have revealed my final heresy, that nuclear is green. An American now emits about 5 tons of carbon per year, about 14 kg per day, while with uranium we deal in grams per capita per year. Globally each year we already produce carbon waste measuring about 15 cubic kilometers, a very large refuse bag. Nuclear wastes are usually measured in liters. A 1,000 MWe light water reactor that produces energy for one million typical homes produces approximately 1080 kg of fission products per year, 4 milligrams per person. The 500 people who attend this meeting would produce annually high-level radioactive waste equal to a small jar of aspirin tablets.
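The per-capita comparison is easy to verify; this is my own sketch of the arithmetic, not from the speech:

```python
# Per-capita carbon emissions: 5 tons per year expressed per day.
carbon_kg_per_year = 5 * 1000
print(round(carbon_kg_per_year / 365, 1))   # 13.7, i.e. the "14 kg per day" in the text

# Fission products from a 1000-MWe LWR spread over the million homes it serves.
fission_products_g = 1080 * 1000
print(fission_products_g / 1_000_000)       # 1.08 grams per home per year
```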

Over 500 years, in a fully nuclear world the high level radioactive wastes might amount to 700 million tonnes, less than the 800 million tons of coal Americans burn in one year to produce half our electricity. Hayden calculates all the reactors from 500 years of production of 100% of the world's energy could be stacked one high in an area of a little over 250 square kilometers, about the land area for a solar farm to provide 1,000 MW of power. I recur to scale. Compact enough to grow, nuclear is green.

Conclusion
Let me return to the heart of energy evolution, decarbonization. Because hydrogen is much better stuff for burning than carbon, the hydrocarbons form a clear hierarchy (Figure 8). Methane tops the ranking, with an energy density of about 55 megajoules per kilo, about twice that of black coal and three times that of wood.
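Figure 8's ranking can be sketched numerically; the coal and wood densities below are approximate textbook values I am supplying for illustration, only the methane figure comes from the text:

```python
# Approximate energy densities (higher heating values) in MJ/kg.
energy_density_mj_per_kg = {
    "methane": 55,       # from the text
    "black coal": 27,    # assumed, roughly half of methane
    "dry wood": 18,      # assumed, roughly a third of methane
}
methane = energy_density_mj_per_kg["methane"]
print(round(methane / energy_density_mj_per_kg["black coal"], 1))  # ~2.0
print(round(methane / energy_density_mj_per_kg["dry wood"], 1))    # ~3.1
```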

The energy density of nuclear fuel is 10,000 or even 100,000 times as great as methane (Figure 9). While the full footprint of uranium mining might add a few hundred square kilometers, the dense heart of the atom still has much to offer. The extraordinary energy density of nuclear fuel allows compact systems of immense scale, and finally suits the ever higher spatial density of energy consumption at the level of the end user, logically matching energy consumption and production.

During the past 100 years motors have grown from 10 kilowatts to more than 1,000 megawatts, scaling up an astonishing 100,000 times, while shrinking sharply in size and cost per kilowatt. A mere 1.5% per year growth of total energy demand during the 21st century, about two-thirds the rate since 1800, will multiply demand for primary energy to make the electricity and hydrogen from the 13 million MW years in 2002 to 50 million in 2100. If size and power, of individual machines or the total system, grow in tandem, use of materials and land and other resources becomes unacceptably costly. Technologies succeed when economies of scale form part of their conditions of evolution. Like computers, to grow larger, the energy system must now shrink in size and cost. Considered in watts per square meter, nuclear has astronomical advantages over its competitors.
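The demand projection is simple compound growth; here is my own sketch of the arithmetic, assuming the 98 years from 2002 to 2100:

```python
# Primary energy demand growing 1.5% per year from 2002 to 2100.
demand_2002_mw_years = 13e6        # from the text
years = 2100 - 2002
demand_2100 = demand_2002_mw_years * 1.015 ** years
print(round(demand_2100 / 1e6))    # ~56 million MW-years, the order of the quoted 50 million
```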

You might well wonder whether we need to DO anything. Decarbonization appears automatic. At one level this is true. Yet we also know that the trend of decarbonization is the outcome of all the blood, sweat, and tears of persistent workers, engineers, managers, investors, regulators, and consumers. If people stop bleeding, sweating, and crying, the game producing decarbonization could just stop. Without heretics, there are no schisms.
And energy offers ample room for heresies. I have mentioned several, some large like reading the Bible in one’s own tongue, and some small like sausages during Lent:
--Decarbonization has proceeded for almost two centuries and without a policy for it,
--Renewables are not Green,
--Resource exhaustion is irrelevant,
--Hydrocarbons are not the stored energy of the Sun,
--Utilities should embrace nuclear together with methane,
--Nuclear plants must diversify to make hydrogen as well as electricity, and
--Nuclear is Green.

I hope you will not toss offending documents I have written on a public bonfire or worse yet quarter and immolate me like the Swiss heretic Zwingli. Rather, I hope you will keep in mind the meaning of the Greek word from which heresy derives. The word means to take for oneself or choose.

Received, widely held doctrines may be wise and right. But history, including the history of science, is littered with doctrines discarded as delusions. At present, my conviction is that our best energy doctrine is decarbonization, and let us complete it within one hundred years or sooner. Wishful thinking holds that the way is by returning to a renewable Eden. Resisting wishful thinking requires courage. Even the courageous Zwingli wrote in the margin of his copy of St. Augustine’s City of God, “Ah God, if only Adam had eaten a pear.”

Thanks to Cesare Marchetti, Perrin Meyer, Chauncey Starr, Nadejda Makarova Victor, Paul Waggoner.

Bibliography
Ausubel, J.H., Decarbonization: The Next 100 Years, 9th Alvin M. Weinberg lecture, Oak Ridge National Laboratory, 5 June 2003
text http://phe.rockefeller.edu/PDF_FILES/oakridge.pdf
slides: http://phe.rockefeller.edu/PDF_FILES/oakridgePPT.pdf
Ausubel, J.H., Chernobyl After Perestroika: Reflections on a Recent Visit, Technology in Society 14:187-198, 1992.

Ausubel, J. H., Energy and Environment: The Light Path, Energy Systems and Policy 15:181-188, 1991.

Bryan, R.H., and I.T. Dudley. Estimated quantities of materials contained in a 1000-MW(e) PWR Power Plant, ORNL-TM-4515, prepared for the U.S. Atomic Energy Commission, Oak Ridge National Laboratory, Oak Ridge TN, 1974.

Electric Power Research Institute (EPRI), High Temperature Gas-Cooled Reactors for the Production of Hydrogen: An Assessment in Support of the Hydrogen Economy (1007802),
EPRI, Palo Alto, CA, 2003.

Friedrich, O., The End of the World: A History, Fromm, New York, 1986.

Gold, T., Power from the Earth: Deep Earth Gas, Energy for the Future, Dent, London, 1987.

Gold, T., The Deep Hot Biosphere, Copernicus Springer, New York, 1999

Grant, P. The Energy Supergrid website,
http://www.w2agz.com/PMG%20SuperGrid%20Home.htm

Hayden, H. C., The Solar Fraud: Why Solar Energy Won't Run the World, Vale Lakes, Pueblo West CO, 2001; see also Hayden's monthly newsletter, The Energy Advocate, POB 7595, Pueblo West CO 81007.

International Atomic Energy Agency, Power Reactor Information System, http://www.iaea.org/programmes/a2/index.html, accessed 20 May 2004.
Marchetti, C. Nuclear Plants and Nuclear Niches, Nuclear Science and Engineering 90:521-526, 1985.
Marchetti, C., How to Solve the CO2 Problem without Tears, International Journal of Hydrogen Energy 14:493-506, 1989. http://www.cesaremarchetti.org/archive/scan/MARCHETTI-013.pdf
Meier, P.J., Life-Cycle Assessment of Electricity Generation Systems and Applications for Climate Change Policy Analysis, UWFDM-1181, Fusion Technology Institute, U of Wisconsin, Madison WI, 2002.

Mining Chemical Association (MCA), Zheleznogorsk (Krasnoyarsk-26), A Production Association of the Ministry of Atomic Energy of the Russian Federation (MINATOM),
http://www.jccem.fsu.edu/Partners/MCA.cfm, accessed 20 May 2004.

Moore, T., Supergrid Sparks Interest, EPRI Journal, November 2002,
http://www.epri.com/journal/details.asp?id=511&doctype=features accessed 21 May 2004.

Myers, W. and N. Elkins, Concept for an Underground Nuclear Park and National Energy Supply Complex at Carlsbad, New Mexico, LA-14064, Los Alamos National Laboratory, Los Alamos NM, August 2003.
Nakicenovic, N. and A. Grübler, Technological progress, structural change, and efficient energy use: Trends worldwide and in Austria: International part. International Institute for Applied Systems Analysis, Laxenburg, Austria, 1989.
Nuclear Energy Institute. Nuclear data. http://www.nei.org/index.asp?catnum=1&catid=5, accessed 20 May 2004.
Overbye, T. and C. Starr, convenors, Report of the National Energy Supergrid Workshop, Palo Alto CA, 6-8 November 2002, http://www.energy.ece.uiuc.edu/SuperGridReportFinal.pdf accessed 21 May 2004.
Peterson, P. F., Will the United States Need A Second Geologic Repository? The Bridge 33(3): 26-32, 2003.

Simbeck, D., Data on hydrogen markets and infrastructure, SFA Pacific Inc., Mountain View, CA 94041 http://www.sfapacific.com

Weinberg, A.M., On "Immortal" Nuclear Power Plants. Technology in Society 26(2/3):447-453, 2004.
World Commission on Dams, http://www.dams.org/report/wcd_overview.htm

Figure captions

Figure 1: Title slide

Figure 2: Decarbonization as the evolving C:H ratio. The evolution is seen in the ratio of hydrogen (H) to carbon (C) in the world fuel mix, graphed on a logarithmic scale, analyzed as a logistic growth process and plotted in the linear transform of the logistic (S) curve. Although data begin in 1860, the process inevitably began with the rise of coal mining in the late 18th century in Britain, when wood and hay began to lose market share. Progression of the ratio above natural gas (methane, CH4) requires production of large amounts of hydrogen fuel with non-fossil energy.
Figure 3: Decarbonization as falling global carbon intensity of total world primary energy
Source: N. M. Victor and J. H. Ausubel
Data sources: IIASA, BP (1965-2001), CDIAC, http://cdiac.esd.ornl.gov/trends/emis/em_cont.htm
Figure 4: USA hydrogen shipments growth to 2003
Data source: Dale Simbeck/SFA Pacific.
Figure 5: In front of the Chernobyl sarcophagus, 1990
The author is third from right wearing a beret.
Figure 6: Falling hydrogen price with production (learning curve)
Source: N. M. Victor and J. H. Ausubel
Figure 7: Supergrid energy pipe for electricity and hydrogen
Source: http://www.epri.com/journal/details.asp?id=511&doctype=features
Figure 8: Energy density ranking of hydrocarbon fuels
Source: N. M. Victor and J. H. Ausubel
Figure 9: Energy density of nuclear and hydrocarbon fuels
Source: N. M. Victor and J. H. Ausubel
Figure 10: Repeat of title slide

Shiver in the dark

The whole renewables idea sounds great as long as you don't look at the details. The details stink. Solar and wind both cost more per kWh of generated electricity than nuclear power, and the renewables still require fossil fuel backup. If we want to solve the CO2 problem, we go with nuclear or we shiver in the dark on windless nights. "Blogging About the Unthinkable" tells us that might be happening in San Francisco real soon.

Carbon-Carbon Composites in Molten Fluoride Salt Reactors

I have begun a review of material inputs into molten salt reactors. I started with metals, but quickly realized that carbon-carbon composites represent a viable and interesting alternative to the nickel-based Hastelloy-N usually assumed to be the best material for building MSRs. Carbon-carbon composites are quite expensive; Hastelloy-N is also expensive, but less so. Hastelloy-N is mainly nickel, and nickel is one of those materials that has undergone rapid price inflation during the last few years. Thus plans for liquid fluoride reactors must consider the future cost of Hastelloy-N as well as the advantages of alternative materials.

Carbon-carbon composites have been identified as having high potential for use in reactors. L.M. Manocha, A. Warrier, S. Manocha and D. Sathiyamoorthy report that carbon-carbon composites possess high thermal conductivity, retain mechanical properties even at extremely high temperatures, and can survive neutron bombardment. They caution, however, that the effects of “neutron interaction and Wigner energy [on] their microstructure has to be properly controlled through proper choice of fibrous materials, matrix precursor and processing route.”

Charles W. Forsberg and associates envision reactors operating at temperatures as high as 2300 K. Such reactors must be “built entirely from carbon-based materials that use salts (liquid or gas) as the heat transfer medium between the reactor and power-generation equipment and/or heat rejection systems to create reactor systems with very high power-to-mass ratios.”

Forsberg, et al report, “Based on theoretical considerations and the developments in carbon-carbon technologies over the last 20 years, such machines appear to be potentially viable. However, significant research is required to demonstrate feasibility and a major long-term development program would be required to build such machines.”

They further report:

"Only two classes of fluids are chemically compatible with carbon-based materials: inert gases (e.g., helium-xenon mixtures) and fluoride-based salts. Liquid metals are incompatible with carbon-based materials."

Furthermore, “Carbon-carbon composites can operate at higher temperatures than other materials (Fig. 1), retain their room-temperature strength at temperatures up to >2500K (>2225°C), and have much higher strength-to-weight ratios than other candidate materials. The theoretical peak temperatures of carbon-based materials are limited by the carbon sublimation temperature of ~3350°C, which is close to the 3400°C melting point of tungsten. Carbon-carbon composites have been demonstrated to (1) maintain reproducible strength at 1650°C, (2) withstand large thermal gradients, (3) have low coefficients of thermal expansion and thus the potential to minimize thermal stresses, (4) have tolerance to impact damage, and (5) be manufacturable.”

Additionally: “Carbon-carbon composites have been developed for fusion and fission applications with short-term operating temperatures up to 1600°C. Graphite and carbon-carbon composites are used in a variety of high-temperature nuclear reactors. The characteristics of these materials under neutron radiation have been extensively studied and are dependent upon the specific material, the type of neutron damage, and the temperatures. In most cases, radiation damage is reduced as temperatures increase.”

Forsberg, et al note, “Two major challenges exist for very high temperature reactor applications of carbon-based materials.

* Radiation damage. Carbon-carbon composites and graphites are used in many nuclear reactors; however, these materials degrade when subjected to in-core radiation levels. At the same time, it is known that most types of radiation damage are reversed by treatment at temperatures near 2000°C. Theoretical considerations and indirect experimental evidence suggest that at very high temperatures there may be sufficient self-healing of the carbon materials to enable extreme fuel burnups and long-term operations at temperatures between 1500 and 2000°C.
* Permeability. Unlike metals, the permeability of composites is not necessarily zero. Development of very low permeability carbon-carbon composites for operation at extreme temperatures for very long time periods may be a major challenge. Current successful techniques include infusion of carbon and sometimes other materials into the matrix by a variety of techniques. However, none of these methods have been tested for very high temperatures and very long time periods.”


It would thus appear that carbon-carbon composites are extremely well suited for liquid fluoride salt cooled reactors, and ideally suited as heat exchange materials in a liquid-fluoride-salt-to-helium heat exchanger. Such a reactor could operate up to a temperature limit set by the boiling point of the fluoride salts, a little over 1400 C. This would enable a liquid fluoride salt reactor to operate at a remarkably high level of thermal efficiency.
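To illustrate the efficiency claim, here is the idealized Carnot bound at that temperature limit (my own sketch; real closed-cycle efficiencies would be considerably lower than this upper bound):

```python
def carnot_efficiency(t_hot_c, t_cold_c=30.0):
    """Carnot upper bound on thermal efficiency; temperatures in Celsius."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

print(round(carnot_efficiency(1400), 2))  # ~0.82 near the fluoride salt boiling point
print(round(carnot_efficiency(320), 2))   # ~0.49 at a typical light water reactor temperature
```

The point of the comparison is simply that raising the hot-side temperature raises the theoretical ceiling on efficiency, which is why the high temperature tolerance of carbon-carbon matters.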

The principal disadvantages of carbon-carbon composites might be their cost, and the problem of Wigner energy, the energy stored in graphite structures by exposure to neutron radiation. If Manocha, Warrier, Manocha and Sathiyamoorthy are correct, the problem of Wigner energy in carbon-carbons can be avoided by materials selection and certain processing approaches. Cost penalties, if any, for the use of carbon-carbon materials can be recouped through the higher electrical output of the more efficient system. But the case has been made that carbon-carbons will actually lower manufacturing costs. It should be noted that liquid metal coolants are incompatible with carbon-carbon composites; thus, unlike reactors that use fluoride salt coolants, liquid metal reactors cannot take advantage of carbon-carbon's high heat tolerance and the thermal efficiency benefits it brings.

Charles W. Forsberg, Per F. Peterson, and Haihua Zhao note that "Liquid silicon infiltrated carbon-carbon composites provide a potentially very attractive construction material for high-temperature heat exchangers, piping, pumps, and vessels for MSRs, because of their ability to maintain nearly full mechanical strength at high temperatures (up to 1400°C), the simplicity of their fabrication, their low residual porosity, their capability of operating with high-pressure helium and molten fluoride salts, and their low cost."

There you have it: you can build a fluoride salt reactor on the cheap with carbon-carbon technology. Industrial production would not be difficult: "Chopped carbon fiber can provide a particularly attractive material that can be readily formed by pressing with dies, machined using standard milling tools, and assembled into complex parts."

There you go.

Forsberg, Peterson and Zhao conclude: "Three technological developments since then (Brayton power cycles, compact heat exchangers, and carbon-carbon composites) have (1) eliminated or significantly reduced several major technical issues associated with MSRs, (2) created the potential for major improvements in performance, and (3) significantly reduced costs. These major technological advances and changing goals for nuclear reactors strongly support a major investigation and assessment of MSRs as future GenIV reactors for deployment."

We then have a clear roadmap to the future of energy for the entire world. Carbon goes into the reactor as a building material, not as fuel. This solution will lower reactor construction costs and produce a reactor that is virtually a renewable source of electrical energy.

Clearly, then, the flawed and politically motivated judgment of WASH-1222 should be reversed. In coming posts I intend to lay out the numerous advantages of the Molten Salt Reactor concept. These include superior safety (no China syndrome with an always-molten core), elimination of the problem of nuclear waste, the ability to breed new nuclear fuel without the risk of nuclear proliferation, and the ability to use plentiful, virtually renewable thorium far more efficiently than uranium is now used as a nuclear fuel. And, of course, carbon-carbon liquid fluoride salt reactors can be mass produced.

Not only do I think that this is what should happen, I think that this is what will happen.  

Wednesday, February 27, 2008

Hillary wants to follow the German Energy Model


The Democratic candidates proved again in Ohio last night that they don't understand the big picture. Hillary is all for ill-thought-out solutions including solar panels on roofs, wind turbines, geothermal (in Ohio), and biofuels. Hillary's solutions are a bunch of crap! CRAP! Hillary wants to follow the model of Germany. How idiotic! Does Hillary have the slightest idea what is going on in Germany? So Hillary wants to spend billions of dollars and create millions of jobs for efforts that will not curb CO2 emissions and will not solve what will become, in the next few years, increasingly urgent national energy problems.

Obama was only marginally better. Although he talked about education, science and technology, the money words were windmills, alternative fuels, and energy efficiency. Young people are going to find jobs, but will they be able to heat their homes in the winter?

"I helped to pass legislation to begin a training program for green collar jobs. I want to see people throughout Ohio being trained to do the work that will put solar panels on roofs, install wind turbines, do geothermal, take advantage of biofuels, and I know that if we had put $5 billion into the stimulus package to really invest in the training and the tax incentives that would have created those jobs as the Democrats wanted, as I originally proposed, we would be on the way to creating those.

You know, take a country like Germany. They made a big bet on solar power. They have a smaller economy and population than ours. They've created several hundred thousand new jobs, and these are jobs that can't be outsourced. These are jobs that have to be done in Youngstown, in Dayton, in Cincinnati. These are jobs that we can create here with the right combination of tax incentives, training, and a commitment to following through. So I do think that at least 5 million jobs are fully capable of being produced within the next 10 years."

-- Hillary Clinton

"We're going to have to invest in infrastructure to make sure that we're competitive. And I've got a plan to do that. We're going to have to invest in science and technology. We've got to vastly improve our education system. We have to look at energy and the potential for creating green jobs that can not just save on our energy costs but, more importantly, can create jobs in building windmills that will produce manufacturing jobs here in Ohio, can put rural communities back on their feet by working on alternative fuels, making buildings more energy efficient. We can hire young people who are out of work and put them to work in the trade."

-- Barack Obama
We are still in the stage where politicians offer words, not solutions. Words are safe in uncertain times. Words don't lose candidates votes as long as they stay away from the wrong words. Neither candidate mentioned the word nuclear. Surely nuclear is part of any solution to the problem of climate change and keeping energy flowing to producers and consumers. I guess the word was regarded as too dangerous to mention by the candidates' spin advisors.

Disinformation, Part I: Seitz

Al Gore's 2006 film, "An Inconvenient Truth," resonated with me. I had known about global warming for most of my adult life, but since my departure from ORNL in August of 1971 to attend graduate school in Memphis, I had not focused a great deal of attention on the issue. The Gore movie did not so much inform me as wake me up. My health had deteriorated to the point where I felt I could no longer work, so I retired. With nothing to occupy my time but blogging, I began to write about AGW. As I surfed the Internet looking for material, I began to encounter global warming skeptics. For a year I focused on global warming skepticism in my blog, bartoncii. The debate on global warming was an artificial one. Industries like EXXON paid people with scientific or quasi-scientific credentials to create a public front of scientific opposition to the idea of Anthropogenic Global Warming. My research eventually led to a series of posts on the business of global warming skepticism. And a business it was. I was able to trace the flow of money from American businesses and right-wing foundations to right-wing think tanks, skeptic front organizations, and people whose credentials were for sale to the highest bidder. Businesses like EXXON had consciously patterned their anti-global-warming efforts after the public relations disinformation campaign of the tobacco companies, which had spent millions of dollars creating disinformation intended to sow public doubt and confusion about the scientific evidence on the health problems caused by smoking.

Documents existed showing that fossil-fuel-based businesses like EXXON had made the conscious decision to engage in the same sort of disinformation campaign the tobacco companies had used. Furthermore, in many cases the same people who had participated in the tobacco industry's Project Whitecoat scientific-witness disinformation campaign were willing to play the same game for EXXON and other energy concerns, for money.

One such figure was Frederick (Fred) Seitz, a former president of the National Academy of Sciences and of Rockefeller University. After retiring from what had seemed a distinguished scientific career, Seitz was hired as a consultant by RJR Tobacco. Seitz was responsible for distributing $45,000,000 in grant money to scientists, ostensibly intended to find scientific evidence concerning the relationship of smoking to health problems. The actual purpose of the program was to create a scientific disinformation campaign. At least $585,000 of RJR money ended up in Seitz's personal bank account, even though Seitz did not generate any peer-reviewed papers. For 10 years Seitz served as the tobacco industry's poster boy. "They didn't want us looking at the health effects of cigarette smoking," Seitz recently acknowledged in "Vanity Fair." Finally, in 1989, Bill Hobbs of RJR decided that Seitz was no longer of use to RJR. In an internal memo, Alexander Holtzman, another tobacco industry executive, explained the decision:

"I spoke to Bill Hobbs about arranging an appointment for you with Dr. Fred Seitz, former head of Rockefeller University and the principal scientific advisor to the R. J. Reynolds medical research program. Bill told me that Dr. Seitz is quite elderly and not sufficiently rational to offer advice. Bill said that he would strongly recommend your speaking to Dr. Alfred G. Knudson Jr. of the CTR Scientific Advisory Board."

Seitz continued, however, to prove useful as a tobacco industry front man. Only a month after RJR's Bill Hobbs had declared Seitz not rational enough to offer advice, Seitz's name was attached to a report from RJR denying problems with second-hand smoke.

Seitz acknowledged his willingness to take money from the tobacco interests, "as long as it was green." "I'm not quite clear about this moralistic issue," he added. Seitz moved on to the George C. Marshall Institute, a front for right-wing foundations and a recipient of Exxon largess. There, in 1994, Seitz allegedly authored a report entitled "Global warming and ozone hole controversies: A challenge to scientific judgment." That study, ostensibly written by Seitz, concluded, "there is no good scientific evidence that passive inhalation is truly dangerous under normal circumstances."

It was questionable whether Seitz, whose rationality RJR's Bill Hobbs had questioned five years before, was rational enough to assess the judgment of other scientists. Despite his advancing years, Seitz remained active in the energy industry's global warming disinformation campaign. The money was, after all, green. In 1995, Seitz's name appeared atop an op-ed in the Wall Street Journal, whose right-wing editorial staff was loyal to the Republican Party line that global warming was not really happening. Seitz had lent his name, irony of ironies, to an attack on the integrity of a 1995 IPCC report.

Seitz's greed, his prostitution of his scientific credentials as long as the money was green, and his judgment, which former employer RJR found questionable, were ignored by the press, which treated Seitz as a legitimate figure in a supposed scientific debate. For example, in 1998 Seitz was reported by the New York Times as believing that CO2 emissions did not cause climate problems; instead they were "a wonderful and unexpected gift from the Industrial Revolution."

In 2006, Seitz was still regarded as a credible source by the media. That year he was interviewed by the PBS program Frontline. The Frontline interviewer did what journalists had not done in the past: gently but critically tore Seitz's statements apart, exposing Seitz's incompetence and mendacity. Seitz's name no longer appears in the news, but others of his ilk, Fred Singer and Tim Ball, still give interviews and appear on television. Journalists are just as willing to give them the same free ride for their fossil-fuel-industry-funded views that Seitz received in the past.

Alvin Weinberg warned Congress of the dangers of CO2-related climate change in 1975, but the EXXON disinformation campaign succeeded in delaying public recognition of the problem until 2006. The era of denial may be over, but confusion reigns. Nowhere is that confusion more evident than in the statements of political leaders.

Tuesday, February 26, 2008

The Second Nuclear Age Begins

With the initiation of construction today of the first-ever Westinghouse AP-1000 reactor in Sanmen City, Zhejiang Province, China, the curtain rose on the second nuclear age. Three more AP-1000s are on order in China. TVA has applied for an NRC license to build and operate two AP-1000s, and several other prospective operators plan to apply for licenses for over a dozen more AP-1000s this year. In addition to the four AP-1000s on order, China has indicated that it intends to make the AP-1000 the center of its plan to develop nuclear-powered electrical sources.

Photo taken on Feb. 26, 2008 shows the construction of China's first third-generation nuclear plant and also the world's first AP1000 nuclear plant, the AP1000 Sanmen nuclear plant, in Sanmen, east China's Zhejiang Province. The Sanmen Nuclear Power Project kicked off its excavation work on Tuesday. (Xinhua Photo)

Photo taken on Feb. 26, 2008 shows the excavation kick-off ceremony of the AP1000 Sanmen Nuclear Power Project, in Sanmen, east China's Zhejiang Province. The AP1000 Sanmen nuclear plant, China's first third-generation nuclear plant, would also become the world's first AP1000 nuclear plant. (Xinhua Photo)

Construction of the AP-1000 is expected to take as little as 36 months. Because the NRC licensing of TVA's initial reactors is expected to take 42 months, construction of America's first AP-1000 is expected to commence in 2012. The first TVA AP-1000, located at TVA’s Bellefonte, Alabama nuclear site, is expected to be completed about 2015, followed by a second reactor in 2017. In addition, TVA expects to complete construction of another reactor at the Watts Bar, Tennessee nuclear plant in 2012.

The AP-1000 is rated at 1100 to 1250 MWe. AP-1000 generated electricity is expected to cost below 3.5 cents per kWh, and AP-1000s are expected to operate for 60 years. TVA is the lead license applicant for the AP-1000. The numerous other applications for the construction of AP-1000s in the United States will be based on the approval of the TVA application.

TVA is the only national utility to have continued building reactors during the last 20 years. The first Watts Bar reactor, begun in 1972, was completed in 1996. Following the completion of the Watts Bar reactor, TVA undertook to rebuild the Browns Ferry Unit 1 reactor, damaged by a fire in 1975. That reactor was rebuilt beginning in 2002, and emerged as a virtually new unit in 2007. Both reactors are currently contributing power to the TVA system. In addition, TVA began a program in late 2007 to complete a second Watts Bar reactor. Construction of that reactor had begun in the 1970's and was suspended early in 1985; it is now expected to be completed in 2012. Because TVA has recent experience with reactor construction, it was considered a good candidate to lead off the construction of Westinghouse's recently designed AP-1000.

In 2012, when construction of TVA's first AP-1000 begins, Westinghouse anticipates that it will have brought its first AP-1000 project in China to a successful completion, while TVA will have completed the construction of two reactors in the five previous years. By 2017 TVA anticipates the completion of its fourth reactor in a decade. By that time, no doubt, TVA will have more nuclear projects on the drawing board.

Earl Killian and the Texas Grid

Earl Killian, a chronic renewables advocate and anti-nuke fanatic, complained yesterday in a Climate Progress comment about the autonomy of the Texas grid. For those of you who don't know, the Texas grid, managed by the Electric Reliability Council of Texas (ERCOT), is not hooked up to the rest of the country. ERCOT was set up during World War II to ensure that Gulf Coast refineries and other industries had enough electrical power. There was enough electrical generating capacity in Texas to ensure that this was the case. ERCOT has been well managed, and has worked well for Texas, which has significant electrical needs, especially in the summer, that are not well matched to other parts of the country.

Killian, however, thinks that it is an "issue" that the Texas grid is not connected to the rest of the country, because, "[t]hat makes it difficult for it to export its wind energy, or to import other renewable energy from other states."

I responded to Killian:
Earl, you should ask T. Boone Pickens to pay you for your support. The Texas grid is stable, and Texas has enough peak generating capacity to take care of its needs. We have not had system-wide blackouts or rolling blackouts, so interconnection with the rest of the country would not make our system more reliable. Texas utilities are beginning to replace fossil fuel technology with reliable nuclear plants.

Texas wind generators tend to produce power when it is not needed, and not produce it when it is needed. Wind speed drops all over Texas during the summer, and during summer days it drops even more. The Texas wind capacity factor during the summer is under 17%, but at midday during July and August it is significantly lower. Midday is when people start turning on their air conditioners. In Texas the unreliability of wind generated electricity is used to argue the case of those who want to keep fossil fuel plants running for a long time to come. The wind generation people are in cahoots with the coal interests.
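The arithmetic behind that capacity factor figure is simple. Here is a back-of-envelope sketch of what a 17% summer capacity factor means for average output; the 1000 MW nameplate figure is a hypothetical round number for illustration, not any particular wind farm:

```python
# Back-of-envelope: average summer output implied by a 17% capacity factor.
# The 1000 MW nameplate figure is hypothetical, chosen for round numbers.
nameplate_mw = 1000
summer_capacity_factor = 0.17  # Texas summer figure cited above

average_summer_output_mw = nameplate_mw * summer_capacity_factor
print(average_summer_output_mw)  # roughly 170 MW from 1000 MW installed
```

In other words, for every 1000 MW of wind turbines on the Texas grid, ratepayers can count on only about 170 MW of average summer output, and less than that at midday when demand peaks.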

Interconnection would thus not benefit Texas ratepayers. It would be in the interest of wealthy investors like oil man T. Boone Pickens, who would love to be able to export his unneeded wind generated electricity out of state on connections paid for by Texas ratepayers. Let Pickens pay for the wires he uses to export electricity, not me.

Killian's optimism concerning the future of renewables may not be entirely realistic. Renewables are at present a very limited and expensive stopgap solution to the control of CO2 emissions. Solar electrical generation provides power on average five and a half hours a day. Wind power, even in Texas, offers limited and irregular power production, and is least productive during typical periods of peak electrical demand. Thus renewables can never fully replace fossil fuel generating capacity.

In order to stop producing CO2 during the generation of electricity, we must replace fossil fueled plants with nuclear generators.

In addition, the price of basic construction materials, including steel, concrete and copper, is rising rapidly. Per MW of generating capacity, renewables use these materials far more intensively than nuclear facilities do. Thus the future cost of building renewable generation facilities is likely to rise more quickly than the future cost of building nuclear facilities. This problem is compounded by the lower capacity factor of renewables, which necessitates the installation of up to 5 times the nameplate capacity of a nuclear plant in order to equal its output.
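That overbuild factor can be sketched directly; the 90% and 20% capacity factors below are my illustrative assumptions, consistent with typical nuclear performance and the renewable figures discussed in this post:

```python
# Overbuild implied by differing capacity factors (illustrative assumptions).
nuclear_cf = 0.90     # assumed capacity factor of a modern nuclear plant
renewable_cf = 0.20   # assumed capacity factor of a wind or solar facility

# Nameplate capacity needed to match the average output of 1 GW of nuclear:
overbuild_ratio = nuclear_cf / renewable_cf
print(overbuild_ratio)  # 4.5, i.e. roughly the "up to 5 times" cited above
```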

bigTom, I am far less optimistic about renewables than I was two years ago. Even without energy storage, building renewable generating facilities is going to be at least as expensive as building nuclear plants, while the facilities will produce between 25% and 45% of the electricity that the nuclear plant will. The most significant cost of solar power is not the price of PV modules; it is the cost of installation.

A PV facility recently completed in Spain officially cost 130 million euros, but unofficial estimates place the cost closer to a quarter billion euros with cost overruns. 400 workers worked for 11 months to build the facility, which is officially rated at 20 MW. In fact, on the basis of the power output of other PV facilities in southern Spain, the facility can be expected to produce an average of only about 4 MW. Hence the capital cost of PV facilities can be expected to run as high as $18.5 billion per nameplate GW, with capacity factors running around 20%.
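The capital-cost figure can be checked with simple arithmetic. The $1.48 per euro exchange rate below is my assumption for early 2008; the other numbers come from the estimates above:

```python
# Rough check of the capital cost of the Spanish PV plant described above.
cost_euros = 250e6     # unofficial estimate, including cost overruns
nameplate_gw = 0.020   # 20 MW nameplate rating
usd_per_euro = 1.48    # assumed early-2008 exchange rate

cost_per_nameplate_gw = cost_euros * usd_per_euro / nameplate_gw
print(cost_per_nameplate_gw / 1e9)  # about 18.5 (billion $ per nameplate GW)

# At a ~20% capacity factor, the 20 MW plant averages only ~4 MW of output.
print(20 * 0.20)
```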

TVA has recently stated that it expects to pay no more than $3 billion each for its first two new 1 GW+ AP-1000 reactors. Even taking the most pessimistic estimates of $8 billion per GW for reactors, the cost of nuclear power seems positively bargain basement compared to the cost of PV installations. If we look at cost by capacity factor, rather than nameplate power, wind is also far more expensive than nuclear power.
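Comparing cost per unit of average output, rather than nameplate power, makes the gap plain. The capacity factors below are again my assumptions, and the PV figure is the rough estimate derived above from the Spanish plant:

```python
# Cost per unit of capacity-factor-adjusted (average) output.
# Capacity factors are illustrative assumptions, not measured figures.
nuclear_cost_per_gw = 3e9    # TVA's stated expectation per AP-1000
pv_cost_per_gw = 18.5e9      # rough estimate derived above
nuclear_cf, pv_cf = 0.90, 0.20

print(nuclear_cost_per_gw / nuclear_cf / 1e9)  # ~3.3 billion $ per average GW
print(pv_cost_per_gw / pv_cf / 1e9)            # ~92.5 billion $ per average GW
```

On this adjusted basis the PV installation costs more than twenty times as much per unit of delivered electricity, before any allowance for storage.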

The world's largest PV installation will typically generate as much electricity in a year as an AP-1000 can generate in a day.
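That claim can be sanity-checked from the figures already given, assuming an 1100 MW AP-1000 running flat out for one day against the Spanish plant's ~4 MW average output for a year:

```python
# Sanity check: a year of PV output vs. a day of AP-1000 output.
pv_average_mw = 20 * 0.20             # ~4 MW average from 20 MW nameplate
pv_yearly_mwh = pv_average_mw * 8760  # hours in a year

ap1000_daily_mwh = 1100 * 24          # 1100 MW for 24 hours

print(pv_yearly_mwh)     # ~35,000 MWh per year
print(ap1000_daily_mwh)  # 26,400 MWh per day
```

The two figures are of the same order of magnitude, so the year-versus-day comparison holds up as a rough rule of thumb.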

The advocates of renewable energy need to take a long hard look at cost and performance in the real world.

Sunday, February 24, 2008

Some concluding remarks about WASH-1222

I have argued that WASH-1222 was a bureaucratic hatchet job. It was designed as part of Milton Shaw's concerted program to kill off the Molten Salt Reactor. I would also like to note that WASH-1222 should be read in light of a closely related event, the firing of Alvin Weinberg as Director of ORNL. I believe that WASH-1222 and the firing of Weinberg were part of a single bureaucratic move by Shaw to gain control of the American nuclear establishment, and to control the future direction of nuclear technology and of the nuclear industry in the United States. Indeed, I believe that despite his 1973 firing by Dixie Lee Ray, Shaw largely achieved his objectives. The United States nuclear industry still largely carries Milton Shaw's imprint. Unfortunately, the legacy of problems left by Shaw's fundamentally flawed vision is still with us. Mistrust of nuclear safety and of the governmental regulation of the nuclear industry is still widespread in American society, and represents a significant handicap in the fight against global warming.

The 1970 Bureau of Mines Mineral Yearbook reported:

"[The] AEC also requested proposals for a design study of a 1,000-megawatt molten-salt breeder reactor (MSBR). There was also a significant increase in private efforts involving this concept. The Molten Salt Breeder Reactor Associates, an association of five electric utility companies and a consulting engineering firm, completed Phase I of their study of the MSBR. In addition, 15 utility companies and six major industrial companies formed the Molten Salt Group, which will jointly study MSBR technology, including the feasibility of thorium as a fuel.12"

The footnote cited "Wall Street Journal. V. 176, No. 29, Aug. 10, 1970, p. 17."

Contrary to WASH-1222, there was by 1970 considerable industrial interest in MSR technology.

Among the reports on the MSR by private industrial groups and their consultants were:

Molten-Salt Breeder Reactor Associates Staff, Final Report, Phase I Study—Project for
Investigation of Molten-Salt Breeder Reactor, Black & Veatch Consulting Engineers,
Kansas City, Mo. (1970).

Evaluation of a 1000-MWe Molten-Salt Breeder Reactor, Technical Report of the
Molten-Salt Group, Part II, Ebasco Services, Inc., October 1971.

Molten-Salt Reactor Technology, Technical Report of the Molten-Salt Group, Part I,
Ebasco Services, Inc., December 1971.

1000-MW(e) Molten-Salt Breeder Reactor Conceptual Design Study, Final Report—
Task I, Ebasco Services, Inc., New York, February 1972.

Shaw had managed to abort an important industrial development and his destructive action had profoundly negative implications for the energy future of the United States.

Shaw's vision was also flawed by his failure to recognize that problems like "nuclear waste" and "nuclear proliferation" could and should be solved by a radical change in reactor design. The MSR possessed the potential to resolve these issues. Failure to move forward on MSR technology meant that the best chance to address major public concerns about nuclear technology was ignored.

In 2008 MSR technology remains potentially the best single tool for responding to the challenges posed by global warming and peak fossil fuels. Yet the molten salt reactor is today little known and almost entirely ignored by decision makers. Considering its potential, this should not be its fate.

WASH-1222 with Comments: Part 1

Kirk Sorensen, in response to my posting on Milton Shaw, suggested that I review WASH-1222, the document by which Shaw hoped to bury the MSR. I have begun to do that. The purpose of this review will be to assess the extent to which the decision to shut down the development of the MSR in the late 1960's and early 1970's was an irrational political decision. I am going to post my results one section at a time. - CB

AN EVALUATION OF THE MOLTEN SALT BREEDER REACTOR

I. INTRODUCTION

The Division of Reactor Development and Technology, USAEC, was assigned the responsibility of assessing the status of the technology of the Molten Salt Breeder Reactor (MSBR) as part of the Federal Council of Science and Technology Research and Development Goals Study. In conducting this review, the attractive features and problem areas associated with the concept have been examined; but more importantly, the assessment has been directed to provide a view of the technology and engineering development efforts and the associated government and industrial commitments which would be required to develop the MSBR into a safe, reliable and economic power source for central station application.

The MSBR concept, currently under study at the Oak Ridge National Laboratory (ORNL), is based on use of a circulating fluid fuel reactor coupled with on-line continuous fuel processing. As presently envisioned, it would operate as a thermal spectrum reactor system utilizing a thorium-uranium fuel cycle. Thus, the concept would offer the potential for broadened utilization of the nation's natural resources through operation of a breeder system employing another fertile material (thorium instead of uranium).

The long-term objective of any new reactor concept and the incentive for the government to support its development are to help provide a self-sustaining, competitive industrial capability for producing economical power in a reliable and safe manner. A basic part of achievement of this objective is to gain public acceptance of a new form of power production. Success in such an endeavor is required to permit the utilities and others to consider the concept as a viable option for generating electrical power in the future and to consider making the heavy, long-term commitments of resources in funds, facilities and personnel needed to provide the transition from the early experimental facilities and demonstration plants to full-scale commercial reactor power plant systems.

Consistent with the policy established for all power reactor development programs, the MSBR would require the successful accomplishment of three basic research and development phases:
  • An initial research and development phase in which the basic technical aspects of the MSBR concept are confirmed, involving exploratory development, laboratory experiment, and conceptual engineering. 

  • A second phase in which the engineering and manufacturing capabilities are developed. This includes the conduct of in-depth engineering and proof testing of first-of-a-kind components, equipment and systems. These would then be incorporated into experimental installations and supporting test facilities to assure adequate understanding of design and performance characteristics, as well as to gain overall experience associated with major operational, economic and environmental parameters. As these research and development efforts progress, the technological uncertainties would need to be resolved and decision points reached that would permit development to proceed with necessary confidence. When the technology is sufficiently developed and confidence in the system was attained, the next stage would be the construction of large demonstration plants. 

  • A third phase in which the utilities make large-scale commitments to electric generating plants by developing the capability to manage the design, construction, test and operation of these power plants in a safe, reliable, economic, and environmentally acceptable manner.
Significant experience with the Light Water Reactor (LWR), the High-Temperature Gas-cooled Reactor (HTGR) and the Liquid Metal-cooled Fast Breeder Reactor (LMFBR) has been gained over the past two decades pertaining to the efforts that are required to develop and advance nuclear reactors to the point of public and commercial acceptance. This experience has clearly demonstrated that the phases of development and demonstration should be similar regardless of the energy concept being explored; that the logical progression through each of the phases is essential; and that completing the work through the three phases is an extremely difficult, time consuming and costly undertaking, requiring the highest level of technical management, professional competence and organizational skills. This has again been demonstrated by the recent experience in the expanding LWR design, construction and licensing activities which emphasize clearly the need for even stronger technology and engineering efforts than were initially provided, although these were satisfactory in many cases for the first experiments and demonstration plants. The LMFBR program, which is relatively well advanced in its development, tracks closely this LWR experience and has further reinforced this need as it applies to the technology, development and engineering application areas.

[This paragraph reflects Milton Shaw's views, but Shaw clearly overestimated the relative maturity of the reactor technologies referred to in the paragraph. Developmental problems with light water reactor technology were to cost reactor owners tens of billions of dollars during the next two decades. Reactor scientists had told Shaw about the problems, but Shaw discounted the warnings. Again, Shaw's belief that the LMFBR had reached an advanced stage of development was far from reality in 1972, and remains questionable in 2008. Shaw's demonstrably mistaken beliefs thus appear to lie at the heart of the WASH-1222 assessment of the potential of MSR technology. - CB]

It should also be kept in mind that the large backlog of commitments and the shortage of qualified engineering and technical management personnel and proof-test facilities in the government, in industry and in the utilities make it even more necessary that all the reactor systems be thoroughly designed and tested before additional significant commitment to and construction of, commercial power plants are initiated.

[In fact this was not the case when Shaw joined the AEC in 1964. Shaw immediately proceeded to destroy the research and development units that were needed to carry such a project out. Hence "the shortage of qualified engineering and technical management personnel and proof-test facilities" was a problem which Shaw had created. Thus, as we shall see, not only does WASH-1222 commit egregious errors in logic, as well as misstatements of fact, it covers up the fact that Shaw himself had destroyed the resources that were required to complete the development of the MSR. Statements like this must be counted as duplicitous. - CB]

With regard to the MSBR, preliminary reactor designs were evaluated in WASH-1097 (“The Use of Thorium in Nuclear Power Reactors”) based upon the information supplied by ORNL. Two reactor design concepts were considered—a two-fluid reactor in which the fissile and fertile salts were separated by graphite and a single fluid concept in which the fissile and fertile salts were completely mixed. This evaluation identified problem areas requiring resolution through conduct of an intensive research and development program.

[The two fluid MSR was an auto-breeder: it produced at least enough U233 to keep working until it ran out of thorium to breed. As long as a reactor produces as much fuel as it consumes, it is a successful breeder. Thus WASH-1222 should have considered the advantages of two fluid MSRs. - CB]

Since the publication of WASH-1097, all efforts related to the two-fluid system have been discontinued because of mechanical design problems and the development of processes which would, if developed into engineering systems, permit the on-line reprocessing of fuel from single fluid reactors. At present, the MSBR concept is essentially in the initial research and development phase, with emphasis on the development of basic MSBR technology. The technology program is centered at ORNL where essentially all research and development on molten salt reactors has been performed to date. The program is currently funded at a level of $5 million per year. Expenditures to date on molten salt reactor technology both for military and civilian power applications have amounted to approximately $150 million of which approximately $70 million has been in support of central station power plants. These efforts date back to the 1940's.

[ORNL chose the one fluid approach in order to meet Shaw's demand for a higher theoretical breeding ratio than the two fluid approach could achieve. The sums of $150 million and $70 million seem quite paltry by the standards of 2008. Even if dollars from the 1950's and 1960's are translated into 2008 terms, the amount spent seems trivial in comparison to, say, the cost of military weapons systems. In retrospect we can say that ORNL provided a great deal of information about a promising technology very inexpensively. - CB]

In considering the MSBR for central station power plant application, it is noted that this concept has several unique and desirable features; at the same time, it is characterized by both complex technological and practical engineering problems which are specific to fluid-fueled reactors and for which solutions have not been developed. Thus, this concept introduced major concerns that are different in kind and magnitude from those commonly associated with solid fuel breeder reactors. The development of satisfactory experimental units and further consideration of this concept for use as a commercial power plant will require resolution of these as well as other problems which are common to all reactor concepts.

[This paragraph shifts from obvious facts to unwarranted conclusions. The facts involve "complex technological" and "practical engineering problems" for which "solutions have not been developed." Had solutions already been developed, there would be no purpose for the proposed development program. The next statement does not follow from the stated issues: "Thus, this concept introduced major concerns that are different in kind and magnitude from those commonly associated with solid fuel breeder reactors." Why are concerns about the developmental problems of the MSR different in kind and magnitude? Given what we know today, the AEC had not only underestimated the problems associated with the development of the LMFBR, it had seriously underestimated the developmental problems associated with the LWR, a technology which Shaw and the AEC in 1972 incorrectly believed to be mature. - CB]

As part of the AEC's Systems Analysis Task Force (AEC report WASH-1098) and the "Cost-Benefit Analysis of the U.S. Breeder Reactor Program" (AEC reports WASH-1126 and WASH-1184), studies were conducted on the cost and benefit of developing another breeder system, "parallel" to the LMFBR. The consistent conclusion reached in these studies is that sufficient information is available to indicate that the projected benefits from the LMFBR program can support a parallel breeder program. However, these results are highly sensitive to the assumptions on plant capital costs with the recognition, even among concepts in which ample experience exists, that capital costs and especially small estimated differences in costs are highly speculative for plants to be built 15 or 20 years from now. Therefore, it is questionable whether analyses based upon such costs should constitute a major basis for making decisions relative to the desirability of a parallel breeder effort. Experience in reactor development programs in this country and abroad has demonstrated that different organizations, in evaluating the projected costs of introducing a reactor development program and carrying it forward to the point of large-scale commercial utilization, would arrive at different estimates of the methods, scope of development and engineering efforts, and the costs and time required to bring that program to a stage of successful large scale application and public acceptance.

[The statement of risk considerations in the preceding paragraph is sound. Future cost estimates for projects in developmental stages represent risky conjectures. This would seem to be an argument for, rather than against, parallel programs. Given the cost uncertainties attendant on taking a single-line approach to a technological development, it is always wise to have an alternative solution at hand in case costs start to run away. Developmental costs for the LMFBR "Clinch River Breeder Reactor" project did run away in the 1970's and early 1980's. Since all of the contentions about cost risk apply equally to both the LMFBR and the MSR, the argument in the last paragraph is incoherent; that is, it supports contradictory conclusions. We are being set up by this paragraph for an attempt to block further development of the MSR on the basis of cost. WASH-1222 had already made the judgment that development of the LMFBR would proceed. It does not appear to have assessed the possibility that the AEC's 1970's LMFBR project could fail, as in fact it did. The possibility of project failure is a risk, and any comparative cost/benefit study should assess the relative risks of failure. - CB]

Based upon the AEC's experience with other complex reactor development programs, it is estimated that a total government investment up to about 2 billion dollars in undiscounted direct costs could be required to bring the molten salt breeder or any parallel breeder to fruition as a viable, commercial power reactor. A magnitude of funding up to this level could be needed to establish the necessary technology and engineering bases, obtain the required industrial capability, and advance through a series of test facilities, reactor experiments, and demonstration plants to a commercial MSBR, safe and suitable to serve as a major energy option for central station power generation in the utility environment.

[Looking at this statement today with the benefit of hindsight, I would have to say that a development cost of $2 billion 1972 dollars was trivial. The Apollo Moon program cost $25.4 billion 1969 dollars; arguably the nation would have been far better off if 10% of that money had been diverted to MSR development. In 1984 the GAO reviewed the Clinch River Breeder Reactor project, which was the AEC's LMFBR project. In 1971 the AEC had estimated that the project would cost $400 million, of which $257 million was to have come from private sources. By 1972, when WASH-1222 was written, the cost estimate had risen to $700 million. By 1981, after $1 billion had been spent, the estimated cost of completion was $3 to $3.2 billion more, with a further estimated project cost of $1 billion for a plutonium processing facility. By 1984 the project cost had risen to $8 billion. And this was only a proof-of-concept reactor. Other LMFBR proof-of-concept reactors have had a very mixed history. Even today, a good case can be made that LMFBR technology has not been proven safe, reliable, or cost effective. - CB]

WASH-1222 with Comments: Part 2

Introduction: I have, in three previous posts, discussed the effects that Milton Shaw's beliefs, managerial style, and policies had on nuclear research at some, and probably all, National Laboratories. Shaw's ruthless methods of imposing his views even led to the firing of Alvin Weinberg over disagreements about nuclear safety. In my last post, on the introduction to WASH-1222, I suggested not only that Shaw had mistaken beliefs about the maturity of reactor technology, but that these beliefs cost reactor owners tens of billions of dollars. I may elaborate on this at a later time. I also argued that Shaw's mistakes about technology extended to breeder reactors: he held the mistaken belief that LMFBR technology had reached a level of maturity similar to that of LWRs.

In my last post, I began a review of WASH-1222, a document prepared under Shaw's direction. In the introduction to the document I found evidence that Shaw's mistaken beliefs, coupled with the consequences of his own bureaucratic decisions for which he failed to acknowledge responsibility, and shocking errors in logic, led Shaw to discount a promising new reactor technology, the Molten Salt Reactor. At the same time, Shaw was implicitly pushing other technologies, even though they shared many of the problems of Molten Salt Reactor technology, while other problems with the technologies Shaw favored were solved by the molten salt approach. Shaw claimed that unanticipated costs might be incurred during the course of development. My intention at the moment is to post the next three sections of WASH-1222. There are a few points that might require further comment, so I may add them tomorrow.

AN EVALUATION OF THE MOLTEN SALT BREEDER REACTOR

II. SUMMARY

The MSBR concept is a thermal spectrum, fluid-fuel reactor which operates on the thorium-uranium fuel cycle and when coupled with on-line fuel processing, has the potential for breeding at a meaningful level. The marked differences in the concept as compared to solid-fueled reactors make the MSBR a distinctive alternate. Although the concept has attractive features, there are a number of difficult development problems that must be resolved; many of these are unique to the MSBR while others are pertinent to any complex reactor system.

The technical effort accomplished since the publication of WASH-1097 and WASH-1098 has identified and further defined the problem areas; however, this work has not advanced the program beyond the initial phase of research and development. Although progress has been made in several areas (e.g., reprocessing and improved graphite), new problems not addressed in WASH-1097 have arisen which could affect the practicality of designing and operating a MSBR. Examples of major uncertainties relate to materials of construction, methods for control of tritium, and the design of components and systems along with their special handling, inspection and maintenance equipment. Considerable research and development efforts are required in order to obtain the data necessary to resolve the uncertainties.

Assuming that practical solutions to these problems can be found, a further assessment would have to be made as to the advisability of proceeding to the next stage of the development program. In advancing to the next phase, it would be necessary to develop a greatly expanded industrial and utility participation and commitment along with a substantial increase in government support. Such broadened involvement would require an evaluation of the MSBR in terms of already existing commitments to other nuclear power and high priority energy development efforts.

III. RESOURCE UTILIZATION

It has long been recognized that the importance of nuclear fuels for power production depends initially on the utilization of the naturally occurring fissile 235U; but it is the more abundant fertile materials, 238U and 232Th, which will be the major source of nuclear power generated in the future. The basic physics characteristics of fissile plutonium produced from 238U offer the potential for high breeding gains in fast reactors, and the potential to expand greatly the utilization of uranium resources by making feasible the utilization of additional vast quantities of otherwise uneconomic low grade ore. In a similar manner, the basic physics characteristics of the thorium cycle will permit full utilization of the nation's thorium resources while at the same time offering the potential for breeding in thermal reactors.

The estimated thorium reserves are sufficient to supply the world's electric energy needs for many hundreds of years if the thorium is used in a high-gain breeder reactor. It is projected that if this quantity of thorium were used in a breeder reactor, approximately 1,000,000 quad (1 quad = 1 quadrillion Btu) would be realized from this fertile material. It is estimated that the uranium reserves would also supply 1,000,000 quads of energy if the uranium were used in LMFBRs. In contrast, only 20,000 quads would be available if thorium were used as the fertile material in an advanced converter reactor because the reactor would be dependent upon 235U availability for fissile inventory make-up. (Note: a conservative estimate is that between 20,000 and 30,000 quads will be used for electric power generation between now and the year 2100.)
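[The scale of these figures is worth a quick arithmetic check. Taking the report's own estimate of 20,000 to 30,000 quads of cumulative electric generation through 2100, the implied average annual demand and the years of supply from each option can be sketched as follows; the midpoint demand figure and the 1972 starting date are my assumptions. - CB]

```python
# Rough scale check on the WASH-1222 resource figures quoted above.
breeder_quads = 1_000_000    # thorium used in a high-gain breeder
converter_quads = 20_000     # thorium used in an advanced converter
cumulative_demand = 25_000   # midpoint of the 20,000-30,000 quad estimate
years = 2100 - 1972          # from the report's date to 2100

avg_quads_per_year = cumulative_demand / years
print(breeder_quads / avg_quads_per_year)    # ~5,100 years of supply
print(converter_quads / avg_quads_per_year)  # ~100 years of supply
```

[On the report's own numbers, breeding with thorium extends the resource by a factor of fifty over a converter, which is precisely why abandoning the MSBR mattered. - CB]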

IV. HISTORICAL DEVELOPMENT OF MOLTEN SALT REACTORS

The investigation of molten salt reactors began in the late 1940's as part of the U.S. Aircraft Nuclear Propulsion (ANP) Program. Subsequently, the Aircraft Reactor Experiment (ARE) was built at Oak Ridge and in 1954 it was operated successfully for nine days at power levels up to 2.5 MWt and fuel outlet temperatures up to 1580ºF (1133 K). The ARE fuel was a mixture of NaF, ZrF4, and UF4. The moderator was beryllium oxide and the piping and vessel were constructed of Inconel.

In 1956, ORNL began to study molten salt reactors for application as central station converters and breeders. These studies concluded that graphite moderated, thermal spectrum reactors operating on a thorium-uranium cycle were most attractive for economic power production. Based on the technology at that time, it was thought that a two-fluid reactor in which the fertile and fissile salts were kept separate was required in order to have a breeder system. The single-fluid reactor, while not a breeder, appeared simpler in design and also seemed to have the potential for low power costs.

Over the next few years, ORNL continued to study both the two-fluid and single-fluid concepts, and in 1960 the design of the single-fluid 8 MWt Molten Salt Reactor Experiment (MSRE) was begun. The MSRE was completed in 1965 and operated successfully during the period 1965-1969. The MSRE experience is treated in more detail in a later section.

Concurrent with the construction of the MSRE, ORNL performed research and development on means for processing molten salt fuels. In 1967 new discoveries were made which suggested that a single-fluid reactor could be combined with continuous on-line fuel processing to become a breeder system. Because of the mechanical design problems of the two-fluid concept and the laboratory-scale development of processes which would permit on-line reprocessing, it was determined that a shift in emphasis to the single-fluid breeder concept should be made; this system is being studied at the present.

I have, in three previous posts, discussed the effects that Milton Shaw's beliefs, managerial style, and policies had on nuclear research at some, and probably all, of the National Laboratories. Shaw's ruthless methods of imposing his views even led to the firing of Alvin Weinberg over disagreements about nuclear safety. In my last post, on the introduction to WASH-1222, I suggested not only that Shaw held mistaken beliefs about the maturity of reactor technology, but that those beliefs cost reactor owners tens of billions of dollars. I may elaborate on this at a later time. I also argued that Shaw's mistakes about technology extended to breeder reactors: he held the mistaken belief that LMFBR technology had reached a level of maturity similar to that of LWRs.

In my last post, I began a review of WASH-1222, a document prepared under Shaw's direction. In the introduction to the document I found evidence that Shaw's mistaken beliefs, coupled with the consequences of his own bureaucratic decisions, for which he failed to acknowledge responsibility, and with shocking errors in logic, led Shaw to discount a promising new reactor technology, the Molten Salt Reactor. At the same time, Shaw was implicitly pushing other technologies, even though they shared many of the problems of Molten Salt Reactor technology, while other problems with the technologies Shaw favored were actually solved by the molten salt approach. Shaw claimed that unanticipated costs might be incurred during the course of development.

AN EVALUATION OF THE MOLTEN SALT BREEDER REACTOR

I. SUMMARY

The MSBR concept is a thermal spectrum, fluid-fuel reactor which operates on the thorium-uranium fuel cycle and when coupled with on-line fuel processing, has the potential for breeding at a meaningful level. The marked differences in the concept as compared to solid-fueled reactors make the MSBR a distinctive alternate. Although the concept has attractive features, there are a number of difficult development problems that must be resolved; many of these are unique to the MSBR while others are pertinent to any complex reactor system.

The technical effort accomplished since the publication of WASH-1097 and WASH-1098 has identified and further defined the problem areas; however, this work has not advanced the program beyond the initial phase of research and development. Although progress has been made in several areas (e.g., reprocessing and improved graphite), new problems not addressed in WASH-1097 have arisen which could affect the practicality of designing and operating a MSBR. Examples of major uncertainties relate to materials of construction, methods for control of tritium, and the design of components and systems along with their special handling, inspection and maintenance equipment. Considerable research and development efforts are required in order to obtain the data necessary to resolve the uncertainties.

Assuming that practical solutions to these problems can be found, a further assessment would have to be made as to the advisability of proceeding to the next stage of the development program. In advancing to the next phase, it would be necessary to develop a greatly expanded industrial and utility participation and commitment along with a substantial increase in government support. Such broadened involvement would require an evaluation of the MSBR in terms of already existing commitments to other nuclear power and high priority energy development efforts.

III. RESOURCE UTILIZATION

It has long been recognized that the importance of nuclear fuels for power production depends initially on the utilization of the naturally occurring fissile 235U; but it is the more abundant fertile materials, 238U and 232Th, which will be the major source of nuclear power generated in the future. The basic physics characteristics of fissile plutonium produced from 238U offer the potential for high breeding gains in fast reactors, and the potential to expand greatly the utilization of uranium resources by making feasible the utilization of additional vast quantities of otherwise uneconomic low grade ore. In a similar manner, the basic physics characteristics of the thorium cycle will permit full utilization of the nation's thorium resources while at the same time offering the potential for breeding in thermal reactors.

The estimated thorium reserves are sufficient to supply the world's electric energy needs for many hundreds of years if the thorium is used in a high-gain breeder reactor. It is projected that if this quantity of thorium were used in a breeder reactor, approximately 1,000,000 quad (1 quad = 1 quadrillion Btu) would be realized from this fertile material. It is estimated that the uranium reserves would also supply 1,000,000 quads of energy if the uranium were used in LMFBRs. In contrast, only 20,000 quads would be available if thorium were used as the fertile material in an advanced converter reactor because the reactor would be dependent upon 235U availability for fissile inventory make-up. (Note: a conservative estimate is that between 20,000 and 30,000 quads will be used for electric power generation between now and the year 2100.)
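The quad figures quoted above can be checked with simple arithmetic. The sketch below takes the report's numbers at face value and adds one illustrative assumption of my own, that the 20,000 to 30,000 quads of projected electric-power demand (midpoint 25,000) are spread evenly from the report's era to the year 2100; the resulting per-year rate and supply durations are my back-of-envelope extrapolations, not figures from WASH-1222.

```python
# Back-of-envelope check of the quad figures quoted in WASH-1222.
# Resource totals come from the text above; the even-spread demand
# assumption (1972-2100) is hypothetical, for illustration only.

thorium_breeder_quads = 1_000_000   # thorium in a high-gain breeder
uranium_lmfbr_quads   = 1_000_000   # uranium in LMFBRs
converter_quads       = 20_000      # thorium in an advanced converter

demand_quads_to_2100  = 25_000      # midpoint of the 20,000-30,000 estimate
years = 2100 - 1972                 # report's publication year to 2100

avg_quads_per_year = demand_quads_to_2100 / years

print(f"average demand:           ~{avg_quads_per_year:.0f} quads/yr")
print(f"breeder thorium lasts:    ~{thorium_breeder_quads / avg_quads_per_year:.0f} yr")
print(f"converter thorium lasts:  ~{converter_quads / avg_quads_per_year:.0f} yr")
```

On these assumptions the breeder-cycle thorium figure exceeds the converter figure by a factor of fifty, which is the arithmetic behind the report's "many hundreds of years" claim for breeders versus the much shorter horizon of the converter cycle.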

IV. HISTORICAL DEVELOPMENT OF MOLTEN SALT REACTORS

The investigation of molten salt reactors began in the late 1940's as part of the U.S. Aircraft Nuclear Propulsion (ANP) Program. Subsequently, the Aircraft Reactor Experiment (ARE) was built at Oak Ridge and in 1954 it was operated successfully for nine days at power levels up to 2.5 MWt and fuel outlet temperatures up to 1580°F (1133 K). The ARE fuel was a mixture of NaF, ZrF4, and UF4. The moderator was beryllium oxide and the piping and vessel were constructed of Inconel.

In 1956, ORNL began to study molten salt reactors for application as central station converters and breeders. These studies concluded that graphite moderated, thermal spectrum reactors operating on a thorium-uranium cycle were most attractive for economic power production. Based on the technology at that time, it was thought that a two-fluid reactor in which the fertile and fissile salts were kept separate was required in order to have a breeder system. The single-fluid reactor, while not a breeder, appeared simpler in design and also seemed to have the potential for low power costs.

Over the next few years, ORNL continued to study both the two-fluid and single-fluid concepts, and in 1960 the design of the single-fluid 8 MWt Molten Salt Reactor Experiment (MSRE) was begun. The MSRE was completed in 1965 and operated successfully during the period 1965-1969. The MSRE experience is treated in more detail in a later section.

Concurrent with the construction of the MSRE, ORNL performed research and development on means for processing molten salt fuels. In 1967 new discoveries were made which suggested that a single-fluid reactor could be combined with continuous on-line fuel processing to become a breeder system. Because of the mechanical design problems of the two-fluid concept and the laboratory-scale development of processes which would permit on-line reprocessing, it was determined that a shift in emphasis to the single-fluid breeder concept should be made; this system is being studied at the present.
