Saturday, April 30, 2011

Future storm damage to the grid may carry unacceptable costs

Yesterday I pointed out the extent of tornado damage to the south east, and the effect of that damage on economic activities in affected communities. The absence of electricity affects search and rescue operations as well as recovery efforts. In addition, basic services such as hospitals and schools may not be able to function, grocery stores are unable to refrigerate produce and meat or operate cash registers, and service stations may be limited in or unable to pump gas. Without refrigeration food will spoil. And in addition to electricity, local telephone systems may be down in many communities. Both land lines and cell phones may be affected.

Without electricity and community services, businesses may not be able to operate. Conditions are so bad in parts of Alabama that thousands of families have spontaneously evacuated.

Tornadoes can down power lines as well as knock down transmission towers. The Tennessee Valley Authority alone is reporting 70 damaged power lines and damage to 120 metal transmission towers and poles. Over 600,000 homes and businesses that receive electricity from TVA are currently experiencing outages.
The damage includes a large portion of TVA's 500-kilovolt "network grid backbone" and most of the 161-kV lines serving northern Alabama and Mississippi.
Storm trackers report that tornadoes carved out ground paths as long as 175 miles during the recent tornado outbreak. Thus one tornado is capable of downing multiple power lines. In addition, the switchyard of TVA's northeast Alabama Widows Creek coal-fired power plant has received tornado damage, and the plant can only transmit limited amounts of electricity.

Eric J. Lerner recently stated:
. . . limited use of long-distance connections aided system reliability, because the physical complexities of power transmission rise rapidly as distance and the complexity of interconnections grow. Power in an electric network does not travel along a set path, as coal does, for example. When utility A agrees to send electricity to utility B, utility A increases the amount of power generated while utility B decreases production or has an increased demand. The power then flows from the “source” (A) to the “sink” (B) along all the paths that can connect them. This means that changes in generation and transmission at any point in the system will change loads on generators and transmission lines at every other point—often in ways not anticipated or easily controlled . . .
Thus the expansion of grid systems will increase grid vulnerability to unacceptable electrical outages.

Amory Lovins has long argued that the traditional grid is vulnerable to this sort of damage. Lovins proposed a paradigm shift from centralized to distributed generation and from fossil fuels and nuclear power to renewable based micro-generation. Critics have pointed to flaws in Lovins' model. Renewable generation systems are unreliable, and their output varies from locality to locality, as well as from day to day and hour to hour. In order to bring greater stability and predictability to the grid, electrical engineers have proposed expanding the electrical transmission system, with thousands of new miles of transmission cables to be added to bring electricity from high wind and high sunshine areas to consumers. This would lead, if anything, to greater grid vulnerability to storm damage in a high renewable penetration situation.

Thus Lovins' renewables/distributed generation model breaks down in the face of renewables' limitations. Renewables penetration will increase the distance between electrical generation facilities and customers' homes and businesses, increasing the grid's vulnerability to large scale damage rather than enhancing reliability. Unfortunately Lovins failed to note that the distributed generation model actually works much better with small nuclear power plants than with renewable generated electricity.

Small nuclear plants could be located much closer to customers' homes, decreasing the probability of storm damage to transmission lines. At the very worst, small NPPs would stop the slide toward increased grid expansion. Small reactors have been proposed as electrical sources for isolated communities that are too remote for grid hookups. If the cost of small reactors can be lowered sufficiently, it might be possible for many and perhaps even most communities to unhook from the grid while maintaining a reliable electrical supply.

It is likely that electrical power will play an even more central role in a post-carbon energy era. Increased electrical dependency requires increased electrical reliability, and grid vulnerabilities limit electrical reliability. Storm damage can disrupt electrical service for days and even weeks. In a future, electricity dependent economy, grid damage can actually impede storm recovery efforts, making large scale grid damage semi-self perpetuating. Such grid unreliability becomes a threat to public health and safety. Thus grid reliability will be a more pressing issue in the future than it has been. It is clear that renewable energy sources will worsen grid reliability.

Some renewable advocates have suggested that the so called "smart grid" will prevent grid outages. Yet the grid will never be smart enough to repair its own damaged power lines. In addition, the "smart grid" will be vulnerable to hackers, and would be a handy target for saboteurs. A smart grid would be an easy target for a Stuxnet type virus attack.

Not only does the "smart grid" not solve the problem posed by grid vulnerability to storm damage, but efficiency, another energy approach thought to be a panacea for electrical supply problems, would be equally useless. Thus, decentralized electrical generation through the use of small nuclear power plants offers real potential for increasing electrical reliability, but successful use of renewable electrical generation approaches may worsen rather than improve grid reliability.

Friday, April 29, 2011

Crisis in Alabama

At last count the death total from the April 27th southern storms had risen to above 300.

Whole cities have been devastated. There is huge damage to the grid infrastructure in Northern Mississippi and Alabama:
TVA has more than 100 major fallen transmission lines in North Alabama and Mississippi. . . .

(L)arge transmission lines and its towers were downed, the Widows Creek fossil plant was damaged and is offline, and the Browns Ferry nuclear plant shut down after power going into the plant was interrupted.
An estimated 600,000 TVA customers are without electric power, and more than one million people in Alabama are experiencing outages. TVA has reported that it would need
"days to weeks" for electricity service to resume. Hydropower from the Guntersville Dam is available and Browns Ferry can generate power when the transmission system is repaired.
Reportedly it will take a week before TVA will be able to provide Northern Alabama with some electricity, and even then only to some customers, such as hospitals, pharmacies, grocery stores and gas stations. Residential electrical services will be restored later. The economy of northern Alabama has been hit with a terrible blow. Alabama cities and counties are declaring dusk to dawn curfews.

In Huntsville, Alabama utilities officials said that
150 power poles (were) downed and some substations damaged . . . .
People, including LFTR maven Kirk Sorensen and his family, are leaving Huntsville. In effect they are storm refugees. The Huntsville Times reports,
The relentless, catastrophic outbreak of tornadoes that barreled through our homes, our neighborhoods and our souls has left us crippled. Power will be out for days. Refrigeration is going to be limited or out at supermarkets. ATM machines will likely be out of service. Gas stations will likely be closed. . . .

. . . The National Guard will be called in to offer support.
In East Tennessee the storms were unusual. I spent the first half of my life in East Tennessee, and never saw anything like them. First there were the thunderstorms and supercells, one after another. Usually a row of storms will pass through and then the bad weather is over. These storms kept coming, starting during the morning hours and lasting through midnight. There were not the huge devastating tornadoes which were seen in Alabama and Mississippi, but radar showed that a lot of the storm cells contained rotating winds. Most of the proto-tornadic storms I witnessed during my 35 years in Texas were swept in on a single rapidly moving line and then were gone. I don't recall ever witnessing a day long episode of proto-tornadic storms like this. This is the latest episode of extremely unusual storm events in the south east that began less than two years ago, when Atlanta underwent a ten thousand year rain/flood event. This was followed about six months later by a thousand year rain/flood event in Nashville. Forty years ago this spring, I heard CO2/environmental researcher Jerry Olsen tell ORNL scientists that we were headed into an era of unusual storms, and like other forecasts Jerry made that day, this now seems to have come to pass. Some people, mainly Republicans, are still in denial about global climate change, just as many Republicans also deny that President Obama is a native born American citizen.

Of course the recent Southern storms do not prove beyond all doubt that Anthropogenic Global Warming is occurring, but they fit into AGW expectations. Perhaps it is not the storms themselves, but the lack of any evidence that would lead us to attribute them to some other cause.

The recent grid damage in Mississippi and Alabama should be the focus of a great deal of attention because of the degree of damage to the grid, and its undoubted economic consequences. I intend to follow up this post with an analysis of the disaster's implications for the future of the grid and the future of nuclear power in the United States.

Wednesday, April 27, 2011

Global Burden of Disease: Epidemiologists at Work.


Cancer Cell #14

Angela Canada Hopkins, American Contemporary, Artist's collection, undated.


At the end of the century recently passed, the 20th century, huge strides were made by epidemiologists in understanding the distribution of disease and the effect of disease not only on economics but also upon human well being and culture. I have been thinking a great deal about risk lately, and herein I propose to discuss the origins of a tool developed in epidemiology to assess risk, specifically the conception of the DALY, an acronym for Disability-Adjusted Life Year, a measure built around the idea of "lost years" of life. In case anyone thinks that "lost years" refers to some time that one spent the first years of college smoking pot and going through all sorts of machinations in hopes of getting into someone's pants - even going so far as to incur the risk of driving to Greenpeace "No Nukes" rallies with the object of one's desire - "lost years" here refers to what is also known as "YLL" or "YoLL" - Years of Lost Life. On the simplest level, "years of lost life" might be nothing more than not living to "average life expectancy" because one has incurred a risk, say, of getting cancer through exposure to gasoline, or dying of a pulmonary embolism as the result of particulates lodged in one's lungs, though in practice, calculations of this type are way more sophisticated than that.

The important fact is that the unit developed is designed to quantify risk.

First I need to be clear about something: I am not an epidemiologist, but - in the spirit of being unafraid to approach any topic with the intention of learning more about it - and because my professional work is peripherally involved with epidemiological conceptions - I often speak with epidemiologists in connection with my work - I thought it might be worthwhile to review what I have taught myself about their tools.

The main references for this diary will be a series of four publications published in the early 1990's by the epidemiologist C.J.L. Murray, who was then Executive Director of the Evidence and Information Cluster at the World Health Organization, beginning with the less than succinctly titled Quantifying the burden of disease: the technical basis for disability-adjusted life years. (Bulletin of the World Health Organization, 1994, 72 (3): 429-445). Dr. Murray, who is a physician and an epidemiologist, is now an Adjunct Professor of Population and Public Health at the Harvard School of Public Health.

Despite the fact that his titles (for himself and his papers) are as long winded (if more meaningful) as NNadir diaries, Dr. Murray's works are widely cited in the primary scientific literature. So what is the "DALY" and what is meant by the "Global Burden of Disease" beyond the obvious dimension of disease burdens?

I'll let Murray speak for himself on these points:

Why measure the burden of disease?

The intended use of an indicator of the burden of disease is critical to its design. At least four objectives are important.

- to aid in setting health service (both curative and preventive) priorities;

- to aid in setting health research priorities;

- to aid in identifying disadvantaged groups and targeting of health interventions;

- to provide a comparable measure of output for intervention, programme and sector evaluation and planning.

Not everyone appreciates the ethical dimension of health status indicators (4). Nevertheless, the first two objectives listed for measuring the burden of disease could influence the allocation of resources among individuals, clearly establishing an ethical dimension to the construction of an indicator of the burden of disease.


The bold in the word ethical is mine.

Without a doubt, an attempt to quantify ethics is immediately on shaky ground. How, for instance, can one reconcile, say, Ayn Rand type "ethics" - which I think most of us here regard as something of a very, very, very, very bad (and profoundly unfunny) joke - with the ethics of an Eleanor Roosevelt or a Raoul Wallenberg? Still, the exercise may be a worthy one.

Many statistical measurements, such as the Gini index - which measures the distribution of wealth in a culture and can be used to measure things like the distribution of decent health care in a culture - do have quantifiable aspects that should evoke an ethical response. It is possible to quantify distributions of access to health care by measurements of those portions of a population that have, for example, access to vaccination, or regular check ups, or laboratory blood work. A Gini type calculation might - I'm not aware of such research but am merely suggesting a possibility - measure the number of persons who can afford to have their cholesterol level checked, and compare two or three cultures. Obviously such a distribution would be very different if determined in French culture than if it were determined in American culture or Malian culture. If - as we may expect - nearly every person in France can have his, her or its cholesterol checked, the French Gini distribution would be lower than in the United States, where some large percentage of the population has no access to health care, and even worse (at least we hope as Americans) in Mali, where access to laboratory tests is available only to the elite.

(For the record, most of the nations that measure worse on the economic Gini wealth distribution index are not considered "first world" countries, although China, nominally socialist, has recently entered first world status while having a slightly worse Gini coefficient.)
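To make the suggestion above a little more concrete, here is a minimal sketch of a Gini-type calculation applied to health care access. The access profiles for "France," the "United States" and "Mali" are invented for illustration only; nothing below comes from actual survey data.

```python
# A minimal sketch of a Gini-type calculation applied to health care access.
# The populations below are invented for illustration only; each number is a
# hypothetical measure of an individual's access to, say, cholesterol testing
# (1.0 = full access, 0.0 = none).

def gini(values):
    """Return the Gini coefficient of a list of non-negative values."""
    values = sorted(values)
    n = len(values)
    total = sum(values)
    if total == 0:
        return 0.0
    # Standard formula based on the rank-weighted sum of the ordered values.
    cumulative = sum((i + 1) * v for i, v in enumerate(values))
    return (2.0 * cumulative) / (n * total) - (n + 1.0) / n

# Hypothetical access profiles for three societies of 100 people each.
france = [1.0] * 95 + [0.8] * 5                       # nearly universal access
united_states = [1.0] * 60 + [0.5] * 25 + [0.0] * 15  # large uninsured fraction
mali = [1.0] * 5 + [0.1] * 20 + [0.0] * 75            # access confined to an elite

for name, pop in [("France", france), ("United States", united_states), ("Mali", mali)]:
    print(f"{name:15s} Gini = {gini(pop):.2f}")
```

Run on these made-up numbers, the hypothetical "France" comes out near 0, the "United States" around 0.25, and "Mali" near 0.9, which is exactly the kind of spread the argument above is gesturing at.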

Obviously in an increasingly complex world, ethical responses should have at least some basis in measurable data.

Also - and this is the point I think that Dr. Murray and his co-workers at the World Health Organization surely intended to make - in a world of limited resources it is definitely necessary to choose effective responses that will give the greatest and broadest benefit for the resources expended.

It would be unwise for instance to invest hundreds of billions of dollars to distribute home cholesterol test kits in France, since little change to health status would be effected and we might ethically argue that such money, if available, would produce a better result in building laboratories in Mali. However, without any knowledge of the data connected with the Malian health care system, such a determination would be impossible.

However, a very real and very urgent ethical point - very, very, very slippery in fact - would be in comparing the relative worth of a Malian with a French person, particularly if one is either French or Malian.

The Global Burden of Disease study conducted in the 1980's and 1990's was an effort to begin with the obvious point of determining what, actually, are the main causes of death in the world, and then, what are the main causes of premature death. Surprisingly, until that time much of what was known on that subject came largely from ad hoc and less than comprehensive or systematic sources.

The risks of ignorance of these things are, in fact, enormous. To quote Murray's paper again:

More importantly, there may be no open discussion or debate on key value choices or differential weightings. The wide variation in the implied value of saving a life in public safety legislation is but one example (8). Alternatively, we can explicitly choose a set of relative values for different health outcomes and construct a single indicator of health. The black box of the decision-maker's relative values is then opened for public scrutiny and influence.



Now, let me step back for a second and state my personal view that "public scrutiny" is usually an exercise in unbridled stupidity - with such stupidity being whipped up by the undue influence of very, very, very, very badly educated and narrow minded "journalists" possessing ersatz ethics.

For instance - and this is a diary about nuclear energy, just as my recent diary about suspicions that talc in cosmetics causes ovarian cancer was also (albeit not explicitly) about nuclear energy - the "public" is supposed to have decided that nuclear energy is more "dangerous" than air pollution. Because of the mindless, rote, visceral, unethical, uneducated anti-nukes there is no open discussion of whether it is more important that 21 people at Fukushima are known to have been exposed to more than 100 millisieverts of radiation - with none having died from radiation more than a month into the accident and none likely to die from radiation in the near future - than the epidemiologically well known fact that more than 5000 people die each day from air pollution.

The fact that people on this website will invest thousands upon thousands upon thousands of hours debating the issue of whether someone - even 1,000 or 10,000 people - may face an increased risk of getting, say, cancer has nothing at all to do with the far greater and far better understood risks of climate change - and the attendant rise in sea levels making any tsunami anywhere at any time even more devastating, with or without the agency of a nuclear facility being involved.

If - and this won't happen by the way - 10,000 people died from "eventual" cancers shown by a possible Bayesian analysis to be attributable to Fukushima, it still would not be the equivalent of two days' worth of deaths from the normal operational output of dangerous fossil fuel plants, never mind the well known carcinogenic effects of, say, crude oil, now distributed essentially forever over the floor of the Gulf of Mexico.

For the record, the National Institute of Environmental Health Sciences is conducting a study, the GuLF Study, on the health effects on populations and workers - many hundreds of thousands if not millions of people - exposed to carcinogenic crude oil from the Deepwater Horizon oil spill.

Where are the brainless Greenpeace types calling for banning oil?

I'll tell you where: driving their cars around to "No Nukes" meetings because a 14 meter tsunami struck a nuclear plant in someone else's country. In brainlessly driving around (and using up nuclear and non-nuclear electricity to post insipid web posts) the Greenpeacers are in effect making an argument that any life lost to a nuclear cause is infinitely more important (and the word "infinite" still applies, since immediate radiation deaths are still zero) than the millions of energy related deaths from other causes, including but not limited to air pollution.

(No tsunami or earthquake was involved, by the way, in the explosion of the Deepwater Horizon oil platform off the coast of the United States, and that explosion immediately incinerated more people than have died thus far from Fukushima.)

Predictable.

The fact is that a 14 meter tsunami swept over a nuclear facility, and over nuclear fuel stored from its operations over decades, and - compared objectively with the loss of life from non-nuclear causes connected with the same event, the tsunami - resulted in trivial loss of life. I don't know about you, but the feeling that anti-nukes are disappointed that the nuclear death toll was not enormous is palpable from where I sit.

Anyway, about DALYs:

As I pointed out earlier in this diary, DALYs cannot help but be involved with some type of value judgment, and in fact a consideration of the mathematical distribution function that expresses one type of value judgement implicit in the DALY - the weighting of a person's age at the time of attributable death (say for instance from ozone exposure) - drew me to these seminal papers on the subject.

However, to be fair, again, it goes without saying that my reconsideration of DALYs, with which I was generally familiar, and my need to have a deeper understanding of their basis, was generated by my moral and intellectual disgust at the (to me) obvious international expressions of ignorance and superstition that have surrounded Fukushima.

Murray refers to, but divorces himself from, quasi-philosophical approaches to value judgements with respect to the DALY.
This paper is not intended to present a new paradigm for measuring health, nor to firmly identify one intellectual tradition such as utilitarianism, human rights, or Rawls' theory of justice (9) as the basis for the social preferences incorporated into DALYs. Rather, the majority of the paper is devoted to a discussion of several types of social preferences which must be incorporated into any indicator of health status. In order to derive a usable indicator, a particular stand is also taken on each of the social values described. The philosophical basis for this position will not be argued in detail.

Thus, perhaps breezily sweeping these points away, Murray describes the basic assumptions, with which, frankly, I personally have no problem whatsoever. They are, with direct citation of the text:

(1) To the extent possible, any health outcome that represents a loss of welfare should be included in an indicator of health status.


Suppose one is studying coal miners on a Native American Reservation - say, miners at the Kayenta coal mine - the official position of the management at this website notwithstanding, that the only Native Americans ever injured by energy mining related causes were uranium miners. Murray argues that one need not wait for a Kayenta coal miner to die before accounting for the effect of coal on the health of the Dine people. If miners are disabled by, say, black lung, and cannot work, the DALY should be designed to include the effects of this loss of productivity. Similarly, if a worker at Fukushima whose dose of radiation exceeded, say, 250 millisieverts develops leukemia 15 years from now, the effect on his or her productivity is incurred not at death, but at the moment of disability.

I can live, and perhaps die with that.

Next:

(2) The characteristics of the individual affected by a health outcome that should be considered in calculating the associated burden of disease should be restricted to age and sex.


Murray makes reference to the belief on the part of some scholars that some people are, in fact, worth more than others, and that therefore demographic factors like education level and individual importance to the economy make a difference. By the logic of these scholars, individuals who have earned, say, a Ph.D. at Harvard, like Kurt Wise, are "worth" more than, say, the guys and gals who clean Boston's sewers, because "society" has invested more in Kurt Wise than in the guys and gals in the sewers. Of course, the fact that Kurt Wise is actually a waste of humanity - as is a person who claims that by having earned a Ph.D. (in something) and then joining Greenpeace one has proved Greenpeace a worthy organization - is generally not included in a value judgement which weighs a person of high educational status more than one who lacks an education entirely. One could make similar types of arguments: about 25 or 30 - I've lost count - people right now depend directly on my job performance for their economic well being.

Murray rejects all such distinctions.

If I die tonight from an embolism, however, the lives of those people who feel they "need" me to help them hold their jobs need not be permanently affected; management may merely replace me with another executive, maybe with someone better at the job than I am. If there is a surfeit of people who can do what I can do, society doesn't really lose, and if there is a shortage of people who can do what I do, there is loss to society, but overall, over the range of all possible deaths from embolisms, things average out. Few, if any, individual human beings have any profound level of importance in measuring the burden of disease.


In the DALY, only age and sex count, and the reasons for this should be obvious: If an old fat fart like me dies from Fukushima or ozone or exposure to crude oil or gasoline, it is not the equivalent of my sixteen year old son dying from any of these things. But other things like Kurt Wise and the sewer guys average out.


Thus I am the equivalent of a male greeter at Walmart who happens to be the same age as I am where my life or death is represented in a DALY calculation. Similarly, if the guys who know how Boston's sewers work die on the job, the impact on society as a whole may actually be greater than if Kurt Wise dies.

(3) Treating like health outcomes as like

We articulate a principle of treating like health outcomes as like. For example, the premature death of a 40-year-old woman should contribute equally to estimates of the global burden of disease irrespective of whether she lives in the slums of Bogota or a wealthy suburb of Boston. Treating like events equally also ensures comparability of the burden of disease across different communities and in the same community over a period of time. Community specific characteristics such as local levels of mortality should not change the assumptions incorporated into the indicator design. The value of a person's health status is his or her own and does not depend on his or her neighbour's health status.


'Nuff said.

(4) Time is the unit of measure for the burden of disease


These are the "YoLL" type units mentioned in the intro of this diary. One attempts to understand how long a person should or could live and then by comparing factors in the lives of certain subsets of people, people who have had tuberculosis for instance, or AIDS or who have cancer, determining how many years of expected or achievable life have been lost as a result of these diseases. It follows almost immediately that such a calculation can easily be extended to giving insight to risks incurred by people to use just one example who have worked to clean up leaks of carcinogenic crude oil on the beaches of exclusive Florida white sand Gulf beaches in time for the tourist season for example - to see whether crude oil leaks have a quantifiable effect on health.

Of course, one can never be comprehensive. It may be possible, by examining things like "causes of hospital admissions" or "visits to physicians" or "death certificates," to determine cancer rates in a population, including maybe most, if not all, of the population of the entire world. However it is more difficult to ascertain whether any or many of said people who appear as cancer victims have worked with carcinogenic oil. It is certainly not possible to review the health records of the many hundreds of thousands (or millions) of people who were exposed - and are still being exposed - to Deepwater Horizon oil. Nevertheless one may draw statistical inferences, and by statistical means - connected largely with the size of samples - make fairly precise estimates of the risk of crude oil exposure, or gasoline exposure, with the likelihood that one is making an accurate association measured by what are called "confidence limits."

Still it is important to be careful about the takeaway from such statistics. The State of Florida has defined "screening" levels for Deepwater Horizon carcinogens found in its coastal waters and in beach sediments. It is useful to review the document and see that the "screening levels" are risk weighted; specifically, the components of dangerous crude oil in the waters off Florida are considered a risk to human health if they raise the lifetime risk of getting cancer by 1 × 10⁻⁶, or one in a million. This means that among one million people exposed to such levels there will be one additional cancer. Of course, during the disaster in the Gulf, exposures vastly exceeded these "safe" levels. Nevertheless such things are somewhat fuzzy, because for many compounds in dangerous crude oil, their actual carcinogenicity is poorly understood, and, of course, people in the Gulf are often exposed to a host of other carcinogenic agents. But, be all that as it may, increased risk is often confused in the minds of the general public with certainty.

The comedian George Burns lived to be 100 years old despite the fact that during his 90 year show biz career he famously (for at least 80 years of performance) appeared while smoking a cigar. Now, a medical doctor in whose care he may have been in his 60's would have certainly advised him that he was at risk of getting cancer from his habit, but he didn't get cancer. He died as the result of complications from a fall.

One may look at George Burns and interpret these results as an indication that smoking cigars is safe but the problem has to do with sample size.

Of the four great Nobel Laureate American scientists who did much to found the American commercial nuclear industry, three of the four - Seaborg, Wigner, and Bethe - lived more than 87 years: Bethe died at the age of 99, just one year short of the lifetime of George Burns; Wigner - considered to be not only a scientific genius but also an engineering genius - lived to be 93; and Seaborg, who not only worked with some of the most radioactive elements known but in fact was discoverer or co-discoverer of 10 of them, died at the age of 88, from a stroke. The fourth of these American scientists (although he won the Nobel while still a citizen of Italy), Enrico Fermi - who built the world's first nuclear reactor (in a squash court in Chicago) - died at the age of 53 from stomach cancer.

One cannot, of course, draw the conclusion that Fermi's death proves that people who work on nuclear reactors will die before 60, nor can one prove from Bethe's case that people who work on nuclear reactors will come close to becoming centenarians.

The sample size of Nobel Laureates who worked on the development of nuclear energy is simply too small. (Alvin Weinberg - who was mentored by Wigner and considered Wigner to be his intellectual father, and who was the designer of the Pressurized Water Reactor, which dominates the world's commercial nuclear fleet, as well as the in many ways superior Molten Salt Reactor - a reactor in a permanent and deliberate state of "meltdown" - which has never been commercialized, much to the loss of humanity as a whole - lived to be 91. One of Weinberg's ideas that has thus far proved a failure was his promotion of the National Renewable Energy Laboratory. Thus far the entire failed renewable energy enterprise - its toxicological implications ignored by an insipid and unaware public - has done nothing more than function as a fig leaf for the gas industry.)

At Fukushima, 21 people seem to have been exposed to more than 100 millisieverts of radiation, most within a period of a few weeks, some within a period of a few hours. None of them have died thus far from radiation (or anything else), but all of them will die, although not necessarily from radiation related effects. It may be that all of them will die of cancer, in which case dumb people will take it as "proof" that "nuclear energy is dangerous," although, again, 11 people were incinerated instantly when the Deepwater Horizon rig blew up in the Gulf, producing zero calls from the mental midgets at Greenpeace for the immediate phase out of oil. On the other hand, none of the 21 Fukushima 100+ mSv exposed persons may get cancer, in which case this will not be proof that such exposures are safe.

For sure these people will be intensely studied, and their medical records will be elaborate. But because the sample size is so small, any conclusions drawn will be essentially meaningless.

Right now of course, the inane and toxic Greenpeace crowd will sit around on their useless consumer cult asses, claiming - or at least implying strongly - that every cancer in Japan henceforth can be attributed to Fukushima, even though before Fukushima, among humanity as a whole, everyone had roughly a 1 in 5 chance of dying from cancer. NOT ONE of the anti-science, anti-intellectual, dogmatic assholes in this toxic organization will stop for a second to consider the possibility that if cancer rates in Japan do rise in the area of the tsunami, it could be the result not only of the reactors but also of carcinogenic gasoline leaking from tens of thousands of smashed cars or, for that matter, of highly carcinogenic solvents leaching from smashed semiconductor plants, including those where solar cells for the failed (and toxic) solar industry are manufactured.

As it happens, right now, Japan has the highest life expectancy in the world. This should not be construed as evidence that having two cities destroyed by nuclear weapons is good for public health. The sample size of countries in which one (or more) cities have been destroyed by nuclear weapons is too small: one.

This limit to the sample size of nations that have been victims of nuclear war has been observed for more than 60 years, despite many representations by less than honest (and decidedly dogmatic) people like the (Snowmass) Valley Girl Amory Lovins that nuclear power will inevitably lead to nuclear war. (World production of nuclear energy has increased 4 fold since the insufferable ass Lovins wrote an insufferably stupid paper making this claim in 1980, when he was commenting on the supposed "death of nuclear power.")

But this brings me to my final point about the structure of the statistical "risk" measure, the DALY.

Japan's population is aging and birth rates in that country are actually below the replacement rate. This means that the mean and median ages of citizens in Japan are rising.

One of the factors included in the DALY is a "discount rate" for human beings. The rationale for this approach is that as humans age, society invests in them - in the form, first, of health care, then education - which may last two or three decades - and finally professional training. After this "investment" according to the theory, people reach a maximum "return on the investment" with their productivity increasing and their "contribution to society" being accrued. By this scheme, losing a man or woman at the age of 30 costs society more if society has invested substantial resources in them - in terms of education and training - since society has not generally accrued all of the possible benefits of the productivity derived from said education and training.

By contrast - this will sound cold - children under 10 represent very little "investment" by society in most cases, and society can stand the loss of children better. Similarly, after the age of 65 (62 if you're lucky enough to be French), people have retired and, in fact, they are often simply consumers rather than producers. (Before anyone attacks me for this, let me say that many elderly people do produce significant and important work. Hans Bethe, mentioned earlier, worked on very important areas of physics right up to his death.)

Murray uses what he calls a "modified Delphi method" - consultation with a panel of "experts" - to choose the constant β in the continuous function he proposes for age weighting. This may be the most unsatisfactory part of the paper, because (for my taste) it is way too touchy-feely, a "seat of the pants" geometric consideration:
f(x) = C·x·e^(-βx)


The oracles at Delphi consider that the optimal value of β is 0.04, which gives a function with a maximum at 1/β = 25 years of age.

The use of a continuous function avoids large discrete calculations, which - given that this approach is nearly 20 years old - would, back then at least, have been very demanding for the computers of the time. This may be less true today.

The complete description of the function for individual DALY calculations - the solution of a differential equation - is given in the paper. It is too cumbersome to attempt to reproduce it in this editor.
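For readers who want to play with the idea anyway, here is a minimal numerical sketch of the kind of age-weighted, discounted calculation Murray describes. The parameter values C = 0.16243, β = 0.04 and a 3% discount rate are the ones commonly quoted for the 1994 formulation and should be checked against the paper itself; the example ages are invented, and the disability weight is simply taken as 1 (a death).

```python
import math

# A minimal numerical sketch of an age-weighted, discounted DALY calculation
# in the spirit of Murray (1994). The parameter values below (C, BETA, R) are
# the ones commonly quoted for that formulation and should be checked against
# the paper; the example ages are invented for illustration.

C = 0.16243   # normalizing constant for the age weight (commonly quoted value)
BETA = 0.04   # age-weighting parameter
R = 0.03      # annual discount rate

def age_weight(x):
    """Murray's age-weighting function f(x) = C * x * exp(-beta * x)."""
    return C * x * math.exp(-BETA * x)

def dalys(age_at_onset, duration, steps=10000):
    """Numerically integrate the discounted, age-weighted years lost between
    age_at_onset and age_at_onset + duration (disability weight taken as 1)."""
    dx = duration / steps
    total = 0.0
    for i in range(steps):
        x = age_at_onset + (i + 0.5) * dx  # midpoint rule
        total += age_weight(x) * math.exp(-R * (x - age_at_onset)) * dx
    return total

# Example: a death at age 30 with 50 remaining expected years, versus a death
# at age 70 with 10 remaining expected years (numbers invented for illustration).
print("Death at 30:", round(dalys(30, 50), 1), "DALYs")
print("Death at 70:", round(dalys(70, 10), 1), "DALYs")
```

The point of the sketch is only to show the mechanics: the same death "costs" many more DALYs at 30 than at 70, which is exactly the age weighting discussed above.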

In any case, Japan has one of the most elderly populations in the world, with about 25% of its citizens being over the age of 65. Part of this may be explained by long life expectancy in this country, which produces about 30% of its electricity from nuclear energy.

Given the age weighted nature of the DALY, appeals to this measure of risk in connection with the events at Fukushima will be influenced by that weighting, and one should keep this in mind if one is interested in metrics.

However, if one is not concerned about measurement, may I suggest huge outbursts of hysteria, fear, wild suppositions, inattention to trivial things like control groups, and of course that old standby, hearing only what one wants to hear.

Anyway, I should close.

If you are unlucky enough to be struck by a tsunami while driving your safe car around, try not to think how much more fun the tsunami would be if sea levels were jacked up another meter. What's important is that the world's largest source of climate change gas free primary energy be risk free, and if it's not risk free, we should abandon it for other things, accepting any degree of risk, because the perfect should always be the enemy of the excellent.

Have a nice coal, oil and gas powered day tomorrow.

(Originally posted at Daily Kos with an amusing poll.
Here is the link.)

The Molten Salt Reactor Family: Uranium Fuel

In an earlier post, I stated that there were many different possible Molten Salt Reactor designs. I pointed to nuclear fuel as one possible source of reactor design variations. There are two potential nuclear fuel cycles that can be used in Molten Salt Reactors. Choice of fuel cycles can make a difference in reactor designs. Today, I want to focus on one of the two fuel cycle options, uranium. There are in fact several different types of uranium fueled Molten Salt Reactors.

The first type that I would consider could be called the ORNL technology uranium fueled MSR. The reactor would be a direct development of the technology used in the Molten Salt Reactor Experiment. Such a reactor would use LEU with up to 19.75% U-235, and could operate at temperatures of up to 704°C (1300°F). Without nuclear proliferation concerns, the U-235 content could be raised higher; even 100% U-235 could be used. ORNL technologists preferred building their MSR core structure with Hastelloy® N, a nickel alloy. The high operating temperature allows for significant improvements in electrical generation thermal efficiency compared to conventional Light Water Reactors (LWRs).
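As a rough illustration of why the higher outlet temperature matters, here is a back-of-the-envelope comparison of Carnot limits. The LWR outlet temperature and the assumed fraction of the Carnot limit actually achieved by a real power cycle are illustrative assumptions, not design figures.

```python
# A back-of-the-envelope sketch of why outlet temperature matters for thermal
# efficiency. The Carnot limit is only an upper bound; the LWR outlet
# temperature (~320 C) and the fraction of the Carnot limit actually achieved
# (~0.6) are rough assumptions for illustration, not design figures.

def carnot_efficiency(t_hot_c, t_cold_c=30.0):
    """Ideal efficiency between a hot source and a cold sink, given in Celsius."""
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

FRACTION_OF_CARNOT = 0.6  # rough assumption for a real steam or gas cycle

for label, outlet_c in [("LWR (~320 C outlet)", 320.0), ("MSR (~704 C outlet)", 704.0)]:
    ideal = carnot_efficiency(outlet_c)
    print(f"{label}: Carnot limit {ideal:.0%}, "
          f"rough plant estimate {ideal * FRACTION_OF_CARNOT:.0%}")
```

Even with these crude assumptions, the jump from roughly 300°C to roughly 700°C moves the achievable efficiency up by something on the order of ten percentage points, which is what the sentence above is getting at.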

In addition, the UMSR would be simple and compact. It would not require massive steel pressure vessels or massive concrete containment structures. These are two characteristics that could potentially lower reactor manufacturing costs. Thus the UMSR is likely to be less expensive to manufacture and less expensive to operate than conventional reactors. The ORNL Molten Salt Reactor Experiment (MSRE) proved to be highly reliable, thus the UMSR could compete with LWRs as a base load power source. Advanced reactor safety features that are unique to Molten Salt Reactors could be included in the UMSR design. With low enriched uranium (LEU), the UMSR would operate as a U-238 fuel cycle plutonium converter. A converter is a reactor that produces some new nuclear fuel. Unlike a nuclear breeder, a converter produces less than one atom of new nuclear fuel for every atom of old fuel it uses.

A U-238 cycle MSR converter would solve much of the nuclear waste problem that characterizes LWRs. MSRs are good plutonium burners, although they do not dispose of plutonium quite as efficiently as fast reactors. Because Xenon-135 can be continuously removed from the MSR core, thermal MSRs can convert U-238 into plutonium at a higher conversion ratio than LWRs can. And because molten salt fuel can be easily reprocessed in its liquid form, any plutonium removed during reprocessing could be returned to the MSR core. Thus in MSRs plutonium and other actinides do not pose a long term nuclear waste problem. In fact faster MSRs are potentially so good at destroying nuclear waste that both Russian and American reactor scientists have proposed using them to destroy the actinide waste from LWRs. What is left over from the nuclear waste destroying MSR process is fission products, much of which becomes useful for a variety of industrial uses very quickly, and all of which will be no more radioactive - and thus no more dangerous - than newly mined uranium within 300 years. (See this Google lecture by Kirk Sorensen on the MSR solution to the so called nuclear waste problem.)

It has been a long standing contention of Nuclear Green that it is less expensive to build reactors in factories than to build them in the field. Although it is possible to factory manufacture large reactors in the form of kits containing several hundred large modules, smaller kits which contain as few as a half dozen modules are desirable. It would also be highly desirable if the modules could all be moved by truck or by train. The smallest practical size for such a reactor would be 100 MWe, although a 200-300 MWe size might be desirable.

Thus the UMSR, if it were to be developed, would be a transitional step in the evolution of reactors toward the Liquid Fluoride Thorium Reactor (the LFTR).

A second form of uranium fueled MSR would be what I call the Uranium Big Lots Reactor. The name Big Lots came from reactor design ideas I thought about while shopping in a Big Lots store. The Big Lots Reactor was originally intended to be a LFTR, but it would work well with an all uranium fuel formula. The Big Lots idea was triggered by some comments by physicist David LeBlanc, who suggested MSR costs could be lowered by building reactors from lower cost materials. What I realized during my Big Lots excursion was that for a small sacrifice of MSR performance - say lowering operating temperatures from 700°C to 600°C - and by anticipating less capacity utilization - say a 15% to 25% capacity factor rather than the 90% capacity factor expected of base load generators - reactor costs could be lowered considerably. The Big Lots reactor was intended to load follow, to produce peak load and backup electrical generation. These are grid functions that neither conventional nuclear power nor renewable energy sources are very good at, and they could be shifted from fossil fuels, probably without an increase in electrical price.
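To put rough numbers on that capacity factor tradeoff, here is a short sketch. The 100 MWe unit size echoes the truck-transportable reactors discussed above, while the fixed annual cost figure is a purely hypothetical placeholder; the point is only that a low capacity factor spreads fixed costs over far fewer megawatt-hours, which is why the cost-cutting measures matter so much for a peaking reactor.

```python
# A short sketch of the capacity factor tradeoff discussed above. The 100 MWe
# plant size echoes the earlier discussion of truck-transportable reactors; the
# fixed annual cost figure is a purely hypothetical placeholder used only to
# show how capacity factor spreads fixed costs over fewer megawatt-hours.

PLANT_SIZE_MWE = 100.0
HOURS_PER_YEAR = 8760.0
HYPOTHETICAL_FIXED_COST_PER_YEAR = 30_000_000.0  # placeholder, dollars

def annual_output_mwh(capacity_factor):
    """Annual energy output for the assumed plant size at a given capacity factor."""
    return PLANT_SIZE_MWE * HOURS_PER_YEAR * capacity_factor

for label, cf in [("Base load UMSR", 0.90), ("Big Lots peaker", 0.20)]:
    mwh = annual_output_mwh(cf)
    fixed_cost_per_mwh = HYPOTHETICAL_FIXED_COST_PER_YEAR / mwh
    print(f"{label}: {mwh:,.0f} MWh/year, "
          f"fixed cost share ${fixed_cost_per_mwh:.0f}/MWh")
```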

The Big Lots reactor would then be a discount store version of the Molten Salt Reactor. Not quite as good as the UMSR or the LFTR at pumping out full power 24 hours a day, 7 days a week, but very good for putting out power when the temperature runs to 103°F on a hot Texas summer afternoon, or for providing quickly accessible nuclear power if a wind farm loses its breeze or a base load nuclear plant unexpectedly shuts down. MSRs are superbly suited for a backup role, because they can be designed to automatically shut down when they reach their top operating temperature. Fission product decay will keep the fluid salts in the core at peak temperature for some time. Power is transmitted to the electrical generating system by heated salt, and heated salt can be kept on tap for a week or so. Then the reactor will fire up again for a short while, only long enough to produce another few days' worth of fission product decay heat.

I have recommended a number of steps to decrease nuclear costs in general and the costs of Molten Salt Reactors in particular. All of these steps could be applied to Big Lots Reactor cost lowering. Small size reactors represent a smaller risk to lenders and investors. Interest rates are tied to risks, and the lower the risk, the lower the interest rate. Thus building small reactors will quite likely lower interest rates on nuclear projects.

Big Lots reactors can be housed in underground silos, and thus would be invulnerable to attacks by large aircraft. Existing sites for natural gas fired power plants can be used to house Big Lots reactors. This would lead to further cost savings. For example, the existing grid connection can be reused, saving the cost of building a new grid connection system.

Because they would only be expected to operate a small percentage of the time, and then frequently at less than full power, the Big Lots Reactor would have lower maintenance costs. Neutron radiation damage to reactor materials would be significantly less than in base load UMSRs.

Both the Big Lots and the base load UMSR could come in one and two fluid versions, although most would probably be one fluid reactors, because proliferation concerns would require mixing U-235 with U-238, and plutonium involved in the nuclear process should be kept in the same carrier fluid as the uranium. Thus only one fluid would be required.

Thermal UMSRs would in all likelihood be graphite moderated, although it has been proposed that thermal MSRs could also be heavy water moderated. This raises safety concerns.

If we decide not to use graphite as a moderator - and as I will indicate in a separate post on the use of graphite in MSRs, there are reasons why future MSR designers might decide to forego it - we can still choose to moderate the nuclear process through the carrier salt itself. The primary carrier salt moderators are lithium fluoride (LiF) and beryllium fluoride (BeF2). These salts will slow neutrons to an epithermal speed range. More fissionable material is required to sustain a nuclear reaction in an epithermal reactor than to sustain a chain reaction in a thermal reactor, and there are some other issues as well, but if graphite concerns become a major issue, epithermal operation may serve as a significant option.

Before we leave the world of graphite reactors behind, I would like to mention one more thermal molten salt option, the Advanced High Temperature Reactor (AHTR) option being explored at the University of California, Berkeley and at ORNL. This reactor family might be considered a cousin of the MSR which uses liquid salts as coolants but not as fuel carriers. The nuclear fuel for these reactors embeds the U-233, U-235 and/or Pu-239 in graphite, either in the form of graphite core structures or in the form of graphite pebbles. The AHTR is thus a hybrid of MSR and gas cooled graphite reactor technologies. ORNL is developing a small advanced high temperature reactor (SmAHTR) as a source of industrial process heat. The SmAHTR, like the Big Lots Reactor, can serve as a source of peak demand electrical capacity through the use of stored heated liquid salt.

Finally, it is possible to build fast Molten Salt Reactors, and they have a number of advantages over Liquid Metal Fast Breeder Reactors. Fast U-238 breeding MSRs can be designed to use either fluoride or chloride salts. Although it is possible to design a two fluid fast thorium breeder or a hybrid U-238/Th-232 fast breeder, it would certainly be possible to build a single fluid uranium cycle breeder. French physicists, working at the Laboratoire de Physique Subatomique et de Cosmologie of the University of Grenoble (France), have proposed building a single fluid fast MSR which they intend to use as a thorium breeder, but which could be used as either a hybrid breeder or a uranium cycle breeder. The French reactor designers propose to eliminate beryllium from the salt formula. Lithium is a moderator, but somewhat less so than beryllium, and there are some secondary safety advantages to removing beryllium from the reactor core.

The French molten salt fast breeder is primarily a thorium breeder, and it offers some attractive features which I will discuss in a later post. In addition, uranium/thorium hybrid breeding cycles require further discussion, and of course so do both thorium breeders and thorium converters.

Monday, April 25, 2011

The Molten Salt Reactor Family: Fuel

I intend to offer a series of posts designed to explain the sometimes bewildering complexity of Molten Salt Reactor Technology. This first post explains two nuclear fuel breeding cycles.

Rather than offering a single potential reactor design, the Molten Salt Reactor (MSR) idea offers a large number of design options, each of which would require a significant amount of research, before a prototype reactor could be built. The Molten Salt Reactor designer is faced with a bewildering number of elective choices, each offering a set of advantages and disadvantages. Each choice that the designer makes will dictate a number of design features some of which require further choices.

Let's start with nuclear fuel. My father first demonstrated that not only U-235 but also Pu-239 could be used as a reactor fuel in MSRs. During the ORNL Molten Salt Reactor Experiment, Oak Ridge scientists tested the use of the three fissionable materials that can be used as nuclear fuels: Plutonium-239 (Pu-239), Uranium-235 (U-235) and Uranium-233 (U-233). Once during the operation of the Molten Salt Reactor Experiment (MSRE) they used all three potential fuels in the reactor at the same time.

Of the three potential fuels, U-233 has some significant advantages. Neither U-235 nor Pu-239 produces enough neutrons per neutron absorbed to support breeding more nuclear fuel in the slow (thermal) neutron speed range. U-233, produced by breeding thorium, does produce enough neutrons to breed thorium in the slow neutron speed range. We will see that this offers a very large advantage. U-235 is not efficiently produced by breeding, while Pu-239 can only be bred at a positive breeding ratio with fast neutrons.

Breeding means that for every fuel atom used in the nuclear process, at least one new fuel atom is produced. Thus in a plutonium fast breeder, if a neutron strikes a plutonium atom, it is very likely to fission into two smaller atoms, on average with nearly three neutrons left over. Those neutrons will be moving fast and will contain a lot of energy. Fast neutrons are more likely to produce fission in plutonium atoms than slow neutrons. Neither U-235 nor Pu-239 produces enough neutrons to maintain breeding if they encounter a slow (also called thermal) neutron. Thus plutonium can only be bred as a nuclear fuel in so called fast reactors. There are, as we shall see, some major disadvantages to fast reactors.
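To make the breeding criterion concrete, here is a small sketch comparing approximate thermal-spectrum values of eta, the number of neutrons produced per neutron absorbed in the fuel. Breeding needs eta comfortably above 2: one neutron to sustain the chain reaction, one to convert a fertile atom into new fuel, plus a margin for losses. The eta values and the loss margin below are rounded, illustrative figures, not evaluated nuclear data.

```python
# A small sketch of the thermal-spectrum breeding criterion. Breeding needs
# eta (neutrons produced per neutron absorbed in the fuel) comfortably above 2:
# one neutron to sustain the chain reaction, one to convert a fertile atom, and
# a margin for leakage and parasitic captures. The eta values below are rounded
# textbook figures quoted from memory and are illustrative only.

THERMAL_ETA = {
    "U-233": 2.29,
    "U-235": 2.07,
    "Pu-239": 2.11,
}

LOSS_MARGIN = 0.2  # rough allowance for leakage and parasitic absorption

for isotope, eta in THERMAL_ETA.items():
    surplus = eta - 2.0 - LOSS_MARGIN
    verdict = "can breed" if surplus > 0 else "cannot breed"
    print(f"{isotope}: eta = {eta:.2f} (thermal) -> {verdict} "
          f"(surplus {surplus:+.2f} after losses)")
```

With these illustrative numbers only U-233 clears the bar in the thermal spectrum, which is the point made in the paragraphs above.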

Fast reactors are often thought of as having liquid sodium as their coolant, although liquid lead and a liquid lead-bismuth mixture have also been used as coolants in fast reactors. In addition it is possible to build fast Molten Salt Reactors. The stability of Molten Salt Reactor operations is enhanced by Xenon-135 removal. Xenon-135 is a radioactive gas that is a byproduct of nuclear fission and has a very large neutron cross section. Because it is very likely to capture neutrons, Xenon-135 can adversely affect a chain reaction in a reactor. Thus it would be highly desirable to get Xenon-135 out of a reactor core quickly after it is produced. That is impossible in a solid core reactor, but it is not difficult to do in a Molten Salt Reactor. The presence of Xenon-135 adversely affects the ability of reactors to breed nuclear fuel, so any MSR that is designed as a thorium breeder would have a system for moving Xenon-135 out of its core.

There are decided advantages for fuel reprocessing with MSRs. Compare the fuel reprocessing technique for a Molten Salt Reactor with the fuel reprocessing technique proposed for the Integral Fast Reactor (IFR), a LMFBR. In a two fluid MSR, the blanket salt flows out of the blanket, and protactinium and U-233 are withdrawn from it by chemical processes. Once they are processed out of the carrier salt, the U-233 is re-fluoridated and returned to the core. The protactinium is set aside until it undergoes a nuclear transformation to U-233, and then that U-233 is returned to the core. In an IFR, the spent fuel is fished out of the reactor core and, once recovered, dumped into a molten salt bath in which it dissolves. Then, by use of electroplating, various materials from the old fuel, for example plutonium, are separated out of the bath and deposited on electrodes. Eventually the separated metal is recovered, melted and mixed into an alloy, which is then cooled enough to serve as fuel elements and returned to the reactor. The MSR fuel reprocessing technology is much simpler than the fuel reprocessing technology designed for the IFR.

In addition, fast reactors require 10 times as much nuclear fuel to produce a chain reaction as thermal breeder reactors. It does not really matter if the fast reactor is cooled by liquid metal or liquid salts; a fast breeder reactor just needs a whole lot more fuel in order to operate than a thermal breeder reactor does. This makes fast reactors poor candidates to replace fossil fuels like coal with nuclear power, because many reactors will have to be built quickly, and fueling enough fast reactors quickly will be a big challenge.

There are two breeding cycles, the uranium-238 breeding cycle and the thorium-232 breeding cycle. Both cycles have some advantages. Plutonium-239 produces more neutrons per fission event than U-233, but fewer fission events per neutron in the thermal spectrum. In fact Pu-239 produces so many fewer fission events in the thermal spectrum than in the fast spectrum that it is impossible to achieve a positive breeding ratio for the U-238/Pu-239 breeding cycle in a thermal reactor. On the other hand, U-233 produces about as many neutrons per fission event in the thermal range as in the fast range, and about as many fission events. That means that the Th-232/U-233 breeding cycle is as effective in the thermal range as in the fast range, and because thorium breeding only requires about 10% of the fissile inventory in the thermal range that U-238 breeding requires in the fast range, thorium breeding cycle reactors can be deployed far faster.
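A back-of-the-envelope sketch of that deployment argument follows; the fissile inventory figures are assumptions chosen only to reflect the rough 10-to-1 ratio claimed above, not design data.

```python
# A back-of-the-envelope sketch of the deployment argument above. The fissile
# inventory figures are assumptions chosen to reflect the rough 10:1 ratio
# claimed in the text (per GWe of capacity), not design data.

FISSILE_STOCKPILE_TONNES = 100.0           # hypothetical available fissile material
THERMAL_BREEDER_INVENTORY_T_PER_GWE = 1.0  # assumed, thorium cycle thermal MSR
FAST_BREEDER_INVENTORY_T_PER_GWE = 10.0    # assumed, U-238 cycle fast breeder

thermal_starts = FISSILE_STOCKPILE_TONNES / THERMAL_BREEDER_INVENTORY_T_PER_GWE
fast_starts = FISSILE_STOCKPILE_TONNES / FAST_BREEDER_INVENTORY_T_PER_GWE

print(f"GWe of thermal breeders the stockpile could start up: {thermal_starts:.0f}")
print(f"GWe of fast breeders the same stockpile could start up: {fast_starts:.0f}")
```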

In addition, liquid fuel reactors have advantages over solid fuel reactors. Once a solid fuel is inserted into a reactor it almost always stays there for a year or more, while fission products build up in the fuel. We have already seen that Xenon-135 becomes a reactor control problem, although Xenon-135 eventually reaches an equilibrium because of its short half-life. The presence of Xenon-135 in a nuclear core can interfere with a reactor's capacity to breed, especially in the thermal breeding range. Thus thorium fuel cycle breeder reactors are better candidates for rapid deployment than U-238 fuel cycle breeders, and liquid fuel thorium breeding reactors have advantages over solid fuel thorium breeders. Liquid fueled thorium breeders, as we have already noted, have advantages over solid fuel U-238 breeders. Thus the thorium fuel cycle Molten Salt Reactor (often called the LFTR) would seem to offer several advantages over U-238 fuel cycle liquid metal fast reactors.

In the next post of this series I intend to explain the difference between single fluid and two fluid Molten Salt Reactors.

Friday, April 22, 2011

Kurt Cobb on Resources, Energy, Thorium and Molten Salt Reactor Technology

Kurt Cobb is an energy writer whose vision is in many respects clear headed, and who has acknowledged both the problems and the potential of nuclear power while appearing to be intrigued by Molten Salt Reactor/thorium fuel cycle ideas. Cobb has understood that nuclear power offers a solution to the future problems of global energy. In 2009 Cobb identified the problem,
The end of the fossil fuel era is coming sooner than most people believe as exponentially increasing fossil fuel consumption brings us ever closer to the day when production will peak for oil, natural gas and coal and then begin irrevocable declines. The only options left for powering a modern technical society will then be solar, wind, tidal, hydroelectric, geothermal and nuclear. And of these, only nuclear can conceivably be located wherever it is needed at the scale required.
The earth, Cobb argued, had plenty of resources needed to sustain industrial civilization,
granite contains many common metals such aluminum, iron, magnesium, titanium and manganese. Many more minerals including uranium are available in quantities of parts per million. Seawater contains most of the elements on the periodic table, the source of which is the erosion produced by streams and rivers feeding the oceans. The air contains rare "noble" gases that are important to industrial civilization including argon, neon, helium, krypton and xenon.
The visions of the resource optimists, however, may not work out,
here's why the future may not work out as Simon and other cornucopians envision. The main energy resources we use today are mineral resources. Oil, natural gas, and coal provide 86 percent of the world's energy. All of these resources are thought to be growing more abundant through the magic of the resource pyramid. But, if you examine the pyramid closely, you will see that not only do low-grade fossil fuel resources require better technology to extract them, they also require increasing amounts of energy to run that technology. At some point the amount of energy needed to bring low-grade deposits of oil, natural gas and coal to the surface and process and transport them will be more than the energy we get from these resources. At that point they will cease to be energy sources, and the vast, remaining ultra-low-grade deposits of these fuels will be useless to us except perhaps as feedstocks for chemicals.
Cobb adds,
Without a transition to vast new supplies of nuclear and renewable energy, the promise that we will be able to go all the way to the bottom of the resource pyramid is a mere daydream. The resource pyramid only shows what is possible. It does not guarantee that humans will achieve it. If peaks in fossil fuel production are nearing, either society will have to learn to get along without many of its critical resources, or it will have to make the transition to alternative energy swiftly as part of an engineering and planning feat that would be unparalleled in human history.
Cobb is pessimistic about the ability of society to make a rapid and timely transition to post fossil fuel energy sources,
Despite the pressing need for a rapid energy transition, it is doubtful that such a transition will be initiated by market forces before fossil fuels become scarce and therefore very expensive. The reason for this is that markets consistently wrongly assess the mineral economy, projecting what resource economist Douglas Reynolds calls "the illusion of decreasing scarcity." That means that prices stay relatively low until shortly before a resource peaks. . .
Because of the very long lead times required to transform our liquid-fuel based infrastructure, for example, into one that runs on electricity, undertaking such a conversion while oil or other fossil fuel supplies are declining could be very challenging indeed. The alternatives may not expand quickly enough to make up for the energy being lost. In that case, the whole transition project would be imperiled by the declining total energy available to society. That means that money and therefore energy would have to be taken from somewhere else in an already squeezed economy to keep the transition going. Contrary to expectations that so-called green industries will create new jobs, this scenario would result in the creation of new green jobs probably at the expense of jobs elsewhere in the economy (that is, barring improbable and extraordinary sudden leaps in the energy efficiency of the economy).
In such circumstances most people would naturally be focused on just making it through the day with little concern or appetite for spending a considerable amount of their incomes to buy electric cars or retrofit their homes for energy efficiency or passive solar heat. Nor would there likely be much appetite for raising taxes for a government-led transition program and/or set of subsidies related to making a transition away from fossil fuels.
Given the current skyrocketing prices of all fossil fuels, it appears that we are very late in the game indeed. It is not clear that a transition program started now would be completed before oil and possibly natural gas began to decline. But, it is clear that the public--at least in the United States--already has little appetite for a government-led solution when the major U. S. presidential candidates are proposing to lower gasoline taxes this summer to ease the burden on family budgets.
Cobb also understands the power density problem posed by renewable energy sources. He offers this illustration,
Let's take a 500-megawatt power plant which by itself can power a city of 300,000. (A megawatt is one million watts.) It will sit astride a fairly large plot of land. A coal-fired plant near me is just under that capacity (495 MW) and sits on about 300 acres. Most of that land, however, is essentially devoted to undeveloped transmission right-of-way filled with ponds, woods and streams. Only a small portion is covered by plant facilities including coal storage. I estimate less than 30 acres.

For new wind projects huge 5-megawatt wind generators are just now being deployed. If we take these as typical (and they are not), then using an estimate of the direct land footprint for wind towers of 0.38 acres per tower, we find that we'd need 100 towers covering 38 acres. But wind turbines run at only about 30 percent capacity because the wind doesn't blow all the time. This compares to about 70 percent capacity for coal-fired power plants. So we need to multiply 100 towers by about 2 1/3 to get the number of towers we'd need to match the operating capacity of one coal-fired plant. That means we'd need about 233 towers with a direct land footprint of 87 acres. That doesn't seem too bad. And, the land under the turbines is still available for farming and other purposes. The overall direct effects on the land and water are certainly less when compared to the coal plant.

But we're not done. The spacing between towers is typically at least five diameters of the rotor. That doesn't sound like much. But for the 5-megawatt towers in this example, the spacing would be 2,065 feet times 232--we don't need to separate the last tower from another tower beyond it. Then we'd add the diameter of the rotors--413 feet times 233--and we get a distance equivalent to about 110 miles. So, we'd need a line of 5-megawatt turbines stretching 110 miles. In theory, we'd want to split them up and put them in various locations in which the wind blows hardest at different times. But the total length of the line would still be at least 110 miles. If we take the largest separation recommended between towers which is 10 diameters of the rotors, we'd have to just about double that distance.

By comparison most people who live 110 miles from a coal-fired power plant are rarely even aware that it might be a source of electricity for them. And, the plant is certainly not a direct irritation. The lesson here, however, is not one of aesthetics. It is an illustration of the disparity in power densities between those energy sources on which we currently rely and the alternatives now being proposed and deployed.

The power density problem for solar energy is no less daunting.
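Cobb's wind arithmetic is easy to check. Using his own figures, a 5-megawatt turbine with a 413-foot rotor, 0.38 acres of direct footprint per tower, 30 percent versus 70 percent capacity factors, and five-rotor-diameter spacing, the short Python sketch below reproduces roughly his 233 towers, his roughly 87 acres of direct footprint, and his line of turbines on the order of 110 miles long; the small differences are rounding.

```python
# Reproducing Cobb's arithmetic with his own figures.
turbine_mw, plant_mw = 5, 500
wind_cf, coal_cf = 0.30, 0.70
rotor_ft, footprint_acres = 413, 0.38

towers_nameplate = plant_mw / turbine_mw                 # 100 towers to match nameplate capacity
towers = round(towers_nameplate * coal_cf / wind_cf)     # ~233 towers to match delivered energy
direct_acres = towers * footprint_acres

spacing_ft = 5 * rotor_ft                                # five rotor diameters between towers
line_ft = (towers - 1) * spacing_ft + towers * rotor_ft
print(towers, "towers,", round(direct_acres), "acres,", round(line_ft / 5280), "miles")
```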
Cobb then puts his finger on the problem,
We will be obliged to devote vast tracts of space--far more vast than the buildings they serve--to support the energy use of our current infrastructure.

This may not be impossible, but it will certainly be costly and socially disruptive.
In 2008 Cobb saw the failure of the first nuclear age as a potential tragedy for humanity. Cobb wrote,
It is a sad commentary that so many who knew the planet would one day run short of fossil fuels were unable to convince the world to embrace nuclear power in a more thoroughgoing way. With enough development, with careful and serious attention to the waste problem, and with lower-cost, decentralized designs that maximize safety, nuclear power might have succeeded in making any decline in fossil fuel availability just another historical footnote--but only if deployed on a large enough scale and far enough in advance of such a decline.

Now it may be too late. The time for the development of the nuclear economy appears to have come and gone with few people even realizing it.
Yet in the same essay, Cobb criticized the Price-Anderson Act by characterizing it as limiting the
liability for nuclear plant operators.
In fact, Price-Anderson arguably protects the government from having to pick up the first ten billion dollars of the bill in the event of a major nuclear accident. The major accomplishment of Price-Anderson is to set up an insurance pool that protects underfunded nuclear operators.

Even in 2008 Cobb was prepared to engage in real dialogue with nuclear supporters, and to acknowledge,
The solution, of course, is to build breeder reactors and I have seen designs which address the proliferation problem, in part, by using a hybrid technology that allows non-breeder and breeder operation in sequence and so the reactor doesn't have to be refueled for something on the order of 50 years.
Cobb was pessimistic about such a future, however,
I have come to the conclusion that the regulatory hurdles facing such designs are so great that it is unlikely they will be approved and built in time to address the energy deficits we will be facing after fossil fuels peak.
Cobb believed that the idea of using thorium as a basis for the nuclear fuel cycle was promising, and
besides availability, thorium has three additional distinct advantages over uranium fuel. First, thorium fuel elements can be designed in a way that make it difficult to recover the fissile uranium produced by breeding for bomb making. This reduces the likelihood of nuclear weapons spreading to nonnuclear nations that adopt thorium-based fuel technologies.

Second, the waste stream can be considerably smaller since unlike current reactors which often use only about 2 percent of the available fuel, thorium-fueled reactors with optimal designs could burn nearly all of the fuel. This is the main reason besides its sheer natural abundance that thorium could provide such long-lived supplies of fuel for nuclear power.

Third, the danger from the waste of the thorium fuel cycle is potentially far less long-lived. The claim is that the reprocessed waste will be no more radioactive than thorium ore after about 300 years. This claim is based on the idea that virtually all of the long-lived radioactive products of breeding will be consumed in the reactor before the final round of reprocessing takes place.
Cobb also notes the potential usefulness and value of Molten Salt Reactors in managing the thorium fuel cycle,
There are also practical hurdles for reprocessing solid fuel. But advocates of the so-called molten salt reactor claim that this design lessens the problem of reprocessing since the products of breeding can be continuously extracted and processed from the molten liquid stream inside a closed fuel cycle. They also claim that the design is far less prone to accidents which might release radioactive materials into the environment. None of this, of course, solves the problems of existing reactors that use solid fuel assemblies. But it does suggest a plausible course for vastly expanding nuclear power generation with little worry about fuel supplies and fewer concerns about nuclear weapons proliferation.
Cobb points to what he believes is a possible problem with MSR nuclear technology,
The main concern about these replacements is whether they can be built fast enough to head off an overall reduction in the amount of energy available to society.
I will address this concern.

Cobb's latest essay on nuclear technology is titled, "The Road to Fukushima: The Nuclear Industry's Wrong Turn." While Cobb does not mention either Nuclear Green or Charles Barton, many of the ideas in this essay parallel ideas I have frequently expressed. The lead sentence of Cobb's essay states,
Nuclear researchers knew long ago that reactor designs now in wide use had already been bested in safety by another design.
Then Cobb asks,
Why did the industry turn its back on that design?
This is indeed a very troubling question, and one to which I have devoted a number of posts on Nuclear Green. Cobb asks,
Imagine a nuclear reactor that runs on fuel that could power civilization for millennia; cannot melt down; resists weapons proliferation; can be built on a relatively small parcel of land; and produces little hazardous waste. It sounds like a good idea, and it was a well-tested reality in 1970 when it was abandoned for the current crop of reactors that subject society to the kinds of catastrophes now on display in Japan.

This rather remarkable design is called the molten salt reactor (MSR), and it lost out for two reasons: 1) It wasn't compatible with the U.S. government's desire to have a civilian nuclear program that would have dual use, that is, that could supply the military with nuclear bomb-making materials. 2) Uranium-fueled light water reactors, which are in wide use today, already had a large, expensive infrastructure supporting them back in 1970. To build MSRs would have required the entire industry to retool or at least create another expensive parallel infrastructure. And, that's how MSRs became the victim of lock-in.
Much of this simply paraphrases Nuclear Green, although I have recently offered a somewhat more complex view of why the government turned its back on Molten Salt Reactor technology.

Whatever the actual reason for the exclusion of Molten Salt Reactor technology by the United States Government, Cobb is quite correct about the consequences of that decision,
Lock-in has worked in much the same way for the nuclear industry. The decision within U.S. government circles to focus on light water reactors and abandon MSRs relegated the latter to a footnote in the history of civilian nuclear power. And, because the United States was the leader in civilian nuclear technology at the time, every nation followed us.
Then Cobb points to an important question,
So, should the world look again at this "old" technology as a way forward for nuclear power after Fukushima?
Cobb answers his own question,
My sympathies are with the MSR advocates. If the world had adopted MSR technology early on, there would have been no partial meltdown at Three Mile Island, no explosion at Chernobyl, and no meltdown and subsequent dispersion of radioactive byproducts into the air and water at Fukushima. It's true that MSR technology is not foolproof. But its very design prevents known catastrophic problems from developing. The nuclear fuel is dissolved in molten salt which, counterintuitively, is the coolant. If the reactor overheats, a plug at the base melts away draining the molten salt into holding tanks that allow it to cool down. Only gravity is required, so power outages don't matter.

As for leaks, a coolant leak (that is a water leak) in a light water reactor, can quickly become dangerous. If there is a leak from an MSR, the fuel, which is dissolved in the molten salt, leaks out with it, thereby withdrawing the source of the heat. You end up with a radioactive mess inside the containment building, but that's about it.

If the world had adopted MSRs at the beginning of the development of civilian nuclear power, electricity production might now be dominated by them. And, we might be busily constructing wind generators and solar panels to replace the remaining coal- and natural gas-fired power plants. Would there have been accidents at MSRs? Certainly. Would these accidents have been large enough and scary enough to end new orders for nuclear power plants as happened after the 1979 Three Mile Island accident in the United States? I doubt it.
Cobb is still pessimistic however,
Having said all this, I believe that MSR technology will never be widely adopted. The same problem that derailed it early in the history of civilian nuclear power is still with us. We still have lock-in for light water reactors. Yes, the new designs are admittedly quite a bit safer. But these designs still don't solve as many problems as MSRs do, and they continue to rely on uranium for their fuel. MSRs have shown themselves capable of running on thorium, a metal that is three times more abundant than uranium, and 400 times more abundant than the only isotope of uranium that can be used for fuel, U-235. This is the basis for the claim that MSRs fueled with thorium could power civilization for millennia. . . .

. . . in the United States it is easier to predict that we'll see little progress. In the U.S. it is the industry that tells the government what new nuclear technologies will be developed rather than the other way around. And, the American nuclear industry is committed to light water reactors.

I believe that even if the Fukushima accident had not occurred, nuclear power generation would probably have done no more than maintain its share of the total energy pie in the coming decades. Now, I am convinced that that share will shrink as people in democratic societies reject new nuclear plants.
Yet Cobb also acknowledges that one nation is interested in developing Molten Salt Reactor Technology,
The Chinese have announced that they are interested in pursuing MSRs and the use of thorium to fuel them. Perhaps in China--where the nuclear industry is synonymous with the government and therefore does what the government tells it to--MSRs might actually be deployed. I have my doubts. Even China suffers from the lock-in problem.
I disagree with Cobb's pessimism. Although what he calls the "nuclear industry", the current small set of reactor manufacturers outside Canada, India and China, is wedded to Light Water Reactor technology, the path to the development and deployment of Molten Salt Reactors remains wide open. Molten Salt Reactors are simpler, and will require less labor and fewer building materials to construct, than Light Water Reactors. There is therefore a high likelihood that Molten Salt Reactors will be cheaper to manufacture and simpler to deploy, which gives MSRs superior scalability. MSRs are also more efficient than LWRs, and they can do things that neither renewables nor LWRs can do: they can produce industrial process heat of up to 1200°C. With their lower costs, MSRs can offer backup generation and peak generation capacity to the electrical industry.

Thus the question is whether MSRs will spread from China, which appears committed to the development of MSR technology, or whether MSR technology will be developed by other societies as well. There are several paths to MSR development. MSRs could be developed in the United States by one or more National Laboratories. MSR technology could be developed as a ship propulsion technology by the United States Navy, or by the United States military as a means of supplying electricity to military bases and operations. It could be developed by private manufacturing businesses interested in turning their manufacturing skills into a new source of energy-related revenue, by large fossil fuel energy companies seeking a means of remaining in the energy business after their fossil fuel business declines, or by a group of nations attracted by the energy advantages MSRs offer. Thus there are many potential paths to MSR development, and once adventurers start down one of them, other paths are likely to open up quickly.

When I began to write about MSRs in 2007, virtually no one had heard of them. On the Internet I found Bruce Hoglund's Molten Salt Interest Pages and Kirk Sorensen's Energy from Thorium. Fast forward to 2011, and the Molten Salt Reactor, mainly in the form of the Liquid Fluoride Thorium Reactor, a name given to it by Kirk Sorensen, is widely known. The idea of a thorium fuel cycle Molten Salt Reactor has been adopted for development by China as a promising new nuclear technology, as Kurt Cobb has pointed out. Other parties are looking on with interest but have not yet announced plans. I expect some MSR development plans to emerge before the end of 2012.

Thursday, April 14, 2011

Did Graphite in the Chernobyl Reactor Burn?

In two previous posts, "Does Nuclear Grade Graphite Burn?" and "Did the Graphite in the Windscale Reactor Burn?," I reviewed a number of reports and other information sources on nuclear graphite flammability. Although I did not come to a firm conclusion, I did find strong evidence that nuclear graphite does not burn under many conditions in which one would expect fire. There is also startling evidence that at least one of the two reactor fires attributed to graphite, the Windscale accident, appears not to have involved a graphite fire. I concluded my Windscale review with the statement,

Given these facts, the assertion that there was a core graphite fire at Chernobyl ought also to be revisited.
This post considers several reports that are relevant to an evaluation of the role of graphite in the Chernobyl fire.

In the wake of the Chernobyl Reactor fire, the United States Department of Energy had a serious concern. The DoE operated a reactor that was similar to the Chernobyl reactor, the N reactor at Hanford, Washington. The N reactor, like the Soviet RBMK-1000, had graphite in its core. The DoE wanted to know if a Chernobyl type accident would be possible at Hanford. The DoE commissioned a review of N Reactor safety in light of the Chernobyl accident. The researchers asked
What is the potential for obtaining conditions conducive to a graphite fire in N Reactor?
And answered,
The graphite stack is protected by a helium cover gas contained within the shield structure. Combustion cannot occur unless the shield structure is sufficiently damaged to leak inert gas faster than available makeup supply. Should that occur, the rate of oxidation would be very slow because graphite temperatures would remain below the threshold for rapid oxidation because of heat removal from the stack by the ECCS [Emergency Core Cooling System] or the GSCS [Graphite and Shield Cooling System]. The GSCS alone is capable of removing both decay heat and any heat load from graphite oxidation, stabilizing temperatures in a range which ensures control.

In the Chernobyl accident sequence, the plant was effectively destroyed and conditions for exothermic chemical reactions involving a number of core materials were present before graphite fire made any contribution. It is likely that the major contribution from graphite was to serve as a refractory container for decay heat buildup, zirconium oxidation along with carbothermic reduction of the UO2, and complex gas producing redox reactions. For any N Reactor accident where the GSCS and biological shield are intact, there is no way to achieve ignition of the graphite. It has been demonstrated experimentally that oxidation of nuclear grade graphite takes very high temperatures to initiate, and the contribution to total heat load is only a small fraction of the decay heat.
They also reported finding that
Detailed reaction rate models have been developed to analyze graphite oxidation. These models tend to show that graphite oxidation in N Reactor would be limited both by available oxygen and the requirement that a high-temperature source (>1100°C) be available to drive a significant reaction. The analyses have effectively shown that graphite will not contribute significant accident heat loads.
Why then did the Chernobyl reactor graphite burn? According to the N Reactor review,
The Chernobyl release must be viewed as resulting from both very high temperatures in the core rubble, extensive mechanical disruption and dispersal of core material and the large draft "chimney effect" that followed the total disruption of that particular reactor configuration. There is no accident sequence that could produce an equivalent disruption of N Reactor; there would be some confinement even in the lowest probability event sequences. Because of the horizontal arrangement of pressure tubes, Chernobyl fission product release rates and magnitude are not pertinent to N Reactor accident scenarios with mechanistic initiators.
In 1987 the NRC did its own safety assessment of the Graphite Reactors it licensed. The NRC report described the limitations of graphite fires,
For reasons that are well understood, graphite is considerably more difficult to burn than is coal, coke, or charcoal. Graphite has a much higher thermal conductivity than have coals, cokes or charcoals, making it easier to dissipate the heat produced by the burning and consequently making it more difficult to keep the graphite hot. Concomitantly, coals, cokes and charcoals develop a porous white ash on the burning surfaces which greatly reduces radiation heat losses while simultaneously allowing air to reach the carbon surfaces and maintain the burning. In addition, coals, cokes and charcoals are heavily loaded with impurities which catalyze the oxidation processes. Nuclear graphite is one of the purest substances produced in massive quantities.

The literature on the oxidation of graphite under a very wide range of conditions is extensive. Effects of temperature, radiation, impurities, porosity, etc., have been studied in great detail for many different types of graphites and carbons [Nightingale, 1962]. This information served as a foundation for the full scale detailed studies on graphite burning accidents in air-cooled reactors initiated and completed at Brookhaven National Laboratory [Schweitzer, 1962a-f]. After British experimenters at Harwell confirmed the results obtained at BNL [Lewis, 1963] there appeared to be no new conclusions from additional work in this field. The aspects of the work pertinent to evaluating the potential for graphite burning accidents are described here in some detail.

Burning, as used here, is defined as self-sustained combustion of graphite. Combustion is defined as rapid oxidation of graphite at high temperatures. Self-sustained combustion produces enough heat to maintain the reacting species at a fixed temperature or is sufficient to increase the temperature under actual conditions where heat can be lost by conduction, convection, and radiation. In the case where the temperature of the reaction increases, the temperature will continue to rise until the rate of heat loss is just equal to the rate of heat production. Sustained combustion is distinguished from self-sustained combustion when, in the first case, the combustion is sustained by a heat source other than the graphite oxygen reactions (e.g., decay heat from reactor fuel).

Early attempts to model the events at Windscale [Robinson, 1961; Nairn, 1961] were followed by the BNL work described here.

Some 50 experiments on graphite burning and oxidation were carried out in 10-foot long graphite channels at temperatures from 600°C to above 800°C. To obtain a lower bound on the minimum temperature at which burning could occur, the experiments were specifically designed to minimize heat losses from radiation, conduction, and convection.

The objectives of the full scale channel experiments were to determine under what conditions burning might initiate in the Brookhaven Graphite Research Reactor (BGRR) and how it could be controlled if it did start. Channels 10-feet long were machined from the standard 4 in. x 4 in. blocks of AGOT graphite used in the original construction. The internal diameter of the BGRR channel was 2.63 inches. Experiments were also carried out on channel diameters of one to three inches on 10-foot long test channels in order to obtain generic information. The full length of the channels was heated by a temperature controlled furnace and was insulated from conductive heat losses. At intervals along the length there were penetrations in the furnace through which thermocouples used to read the temperature of the graphite and air were introduced, and from which air and air combustion products were sampled. A preheater at the inlet of the graphite channel was used to adjust the air to the desired temperature. The volume of air was controlled and monitored by flow meters to allow flow measurements in both laminar and turbulent flow conditions.

In a typical experimental run the graphite was first heated to a preselected temperature. The external heaters were kept on to minimize heat losses by conduction and radiation. The temperature changes along the graphite channel were then measured for each flow rate as a function of time with the heaters kept on. It was observed that below 675°C it was not possible to obtain temperature rises along the channel if the heat transfer coefficient (h) was greater than 10~ cal/cm-sec-°C. Below 650°C it was not possible to get large temperature rises along the channel with 30°C inlet air temperatures at any flow rate. For h values lower than 10~ cal/cm-sec-°C maximum temperature rises were 0-50°C and remained essentially constant for long periods of time (five hours). For h values greater than 10~ cal/cm-sec-°C the full length of the channel was cooled rapidly.

There were two chemical reactions occurring along channels. At low temperatures the reaction C + O2 to form CO2 predominated. As the temperature increased along the channel CO formed either directly at the surface of the channel or by the reaction CO2 + C. At temperatures above 700°C, CO reacts in the gaseous phase to form CO2 with accompaniment of a visible flame. It was observed that the unstable conditions which were accompanied by large and rapid increases in temperature involved the gas phase reaction CO + O2 and occurred only for h values below 10~ cal/cm-sec-°C below 750°C. Temperature rises associated with the formation of CO2 from C + O2 were smaller than those due to CO + O2 and decreased with time. They too occurred at h values below 10~ cal/cm-sec-°C.

In a channel which was held above 650°C there was an entrance region running some distance down the channel which was always cooled. A position was reached where the heat lost to the flowing gas and the heat lost by radial conduction through the graphite was exactly equal to the heat generated by the oxidation of the graphite and of the CO. This position remained essentially constant with time. Beyond this point rapid oxidation of graphite occurred with the accompaniment of a flame (due to the CO-O2 gas phase reaction). Under conditions of burning, the phenomena were essentially independent of the bulk graphite chemical reactivity. Rate controlling reactions during burning were determined by surface mass transport of reactants and products.

The experiments were used to develop an equation which expressed the length of channel that can be cooled as a function of temperature, flow rate (heat transfer coefficient), diameter and reactivity of the graphite. It was found that the maximum temperature at which thermal equilibrium (between heat generated by graphite oxidation and heat removed by the air stream) will occur in a channel can be predicted from the heat transfer coefficient, the energy of activation and a single value of the graphite reactivity at any temperature. Above this maximum temperature the total length of channel is unstable and graphite will burn. The studies show that the bounding conditions needed to initiate burning are:
1. Graphite must be heated to at least 650°C.
2. This temperature must be maintained either by the heat of combustion or some outside energy source.
3. There must be an adequate supply of oxidant (air or oxygen).
4. The gaseous source of oxidant must flow at a rate capable of removing gaseous reaction products without excessive cooling of the graphite surface.
5. In the case of a channel cooled by air these conditions can be met. However, where such a configuration is not built into the structure it is necessary for a geometry to develop to maintain an adequate flow of oxidant and removal of the combustion products from the reacting surface. Otherwise, the reaction ceases.
The report went on to discuss the potential contribution of Wigner energy to a graphite reactor fire, and found that if a reactor operates at a high enough temperature to perform Wigner annealing, its graphite will not accumulate Wigner energy. The report also stated that,
The factors needed to determine whether or not graphite can burn in air are the graphite temperature, the air temperature, the air flow rates, and the ratio of heat lost by all possible mechanisms to the heat produced by the burning reactions [Schweitzer, 1962a-f]. In the absence of adequate air flow, graphite will not burn at any temperature. Rapid graphite oxidation in air removes oxygen and produces CO2 and CO which, along with the residual nitrogen, suffocate the reaction causing the graphite to cool through unavoidable heat loss mechanisms. Self-sustained rapid graphite oxidation cannot occur unless a geometry is maintained that allows the gaseous reaction products to be removed from the surface of the graphite and be replaced by fresh reactant. This necessary gas flow of incoming reactant and outgoing products is intrinsically associated with a heat transfer mechanism. When the incoming air is lower in temperature than the reacting graphite, the flow rate is a deciding factor in determining whether the graphite cools or continues to heat. Experimental studies on graphite burning have shown that for all the geometries tested which involved the conditions of small radiation and conduction heat losses, it was not possible to develop self-sustained rapid oxidation for graphite temperatures below about 650°C when the air temperatures were below the graphite temperature. At both high and low flow rates, the graphite was cooled by heat losses to the gas stream even under conditions where other heat loss mechanisms such as radiation and conduction were negligible.

At temperatures above about 650°C, in realistic geometries where radiation is a major heat loss mechanism, graphite will burn only in a limited range of flow rates of air and only when the air temperatures are high. At low flow rates, inadequate ingress of air restricts burning. At high flow rates, the rate of cooling by the flowing gas can exceed the rate of heat produced by oxidation.

Studies have shown that burning will not occur when there is no mechanism to raise the graphite temperature to about 650°C [Schweitzer, 1962a-f]. If the temperature is raised above 650°C, burning will not occur unless a flow pattern is maintained that provides enough air to sustain combustion but not enough to cause cooling. Since the experiments were designed to minimize all heat losses other than those associated with the air flow, 650°C can be considered a lower bound for burning.
Thus the NRC's answer to the original question which I asked at the beginning of this series is "yes, graphite does burn," but only under a very limited set of conditions.
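Those conditions can be restated as a simple checklist. The Python sketch below merely encodes the NRC's bounding conditions quoted above; the 650°C figure is the lower bound reported from the BNL channel experiments.

```python
def self_sustained_burning_possible(
    graphite_temp_c,             # current graphite temperature
    heat_source_maintains_temp,  # combustion heat or an outside source keeps the graphite hot
    adequate_oxidant_supply,     # enough air or oxygen reaches the hot surface
    products_removed,            # gaseous reaction products are swept away from the surface...
    flow_overcools_graphite,     # ...but the flow is not so strong that it cools the graphite
):
    """Checklist form of the NRC bounding conditions for self-sustained graphite burning."""
    IGNITION_LOWER_BOUND_C = 650  # lower bound reported from the BNL channel experiments
    return (
        graphite_temp_c >= IGNITION_LOWER_BOUND_C
        and heat_source_maintains_temp
        and adequate_oxidant_supply
        and products_removed
        and not flow_overcools_graphite
    )
```

If any one of these conditions fails, the reaction ceases; that is what makes a self-sustained graphite fire so hard to achieve.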

The NRC report simply assumed that those conditions had been met at Windscale and Chernobyl. We now know what the NRC did not know in 1987: that the Windscale fire was not a graphite fire. Neither report reviewed here offers conclusive evidence that the Chernobyl fire was a graphite fire. A major conclusion of the report draws a big question mark over the Chernobyl graphite fire hypothesis,
in order to have self-sustained rapid graphite oxidation in any of these reactors certain necessary conditions of geometry, temperature, oxygen supply, reaction product removal and favorable heat balance must exist.
Yet the Soviets claimed, and American nuclear safety experts like H. J. C. Kouts accepted, the notion that graphite could burn like charcoal.
The emission of radionuclide continued for about nine days, aided by burning of the graphite. It is estimated that upwards of ten percent of the graphite in the core burned, in a manner similar to the rapid oxidation of charcoal.
We know that Kouts' view cannot be correct: nuclear graphite does not burn like charcoal, and the assertion that only 10% of the Chernobyl core graphite burned does not suggest graphite was the major source of the Chernobyl fire. The question is whether the conditions conducive to a graphite fire were present at Chernobyl, and if so, how. Without answers to these questions, and without other evidence, we must consider the claim of a graphite fire at Chernobyl to be unconfirmed.

As we have seen, the use of graphite in a reactor core is consistent with safe reactor operations. The danger of a core fire due to graphite burning is quite limited. The time has now arrived to ask the question: is it dangerous to use graphite in the core of a Molten Salt Reactor?

We have already noted that the possibility of graphite fires in a reactor core can be eliminated by core design. In the case of Molten Salt Reactors, the possibility of a core fire is eliminated by the two modes of MSR operation. An MSR is only active if liquid salt is present in the core of the reactor, but if liquid salt is present then air cannot be: the molten salt prevents air from reaching the graphite. If the salt is drained, whether deliberately, by accident, or by operation of the freeze valve safety system, then the heat producing fission products drain from the core as well. Without fission products in the core, the graphite could not reach the temperature required to trigger a fire. Thus the use of graphite in a Molten Salt Reactor core would be inherently safe.
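The logic of that argument can be reduced to a toy sketch: whichever of the two states the reactor is in, at least one necessary condition for graphite burning is absent. The function below simply encodes the two operating states described in the paragraph above.

```python
def msr_graphite_fire_possible(salt_in_core):
    # With salt in the core, the liquid excludes air from the graphite surfaces.
    air_reaches_graphite = not salt_in_core
    # Draining the salt also drains the dissolved fission products, removing
    # the decay-heat source needed to push the graphite toward ~650 C.
    heat_source_present = salt_in_core
    return air_reaches_graphite and heat_source_present

# In either operating state at least one necessary condition is missing.
assert not msr_graphite_fire_possible(salt_in_core=True)
assert not msr_graphite_fire_possible(salt_in_core=False)
```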
