
Welcome to the LTA Windpower Founder’s Energy Blog

Welcome to this, the first posting on the “Founder’s Energy Blog” at LTA Windpower.

Introduction

This blog will initially be written by me, Nykolai Bilaniuk, the founder and CEO of the company. It is difficult to say how the blog will evolve over the long haul, but the intention is to present news and views on energy-related issues. The aim is to maintain a high standard, so that claims will have a sound scientific basis even when they reflect my own biases. The plan is to open the blog to questions and comments from visitors as soon as the necessary software can be put in place. Critical (but polite and constructive) feedback is more than welcome. I may have a Ph.D. in engineering and may be a registered P.Eng., but I know I too have a lot to learn, and blog visitors can contribute to my understanding of the world around me, including on the very complex and sometimes controversial topic of energy.

More than likely, the blog will emphasize windpower, since that is the business of our company, but the blog’s focus is not meant to be that narrow. It’s important to grasp the entire energy context, of which wind energy is only a small component. For that reason, I want to begin with a series of mini-lectures that discuss energy use in general.

Global Energy – Supply and Demand

The supply of energy used by human societies comes in several basic varieties. In the earliest paleolithic times, members of our genus used only food energy. The first great energy innovation was the harnessing of fire. Apparently the earliest signs of cooked food date from 1.9 million years ago, but fire was probably not controlled until 400,000 years ago, and its use did not become a regular human habit until 50,000 to 100,000 years ago [1]. Thus after food energy, the first energy source to be mastered was combustion of organic materials, and this sort of chemical energy remains an important component in our species’ “energy diet” today. Over the millennia, we improved the tools with which we burned plant matter, graduating from open fires to metal stoves. We also found more varied reasons for harnessing combustion. Initially, it was all about food preparation and heating, but later, after humans developed agriculture and sedentary civilizations, energy came to be used for all kinds of additional processes: firing ceramics, metallurgy, driving steam engines, and so on. The number of humans in the world and their per-capita energy consumption both increased dramatically. We could have denuded the planet of its trees long ago if we hadn’t also discovered another category of combustibles: fossil fuels. The first of these were peat and coal, joined from about the 18th century by petroleum and later natural gas. Besides combustion, the few other energy sources harnessed before modern times were the gravitational potential energy of descending water (i.e. hydropower) and the kinetic energy of moving masses of air (i.e. windpower). Both were widely used for milling grain and other early industrial processes, and wind was also used for transportation in sailing ships.

Electricity had been known since ancient times, but it wasn’t until the 19th century that, thanks to the likes of James Clerk Maxwell, humans learned enough about electricity to take advantage of it in many useful ways. In and of itself, electricity is not an energy resource so much as a tool for transforming and transmitting energy. This is because our ability to store electricity is still very limited, so (with the small-scale exception of batteries) electricity must be produced exactly when needed. Electricity allowed us to make more convenient use of already known sources of energy (e.g. hydro and chemical combustion) and also opened up the possibility of harnessing new sources of energy (e.g. nuclear fission).

Per-capita energy use has increased greatly since the early paleolithic, before the mastery of fire. A modern adult male leading a typical life in a warm environment requires on the order of 2500 kilocalories (roughly 10 megajoules, or MJ) of food energy per day [2], or about 3.6 GJ/person/year; a female requires slightly less. Thus a person’s body uses about as much energy as a 100 watt light bulb that is lit continuously. Contrast food energy with actual per-capita energy consumption by society as a whole [3]: it ranges from 6.76 GJ/person/year in Bangladesh to 898 GJ/person/year in Qatar. China clocks in at 47.8 GJ/person/year and the United States at 327 GJ/person/year. Note that when population is taken into account, China and the USA produce comparable amounts of energy per year. The USA still consumes slightly more overall due to net imports, but China’s consumption is growing faster and will soon overtake the USA’s.

The total energy consumption for the entire world is around 500 exajoules/year at this time (1 EJ = 10^18 J). In 2011 the world population is projected to cross the 7 billion barrier [4], so per-capita energy consumption is around 71 GJ/person/year, or 195 MJ/person/day. Looking at it another way, the average human’s energy footprint is 20 times the adult food intake. Energy demand (per capita and overall) is likely to continue to INCREASE over the coming years, decades, and centuries, both because the human population will continue to grow for at least a few generations and possibly longer, and because many poorer countries (notably the populous BRIC group – Brazil, Russia, India, China) have growing economies and will not accept being held back economically just to avoid a hypothesized Malthusian [5] catastrophe.
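For readers who like to check arithmetic, here is a quick Python sanity check of the figures above; the only inputs are the numbers already quoted.

```python
# Sanity-checking the figures above, using only the numbers quoted in the text.
FOOD_ENERGY_MJ_PER_DAY = 10.0      # ~2500 kcal/day for an adult male
SECONDS_PER_DAY = 86400

food_power_W = FOOD_ENERGY_MJ_PER_DAY * 1e6 / SECONDS_PER_DAY
print(f"Metabolic power: {food_power_W:.0f} W")   # ~116 W, i.e. roughly a 100 W bulb

WORLD_ENERGY_EJ = 500.0            # global consumption, EJ/year
POPULATION = 7e9                   # 2011 projection

per_capita_GJ = WORLD_ENERGY_EJ * 1e18 / POPULATION / 1e9
per_capita_MJ_day = per_capita_GJ * 1000 / 365
print(f"Per capita: {per_capita_GJ:.0f} GJ/year = {per_capita_MJ_day:.0f} MJ/day")
print(f"Ratio to food intake: {per_capita_MJ_day / FOOD_ENERGY_MJ_PER_DAY:.0f}x")
```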

This last observation won’t sit well with people who hope for energy sustainability. Surely we need to decrease not only per capita energy consumption, but also total energy consumption? Yet as noted, we can safely assume the world WILL continue to increase its total energy consumption whether we like it or not. The solution is to look carefully at the global energy portfolio (both energy sources and applications) and try to decrease the use of those forms of energy that are most damaging to the planet while permitting and even encouraging the growth of those that are less troublesome.

[1] http://en.wikipedia.org/wiki/Fire
[2] http://en.wikipedia.org/wiki/Calorie_food
[3] http://en.wikipedia.org/wiki/List_of_countries_by_energy_consumption_per_capita
[4] National Geographic, January 2011
[5] Thomas Malthus’ An Essay on the Principle of Population was first published in 1798, and is now in the public domain. It remains a classic in the study of the tension between population growth and resource depletion.


The world’s energy sources and sinks

The first posting concluded with the observation that the global average per-capita energy consumption is about 20 times the food energy required by an average adult, and with the prediction that global energy consumption would continue to rise regardless of the developed countries’ desire to see energy usage curbed. Whoever doubts the prediction of continuing increases in demand for energy should put themselves in the bare feet of the Third World’s poor, especially the 1.5 billion who are not yet connected to an electricity grid: Why should they not develop their economies to the point where they can enjoy at least a small fraction of the luxuries rich countries take for granted? On the ground in sub-Saharan Africa and south Asia, this feeling is too powerful to be stopped by scientists who complain about such things as the need to reduce CO2 emissions. A Catch-22 ensues: Developing countries probably feel that the climate change problem was created by rich countries and should therefore be solved by and at the expense of rich countries. Rich countries know that all their expensive efforts will be insufficient, and probably completely ineffective, without the co-operation of the developing countries in curbing the amount of carbon in circulation. Thus a way has to be found to allow economic growth (and the inevitable growth in the consumption of energy in the developing world) while also curbing the carbon in circulation.

In order to address this problem systematically, it is helpful to see what the world’s energy sources and sinks are, and what the trends in their usage are. Concerning energy sources, we have the following picture thanks to Wikipedia [1], the excellent energy research firm Enerdata [2], and the US government’s Energy Information Administration [3].

Fossil fuel combustion accounts for the overwhelming majority of the world’s energy budget. In 2004, the share was 86%, and that has declined only slightly since then. In 2005, oil accounted for 37%, coal 25%, and natural gas 23% of the energy supply. In 2009, the figures were 33%, 27%, and 21% respectively. If one adds biomass and biofuels to those figures, then the chemical-fuel-based share of the total energy budget is holding steady at around 90%. Nuclear energy sits at around 6% and hydro about 3%. Solar thermal, solar photovoltaic, wind, and geothermal sources together account for only about 1% of the energy sources. Enerdata’s yearbook [2] provides very nice graphical overviews of the situation.

These figures should be sufficient to make the point that the world cannot quickly rid itself of reliance on fossil fuels! Even if the whole world were collectively determined to put a serious dent in fossil fuel consumption (and it’s doubtful that it is), it would take decades to build the necessary alternative infrastructure. It isn’t even proven that satisfactory alternative technology exists. The global addiction to fossil fuels runs deep, but we can predict with relative confidence that the share of the global energy budget provided by oil and gas will decline, while that of coal will grow on the scale of the next century or so – whether we like it or not. Why this is so will be examined in coming postings. Among other things, we will look at the “Peak Oil” theory first proposed in 1956 by Dr. Marion King Hubbert, and will examine the workings of electricity grids.

For the moment, let’s have a peek at the flipside of the global energy supply, the energy uses or “sinks”. Here is how the global energy budget is consumed. Wikipedia [1], citing the EIA and industry sources, indicates that industry (including agriculture but apparently excluding the energy industry) consumes 37% of the energy, transportation (personal and commercial) consumes 20%, residential use (heating, lighting, and appliances) consumes 11%, and commercial use (lighting, heating, cooling, provisioning) consumes 5% of the total. The remaining 27% of the world’s energy is consumed in its recovery and distribution (called generation and transmission loss in the case of electricity).

Remember, combustion of fuels accounts for 90% of the energy market. As energy usage sectors or “sinks” go, heating and transportation tend to use combustion directly, and in many cases industry does too. However, many other applications, notably lighting and cooling, use energy after it has been transformed into electricity. Unfortunately the transformation of chemical potential energy from fossil fuels to electricity is particularly lossy. Generating electricity accounted for about 33% of the energy consumed in 2005, but the average efficiency of the process was only 40%, so the amount of electricity made available was only about 13% of the energy consumed, and electricity generation alone accounted for the majority of the 27% overall energy loss.[1]
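To make the bookkeeping concrete, here is the same calculation in a few lines of Python; the 33% and 40% inputs are the figures just quoted, so the roughly 20 percentage points of loss follow directly.

```python
# Rough bookkeeping of electricity generation losses, using the 2005 figures above.
share_into_generation = 0.33   # fraction of world primary energy used to make electricity
average_efficiency = 0.40      # typical thermal conversion efficiency

electricity_out = share_into_generation * average_efficiency    # ~13% of primary energy
generation_loss = share_into_generation - electricity_out       # ~20 percentage points

print(f"Electricity delivered: {electricity_out:.1%} of primary energy")
print(f"Lost in generation:    {generation_loss:.1%} of primary energy")
```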

The world needs to think about that going forward, so we will revisit the electricity market frequently in future postings. For example, if we removed all the fossil-fuelled transportation equipment from the world’s roads and replaced it with zero-emission electric vehicles, we would seemingly be cutting out 20% of the consumption of CO2-producing fuels, but in reality we would be worse off for it unless we also very drastically changed the world’s mix of electricity generation technologies. That is an extremely challenging undertaking, as we will also explore in coming posts.

[1] http://en.wikipedia.org/wiki/World_energy_resources_and_consumption
[2] http://yearbook.enerdata.net/
[3] http://www.eia.doe.gov/, see e.g. http://www.eia.doe.gov/emeu/international/contents.html


Efficiency Myths

Laymen and scientists have fairly similar perceptions of the meaning of “efficiency”, but very different views of what is achievable.

Processes and machines transform energy from one form to another. Efficiency is the fraction of the input power that is usefully available at the output.

For example, the purpose of a light bulb is to produce light visible to the human eye, so we would say that its input is electric power, and its (much smaller) useful output is visible light power (aka luminous flux). The rest of the input power ends up as waste heat.

A very important category of machines is the heat engine. What heat engines generically have in common is that they involve heat flows. There are many embodiments of this idea: external combustion engines such as steam engines and steam turbines, internal combustion engines such as gasoline and diesel engines, furnaces, refrigerators, air conditioners, and heat pumps can all be thought of in heat engine terms. These applications taken together represent a large fraction of the energy “sinks” on the planet.

Although the following doesn’t quite capture the full variety of lay views of efficiency, a typical layperson’s way of thinking is to have faith in future technological progress: “As time goes on, scientists and engineers will find new ways of improving efficiency. There will always be improvements. Eventually efficiency will approach 100% for most machines and processes. As improvements are being made, we can maintain our lifestyle while reducing our energy consumption.” In a previous posting, it was noted that the efficiency with which coal is burned to make electricity is about 40%. Seemingly there is a lot of room for improvement.

This view of efficiency and progress is badly mistaken. Ironically, it is the scientists of whom the public expects the future improvements who realize best that most of the available improvements in process and machine efficiency have already been made.

Theoretical work on heat engine efficiency has been under way at least since 1824, when a French engineer named Sadi Carnot made a very significant observation about this category of machines. With the intent of applying his research to improving steam engines, he studied an abstract representation of a heat engine in which a gas undergoes processes at constant temperature (isothermal processes), alternating with processes in which heat is not transferred into or out of the system (adiabatic processes). According to the laws of thermodynamics, heat always flows from hotter to cooler places. Such processes depend on the relationship between the volume, pressure, and temperature of the gas, called an “equation of state”. The simplest equation of state for gases is the “ideal gas law” PV = nRT that many people learn about in high school. Carnot found that if one assumes the ideal gas law applies, then the efficiency of the system depends only on the hotter temperature T2 and the cooler temperature T1: Efficiency = (T2 – T1)/T2.

It turns out that his work has much wider applications than just to ideal gases. His abstract engine, now called the “Carnot cycle”, is actually the upper limit on the efficiency of many practical processes. We can see from the formula that, in order to have an efficiency approaching unity (aka 100%), it is necessary either to have the hotter temperature T2 approach infinity, or the lower temperature T1 approach absolute zero. Yet intuition should warn us that achieving either of those itself costs an enormous amount of energy! This means that in practice, heat engines must make do with whatever temperature differential they can get, and they can never approach 100% efficiency.
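As a concrete illustration, here is the Carnot formula applied to temperatures roughly typical of a modern steam plant. The 833 K and 298 K inputs are illustrative assumptions, not measurements from any particular plant.

```python
def carnot_efficiency(t_hot_K, t_cold_K):
    """Upper bound on heat-engine efficiency between two reservoir temperatures."""
    return (t_hot_K - t_cold_K) / t_hot_K

# Assumed temperatures: ~560 C steam at the turbine inlet, ~25 C ambient heat rejection.
T2 = 833.0   # kelvin (hot reservoir)
T1 = 298.0   # kelvin (cold reservoir)
print(f"Carnot limit: {carnot_efficiency(T2, T1):.1%}")   # ~64%; real plants reach ~40%
```

The gap between the ~64% ideal and the ~40% achieved covers all the other real-world losses: friction, incomplete combustion, pumping, and so on.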

Let’s return to the observation that coal-fired power plants actually achieve about 40% efficiency. The reality is that there ISN’T a lot of room for improvement, given that coal combustion can only be driven so hot, and the spent heat has to be released at the ambient temperature. Power plants fueled with natural gas are somewhat more efficient than their coal counterparts, mostly because gas burns hotter than coal. Small incremental efficiency improvements can still be made, yes, but most of the easy efficiency improvements have already been wrung out. At 40% efficiency we are already approaching the asymptotic limits of what is possible. Future improvements will suffer from the law of diminishing returns, and increasingly expensive efforts will yield ever smaller results.

With regard to devices that are not heat engines, there are often other good theoretical models that impose limits on efficiency well below 100%. Light bulbs are an important example. Incandescent bulbs have an efficiency around 2%, fluorescents around 10%, and light emitting diodes (LEDs) up to about 20%.[1] With light bulbs, part of the trick to achieving high efficiency is limiting the emission spectrum of the light to the visible range. Light sources differ so much in how they produce light that their theoretical maximum efficiency must be addressed using different models. For instance, incandescents can be modelled approximately as what physicists call “blackbody radiators”. We won’t go into the details of what that means, but from this model it is clear that incandescents are inherently inefficient because they have a very wide emission spectrum that reaches well beyond the visible, particularly on the infrared side (which has longer wavelength and lower frequency than visible light).
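For the curious, the percentages above can be reproduced from luminous efficacies (lumens per watt), since 683 lm/W is the theoretical maximum efficacy, achieved only by monochromatic green light at 555 nm. The efficacy figures below are illustrative values chosen to be consistent with reference [1].

```python
# Converting luminous efficacy (lumens per watt) into an efficiency figure.
MAX_EFFICACY_LM_PER_W = 683.0   # theoretical maximum, 555 nm monochromatic light

# Illustrative efficacies, consistent with the percentages quoted above.
sources = {"incandescent": 15.0, "fluorescent": 70.0, "LED (2011)": 130.0}
for name, efficacy in sources.items():
    print(f"{name:>12}: {efficacy / MAX_EFFICACY_LM_PER_W:.1%}")   # ~2%, ~10%, ~19%
```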

In general, the lesson is that all energy transformations involve loss. Therefore it is important to minimize the number of transformations that our energy supplies must undergo before they are put to their final use. Efficiency is thus always something of an obstacle, but often not the only one. For instance, electricity storage techniques (such as rechargeable batteries, capacitors, and pumped water reservoirs) are held back by their cost, the limited availability of certain resources, ecological concerns, and energy density, as well as efficiency.

[1] http://en.wikipedia.org/wiki/Luminous_efficacy


Peak Oil

Dr. Marion King Hubbert was a genius. An American geophysicist who was born in 1903 and passed away in 1989, he’d be unknown today if it weren’t for the fact that he not only introduced the theory of “Peak Oil”, but developed it with incredible thoroughness and clarity. He correctly pointed out the unpleasant fact that fossil fuels are a limited resource whose supply cannot be arbitrarily increased to meet demand. Most importantly, his methodology for estimating when supplies of oil, gas, and coal would run out commands a lot of credibility given that he correctly predicted when U.S. production of oil would hit its peak, never to recover. The consequences of Hubbert’s “Peak Oil” theory are of great importance for the energy future of the world.

Dr. Hubbert’s seminal paper “Nuclear Energy and the Fossil Fuels” was presented in 1956 at a meeting of the American Petroleum Institute. It is worth reading in its entirety, and from time to time it can be found on the Internet although at present a prime source is inaccessible.[1] Typed up double-spaced using an old-fashioned typewriter and peppered with a few plots, the document takes 57 pages. In a nutshell, the key points of the argument go something like this:

(1) Fossil carbon (coal) and fossil hydrocarbons (oil and natural gas) came from plants, and the supplies that exist had been accumulating for 500 million years before they began to be exploited by modern humans. Whatever the supply may be, it is finite. Additional accumulations during the time of exploitation will be negligible, because that exploitation is happening on a timescale more than a million times shorter.

(2) Fossil fuel exploration and discovery follows a particular pattern. Some aspects of this can be understood by analogy to Christopher Columbus’ and other explorers’ discovery of New World lands: North and South America each had a much higher probability of early discovery than any particular small Caribbean island. It is similar with fossil fuel deposits. The largest resources have a high likelihood of being among the first to be discovered and exploited. As the number of discoveries accumulates, the likelihood of additional finds decreases and their exploitation cost rises. And yes, Hubbert was aware of tar sands and shale hydrocarbon deposits.

(3) Two points on the resource exploitation curve are fixed in advance: Before the first discovery, the quantity of known resource is zero. After a resource has been thoroughly exploited, the amount remaining that can be effectively exploited is also zero. In between, there is initially a ramp-up in both known reserves and production, with annual new discoveries outpacing annual production, so that known reserves increase even while the rate of production increases too. Next, inflection points are reached at which the rate of discovery and the rate of increase in production begin to slow. Then a maximum in the rate of production is reached. There may be a bit of a plateau around that peak, but there is likely to be just one dominant peak. By this time, production outpaces new discoveries. Production then declines, and eventually dwindles to zero despite the fact that exploration continues and some new discoveries are made during the decline.

The theoretical production curve is bell shaped with one peak, hence the name “Peak Oil Theory”. Dr. Hubbert understood very well that actual production curves for a resource are not perfectly smooth theoretical figures; in fact, he gives a number of examples of this imperfection. Since the 1956 paper, various authors have worked at refining the theoretical model. Nonetheless, the basic pattern always holds, and it was possible for Dr. Hubbert to take historical data about the reserves and production of fossil fuels and fit them to the theoretical model in order to determine what stage of the exploitation curve each resource is in. The trickiest part of the process is predicting future demand, since sometimes, for some resources, demand falls below the maximum possible production, and this delays the peak.
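One common way of formalizing the bell shape is to treat annual production as the derivative of a logistic curve for cumulative production. The following sketch uses purely illustrative parameters, not Hubbert’s fitted values, and simply shows the characteristic rise, single peak, and decline.

```python
import math

def hubbert_production(t, ultimate, peak_year, steepness):
    """Logistic-derivative ("Hubbert") production curve.
    ultimate  -- total recoverable resource (the area under the curve)
    peak_year -- year of maximum production
    steepness -- controls how sharp the peak is
    """
    x = math.exp(-steepness * (t - peak_year))
    return ultimate * steepness * x / (1.0 + x) ** 2

# Illustrative parameters only: 250 units of ultimate resource, peak in 2000.
for year in range(1940, 2061, 20):
    p = hubbert_production(year, ultimate=250.0, peak_year=2000, steepness=0.05)
    print(year, f"{p:.2f} units/year")
```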

According to Hubbert’s 1956 predictions, US production of petroleum would peak in 1970 and global production would peak in 2000. US production of natural gas was also expected to peak around 1970. Global production of coal was predicted to peak circa 2150 if demand for coal were not affected by the situation with the other fossil fuel resources. It should be pointed out that any errors in estimation would not have invalidated the theory in general, but would call for an adjustment. So how accurate was Dr. Hubbert? His prediction regarding petroleum production in the USA was very accurate, while the prediction concerning global production was out by about a decade. The peak in global petroleum production may have happened in 2008 or 2009 instead of 2000, but we won’t know for sure until the global production decline begins in earnest. In any case, respectable people believe the peak has happened already.[2]

Economists are fond of the law of “supply and demand”. This idea has been around in some form at least since the publication of Adam Smith’s classic magnum opus, “Wealth of Nations”.[3] The basic notion is that prices adjust to account for changes in supply and demand. Increasing demand or decreasing supply puts pressure on prices to increase. Increasing prices tend to encourage more market entrants and therefore increase supply. When it comes to natural resources such as oil and gas, price increases do encourage more efficient exploitation and use, but they cannot truly increase supply. What our rich but abused Earth has to offer is all there is or will be. Optimists pin their hopes on future discoveries and better recovery technologies, but the best that can be accomplished is to slightly delay the inevitable: Oil and gas prices will skyrocket, but supply will dwindle to zero anyway.

We end up with a dire collection of givens:
- global demand for petroleum and natural gas will not only hold steady but will actually increase thanks to developing countries like China and India, even if demand in the West can be curbed;
- global production of petroleum and gas is already at or near its peak and has either started declining or soon will;
- the world needs this fossil hydrocarbon energy for heating, transportation, and many other requirements;
- combustion still provides 90% of all global energy, and most of that is from fossil fuels. Burning non-fossil carbon is not a good option either. Replacing fossil with non-fossil hydrocarbons would rapidly deplete the world’s stock of plant matter while also robbing soils of nutrients;
- although outside the scope of the energy problem, it is worth noting that petroleum is also a crucial input into fertilizer production. Without fertilizer, crop yields per hectare can be expected to fall, which will have terrible consequences for those countries that are already barely able to feed themselves.

Dr. Hubbert’s paper did more than point out the impending crisis. He also identified the energy sources that the world would fall back on (for lack of any other realistic alternative) during the decades of the fossil hydrocarbons’ decline: coal combustion and nuclear energy. The world’s reliance on coal will be inevitable and fateful: The dwindling of hydrocarbon supplies might at least have had the benefit of slowing or stopping the steady increase in the amount of carbon in circulation. Thanks to coal, that change in human behaviour won’t happen for over a century. In the meantime, increases in atmospheric CO2 and the resulting global warming are likely to continue uninterrupted whether scientists and environmentalists in the West like it or not. Remember, the advanced West’s circa 700 million people represent only one-tenth of the world’s population. No wonder even some environmentalists are beginning to embrace nuclear power.

Is there an alternative energy technology that can displace combustion and nuclear, and is ready to be phased in starting immediately, to fill the role of 90% of the world’s energy supply? This serious matter needs to be explored further. In order to understand the world’s energy options after the decline of fossil hydrocarbons, we must examine critically all the non-fossil-fuel sources, including alternative ones such as wind, solar, and geothermal, and all plausible energy intermediaries, chiefly electricity and hydrogen. From such an examination, we can assess what combination of technologies looks promising and, equally importantly, which technologies are likely dead ends. Implementing a long-term solution and averting catastrophe will require plenty of research and manufacturing, and that will take decades.

If the world had taken Dr. Hubbert’s work more seriously in 1956, maybe our alternative energy development and deployment would have been advanced enough by now to take over from carbon combustion. Sadly, they are not. At least today Hubbert has a legion of fans, the Association for the Study of Peak Oil and Gas [4]. Among other things, their website archives recent papers offering elaborations and refinements of Hubbert’s model.

[1] I have in mind the web page http://www.hubbertpeak.com/Hubbert/1956/1956.pdf
[2] E.g. the European Union’s Guenther Oettinger; see http://www.reuters.com/article/idUSBRU01112520101110
[3] Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations, 1776. This work is now in the public domain. See e.g. http://openlibrary.org/works/OL76827W/The_wealth_of_nations .
[4] http://www.peakoil.net/


Electricity Grids

Electricity is the best approach to distributing energy to static locations. It’s very expensive to set up, but once an electric grid is in place, energy can be sent long distances with low losses, and the electricity supply can be subdivided in real time such that millions of individual loads get exactly the amount of power they need. No combustion fuel, not even natural gas with its pipeline system, comes close to matching the versatility of electricity. However, electricity also has its shortcomings.

One downside is that there is no way of storing electricity on a large scale. Capacitors store electric energy efficiently, but can hold very little at a time. Batteries can hold more, but they are less efficient, very heavy, most of them use toxic chemicals, and some use rare elements. Most of all, both capacitors and batteries are far too expensive for storing electric power within a grid system. For instance, one of the least expensive types to use is the lead-acid automotive battery. One such battery costs about $100 and stores about 1 kilowatt-hour of electricity at a time, which is worth only 3 to 10 cents. Assuming the battery can last a hundred charge/discharge cycles, its use multiplies the cost of electricity at least tenfold, not to mention that it creates the potential for an environmental problem if its contents are not recycled properly. Other electricity storage ideas have been discussed, for example pumping water uphill to a reservoir to store energy, and letting the water run downhill to a hydroelectric generator to release energy. We will return to the subject of energy storage. For now, suffice it to say that none of these storage techniques is either energy efficient or cost efficient compared to doing the obvious: generating electric power exactly when it is needed.
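The arithmetic behind that “at least tenfold” claim, using the numbers just quoted (the hundred-cycle life is the assumption stated above):

```python
# Cost per kWh cycled through a lead-acid battery, from the figures above.
battery_cost_dollars = 100.0
capacity_kWh = 1.0
cycle_life = 100                 # assumed charge/discharge cycles over the battery's life

storage_cost = battery_cost_dollars / (capacity_kWh * cycle_life)
print(f"Storage cost: ${storage_cost:.2f} per kWh cycled")        # $1.00/kWh

low, high = 0.03, 0.10           # the electricity itself is worth 3 to 10 cents/kWh
print(f"Markup: {storage_cost / high:.0f}x to {storage_cost / low:.0f}x "
      f"the value of the electricity stored")
```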

Demand for electricity fluctuates from hour to hour, by the day of the week, and from season to season. Fortunately, while you cannot predict when any one customer will turn loads on or off, there is a fair bit of predictability concerning the load in aggregate. Typically, in the United States the largest loading happens on weekday summer afternoons when many air conditioners are running, while the least loading happens in the wee hours on springtime Sunday mornings when almost everyone is asleep. Similar patterns occur in other wealthy countries, except that the colder and more northerly countries have smaller peaks associated with summer afternoon air conditioning, and larger peaks associated with winter evening lighting and heating.

Electric utilities’ strategy for coping with load variations is to increase and decrease the output to match the changing load. The daily demand can be thought of as consisting of two portions: The “base load” is the demand that is always present regardless of the time of day, and the “peak load” is the variable portion of demand. Coping with the peak load means having a power generation technology that can be adjusted rapidly enough to match demand. This is called a “dispatchable” generator. There are a variety of electric generation technologies, and some have greater dispatchability than others. Non-dispatchable resources are clearly suited for base load use only, while dispatchable resources can be used for base or peak loading. Let’s look at the key characteristics of the main generation technologies that support the grid: hydro, thermal, and nuclear.

Hydroelectric plants – This was the first electricity generating technology used on a large scale, starting a century ago, which is why many utilities anachronistically still have “Hydro” in their name. A hydro plant converts the gravitational potential energy of water to electricity. The power available depends on the “head” and the “flow” (how far the water drops, and how much of it drops). Hydro plants are expensive to build but cheap to operate, and produce some of the lowest cost electricity if in a good location. Originally, hydro was even thought to be eco-friendly, before people realized that damming rivers is very damaging both to aquatic life (especially migratory fish) and to land animals that depend on the ability to ford the river. Hydro power is fairly reliable, but reservoirs tend to have seasonal variation in water availability that doesn’t match demand: In temperate climates, the best supply of water is during the spring thaw, which is the season of the least electricity demand, and the worst supply of water is towards the end of a hot dry summer when people crave air conditioning. On the positive side of the ledger, hydroelectric capacity is highly dispatchable, but within limits imposed by the availability of water and the need to respect the momentum of the water flowing in the penstocks. There are abundant examples of what not to do – e.g. overly rapid closure of water valves was a major contributing factor to the accident that destroyed the huge Sayano-Shushenskaya hydro plant’s turbine hall in southern Siberia on 17 August 2009, and took 75 workers’ lives.[1] Fortunately the dam itself held that time, but as everyone knows, the occasional catastrophic dam burst does occur. Although hydroelectric stations were once the dominant electric generation technology, that will never be the case again, simply because of the shortage of suitable sites. Most of the good sites are already taken. Developing all the as-yet undeveloped decent hydro sites in the whole world would not significantly raise hydro’s share of the global energy budget beyond the existing 3%.
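For reference, the power available from head and flow follows P = ρgQH times the turbine efficiency. A minimal sketch with made-up numbers, not any real plant; the 90% turbine efficiency is also an assumption:

```python
# Hydroelectric power from head and flow: P = rho * g * Q * H * efficiency.
RHO = 1000.0   # kg/m^3, density of water
G = 9.81       # m/s^2, acceleration due to gravity

def hydro_power_MW(flow_m3_per_s, head_m, efficiency=0.9):
    """Electrical power in megawatts from a given flow (m^3/s) and head (m)."""
    return RHO * G * flow_m3_per_s * head_m * efficiency / 1e6

# Illustrative example: 500 m^3/s dropping through a 100 m head.
print(f"{hydro_power_MW(500.0, 100.0):.0f} MW")   # ~441 MW
```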

Combustion Thermal plants – Whether fueled by coal, natural gas, or oil, the basic concept is that combustion boils water, and the steam is sent through a turbine which in turn drives a generator. The water is subsequently condensed and reused. These powerplants convert the chemical potential energy in fuel to heat, the heat to turbine motion, and that motion to electricity. In contrast to hydroelectric facilities, these plants are generally cheap to build but expensive to operate, because they consume a lot of fuel. The current fashion is to view these plants as the most ecologically damaging of all, since they convert huge amounts of fossil carbon into CO2, and often spew out other pollutants such as oxides of nitrogen, ash, and so on. Coal is rightly considered the dirtiest of the fossil fuels. Combustion technology is the dominant method of generating electricity in the world today, and will remain so for a long time. Sometimes non-fossil fuels are used for combustion on an opportunistic basis, e.g. wood by-products from forestry operations in some northern Canadian communities, or straw and other plant refuse in India. Thermal plants are reasonably dispatchable, but at a price: Turning them up and down (aka “cycling”) to meet changing demand is punishing on the equipment, reducing its service life and increasing maintenance costs.

There are some variations on the combustion thermal plant scheme. Gas turbines are sometimes used, so that no boiler or steam is involved; this is in effect a specialized stationary jet engine. Among other things, this method is used to provide power (including electric propulsion) on some of the world’s largest cruise ships. The internal combustion engine, typically fueled with diesel fuel, is another variation on the combustion theme that is used for smaller installations, from the familiar portable generators of a few kilowatts to large stationary units of a few megawatts. These are fairly cheap to build and expensive to operate, but are still a sensible choice for off-grid locations. Internal combustion engines are highly dispatchable with few downsides.

Nuclear Thermal plants – Conceptually, nuclear plants are close cousins of combustion thermal plants in that a nuclear reactor is used to produce heat, which in turn is used to boil water to turn a turbine. Nuclear plants involve a much smaller fuel load than combustion plants, and during normal operation don’t release any pollution at all. The biggest headache with this technology is the storage of spent fuel, which remains radioactive for centuries and may be a terrorism target to boot. In addition, the worst-case-scenario accidents are very nasty, particularly in a supremely ill-designed and badly managed facility like Chornobyl in Ukraine, which exploded on 26 April 1986, causing tens of directly attributable deaths and probably thousands of additional cancers, and forcing the evacuation of over 300,000 people. In addition, nuclear power plants are very expensive in countries that build them as “one-offs”, such as the United States. France has a more “cookie-cutter” approach to replicating nuclear power plants, which helps to contain costs, but even there the plants are fairly expensive to build. Reactor designs vary in the degree to which the uranium fuel is enriched to favour the U-235 isotope amid the more abundant U-238, in the moderators and coolants used, et cetera. Perhaps the pros and cons of particular reactor types and nuclear powerplant designs can be the subject of a future post. For now, one significant fact is that no nuclear power plant likes to have its power ramped up and down. In short, nuclear power is not readily dispatchable – it is suitable for base loading only, not for peak loading.

In general, using any generation facility to track peak loads causes stress that costs energy and shortens the lifespan of the equipment, and therefore increases costs. It’s just that some technologies suffer more than others from being subjected to this treatment. Utilities have varying strategies for dealing with peak loading depending on the mix of technologies available to them and the degree of peaking that they experience with their customer base. If nuclear power is in the mix, it is always used for base loading. Thermal and hydro plants can both be dispatched reasonably well. Such strategizing is not a new idea. Here is an example diagram addressing ways to balance hydro and steam (i.e. thermal) resources, from the 1953 edition of a textbook first published in 1932.[2] The strategy advocated here is to use hydro for base loading if the water flow is adequate.

It should be clear that if all the generating capacity on a given grid is owned by one utility, that organization will use the dispatching strategy that is best for it at a given time. If it’s a private sector utility, that means doing what is most profitable in the long term. Usually, one would opt to use some mix of hydro and thermal plants for peak loading, with nuclear reserved for base loading. Through the interconnection of local grids into regional grids and regional grids into national or even transnational grids, it is possible to optimise the matching of supply and demand by trading electricity across grid boundaries. These days, this is a routinely used feature and usually works well. On rare occasions, interconnection enables grids to destabilize each other on a massive scale, such as the blackout that began in the US northeast and Canada’s province of Ontario on August 14, 2003. Supplementing these approaches to dealing with peaks is yet another possibility: load management. That is, the utility may have an arrangement with customers under which it may ask them to shut down, or even cut them off unilaterally, during demand peaks. In the past, this applied only to large customers, and usually the customer received a discount on the power as a way for the utility to compensate the customer for the disruption. Future “smart grids” may bring this kind of disruption to individual homeowners with no compensation.

The story becomes more politically complex in those jurisdictions where the electric power system is not vertically integrated. Suppose there is one regulator, one distribution grid operator, but several generating companies (and of course many customers). Who among the multiple generating companies gets the benefit of contributing base load capacity, and who carries the heavy financial burden of handling peak loads? Who gets the blame if the system crashes and a blackout occurs? These matters have to be worked out before a grid is “opened” to competing generating companies.

Economic theory holds that true market conditions exist when there is a large enough number of sellers and buyers offering comparable product, such that the entry or exit of any one of them from the market doesn’t fundamentally change the situation. For the reasons described above, true market conditions cannot apply to the electricity grid. It is always a distribution monopoly with at most some “managed” competition in generation. Diehard capitalists will be disappointed by the reality that governments always have a hand in electricity grids. Sometimes those governments do a good job, and at other times they botch the regulation of the grid rather terribly.

What happens when “alternative” electricity generation technologies are added to the mix, such as wind and solar? These systems are not dispatchable and therefore are not suitable for peak loading, nor are they even suitable for base loading. Instead, they operate on an opportunistic basis, producing when the wind blows or the sun shines, and their contribution of power to the grid has to be “covered” by a dispatchable technology which has to be paid for too. How is this hidden cost of the alternative energy sources to be accounted for? All these issues are food for future discussion.

[1] F.A. Hamill, “Sayano-Shushenskaya Accident – Presenting a Possible Direct Cause”, International Water Power & Dam Construction, November 2010, pgs. 30-37
[2] Frederick T. Morse, Power Plant Engineering, 3rd ed. D. Van Nostrand Company (NY: 1953), pg. 52


Introduction to Energy Storage

Let’s consider the question of energy storage, which has practical applications in a variety of contexts. There are so many reasons to store energy and ways of doing so that some sort of classification system is worthwhile to sort through the mess. One important way to sort energy storage solutions is by mobile vs. stationary application, with the former including most (but not all) transportation equipment, and the latter including everything else. Another is by storage technique, i.e. according to the physics and chemistry of the medium. Many energy storage applications need to be concerned with energy density and specific energy. Energy density is defined as the energy per unit volume, while specific energy is the energy per unit mass.

A look at statistics on attainable energy density [1] and specific energy [2] shows that of the three main categories of energy storage – nuclear, combustion chemical, and electricity storage – nuclear energy offers by far the highest energy densities and specific energies, at > 40,000 MJ/kg and > 600,000 MJ/litre. Combustible fuels manage between 3 and 70 MJ/kg, with energy densities on the same order of magnitude. Experimental electric batteries manage as much as 2.5 MJ/kg, but realistic mass-producible batteries deliver below 1 MJ/kg, with lead-acid technology at 0.14 MJ/kg, for example. Ultracapacitors and capacitors do even worse, at < 0.02 MJ/kg. There are some energy storage technologies that don’t fall into these main categories – flywheels and pumped water storage, for example. Flywheels may be capable of 0.5 MJ/kg. In the case of pumped water storage, energy density is a question of how high one is willing to lift the water: the gravitational potential energy is about 9.8 J/kg per metre of lift (note that is in joules, not megajoules).
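To put these numbers side by side, here is a small table built from the figures just quoted; the 46 MJ/kg hydrocarbon baseline is an assumed mid-range value within the 3-70 MJ/kg band.

```python
# Specific energies quoted above, with an assumed mid-range hydrocarbon fuel as baseline.
specific_energy_MJ_per_kg = {
    "nuclear fuel (lower bound)": 40000.0,
    "hydrocarbon fuel":              46.0,
    "experimental battery":           2.5,
    "flywheel":                       0.5,
    "lead-acid battery":              0.14,
    "ultracapacitor":                 0.02,
    "water lifted 100 m":             9.8 * 100 / 1e6,   # from the 9.8 J/kg/m figure
}
baseline = specific_energy_MJ_per_kg["hydrocarbon fuel"]
for medium, se in specific_energy_MJ_per_kg.items():
    print(f"{medium:>26}: {se:>12.5f} MJ/kg  ({se / baseline:.3g}x fuel)")
```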

We will consider the energy storage needs of motor vehicles, and one stationary application, namely peak leveling for electricity grids.

[1] http://en.wikipedia.org/wiki/Energy_density, http://en.wikipedia.org/wiki/File:Energy_density.svg
[2] http://en.wikipedia.org/wiki/Specific_energy


Energy Storage in Vehicles

Almost all vehicles store the energy they need on board. The notable exceptions are electric trains, streetcars, and trolley buses, which draw power from electrical conductors along their routes. The mass of the powerplant itself is also significant in mobile applications, and the safety of the technology is even more important for transportation equipment than for stationary applications. Aircraft, which have to lift everything off the ground, are the most demanding type of vehicle.

Motor vehicle design involves many tradeoffs, but power and range are among the most important to the consumer. These parameters are limited by the energy supply stored. Common hydrocarbon transportation fuels all have comparable energy storage performance: gasoline 46.4 MJ/kg or 34.2 MJ/litre, diesel 46.2 MJ/kg or 37.3 MJ/litre, jet A fuel (kerosene) 42.8 MJ/kg or 33 MJ/litre.[1][2] Typical passenger cars have an engine with a maximum power output on the order of 100 kW (746 W = 1 horsepower), although the average power output during normal use is much lower than that. They also have a fuel tank holding around 50-75 litres of gasoline, and achieve a typical cruising range of perhaps 500 km. The typical motorist’s commute to work is on the order of tens of km, with North American suburbanites going much longer distances than their urban cousins anywhere in the world. In that sense, a 500 km range is unnecessary most of the time, but motorists sure appreciate a large cruising range, particularly on long road trips.
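A quick illustration of how much energy such a car carries; the 60 litre tank is an assumed mid-range size, and remember that most of this energy ends up as engine heat rather than motion.

```python
# On-board energy of a typical gasoline car, using the figures above.
tank_litres = 60.0                  # assumed, within the 50-75 litre range
gasoline_MJ_per_litre = 34.2

tank_energy_MJ = tank_litres * gasoline_MJ_per_litre
print(f"Tank energy: {tank_energy_MJ:.0f} MJ")                     # ~2050 MJ

cruising_range_km = 500.0
print(f"Fuel energy drawn per km: {tank_energy_MJ / cruising_range_km:.1f} MJ")  # ~4 MJ/km
```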

Now suppose that manufacturers want to build a vehicle that doesn’t rely on hydrocarbon energy storage. What options do they have? The most talked-about alternatives are electricity and hydrogen.

In the case of electric vehicles (EVs), the greatest attraction is that electric motors are ideally suited to propelling vehicles. Vehicles have to start and stop often. Internal combustion engines have poor torque at low speed, which is why most vehicles need transmissions. Electric motors can be built to deliver high torque even at zero speed and can be made reversible, allowing some EV designs to be “direct drive” with no transmission at all, greatly simplifying the mechanical components of the drivetrain. Also, electric motors can be designed to run as generators and recapture energy when braking. This is called “regenerative braking”, which both adds a bit of charge to the energy storage system and helps reduce wear on the mechanical brakes. Electric motors are also very efficient. Some can achieve over 90% efficiency, twice what is possible with internal combustion engines.

Unfortunately, the stored energy density of normal battery systems is about two orders of magnitude worse than hydrocarbon fuels. There is, however, large variation in battery performance depending on chemistry. Lithium-ion batteries are among the best, managing up to 0.72 MJ/kg, only 64 times worse than gasoline, but much better than lead-acid batteries, which at 0.14 MJ/kg are 330 times worse than gasoline. Even that is deceptive, as batteries cannot be fully discharged without damaging them, so at best only half the capacity is actually available in practical applications. Lithium is thought to be in adequate supply for the auto industry’s potential needs, with an estimated global reserve of 29 million tonnes.[3] Still, that 64 times worse energy density is going to hurt. Auto makers and buyers must accept much shorter cruising ranges, and often some compromise in performance. Personally, I wonder whether they have thought through the problem of heating cars in cold climates. That consumes a lot of power, power that is readily available as surplus heat in combustion-engined cars. But that power is at a premium in an EV, and the situation is aggravated by the fact that many battery types perform poorly in the cold.

What is the likelihood of scientific breakthroughs resulting in dramatically better batteries? Consider that all elements and their physical properties (including density) are known. The elements’ electrochemical properties, such as electron affinities and electronegativities (tendency to attract electrons), are known. The effect of the elements’ incorporation into larger molecules is also well understood. As a result, it is possible to calculate an upper limit on the specific energy and energy density of batteries that carry their own reactants: a definite upper bound happens when all the materials in the battery are selected to maximize specific energy or energy density, and there is 0% supporting materials content. A recent battery proposal, as yet unproven, may be pushing the limit of what’s possible, claiming to achieve 97% functional materials with a lithium chemistry and a solid electrolyte.[4] Another approach is to make oxygen one of the reactants in the cells, so that the battery doesn’t have to carry all its chemicals, and have it “breathe” instead. This approach, using a lithium anode – oxygen cathode chemistry, is thought to be capable of a specific capacity of 600 mAh/g, not counting the oxygen[5], which by my calculations translates into a theoretical 6.28 MJ/kg, or only 7-8 times worse than gasoline. However, Li-O batteries do not yet lend themselves to recharging.
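Here is the conversion behind my figure of roughly 6.28 MJ/kg. Note that mAh/g is a charge capacity, so a cell voltage is needed to turn it into energy; the ~2.9 V nominal voltage below is my assumption for a lithium-oxygen cell.

```python
# Converting the quoted 600 mAh/g capacity into specific energy.
specific_capacity_mAh_per_g = 600.0
cell_voltage = 2.9                   # volts; assumed nominal value for a Li-O cell

# 1 mAh = 3.6 coulombs; energy = charge * voltage, and 1 J/g = 1 kJ/kg.
joules_per_gram = specific_capacity_mAh_per_g * 3.6 * cell_voltage
print(f"{joules_per_gram / 1000:.2f} MJ/kg")                      # ~6.26 MJ/kg
print(f"Gasoline is {46.4 / (joules_per_gram / 1000):.0f}x higher")   # ~7x
```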

It would be unreasonable to expect that electrochemical batteries with dramatically better specific energies or energy densities than these will ever be invented. However, small improvements in energy storage density may still be possible. Improving other parameters – battery reliability, number of charge cycles, deep cycling, rapid charging/discharging, cost, efficient lithium recovery from spent batteries and reprocessing into new batteries – will probably be a more productive research direction.

Since most motorists cannot accept a cruising range of 100 km as can be expected from a pure EV, the short-term solution is hybrids. They carry the hydrocarbon fuel and a small internal combustion engine to extend the driving range. The details of how this is done may be revisited in a future post on EV design.

Are there alternatives to electrochemical batteries for electric vehicles?

Some people have touted the possibility of using ultracapacitors instead of batteries. Capacitors and ultracapacitors store energy as separated electric charge instead of binding electrons chemically. Capacitors can be fully charged and discharged very rapidly, with no deterioration in their condition, and with nearly 100% efficiency. However, the energy density achieved so far is poor: as noted above, it’s less than 0.02 MJ/kg, or more than two thousand times worse than gasoline. One company, EEStor, has promised ultracapacitors with a phenomenal energy density of up to 10,000 J/cm3 (10 MJ/litre) [6], but the feasibility of such ultracapacitors is being greeted with much skepticism in the scientific community. Ultracapacitors nonetheless have their uses in EVs and hybrid cars, namely for short-term energy storage in the regenerative braking system.

Other people consider hydrogen to be a promising energy storage medium for cars. Molecular hydrogen, or H2, is a gas at room temperature. The most economical way to produce hydrogen is by reforming hydrocarbons such as natural gas (methane). Hydrogen can be combined with oxygen by burning it in a combustion engine or by reacting it with oxygen in a proton exchange membrane (PEM) fuel cell. Fuel cells have achieved an efficiency of about 50%, and the theoretical limit is 83% at a working temperature of 298 kelvin.[7] Decades of research and hundreds of millions of dollars have gone into PEM R&D, but the technology has yet to be deployed on a large scale. PEMs appear to be too unreliable to use in mass-produced cars. PEM cost is also unclear: devices actually available for sale cost about $3,000/kW of peak output versus about $50/kW for internal combustion engines[8], though other sources claim that the cost of fuel cells can be as low as $61/kW. PEMs are very fragile, with a life expectancy in a motor vehicle that is a small fraction of the 20 years an engine can last. Therefore, if hydrogen is to be used at all, it makes more sense to burn it in an internal combustion engine.[9]

Hydrogen has an extremely high specific energy of 143 MJ/kg, but its energy density is low and depends on the form in which the hydrogen is held: liquid hydrogen manages 10.1 MJ/litre, gas at 700 bar pressure 5.6 MJ/litre, and gas at 1 atmosphere only 0.01079 MJ/litre.[1] (A “bar” is a unit of pressure approximately equal to one atmosphere at sea level.) I’m not sure how the quoted figures for liquid and compressed gas take into account the latent heat of evaporation and the work done during the expansion of the gas; these factors do affect the energy actually available. Hydrogen is extremely troublesome as a fuel precisely because of its ugly storage properties. It should be obvious that in uncompressed form it is useless, because its density is too low. That means one is tempted to compress it as a gas in a steel gas cylinder. But hydrogen is known to cause metals to deteriorate by a process known as “hydrogen embrittlement”: H2 occasionally dissociates into 2 H atoms, and these are highly reactive. They tie off loose bonds in the surface to which they cling, and over time cause the structure (e.g. the gas cylinder) to become like Swiss cheese on an atomic level, and much weaker structurally. Eventually, the weakened pressurized gas cylinder may explode, probably while being refilled or while being bumped in an accident. Neither automakers nor their customers would like to see that happen. Another way of storing hydrogen is chemical storage in tanks, using metal hydrides, carbon nanotubes, or other stabilizers. This, unfortunately, makes for very heavy tanks, because stabilizers hold only 2% of their own weight in hydrogen, or even less.[10] That may be acceptable in stationary applications, but not in motor vehicles. Finally, there is the liquid option. Unfortunately, liquid hydrogen is a cryogenic liquid (meaning it exists only at extremely low temperatures). In the case of H2, the temperatures at which it is liquid are below −252.87°C (−423.17°F, or 20.28 kelvin), just slightly above absolute zero. Hydrogen is very expensive (in energy and dollars) to compress to the liquid state, and it boils off quickly even when kept in special cryogenic tanks: 10% of tank capacity lost per day is a reasonable guess, and that’s after a 10-20% loss during filling. Distributing hydrogen the way we distribute gasoline or diesel would be a nightmare, and filling tanks would pose great risks of cryogenic burns as well as explosions. While not a truly promising option as a transportation fuel (in my opinion, at any rate), hydrogen is sufficiently interesting and often talked about that it warrants a more detailed examination in a future post.
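To see why the low energy density hurts, consider the tank volume needed to match the energy in a 50-litre gasoline tank, using only the densities quoted above:

```python
# Tank volume needed to match the energy of a 50-litre gasoline tank.
gasoline_energy_MJ = 50.0 * 34.2          # ~1710 MJ, using 34.2 MJ/litre from above

hydrogen_MJ_per_litre = {
    "liquid H2 (cryogenic)": 10.1,
    "H2 gas at 700 bar":      5.6,
    "H2 gas at 1 atm":        0.01079,
}
for form, density in hydrogen_MJ_per_litre.items():
    print(f"{form:>22}: {gasoline_energy_MJ / density:>10,.0f} litres")
# liquid: ~169 L; 700 bar: ~305 L; uncompressed: ~158,000 L
```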

In the case of aircraft fuels, neither hydrogen nor electricity merits serious consideration. Both have an energy storage density that is far too low to be of any use, and in addition hydrogen is too awkward and dangerous to handle for this application.

Given that neither electricity nor hydrogen appear to be the “silver bullet” that will meet the needs of the transportation sector as an across-the-board replacement for fossil hydrocarbons, it is likely that in the distant future we will see vehicle specialization. Small EVs may be used as commuter cars alongside greater use of public transit. More electric commuter rail lines might be built, and even long-range electric rail, an activity in which Europe is already well ahead of North America. Meanwhile long-range road vehicles such as transport trucks and cars meant for highway cruising will continue using hydrocarbon fuels, but not necessarily fossil hydrocarbons. We will need to take a closer look at the “biofuel” and “re-formed” hydrocarbon alternatives. These long-range vehicles may be hybrids.

[1] http://en.wikipedia.org/wiki/Energy_density
[2] http://en.wikipedia.org/wiki/Specific_energy
[3] http://lithiumabundance.blogspot.com/
[4] “The power of the press”, The Economist, January 29, 2011, pgs.77-78
[5] “Batteries that breathe”, IEEE Spectrum, February 2011, pg. 13
[6] http://en.wikipedia.org/wiki/EEStor
[7] http://en.wikipedia.org/wiki/Fuel_cell
[8] http://www.dotyenergy.com/Markets/Hydrogen.htm
[9] http://www1.eere.energy.gov/vehiclesandfuels/avta/light_duty/hicev/index.html
[10] “Feathers Hold Hydrogen Promise”, New Scientist, 27 June 2009, pg.19


Energy Storage in Stationary Applications

Considering that 90% of energy usage takes the form of combusting carbon-based fuels, it should come as no surprise that most energy storage is fuel storage. This makes perfect sense whether the fuel is destined to be used directly for some particular application, or to generate electricity. Coal and hydrocarbon distribution and storage are mature technologies. We know how to dig the stuff out of the ground, how to build hopper cars that deliver coal to power plants, and how to build petroleum tankers, refineries, pipelines, and storage tanks. Natural gas pipelines are also well understood, although large-scale distribution of liquefied natural gas (LNG) is still fairly new. These activities involving fossil hydrocarbons account for the vast majority of energy storage today. However, with fossil hydrocarbons projected to decline eventually – and with many climatologists who track global warming hoping their importance will decline faster than resource depletion alone would dictate – it makes sense to look at storing energy in other forms.

If the world wants to wean itself off fossil fuels, it needs to look at ways of storing electricity. Here we survey some of the more promising methods for large-scale storage and recovery of electricity. On-grid storage will become increasingly necessary for a variety of reasons. If the grid of the future uses nuclear as the predominant form of generation, storage will be necessary because nuclear reactors are not readily dispatchable. If what we now call “alternative” technologies such as wind and solar emerge as the predominant technologies, then matching supply and demand will be even more challenging and will require even more storage.

Pumped Water

Storing water to generate electricity can take one of two basic forms. The simple case doesn’t involve pumping at all. Consider a reservoir behind a hydro dam: If the utility allows water levels to rise in off-peak periods and drop during on-peak periods, then it has achieved water storage without pumping. This allows energy to be captured from the water as efficiently as operating the plant at a constant output. This is possible wherever hydro dams are found, but isn’t suitable for waterfall or “run-of-the-river” hydro plants that do not have a controllable reservoir. “Pumped water storage” usually refers to scenarios where, absent a hydro dam, water is pumped uphill to a reservoir during off-peak periods, and allowed to run down again during the peak, driving a hydro turbine at the bottom in the usual manner. The gravitational potential energy E stored in the water is E = mgh, where m is the mass of the water, g the acceleration due to gravity, and h the height through which the water falls to the turbine. The power needed to pump the water uphill need not have been produced hydroelectrically. Pumped storage could be used in conjunction with nuclear plants, for example. To be truly useful, the reservoir needs to have enough capacity to handle an entire daily peak, typically six hours of the peak portion of demand.
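As a rough illustration of E = mgh, here is a small Python sketch that sizes a reservoir to cover a six-hour peak. The 1000 MW load and 300 m head are hypothetical round numbers of my own choosing, and pumping and turbine losses are ignored:

# Sizing a pumped storage reservoir with E = m*g*h.
# All input figures are illustrative round numbers; losses are ignored.
g = 9.81                # m/s^2, acceleration due to gravity
h = 300.0               # metres of head (hypothetical site)
peak_power_w = 1000e6   # 1000 MW to be delivered
peak_hours = 6.0        # duration of the daily peak

energy_j = peak_power_w * peak_hours * 3600   # energy to deliver, joules
mass_kg = energy_j / (g * h)                  # m = E / (g*h)
volume_m3 = mass_kg / 1000.0                  # water is ~1000 kg/m^3
print(f"Water required: {volume_m3/1e6:.1f} million cubic metres")
# ~7.3 million m^3; at 10 m deep that is roughly a 0.73 km^2 reservoir.

Even for a modest 1000 MW peak, the reservoir is close to a square kilometre, which is why suitable sites are hard to find.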

Pumped water technology allows recovery of about 75% of the energy consumed in pumping the water uphill, and is currently the most cost-effective form of mass power storage. There is over 90 GW of pumped storage in operation around the world, about 3% of instantaneous global generation capacity.[1] There are two main difficulties with pumped water storage: First, the 25% energy loss is a fairly high price to pay for storage. Second, suitable sites are not easy to find, because creating a large reservoir requires either an elevated natural lake, an elevated dammable ravine, or a plateau on which a reservoir can be dug.

Compressed Air

The concept here is to compress air into a large reservoir during off-peak periods, and then allow the air to expand and drive a pneumatic turbine during peaks. Compressing a gas takes work and raises its temperature, while an expanding gas does work on its surroundings and cools. This follows from equations of state for gases, such as the ideal gas equation mentioned in an earlier post, PV=nRT. Here P is the pressure, V the volume, and T the temperature of the gas; n is the amount of gas (the number of moles), and R is the ideal gas constant. When compressing a fixed amount of gas, the work that is done increases the energy of the gas while decreasing its volume. Since n and R don’t change, if V decreases then P must increase to make up for the reduced volume, and T must increase to account for the added energy – unless, of course, the heat is allowed to escape. However, if the heat is allowed to escape after compression, then that amount of energy will be missing during expansion. A compression and storage process that retains all the heat is called adiabatic, while one that allows heat to escape during compression so that the temperature never rises above ambient is called isothermal; processes that lose some but not all of the heat are called diabatic. The best-case scenario for compressed air energy storage is an adiabatic facility. That ideal is impossible to achieve in real life, although it may be possible to come close enough to call a practical plant “adiabatic” without too much shame.
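The two limiting cases can be quantified for an ideal gas: isothermal compression takes work W = nRT ln(V1/V2), all of which leaves as heat, while adiabatic compression raises the temperature to T2 = T1 (V1/V2)^(γ−1). A small sketch with illustrative numbers (1000 moles of air, a 10:1 compression ratio):

import math

# Compressing air: isothermal vs adiabatic limits (ideal gas sketch).
R = 8.314           # J/(mol*K), ideal gas constant
gamma = 1.4         # heat capacity ratio for air
n = 1000.0          # moles of air (illustrative)
T1 = 293.0          # starting temperature, K
ratio = 10.0        # compression ratio V1/V2 (illustrative)

# Isothermal: the compression work is rejected as heat to the surroundings.
w_iso = n * R * T1 * math.log(ratio)
print(f"Isothermal compression work: {w_iso/1e6:.2f} MJ")

# Adiabatic: no heat is lost, but the stored gas is hot.
T2 = T1 * ratio ** (gamma - 1)
print(f"Adiabatic end temperature: {T2:.0f} K (started at {T1:.0f} K)")

The adiabatic end temperature of roughly 740 K shows why retaining the heat of compression is an engineering challenge: the storage vessel has to hold gas far above ambient temperature without losing that heat.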

Being more complicated and less efficient than pumped water storage, compressed air storage is not widely used by electric utilities. The earliest large compressed air plant was at Huntorf in Germany, with a capacity of 290 MW. Another facility is in the USA at McIntosh, Alabama, with a capacity of 110 MW. Both are diabatic plants. It is estimated that an adiabatic plant could achieve 70% efficiency versus a little over 50% for diabatic plants.[2]

Stationary batteries

When batteries are used for stationary electric power storage, the constraints are somewhat different from those in mobile applications. For mobile systems, energy density and specific energy are of paramount concern, along with low maintenance and the ability to operate in cold weather. Batteries used for on-grid power storage might benefit from these characteristics too, but more important still are cost and efficiency. For this reason, the battery chemistries being explored for stationary batteries are very different.

The main attraction of the lead-acid battery for stationary storage (and for many other applications) is that it is an inexpensive technology with strong peak output, at least for short periods. Wikipedia has a good summary of these batteries’ performance.[3] The specific energy and energy density are unimpressive at 0.11 to 0.14 MJ/kg and 0.22 to 0.27 MJ/litre respectively. The charge-discharge cycle efficiency varies between 50% and 92% depending on factors such as the quality of the battery, its age, how often and how deeply it has been discharged, and how rapidly it is being charged or discharged. The batteries can last up to 800 cycles if they are never deeply discharged. The self-discharge rate is 3 to 20% per month, so this technology isn’t very good for long-term storage.

Another candidate is the sodium-sulphur (NaS) battery. There is a test project currently underway by Xcel Energy with such a battery system.[4] Sodium metal is corrosive and explosive on contact with water, and the batteries need to be kept at temperatures well above room temperature, which would limit this battery type’s appeal for automotive use. However, it does have a relatively high specific energy of 0.325 MJ/kg, and the materials aren’t rare. In the Xcel test setup, an 80 tonne battery has a capacity of 7.2 MWh (26 GJ) and offers 1 MW of throughput, enough to supply several hundred homes. The efficiency is not disclosed. The cost of the project is high at $1 million, but remember this is so far a one-off prototype with staff looking after it. It is not clear how deeply the NaS battery can be discharged, or how often it can be recharged.

Flow batteries are an entirely new approach to electric battery design, with some similarities to fuel cells in the way they operate. In a conventional battery, the electrolyte chemicals are stationary within the cell. In a flow battery, the electrolyte is not stationary. Instead, there are four tanks. Electrolytes from two tanks holding fresh ‘charged’ electrolytes are sent to the cell, where they react chemically to produce electricity, while the spent electrolytes proceed to the two tanks for ‘spent’ electrolyte. During charging, the flow is reversed and electricity is applied to regenerate fresh electrolyte. The total capacity of the battery thus depends only on the tank sizes (which can be scaled up to very large sizes), while the throughput depends more than anything on the size of the cell’s reactive surfaces. The most common flow batteries so far are based on vanadium chemistry; unfortunately this element is not very abundant. The specific energy and energy density are poor, in the range of 0.036 to 0.072 MJ/kg and 0.054 to 0.065 MJ/litre. Charge-discharge efficiency is in the range of 75 to 80%. These cells are projected to be durable, lasting up to 20 years, and should be able to withstand upwards of 10,000 cycles.[5]
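Pulling the figures from the last three paragraphs together, a short Python sketch shows what each chemistry implies in battery mass per gigajoule stored. Where a range was quoted I have used its midpoint, so treat the output as indicative only:

# Comparing the stationary battery figures quoted above
# (midpoints of the quoted ranges where a range was given).
batteries = {
    # name: (specific energy in MJ/kg, cycle efficiency or None)
    "lead-acid":      (0.125, 0.71),
    "sodium-sulphur": (0.325, None),   # efficiency not disclosed
    "vanadium flow":  (0.054, 0.775),
}
for name, (mj_per_kg, eff) in batteries.items():
    tonnes_per_gj = 1.0 / mj_per_kg  # 1000 MJ per GJ, 1000 kg per tonne
    eff_text = "n/a" if eff is None else f"{eff:.0%}"
    print(f"{name}: {tonnes_per_gj:5.1f} tonnes per GJ stored, "
          f"round-trip efficiency {eff_text}")

At roughly 8, 3, and 19 tonnes per gigajoule respectively, none of these is light, which is exactly why mass matters little and cost matters greatly in stationary service.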

Hydrogen from Electrolysis of Water

Today by far the most economical way to produce hydrogen is as a by-product of processing hydrocarbons, and this accounts for over 95% of hydrogen production; but if the gas is to serve as an energy storage medium in a future world where hydrocarbons are scarce, then it must be obtainable from another source. The obvious answer to that problem is to obtain hydrogen through the electrolysis of water. Pure H2O is not a good conductor of electricity, so ions need to be present in the water to carry the current. In one common ‘hydrogen by electrolysis’ process, the hydrogen is a by-product of chlorine production from sea salt. This uses the reaction 2 NaCl + 2 H2O → Cl2 + H2 + 2 NaOH. A major manufacturer of electrolyzers, StatoilHydro, uses potassium hydroxide (KOH) in the electrolyte instead of sea salt.[6] Electrolyzer designs vary in detail. StatoilHydro produces two types: “pressurized module electrolyzers”, and a newer, smaller-scale design that uses proton exchange membranes (PEMs) and so works much like a fuel cell in reverse. Another manufacturer of electrolyzers is Teledyne.[7]

In the most simplified conceptual representation, the electrolysis reaction takes the form 2 H2O → 2 H2 + O2. In other words, two water molecules can be dissociated into two molecules of hydrogen gas and one molecule of oxygen. This reaction is highly endothermic, i.e. it requires a large energy input. Electrolysis (using electricity to dissociate water molecules) is much more efficient than thermal decomposition, which requires very high temperatures. The resulting oxygen can be released while the hydrogen is stored. When hydrogen is burned, the reaction is reversed, a lot of energy is released, and the main waste product is water. Overall, hydrogen is a very clean-burning fuel. In practice there can be some oxides of nitrogen in the exhaust because, like other fuels, hydrogen is usually burned in air, which is mostly composed of N2.

As discussed in the post on energy storage for mobile applications, there are serious obstacles in the way of using hydrogen as a transportation fuel even though its performance as a fuel is good per se. Notably, there are difficulties with storing the material, but stationary applications relax many of the storage constraints. All three modalities of hydrogen storage are more plausible in a stationary setting than a mobile one. As a highly compressed gas, it can sit in a tank in an unmanned underground bunker. The tank can be instrumented to monitor the risk of failure from hydrogen embrittlement, and if it ever does explode, in most instances no one will be hurt. Storage in tanks with stabilizers (e.g. metals that form hydrides) may indeed require a stabilizer mass 50 times the mass of the stored H2, but in a stationary application the mass of the stabilizer doesn’t hurt too much, although the cost may still bite. Finally, one might contemplate having large cryogenic tanks for the H2, in which case one would try to put to good use the H2 that will inevitably boil off.

As an electricity storage medium, hydrogen involves using electricity to electrolyze water, compressing and storing the hydrogen, and finally either burning it or reacting it in a fuel cell to produce electricity again. The electrolysis step alone has an efficiency in the range of 50 to 80%.[8] Burning or reacting the hydrogen to produce electricity should have an efficiency comparable to other combustion processes, i.e. no better than about 50%. This means that the overall efficiency of hydrogen energy storage is in the range of 25 to 40% at best, and that is with compression and storage losses assumed to be zero.
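The round-trip arithmetic is simply a product of stage efficiencies; a minimal sketch using the figures above, with compression and storage losses optimistically set to zero:

# Round-trip efficiency of hydrogen storage: a chain of stage efficiencies.
def round_trip(electrolysis, compression_storage, regeneration):
    return electrolysis * compression_storage * regeneration

# Best and worst cases from the figures quoted above; compression and
# storage losses are optimistically taken as zero (factor of 1.0).
print(f"best:  {round_trip(0.80, 1.0, 0.50):.0%}")   # -> 40%
print(f"worst: {round_trip(0.50, 1.0, 0.50):.0%}")   # -> 25%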

Notes and Conclusions

The vast majority of the world’s dispatchable generation capacity today is in the form of fossil-fuel-burning thermal plants, and there isn’t enough untapped hydroelectric capacity to replace the thermal plants should they go out of service. If the dispatchable generation capacity provided today by fossil fuels is eventually replaced by bio hydrocarbons or re-formed hydrocarbons, then we will continue to store energy in large fuel tanks as we do now. However, if these new fuels prove inadequate, or if electricity produced by wind and photovoltaics claims a large share of future grids, then there is little doubt that on-grid energy storage will be necessary. Making nuclear power plants more dispatchable is yet another approach to avoiding the need for storage, and one that France is pursuing.

There are a number of techniques that may work for storing electricity on-grid, but we have to consider their ecological sustainability, cost, and efficiency. Pumped water storage is probably the best idea on balance, but is only feasible in a select minority of locations. Compressed air storage has yet to prove itself. It might work out, but is less efficient and is also limited by the availability of good sites. Electric batteries are widely used today, but not on a utility scale. They are a very expensive way to go, and if utility-scale application of batteries ever does happen, they may be held back by the limited supply of certain materials such as vanadium. Recycling the key elements will be an important part of the industry. Hydrogen has the attraction that its supply in seawater is essentially unlimited, and the hydrogen fuel can be used for purposes other than generating electricity. But the electrolysis – to – electricity process suffers from poor efficiency.

The consequences of efficiency shortcomings should not be underestimated. For an electric utility, a shortfall in generating capacity of even 5% is a major headache. The utility must buy from another utility over an interconnect, or cut off some customers; in the long run it means increasing capacity, and that is costly. Now consider what storage efficiency means to the utility. In order to store a given amount of energy with an efficiency of, say, 75%, the utility has to generate 33% more, and it has to have the capacity to do that. Adding a lot of capacity is not trivial – it is a hugely expensive undertaking. Hydrogen, with an end-to-end efficiency of only 25 to 40%, leads to the need for primary generation of some two-and-a-half to four times as much energy as is later released from storage. These enormous costs (which are on top of the cost of the storage system itself) are the reason why storage is used as a last resort. It is still far better to have adequate dispatchable generating capacity than to have on-grid storage, and that preference will probably persist forever even if cheap storage is eventually achieved, simply because storage can never be 100% efficient.
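The capacity penalty follows directly from the efficiency: to deliver one unit of energy out of storage, the utility must generate 1/efficiency units into it. A quick check using the figures above:

# Extra generation needed to push one unit of energy through storage.
for eff in (0.75, 0.40, 0.25):
    overhead = 1.0 / eff - 1.0
    print(f"storage efficiency {eff:.0%}: generate {1/eff:.2f} units "
          f"per unit delivered ({overhead:.0%} extra)")
# 75% -> 1.33 units (33% extra); 25% -> 4 units (300% extra).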

[1] http://en.wikipedia.org/wiki/Grid_energy_storage
[2] http://en.wikipedia.org/wiki/Compressed_air_energy_storage
[3] http://en.wikipedia.org/wiki/Lead-acid_battery
[4] http://www.xcelenergy.com/Minnesota/Company/Environment/Renewable%20Development%20Fund/RDFProjects/Pages/SodiumSulfurBatteryEnergyStorage.aspx
[5] http://en.wikipedia.org/wiki/Vanadium_redox_battery
[6] http://www3.statoil.com/hydrogentechnologies/svg03816.nsf?OpenDatabase
[7] http://www1.eere.energy.gov/hydrogenandfuelcells/pdfs/euiw_3_doe_utility.pdf
[8] http://en.wikipedia.org/wiki/Electrolysis_of_water


The Bio behind Biofuels

The alternative to fossil fuels might be non-fossil biofuels, that is, molecules assembled recently by living organisms. The argument goes that if the world used biofuels to the exclusion of fossil fuels, leaving the remaining fossil fuels sequestered underground as they had been for hundreds of millions of years, this would limit the amount of carbon in circulation, cap the atmospheric supply of CO2, and limit the effects of global warming. Biofuels therefore deserve to be explored.

It’s not our objective to delve too deeply into organic chemistry – the chemistry of carbon compounds – but some definitions are important. One word we’ve already used often is hydrocarbon. Hydrocarbons are molecules that consist of carbon and hydrogen; the carbon atoms join together in various patterns while the hydrogens dangle from the carbons. There are many related categories of molecules which contain additional species of atoms in various configurations. For example, alcohols resemble hydrocarbons but have a hydroxyl (-OH) group bound to a carbon atom. Meanwhile, carbohydrates consist of carbon, hydrogen, and oxygen atoms with a hydrogen-to-oxygen ratio of 2 to 1, as in water. Sugars are an important subset of the carbohydrates. There are many other interesting sorts of molecules; organic chemistry and biochemistry are very rich subjects.

From the perspective of provisioning energy for human use, what matters most with biofuels is the energy it costs to create these organic molecules, and the ability to use them by tearing them apart to liberate some of that energy again. This energy liberation is done by combustion or burning, a chemical reaction in which a fuel reacts with an oxidant. Most often the oxidant is oxygen gas (O2), although combustion is sometimes taken to include reactions with other elements such as halogens. The result of combustion is heat energy and smaller waste molecules, such as CO2 and H2O. Although humans mastered fire tens of thousands of years ago, we are not yet very good at turning CO2 back into longer organic molecules, but we do try, and the results are the re-formed fuels, also known as synfuels. Fortunately, other organisms had the secret of CO2 recycling figured out long ago, and if it weren’t so we would not exist.

Nourishment and Growth of Living Organisms

Life forms are not magically exempt from the laws of physics. The phenomenally complicated processes that take place in living organisms (and which modern biology has yet to fully grasp) all obey the laws of physics and chemistry. All earthly life forms are based on carbon; they require oxygen, hydrogen, and a handful of other elements for key processes, and they need a source of energy. It is the organisms’ nourishment mechanism that distinguishes them – what they use as their sources of carbon, energy, and electric charge. Open courseware material from the Massachusetts Institute of Technology presents a beautiful summary of this subject.[1] Although this section may seem wildly out of place in a discussion of biofuels, it presents a fundamental overview of what processes are possible for the creation of useful fuels by living organisms; just as burning fuels in engines has to follow the laws of thermodynamics, so too does the creation of the fuels.

As a first cut, organisms are usually sorted according to their cell structure, namely whether they have a nucleus (eukaryotes) or lack one (prokaryotes). For the life thermodynamics of interest here, there is an additional distinction between self-nourishers (autotrophs) and those nourished by others (heterotrophs). Each comes in three basic flavours.

Autotrophs are those organisms that get their carbon for growth and production of new cells from CO2. They need an electron donor to balance electric charges. The three autotrophic nourishment mechanisms are:

Oxygenic photosynthesis by plants and certain bacteria and algae. Their energy comes from sunlight and the electron donor is water. They can produce carbohydrates and oxygen. These organisms thrive in aerobic light conditions.
Anoxygenic photosynthesis by sulfur bacteria uses energy from sunlight, but the electron donor is not water; instead these organisms use hydrogen, hydrogen sulfide, or metal ions. These organisms live in anaerobic light conditions.
Chemosynthesis by chemoautotrophic bacteria. Although they share with other autotrophs the characteristic of using CO2 as their carbon source, their energy and electron sources are mostly reduced inorganic compounds (CH4, H2, NH4+, H2S, Fe2+). These organisms prefer anaerobic dark conditions.

Heterotrophs get their energy and carbon by burning reduced organic compounds. They require an electron acceptor. The three heterotrophic nourishment mechanisms are:
Aerobic respiration by aerobic eukaryotes and prokaryotes. These use carbohydrates as both their carbon and energy sources. Oxygen is the electron acceptor. CO2 is a waste product. These organisms need aerobic conditions to survive.
Fermentation by eukaryotes and prokaryotes. In this mechanism, carbohydrates are again the carbon and energy sources, but the electron acceptors are organic compounds – part of the energy source is oxidized and the rest reduced. This takes place in anaerobic conditions. The products are organic acids and alcohols.
Anaerobic respiration is known only among some prokaryotes. Again, carbohydrates are the carbon and energy sources, but the electron acceptor is not O2 but rather an oxidized compound such as sulphate (SO42-), ferric iron (Fe3+), or nitrate (NO3-). Otherwise this is similar to aerobic respiration.

Reference will be made to at least some of these processes as the production of biofuels is described.

Possible biofuel production mechanisms

Fossil hydrocarbons are the most familiar and desirable fuels today. Biochemical processes do not yield hydrocarbons, but any substance they can make that behaves similarly enough in our engines, turbines, and heating systems would be welcome as a substitute fuel. Thus, making “biofuels” means harnessing organisms (usually plants but sometimes algae, yeasts, bacteria, fungi, or other organisms) to prepare fuels that we like. Practical biofuels tend to fall into two general categories: alcohols and diesels.

Before beginning that exploration, it is important to introduce a key performance criterion for fuels that is especially applicable to biofuels, namely “energy balance”. That is the ratio of the energy obtained from the fuel to the energy consumed in its production. In the posting on “The world’s energy sources and sinks”, it was noted that about 27% of the world’s energy is consumed in its production and distribution; that reflects mostly the energy cost of recovering and distributing fossil fuels, but includes additional losses in generating electricity. So the “energy balance” of fossil fuels taken together is about (1-0.27)/0.27 = 2.7. Note this is not a uniform result. For example, extracting gas from America’s shales or oil from Canada’s tar sands is a far more energy-hungry process than taking oil from a well in Kuwait. It should be obvious that any primary energy source whose energy balance is less than unity is worse than useless; such an energy “source” should instead be considered a secondary energy storage medium. In the search for biofuels, the emphasis should be on ones that have a high energy balance.
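A minimal sketch of the energy balance calculation just described, using the 27% figure from the earlier post:

# Energy balance = energy delivered / energy consumed in production.
def energy_balance(fraction_consumed_in_production):
    delivered = 1.0 - fraction_consumed_in_production
    return delivered / fraction_consumed_in_production

print(f"{energy_balance(0.27):.1f}")  # fossil fuels overall: ~2.7
# Anything at or below 1.0 consumes as much energy as it delivers,
# making it a storage medium rather than a primary source.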

[1] http://ocw.mit.edu/courses/civil-and-environmental-engineering/1-018j-ecology-i-the-earth-system-fall-2009/lecture-notes/MIT1_018JF09_Lec03.pdf


Bio-Alcohols

Bio-alcohol is the most widely used liquid biofuel, most often as a gasoline additive. Alcohols are usually made using fermentation by yeasts, but that is just one step in the process. The typical path is as follows: First, oxygenic photosynthesis is used to grow plants. Then some portion of the plant matter is pre-digested using enzymes to make sugars (a family of carbohydrates), if it isn’t already in the form of sugars. Next, the sugars are fermented by anaerobic yeasts to make alcohols. Finally, the alcohol is distilled out of the aqueous solution to make nearly pure fuel.

Intentional fermentation to make a particular alcohol for human consumption – ethanol – has been practiced at least since the time of the Sumerian civilization, if not earlier. A recipe for beer in the form of a prayer known as the “Hymn to Ninkasi” has survived from around 1800 BC from the Sumerian city of Nippur.[1]

Ethanol happens to have chemical properties somewhat similar to gasoline, so internal combustion gasoline engines can use it without very much modification. Ethanol tends to attack certain polymers (rubber and plastic compounds) that are gasoline-safe, so vehicles designed to run on fuel containing alcohol (“flex-fuel” cars) have to be careful in their choice of non-metallic fittings. A disadvantage of ethanol is that its specific energy (30 MJ/kg) and energy density (24 MJ/litre) are about one-third lower than gasoline’s, resulting in proportionately poorer fuel economy.

Although biofuel alcohols are mostly the same chemical – ethanol – there are different methods for producing the stuff. Cane and corn ethanols are the most popular today, but cellulosic ethanol is also promising.

The simplest process is ethanol production from sugar cane. This tropical plant is so sugar-rich that the rest of its molecules don’t need to be broken down by enzymes before fermentation becomes viable. This is a favorite method in Brazil, which has an appropriately warm climate and produced 24.5 gigalitres of cane ethanol in 2008, representing approximately 18% of the country’s transportation energy. It is used mostly in the form of 20 to 25% ethanol blends in gasoline.[2] In that year, Brazil used over 7.7 million hectares of farmland for raising cane, of which ethanol production accounted for 3.6 million hectares. Biofuel yield was in the range of 6,800 to 8,000 litres/hectare/year.[3] It would have taken about 25 million hectares to produce enough ethanol to meet the entire energy demand of the nation’s transportation sector.

Corn ethanol is made mostly in the United States of America. In this process, it is the starch-rich grain that is cooked and then enzymatically digested to convert starch to a sugar (dextrose), which in turn is fermented. Most of the plant is discarded. Ethanol, mostly from corn, is used as a gasoline additive and amounted to almost 8 percent of the US gasoline supply in 2009; US domestic production capacity has increased tenfold in two decades, from 3.4 gigalitres in 1990 to 40 gigalitres in 2009. Corn ethanol activity consumed 10 million hectares of arable land. Yield was 3,800 to 4,000 litres/hectare/year.[3] The production of ethanol fuel from corn is dogged by controversy concerning its actual costs. Critics point out that the true cost of corn ethanol should include the energy consumed in producing and using fertilizers and pesticides (and their environmental impact, including pollution from runoff), the energy consumed in the operation of agricultural machinery, the energy required to ship and process the grains, the consumption of massive quantities of water during all stages of the process, the consumption of large amounts of electricity (most of which comes from coal plants), and the impact on global food availability and prices.[4] The large subsidies received by US farmers further distort the picture.

One may compare the cane and corn ethanol industries by examining the energy balance. Brazilian cane ethanol production has an energy balance of about 8 to 10, while US corn ethanol production is claimed to achieve an energy balance of only 1.3 to 1.6, and even that is contested by critics who fear corn ethanol’s energy balance may be below unity. The related greenhouse gas reductions resulting from the use of these fuels are 61% for Brazilian cane ethanol and 21% for US corn ethanol, according to the US Environmental Protection Agency.[3]

Advocates of cellulosic ethanol point out that their proposed biofuel can improve on the ecological performance of corn ethanol, and even on that of cane ethanol, but this technology has yet to scale up to market needs. The basic idea behind cellulosic ethanol is to maximize the use of the plant once you’ve gone to the trouble of growing it. Most of a plant is composed of “woody” material dominated by the molecules cellulose and lignin. These cannot be digested by yeasts, nor are they broken down by the enzymes used to break down starch. Over a century ago, the earliest processes for making cellulosic ethanol used sulphuric acid hydrolysis, but more recently enzymatic methods have become the dominant way of breaking down the woody molecules into sugars. Once this hurdle is overcome, the feedstock for fermentation can be any plant matter – wood, grasses, and agricultural waste such as straw or corn stover. Thus corn kernels can be used for food, feed, and vegetable oil, while the rest of the plant is converted to ethanol fuel. This sounds great, but the chemistry is not without its problems, and that is why the production and use of cellulosic ethanol is not yet widespread; it is more of a research and development activity. One problem is optimizing the enzymatic decomposition of lignin and cellulose into sugars. The search is still on for effective and inexpensive-to-produce enzymes to support this process at the lowest possible temperatures. The result of the enzymatic hydrolysis usually includes a significant proportion of large sugar molecules, which conventional yeasts cannot digest, so there is a need for special yeasts that can. Rest assured, these problems are keeping a lot of genetic engineers and other researchers very busy. Subsidized pilot plants have been built and are in operation, but so far cellulosic ethanol costs more than twice as much as corn ethanol, despite the much lower cost of the feedstocks.[5] Scaling up to commercial levels is now underway by a technology leader in this field, Iogen Corp. of Canada.[6] The energy balance of cellulosic ethanol is not clear because it depends heavily on the feedstock and enzyme technology used, but it should already at least match corn ethanol, and should eventually perform much better.

[1] http://findarticles.com/p/articles/mi_m3469/is_13_54/ai_100074934/ (and many other links)
[2] http://en.wikipedia.org/wiki/Cane_ethanol
[3] http://en.wikipedia.org/wiki/Ethanol_fuel_in_the_United_States
[4] http://en.wikipedia.org/wiki/Corn_ethanol
[5] http://en.wikipedia.org/wiki/Cellulosic_ethanol
[6] http://www.iogen.ca/, http://en.wikipedia.org/wiki/Iogen_Corp.


Bio-Diesels

Biodiesel is the second most popular liquid biofuel. “Diesel” fuel got its name from the type of internal combustion engine in which it is used. The engine was first demonstrated by Rudolf Diesel in 1893, and uses the heat of high compression instead of spark plugs to ignite the fuel. That first prototype was run on a biofuel – peanut oil – and not on a petroleum distillate.

Whereas ethanol is ethanol regardless of how it was produced, biodiesel is chemically quite different from petroleum diesel, and the two should not be confused. Petroleum-derived diesel is composed of about 75% saturated hydrocarbons (primarily paraffins, including n-, iso-, and cycloparaffins) and 25% aromatic hydrocarbons (including naphthalenes and alkylbenzenes). The average chemical formula for common diesel fuel is C12H23, ranging approximately from C10H20 to C15H28.[1] Biodiesel refers to a vegetable oil- or animal fat-based fuel consisting of long-chain alkyl (methyl, propyl, or ethyl) esters, made by chemically reacting lipids (e.g. vegetable oil or animal fat (tallow)) with an alcohol.[2] As with alcohol biofuels versus gasoline, biodiesel degrades more polymers than petroleum diesel does, so one has to be careful in the choice of fittings. However, biodiesel has the advantage of being less prone to fouling fuel injectors. At 37 MJ/litre, biodiesel has a 9% lower energy density than petroleum diesel, but a higher flash point. The typical density is marginally higher, at 0.88 kg/litre for biodiesel versus 0.85 kg/litre for petroleum diesel. Biodiesel also has a high “gelling temperature” at which some of the ingredients begin to crystallize. This temperature depends on the feedstock, but it is too high for the fuel to be used in pure form in cold countries, particularly if it is made from tallow. Additives can help keep the fuel liquid.

Production of biodiesel has been increasing, but it puts a strain on agricultural resources. Currently biodiesel is more popular in Europe than in the Americas. Globally, palm oil and soybean oil are the most commonly used oil crops for making biodiesel, with about one-third market share each; of the crops grown within Europe, rapeseed oil and sunflower oil are the most popular. The end product is usually used as a 5% blend in petrodiesel. In 2006, global production of biodiesel was slightly more than 6 gigalitres, but by 2008 production in Europe alone reached 8.9 gigalitres. Total global production of vegetable oil for all purposes (in 2005-2006) was about 125 gigalitres.[2][3][4]

Whereas sugar cane yields ethanol at approximately 7,000 litres/hectare/year, common vegetable oil crops from temperate climatic zones such as soy, rapeseed, peanut, and sunflower all yield below 1,000 litres/ha/yr. Palm has the highest yield at 4,700 litres/ha/yr, followed by coconut at 2,100 litres/ha/yr.[2] Admittedly the energy content per litre is better with biodiesel than with bioalcohol, but not enough to overcome the difference in crop yields. Also, these are gross yields, not net yields after subtracting the energy consumed in production; in other words, they do not take the energy balance into account. The energy balance of biodiesel varies with the crop, but some estimates for rapeseed range from 1.8 to 3.7, which is worse than cane ethanol but better than corn alcohol. According to the EPA, biodiesel from soy oil results, on average, in a 57% reduction in greenhouse gases compared to fossil diesel, and biodiesel produced from waste grease results in an 86% reduction.[2]

[1] http://en.wikipedia.org/wiki/Diesel_fuel
[2] http://en.wikipedia.org/wiki/Biodiesel
[3] http://en.wikipedia.org/wiki/Biodiesel_production
[4] http://en.wikipedia.org/wiki/Biodiesel_around_the_world


The Footprint of Biofuels

How sustainable would the global transportation sector be if run entirely on biofuels?

In 2007 there were about 806 million cars and light trucks on the road worldwide. They burn over a trillion litres of gasoline and diesel yearly, or about 1,200 litres/vehicle/year on average. The number of vehicles is increasing rapidly, especially in China and India.[1] The annual production rate is about 70 million vehicles. Fuel consumption is not uniformly distributed among the world’s vehicles: with 5% of the world’s population, the United States is home to about 31% of the vehicles (250 million), and they consume about 44% of the world’s gasoline – 515 gigalitres, or about 2,060 litres/vehicle/year.[2] Under favourable circumstances, countries with a low population density and a large supply of agricultural land could become self-sufficient in biofuels, if only to meet the needs of their transportation sector, and perhaps even produce surpluses for export. Let’s look at some examples.

Can Brazil be self-sufficient? Sugar cane, palms, and coconuts apparently are the best fuel crops. They grow in lush tropical countries which are mostly crowded, and few such countries have the flexibility to divert much arable land from food crops. Brazil is an exception. In Amazonia there is still plenty of rain forest that can be torched to clear space for growing fuel. On the other hand, it bears remembering that the Amazon rain forest happens to be the world’s greatest remaining natural CO2 sink. Maybe the Brazilians shouldn’t rush to destroy it. However, if they chose to do so, they could find the 25 million or so hectares they would need to cultivate sugar cane for bioalcohol. Just bear in mind that this is a lot of land – it’s the size of the entire land mass of the United Kingdom.

Let’s consider the US case. The low yield of temperate-zone biodiesel crops negatively influences biodiesel’s prospects as a replacement for fossil fuels in America. If one tried to substitute alcohol for gasoline and diesel, then the same volume of driving would require 733 gigalitres/year of alcohol because of the lower energy content of ethanol versus gasoline or diesel. At an alcohol yield of 4,000 litres/hectare/year from corn, this would mean 180 million hectares under cultivation for corn fuel, not even considering the energy balance overhead. Suppose instead the US planted soy for biodiesel. With a median yield of 740 litres/hectare/year but an energy density comparable to gasoline, this would require 700 million hectares, again not including the energy balance overhead. Given that Americans need to eat and the US has a total of only 190 million hectares of arable land,[3] it is clear that neither corn ethanol nor soy biodiesel is a viable substitute for petroleum in the US transportation sector, unless Americans very drastically reduce their driving.
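The land arithmetic in that paragraph is easy to reproduce; this sketch uses only the figures already quoted above and ignores the energy balance overhead, just as the text does:

# US land requirements for biofuel substitution, from the figures above.
GASOLINE_GL = 515            # US gasoline use, gigalitres/year
ETHANOL_GL = 733             # ethanol needed for the same driving

corn_ha = ETHANOL_GL * 1e9 / 4000    # corn ethanol at 4,000 L/ha/yr
soy_ha = GASOLINE_GL * 1e9 / 740     # soy biodiesel at 740 L/ha/yr
arable_ha = 190e6                    # total US arable land, hectares

print(f"corn ethanol: {corn_ha/1e6:.0f} Mha "
      f"({corn_ha/arable_ha:.1f}x the arable land)")
print(f"soy biodiesel: {soy_ha/1e6:.0f} Mha "
      f"({soy_ha/arable_ha:.1f}x the arable land)")
# Roughly 180 Mha and 700 Mha: neither fits, even before the
# energy balance overhead is counted.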

Will Green Slime Rescue America?

Algae are familiar to most people. They come in many sizes, but most species are microscopic single-celled organisms, and we know them best as the green slime that chokes many bodies of water. They do have a bright side: researchers consider oxygenic photosynthesizing algae a promising alternative to other biodiesel crops. Algae produce lipids, and if algae species are selected and cultivated for their oil yield, they have the potential to make biodiesel much more efficiently than other crops. The US Department of Energy estimates that algae may be capable of a biodiesel yield 30 times greater than soybeans.[4][5] Algae can also be used to produce other biofuels, including alcohols and methane, and, via hydrocracking, even synthetic hydrocarbon fuels. Other DOE estimates reported elsewhere suggest a potential fuel yield as high as 200 times greater than soybeans, or 160 kilolitres/hectare/year, mostly thanks to a short harvesting cycle. If this extremely optimistic scenario is correct, then replacing all the petroleum fuel in the United States would require only about 4 million hectares, which is about 1.3 Belgiums. At a more realistic yield 30 times better than soybeans, the area required for algae cultivation would be 27 million hectares (270,000 sq. km), or roughly 1.1 United Kingdoms. Of course, none of this considers the energy balance overhead: if the energy balance is 2, then we need to double these land areas to obtain a realistic number. Oh yes, and let’s not forget that growing algae requires a lot of water!
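The same arithmetic applied to the two algae scenarios, with yields expressed relative to the 740 litres/hectare/year soybean figure above. The 640 gigalitre replacement volume is my own round-number assumption, chosen only to roughly reproduce the areas quoted in the paragraph:

# Algae land requirements under the two DOE-derived scenarios above.
SOY_L_PER_HA = 740       # soybean biodiesel yield, litres/ha/yr
FUEL_NEEDED_L = 640e9    # assumed US petroleum-fuel volume to replace

for label, multiple in (("optimistic (200x soy)", 200),
                        ("realistic (30x soy)", 30)):
    ha = FUEL_NEEDED_L / (SOY_L_PER_HA * multiple)
    print(f"{label}: {ha/1e6:.0f} Mha")
# ~4 Mha and ~29 Mha, close to the figures quoted above; both would
# need to be doubled if the energy balance were only 2.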

As a practical matter, it turns out to be very difficult to grow algae for fuel. If grown in open ponds, the algae colonies are susceptible to being invaded and colonized by wild rival species. The wild algae tend to be hardier and crowd out the cultivated algae. There are so many algal spores in the air that they are guaranteed to find the growers’ precious ponds, even in the middle of a desert. The alternative is growing the desirable algae in enclosed systems where they can receive sunlight and be fed sterile CO2. These systems are sometimes called “photobioreactors”. Naturally this is very costly, particularly if you need a factory the size of the United Kingdom.

A Tentative Conclusion

Simply put, when fossil fuels run out, American drivers will drive much, much less than they do today. Even before that happens, they will be encouraged to cut back by the sky-high prices of all fuels. Eventually most vehicles may be hybrids, with pricey liquid fuel reserved for long trips where a cruising range of hundreds of kilometres is indispensable. Commuting will be done by public transit for those wealthy enough to live in an urban area; the suburban poor will have to use electric commuter cars to make it in to work.

However, let’s not give up just yet, because we have one more class of fuels to examine: re-formed fuels, also known as synfuels.

[1] http://en.wikipedia.org/wiki/Automobile
[2] http://www.becketts.ws/eaa/eaasv-forms/eaasv031905-gage.pdf
[3] http://en.wikipedia.org/wiki/Biodiesel
[4] http://www.washingtonpost.com/wp-dyn/content/article/2008/01/03/AR2008010303907.html
[5] http://en.wikipedia.org/wiki/Algae_fuel


Synfuels

The idea behind re-formed fuels or synfuels is to duplicate some of the amazing biochemical feats of autotrophic nourishment by constructing useful fuels out of undesirable waste CO and CO2. Nature tackled this problem a billion years ago, but people first began to work seriously on the idea of re-formed fuels in the 1920s. Building liquid fuel molecules out of toxic waste like CO and greenhouse gases like CO2 is attractive on many levels. Firstly, it creates fuels for which we already have all the necessary support infrastructure, and which have desirable properties like easy handling, long-term storability, and high energy density. Secondly, it gets rid of substances that cause global warming. Thirdly, it has the potential to free countries of dependence on foreign-sourced fossil fuel supplies, since CO2 is produced unevenly but shared globally.

How does this work in nature? In the posting about the biology behind biofuels, we mentioned oxygenic photosynthesis. Photosynthesis by plants and some bacteria and algae consumes atmospheric CO2 and sequesters it at least temporarily. In the process, the plants create oxygen for the atmosphere and larger carbon molecules that serve as the basis of both biofuel and, eventually, fossil fuel. The plants can be harvested and processed. As we have seen, the trouble with biofuels is that most have a poor energy balance, and all of them occupy enormous amounts of arable land. It stands to reason that if one could reduce the number of steps required to convert CO2 to fuel, the overall process might be more efficient in the energy consumed, and would almost certainly be more efficient in land utilization. Nature provides some inspiration. In addition to photosynthesis, nature also serves up the strange autotrophic nourishment mechanism called chemosynthesis, harnessed by chemoautotrophic bacteria. An example is the species Acetobacterium woodii, which uses H2 as the electron donor and CO2 as the carbon source, and through the metabolic process of acetogenesis creates acetic acid (CH3-COOH), the active ingredient in vinegar. This molecule is closely related to ethanol.

If nature could manage the difficult feat of making fuel, then humans were bound to try as well. The process for creating liquid hydrocarbons was first developed in the early 20th century, and is called the Fischer-Tropsch process.[1] Most fuel synthesis methods have their roots in this one. The core chemical reaction is the production of alkanes (straight-chain hydrocarbons) through the combination of hydrogen gas (H2) and carbon monoxide (CO):
(2n+1) H2 + n CO → CnH(2n+2) + n H2O
Alkanes are the basic ingredients of petroleum and its distillates such as gasoline and diesel. The reaction is facilitated by catalysts such as iron or cobalt. In the past, the feedstocks used to prepare the input ingredients H2 and CO were coal and methane. Hydrogen gas (H2) could be produced by the water-gas shift reaction:
H2O + CO → H2 + CO2
or by steam reforming of methane:
H2O + CH4 → CO + 3 H2
In turn, the carbon monoxide (CO) was often obtained by burning coal in an oxygen-starved atmosphere. Many variations on the Fischer-Tropsch process have been devised. If one such variation is to serve as a source of carbon-neutral synfuels, it must not use coal and methane as the primary inputs. Instead, the carbon should come from spent CO2 and the hydrogen from somewhere else, for instance from water.
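A small stoichiometry sketch of the Fischer-Tropsch reaction above, taking n = 8 (octane, a gasoline component) as a worked example of the reagent quantities involved:

# Fischer-Tropsch stoichiometry: (2n+1) H2 + n CO -> CnH(2n+2) + n H2O
n = 8                                  # octane, C8H18
M_H2, M_CO = 2.016, 28.01              # molar masses, g/mol
M_OCTANE = 8 * 12.011 + 18 * 1.008     # ~114.2 g/mol

h2_g = (2 * n + 1) * M_H2              # grams of H2 per mole of octane
co_g = n * M_CO                        # grams of CO per mole of octane
moles_per_kg = 1000.0 / M_OCTANE       # moles of octane in 1 kg of fuel
print(f"per kg of octane: {h2_g * moles_per_kg / 1000:.2f} kg of H2 "
      f"and {co_g * moles_per_kg / 1000:.2f} kg of CO")

Every kilogram of synthetic fuel thus calls for about 0.3 kg of hydrogen and nearly 2 kg of carbon monoxide, which is why the sourcing of these two inputs dominates the rest of this discussion.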

Water is abundant, but breaking it up to liberate the hydrogen isn’t easy. Using chemical reagents (as in the so-called sulphur-iodine process) is possible, but the supply of reagents and the disposal of the resulting compounds make the process non-renewable and therefore problematic when executed on a huge scale. Breaking up water into its constituent gases directly ( 2 H2O → 2 H2 + O2 ) consumes a lot of energy. Of the available methods for direct decomposition of water, electrolysis (using electricity) is preferable, since the alternative of thermolysis (thermal decomposition) involves heating water to 2000 degrees Celsius.[2]

Partial decomposition of CO2 to produce CO is also not the easiest trick. Uhrig[3] advocates the reverse water-gas shift reaction:
H2 + CO2 → CO + H2O
Using this process, three molecules of H2 would be needed for each “molecular building block” of CH2 produced. This is acceptable from the standpoint of reagent supply, because the hydrogen can come from electrolysis of water and is returned to water.

Where to obtain the CO2 is another matter. Its concentration in air is only about 0.04 per cent, making the extraction of CO2 from air very costly in energy. However, one could use the smokestacks of fossil fuel electricity and heating plants as a source of concentrated CO2. For instance, the total production rate of CO2 in the U.S. is about 5,680 million metric tons per year (2002), of which coal-fired power plants account for about 1,875 million metric tons/year.[3] Using this CO2 as an input for synfuel production would significantly cut the energy spent on gathering CO2 compared with taking it from the air. After all the fossil fuels are gone, or to supplement the supply from burning them, the process could be changed to extract CO2 dissolved in surface seawater instead of air, since ocean water holds about 140 times the atmospheric concentration of CO2.[4]

Unfortunately, all this doesn’t come for free. As noted by Uhrig et al.[3]: “Because transportation uses about 70% of our petroleum consumption, shifting from a petroleum-based transportation economy to a synfuel-transportation economy could reduce our petroleum use by ~70% and reduce our CO2 production by ~33% with no increase in coal used in the power plants. Such a shift to a synfuel economy, however, would require about 255 million metric tons/year of hydrogen, about 23 times our current national production, that would have to be produced by water splitting using solar, wind, or nuclear energy. It has been indicated that a megawatt of electricity will produce about half a metric ton of hydrogen per day using conventional electrolysis. From these two numbers, the amount of electrical generating capacity needed to produce 255 million metric tons of hydrogen per year is:
[255 x 10^6 MT/yr] / [365 days/year x 0.5 MT/MWe-day] = 1.397 x 10^6 MWe = 1,397 GWe.
This represents the total output of almost 1,400 one-GWe electric power plants, some 40% more than the current generating capacity of the U.S. today.”
Note this electrical energy budget covers only the most electricity-intensive step, the electrolysis of water. The synfuel production process as a whole consumes even more energy per unit output.

Uhrig examines a hypothetical synfuel plant using nuclear power as the source of electricity. Production of 2.25 million litres of fuel per day, using electrolysis for the hydrogen production step and with continuous steady production around the clock, would require 5650 MW of electric capacity. This might be cut to 2690 MW if the nuclear reactor were of the modular helium type, and if surplus heat from this reactor were used to thermochemically decompose the water.[3] These figures put the electric energy input into CO2- and H2O-based synfuel production at roughly 100 to 220 MJ per litre – several times the 34 to 37 MJ/litre energy content of the resulting gasoline or diesel, which sounds about right given the efficiencies involved.
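The per-litre figures can be checked directly from the plant capacities quoted above (a quick sketch):

# Electric energy input per litre of synfuel, from the plant figures above.
LITRES_PER_DAY = 2.25e6
SECONDS_PER_DAY = 86400

for label, mw in (("all-electrolysis plant", 5650),
                  ("modular helium reactor", 2690)):
    mj_per_litre = mw * SECONDS_PER_DAY / LITRES_PER_DAY  # MW x s = MJ
    print(f"{label}: {mj_per_litre:.0f} MJ per litre")
# ~217 and ~103 MJ/litre, versus 34-37 MJ/litre in the finished fuel.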

Using the United States of America as an example, the replacement of fossil fuels by biofuels is impossible today because, as noted in an earlier post, just replacing the petroleum used in transportation would consume all available arable land and then some, except in the event that the speculative algal fuel technology really pans out. In order to also replace the petroleum used in other ways, and the natural gas, and the coal, would require five times more land. It is highly unlikely that the amount of arable land in the lower 48 states will ever increase dramatically. In fact, if anything the amount is more likely to decline as a result of climate change. At the same time, the replacement of fossil fuels by synfuels is equally impossible today, because synfuel production would consume too much electricity, most of which is produced using thermal plants powered by fossil fuel.

Fortunately, the fully post-fossil-fuel future is still far away. As noted in the post on Peak Oil, petroleum and natural gas supplies will become scarce starting in our day and will run out completely in a couple of generations. Coal should last longer, perhaps another century. There is also the “wild card” of fossil fuels to consider, methane hydrates (or methane clathrates). This is the only significant fossil fuel resource of which Dr. M.K. Hubbert was apparently not aware in 1956. Now we know there are inventories of this strange fossil fuel, but it’s probable that most of it is unrecoverable. It is equally possible that methane hydrate will prove to be a serious greenhouse gas problem rather than a fuel bonanza. More on this material in another post. In any event, even if we figure out how to mine methane hydrates, they too will contribute to global warming and will run out in a few generations if not sooner. Therefore it is never too early to start thinking about the post-fossil-fuel future. We need to be prepared before it arrives. The stark choices would seem to be that either:

(1) the algal fuel technology will have to succeed spectacularly, and with an impressive energy balance, so that it can not only replace gasoline and diesel as our transportation fuel (20% of the world’s energy budget) but also replace fossil fuels in all the other roles in which they are now used (over 60% of the world’s energy budget); or
(2) there will have to be an equally spectacular buildout of electric generating capacity for synfuel production from CO2 and H2O, and none of that electricity can be made using combustion, since using one fuel to make another is necessarily a massively energy-wasting proposition. Instead, the electric power will have to come from other primary sources of generation, and liquid fuel will be relegated to being a storage medium rather than a primary energy source. The future primary sources of electric generation would need to be identified and built, a process that will take at least one generation.

Some private-sector researchers are already thinking about CO2-based synfuels for the post-fossil-fuel world, but surprisingly few companies are that farsighted. One that deserves full credit for taking this leap in vision is Doty Energy.[5]

[1] http://en.wikipedia.org/wiki/Fischer–Tropsch_process
[2] http://en.wikipedia.org/wiki/Thermolysis
[3] http://www.tbp.org/pages/publications/Bent/Features/Su07Uhrig.pdf
[4] http://www.newscientist.com/article/dn17632-how-to-turn-seawater-into-jet-fuel.html
[5] http://www.dotyenergy.com/


Methane Hydrates (Methane Clathrates)

Natural gas deposits aren’t created equal. The natural gas resources known to Dr. M.K. Hubbert when he was writing his 1956 paper were “conventional” natural gas, consisting mostly of methane (CH4) trapped by deep geological formations in a manner similar to oil. This resource was already well understood and well scoped out at that time. It turns out there is another mechanism by which methane is stored in nature. Methane and water can form crystals of methane hydrate when compressed together to about 50 atmospheres at temperatures around 0 °C. These conditions may arise beneath permafrost and at ocean bottoms on the continental shelf.[1] Methane hydrate looks like dirty ice. Estimates of the quantity of methane hydrate vary widely. One estimate puts the carbon content of global methane hydrate reserves at three trillion tonnes, much more than in the supply of conventional gas. If all of it could be recovered, the methane from hydrates would last a thousand years at the current rate of exploitation of methane, or one hundred years if methane hydrate replaced all fossil fuels.[1] Other estimates place the amount between 0.5 and 2.5 trillion tonnes.[2]

According to Wikipedia, the average methane clathrate hydrate composition is 1 mole of methane for every 5.75 moles of water, though this is dependent on how many methane molecules “fit” into the various cage structures of the water lattice. The observed density is around 0.9 kg/litre. One litre of methane clathrate solid would therefore contain, on average, 168 litres of methane gas under standard conditions.[2]
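The 168-litre figure can be reproduced from the composition and density just quoted (a sketch, taking 22.4 litres/mol as the molar volume of a gas at standard conditions):

# Methane content of methane clathrate, from the figures above.
M_CH4, M_H2O = 16.04, 18.02        # molar masses, g/mol
WATER_PER_CH4 = 5.75               # moles of water per mole of methane
DENSITY = 0.9                      # kg/litre of solid clathrate
MOLAR_VOLUME = 22.4                # litres/mol of gas at standard conditions

grams_per_litre = DENSITY * 1000
mol_ch4 = grams_per_litre / (M_CH4 + WATER_PER_CH4 * M_H2O)
print(f"{mol_ch4 * MOLAR_VOLUME:.0f} litres of CH4 per litre of clathrate")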

No deposit of methane hydrate has yet been developed commercially. Extraction is a tricky procedure. In the case of methane hydrates trapped below permafrost, two methods were tested in Canada’s Mackenzie delta in 2002. The first method, nicknamed “depressurization”, involves drilling a conventional well and letting the pressure differential between the deposit (pressurized by the weight of the material above it) and the one-atmosphere wellhead force the methane up the well. The other is to pump steam into the methane hydrate. This second method accelerates the extraction but costs almost as much energy as it liberates, resulting in a poor energy balance.

Canada is not in a hurry to exploit methane hydrates because it has large conventional fossil fuel supplies, but some other countries are keen to try hydrate mining, and they are looking at undersea deposits. Of note are South Korea, which has discovered deposits in the Ulleung Basin, and Japan in the Nankai trough off Honshu. Production may begin in 2015-16.

Some scientists worry about the risk of a spontaneous release of methane into the atmosphere because the hydrates are in a delicate balance. If a deposit is disturbed such that an escape path for methane is created, then the escaping gas decreases the local pressure, which triggers the release of methane from neighbouring crystals. A chain reaction like this can lead to an uncontrollable “methane burp” that will empty an entire large reservoir of methane into the atmosphere. Not only would this mean a large waste of fuel and a large addition of carbon to the total in circulation, but methane is a powerful greenhouse gas in its own right. One idea being considered is to maintain pressure in the hydrate deposits by pumping in CO2 to replace the CH4. This should prevent methane burps and sequester about as much carbon as is being extracted.

The global demand for fossil fuels is so great that the methane in hydrates is likely to be exploited as thoroughly as possible even though it will contribute to and even accelerate global warming. However, it might not be feasible to exploit it as thoroughly as petroleum and conventional natural gas because these conventional hydrocarbons seem to gather in large pools or “fields”, while hydrates are more thinly spread.

It remains to be seen what effect methane hydrates will have on the world’s energy situation. The most likely outcome is that they will be exploited, thus delaying the decline of natural gas, while contributing to global warming. It is less likely that mining methane hydrates will prove completely unsuccessful. If that were to happen, hydrates would not affect our energy picture; but if the permafrost of northern Russia, Canada, and Alaska melts because of global warming from other causes, that may trigger giant “methane burps” that would accelerate the warming trend.

[1] Ice on Fire, New Scientist, 27 June 2009
[2] http://en.wikipedia.org/wiki/Methane_hydrate, http://en.wikipedia.org/wiki/Methane_clathrate


The Future Energy Economy

The world is so fixated on fossil fuels today that success in the post-fossil-fuel world will probably require turning the whole energy economy on its head: from fossil fuel combustion at centre stage to an orientation around electricity production.

In 2009:
- primary energy sources were 90% combustion (33% petroleum, 27% coal, 21% natural gas, 9% other combustion incl. fossil peat, firewood, and waste biomass). The non-combustion balance was 6% nuclear, 3% hydro and 1% other.
- 20% of the primary energy supply was used for transportation, almost all of it from petroleum. 33% of the primary energy supply was consumed to make electricity available as a secondary energy form, with the electrical output equal to about 13% of the primary energy supply (see the efficiency sketch after this list).
- electricity demand variation was met mostly by peaking fossil thermal (and to a lesser extent hydro) generating plants. Peaking nuclear plants and electrical energy storage played only minor roles.
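Those two figures imply an average conversion efficiency for electricity generation, as this trivial calculation shows:

primary_for_electricity = 0.33  # fraction of primary energy consumed to make electricity
electricity_output = 0.13       # electrical output as a fraction of primary energy

efficiency = electricity_output / primary_for_electricity
print(f"Implied average generating efficiency in 2009: {efficiency:.0%}")  # ~39%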

Where do we go from here? Here are some additional facts to consider:
- For the needs of the transportation sector, there is no good replacement for liquid hydrocarbon fuels, and there probably never will be. Because of basic limitations imposed by physics and chemistry, neither electric batteries nor hydrogen can equal liquid fuels as an energy storage medium in this portion of the economy (a rough comparison of specific energies follows this list). Hydrogen as a fuel will probably never make it into the mainstream at all. Electricity can be used where you don’t need to store it (as in subways, streetcars, and commuter rail) or where you only need to store a modest amount of energy (as in short-range electric commuter vehicles).
- The world cannot grow enough biofuels to meet its demand for liquid fuel, not even for the portion used by the transportation sector. All known fuel crops except algae have yields far too low to do the job, and tremendous successes with algae are unlikely given the daunting challenges: the enormous demand for water an algal fuel industry would entail, and the problem of interspecies algal contamination killing yields. Worst of all, biofuel production removes arable land from food production at a time when part of the world’s population is already underfed and the total is still rising.
- Remaining for consideration as currently known non-fossil primary energy sources are: nuclear, hydro, wind, solar, and geothermal. Some of these have more than one specific form. There are many different kinds of nuclear fission reactors, and maybe fusion will someday provide a positive energy balance too. Wind turbines come in different styles as well. Hydro dams on rivers are conceptually different from tidal or ocean-current hydro stations. Solar photovoltaic technology is different from solar thermal. “Geothermal” systems may tap either solar energy (if the evaporator coils are buried in shallow trenches) or the residual heat of the earth (if the evaporator coils are buried in deep shafts).
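To put the battery-versus-fuel point in perspective, here is a rough comparison of specific energies. The numbers are round, circa-2011 ballpark figures of my own choosing, not measurements; the hydrogen entry charges the mass of the pressure vessel against the fuel:

specific_energy = {                       # MJ/kg, rough ballpark values
    "liquid hydrocarbon fuel": 45.0,      # gasoline or diesel, fuel alone
    "hydrogen, incl. storage tank": 7.0,  # H2 alone is ~120 MJ/kg, but the tank dominates
    "lithium-ion battery": 0.7,           # cell-level, circa 2011
}

baseline = specific_energy["liquid hydrocarbon fuel"]
for medium, energy in specific_energy.items():
    print(f"{medium:30s} {energy:5.1f} MJ/kg ({baseline / energy:5.1f}x mass penalty)")

On these assumptions a battery weighs dozens of times more than the liquid fuel holding the same energy, which is the crux of the problem for long-range transportation.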

It is unlikely that in the post-fossil-fuel future any one of these energy resources will become the sole primary energy source; it is wiser to maintain a degree of technology diversity. More than likely, we will use all of them, in a proportion that reflects their relative cost and suitability for a given purpose given the state of the technology at the time. One aspect of the situation that seems problematic at first is that, of these technologies, only hydro is well suited to handling the peak loads on the world’s electricity grids, and the global supply of that resource is inadequate for the task.

The solution would seem to be to build out electrical generating capacity in all these forms, and find a different way of dealing with peak loads. Here’s a key idea: given that the need for liquid fuels will persist even after the fossil fuels are gone, particularly for the transportation sector, these liquid fuels will have to be manufactured synfuels, and their production will require massive amounts of electricity. The solution is not electricity storage, nor is it to make the generating plants track peak demand. Instead, the idea is to make the synfuel plants’ electricity consumption vary inversely with everyone else’s demand. During moments of peak demand, or of low supply from resources such as wind and solar, the synfuel plants would reduce their consumption of electricity (and output of fuel), ceding the electricity to other users; whenever more electricity is available than other users need, the synfuel plants would soak up the surplus.
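A minimal sketch of this dispatch idea, with purely hypothetical numbers: generation is held nearly constant, other demand varies through the day, and the synfuel plants draw whatever is left over.

GENERATION = 100.0       # GW, assumed near-constant supply (mostly nuclear)
SYNFUEL_CAPACITY = 45.0  # GW, assumed maximum draw of the synfuel plants

other_demand_gw = [60, 58, 55, 57, 65, 75, 85, 95, 90, 80, 70, 62]  # hypothetical 2-hour steps

for step, demand in enumerate(other_demand_gw):
    surplus = GENERATION - demand  # what other users leave over
    synfuel_draw = max(0.0, min(surplus, SYNFUEL_CAPACITY))
    print(f"step {step:2d}: other demand {demand:3d} GW, synfuel plants draw {synfuel_draw:4.1f} GW")

The point of the design is that the generating plants never have to ramp: all of the variability is absorbed by a load that is indifferent to when, exactly, it runs.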

Here is a possible post-fossil-fuel scenario:
- primary sources will be 90% electricity, principally from nuclear, but also from hydro, wind, solar photovoltaic, and solar thermal turbine sources. The 10% non-electricity balance will include biomass and biofuel combustion, and geothermal resources
- requirements for heat will be met by solar, geothermal, and electric heat, and only rarely by combustion
- over 50% of the primary energy (electricity) will be consumed to make available about 20% of the primary energy as synfuels (the implied conversion efficiency is worked out after this list).
- electricity demand variation will be met by increasing and decreasing the instantaneous synfuel production, not by peaking nuclear, wind, or photovoltaic plants. The world can store months’ and even years’ worth of liquid fuel, and synfuel plants will be designed to rapidly absorb any changes in electricity availability, so adjustments in electricity production can be made at a slow pace (which suits nukes just fine).
- energy prices will be higher in real terms than today, prompting greater efforts at conservation. Per-capita energy use will probably decline in the wealthy countries relative to 2009 levels thanks to better efficiencies, and absolute consumption there will also fall as populations decline. In the countries that were poor in 2009, both per-capita and absolute consumption will probably rise, driven by growing prosperity and growing populations.
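For what it is worth, the synfuel shares quoted in this scenario imply an electricity-to-fuel conversion efficiency in the same neighbourhood as today’s thermal generating plants, assuming the 50% and 20% figures above:

electricity_in = 0.50  # fraction of primary energy consumed by synfuel plants (lower bound)
synfuel_out = 0.20     # fraction of primary energy delivered as synthetic fuel

efficiency = synfuel_out / electricity_in
print(f"Implied electricity-to-synfuel efficiency: at most {efficiency:.0%}")  # ~40%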

In many ways this future represents a reversal of current practice, from combustion-first to electricity-first, but it should be manageable. The future mix of primary electric generating capacity will depend on the relative merits of the relevant technologies, so we shall examine in turn nuclear reactors and the supply of uranium, as well as solar, geothermal, and wind technologies.
