Electricity is the best approach to distributing energy to static locations. It’s very expensive to set up, but once an electric grid is in place, energy can be sent long distances with low losses, and the electricity supply can be subdivided in real time such that millions of individual loads get exactly the amount of power they need. No combustion fuel, not even natural gas and its pipeline system, comes close to matching the versatility of electricity. However, electricity also has its shortcomings.
One downside is that there is no way of storing electricity on a large scale. Capacitors store electric energy efficiently, but can hold very little at a time. Batteries can hold more, but they are less efficient, very heavy, most of them use toxic chemicals, and some use rare elements. Most of all, both capacitors and batteries are far too expensive to store electric power within a grid system. For instance, one of the least expensive options is the lead-acid automotive battery. One such battery costs about $100 and stores about 1 kilowatt-hour of electricity at a time, which is worth only 3 to 10 cents. Assuming the battery can last a hundred charge/discharge cycles, its use multiplies the cost of electricity at least tenfold, not to mention that it creates the potential for an environmental problem if its contents are not recycled properly. Other electricity storage ideas have been discussed, for example pumping water uphill to a reservoir to store energy, and letting the water run downhill through a hydroelectric generator to release energy. We will return to the subject of energy storage. For now, suffice it to say that none of these storage techniques is energy efficient or cost efficient compared to doing the obvious: generating electric power exactly when it is needed.
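The tenfold figure is easy to verify with back-of-envelope arithmetic. A minimal sketch, using only the figures quoted above ($100 battery, 1 kWh per cycle, about a hundred cycles, grid power worth 3 to 10 cents/kWh):

```python
# Back-of-envelope check of the lead-acid storage arithmetic above.
battery_cost = 100.0   # dollars per battery
capacity_kwh = 1.0     # kWh stored per charge/discharge cycle
cycles = 100           # assumed usable cycles over the battery's life

# Cost of passing one kWh through the battery (ignoring charging losses):
storage_cost_per_kwh = battery_cost / (capacity_kwh * cycles)
print(f"Storage cost: {storage_cost_per_kwh * 100:.0f} cents/kWh")  # → 100 cents/kWh

# Compare against the value of the electricity itself:
for price_cents in (3, 10):
    multiple = storage_cost_per_kwh * 100 / price_cents
    print(f"At {price_cents} cents/kWh, storage adds {multiple:.0f}x the energy cost")
```

At 10 cents/kWh the battery adds ten times the cost of the energy itself, and at 3 cents/kWh it adds over thirty times, hence “at least tenfold.”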
Demand for electricity fluctuates from hour to hour, by the day of the week, and from season to season. Fortunately, while you cannot predict when any one customer will turn loads on or off, the load in aggregate is fairly predictable. Typically, in the United States the heaviest loading occurs on weekday summer afternoons when many air conditioners are running, while the lightest loading occurs in the wee hours of springtime Sunday mornings when almost everyone is asleep. Similar patterns occur in other wealthy countries, except that the colder and more northerly countries have smaller peaks associated with summer afternoon air conditioning, and larger peaks associated with winter evening lighting and heating.
Electric utilities’ strategy to cope with load variations is to increase and decrease the output to match the changing load. The daily demand can be thought of as consisting of two portions: The “base load” is the demand that is always present regardless of the time of day, and the “peak load” is the variable portion of demand. Coping with the peak load means having a power generation technology that can be adjusted rapidly enough to match demand. This is called a “dispatchable” generator. There are a variety of electric generation technologies, and some have greater dispatchability than others. Non-dispatchable resources are clearly suited for base load use only, while dispatchable resources can be used for base or peak loading. Let’s look at the key characteristics of the main generation technologies that support the grid: hydro, thermal, and nuclear.
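The base/peak split described above can be illustrated with a toy demand profile. The hourly figures below are hypothetical, purely for illustration:

```python
# A hypothetical 24-hour demand profile in megawatts (illustrative numbers only).
hourly_demand_mw = [620, 600, 590, 585, 590, 640, 750, 880, 950, 980,
                    1000, 1020, 1050, 1080, 1100, 1090, 1050, 1010,
                    960, 900, 840, 780, 700, 650]

# Base load: the demand that is always present, i.e. the daily minimum.
base_load_mw = min(hourly_demand_mw)

# Peak load: the variable portion above the base, hour by hour.
peak_load_mw = [d - base_load_mw for d in hourly_demand_mw]

print(f"Base load: {base_load_mw} MW")                     # → 585 MW
print(f"Largest peak requirement: {max(peak_load_mw)} MW")  # → 515 MW
```

In this sketch the base load could be served by non-dispatchable plants running flat out, while dispatchable capacity would have to ramp through the 515 MW swing over the course of the day.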
Hydroelectric plants – This was the first electricity generating technology used on a large scale starting a century ago, which is why many utilities anachronistically still have “Hydro” in their name. A hydro plant converts the gravitational potential energy of water to electricity. The power available depends on the “head” and the “flow” (how far the water drops, and how much of it drops). Hydro plants are expensive to build but cheap to operate, and produce some of the lowest-cost electricity if in a good location. Originally, hydro was even thought to be eco-friendly, before people realized that damming rivers is very damaging both to aquatic life (especially migratory fish) and to land animals that depend on the ability to ford the river. Hydro power is fairly reliable, but reservoirs tend to have seasonal variation in water availability that doesn’t match demand: In temperate climates, the best supply of water is during the spring thaw, which is the season of the least electricity demand, and the worst supply of water is towards the end of a hot dry summer when people crave air conditioning. On the positive side of the ledger, hydroelectric capacity is highly dispatchable, but within limits imposed by the availability of water and the need to respect the momentum of the water flowing in the penstocks. There are abundant examples of what not to do – e.g. overly rapid closure of water valves was a major contributing factor to the accident that destroyed the huge Sayano-Shushenskaya hydro plant’s turbine hall in southern Siberia on 17 August 2009 and took 75 workers’ lives. Fortunately the dam itself held that time, but as everyone knows, the occasional catastrophic dam burst does occur. Although hydroelectric stations were once the dominant electric generation technology, that will never be the case again simply because of the shortage of suitable sites. Most of the good sites are already taken.
Developing all the as-yet undeveloped decent hydro sites in the whole world would not significantly raise hydro’s share of the global energy budget beyond the existing 3%.
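The head-and-flow relationship mentioned above is the standard hydraulic power formula, P = η·ρ·g·Q·H. A minimal sketch (the plant figures in the example are hypothetical, not from the text):

```python
# Hydraulic power: P = efficiency * water density * g * flow * head.
def hydro_power_mw(head_m, flow_m3s, efficiency=0.90):
    rho = 1000.0   # density of water, kg/m^3
    g = 9.81       # gravitational acceleration, m/s^2
    watts = efficiency * rho * g * flow_m3s * head_m
    return watts / 1e6  # convert W to MW

# A hypothetical plant with a 100 m head and 250 m^3/s of flow:
print(f"{hydro_power_mw(100, 250):.0f} MW")  # → 221 MW
```

Doubling either the head or the flow doubles the power, which is why the best sites combine a tall drop with a large, steady river, and why such sites are in short supply.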
Combustion Thermal plants – Whether fueled by coal, natural gas, or oil, the basic concept is that combustion boils water, and the steam is sent through a turbine which in turn drives a generator. The water is subsequently condensed and reused. These powerplants convert the chemical potential energy in fuel to heat, the heat to turbine motion, and that motion to electricity. In contrast to hydroelectric facilities, these plants are generally cheap to build but expensive to operate, because they consume a lot of fuel. The current fashion is to view these plants as the most ecologically damaging of all, since they convert huge amounts of fossil carbon into CO2, and often spew out other pollutants such as oxides of nitrogen, ash, and so on. Coal is rightly considered the dirtiest of the fossil fuels. Combustion technology is the dominant method of generating electricity in the world today, and will remain so for a long time. Sometimes non-fossil fuels are used for combustion on an opportunistic basis, e.g. wood by-products from forestry operations in some northern Canadian communities, or straw and other plant refuse in India. Thermal plants are reasonably dispatchable, but at a price: Turning them up and down (aka “cycling”) to meet changing demand is punishing on the equipment, reducing its service life and increasing maintenance costs.
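Each step of the conversion chain described above (fuel to heat, heat to turbine motion, motion to electricity) loses energy. A rough sketch of how the stage efficiencies multiply; the figures are illustrative assumptions roughly typical of a coal-fired steam plant, not data from the text:

```python
# Illustrative stage efficiencies for a combustion thermal plant.
boiler_eff = 0.88      # fuel chemical energy -> steam heat
turbine_eff = 0.45     # steam heat -> shaft work (thermodynamically limited)
generator_eff = 0.98   # shaft work -> electricity

# The losses compound: overall efficiency is the product of the stages.
overall = boiler_eff * turbine_eff * generator_eff
print(f"Overall plant efficiency: {overall:.1%}")  # → 38.8%
```

The turbine stage dominates the losses, which is why so much of the fuel bill ends up as waste heat in the condenser and cooling towers rather than as electricity.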
There are some variations on the combustion thermal plant scheme. Gas turbines are sometimes used, so that no boiler or steam is involved; this is in effect a specialized stationary jet engine. Among other things, this method is used to provide power (including electric propulsion) on some of the world’s largest cruise ships. Typically fueled with diesel fuel, the internal combustion engine is another variation on the combustion theme, used for smaller installations ranging from the familiar portable generators of a few kilowatts to large stationary units of a few megawatts. These are fairly cheap to build and expensive to operate, but are still a sensible choice for off-grid locations. Internal combustion engines are highly dispatchable with few downsides.
Nuclear Thermal plants – Conceptually, nuclear plants are close cousins of combustion thermal plants in that a nuclear reactor is used to produce heat, which in turn is used to boil water to turn a turbine. Nuclear plants involve a much smaller fuel load than combustion plants, and during normal operation don’t release any pollution at all. The biggest headache with this technology is the storage of spent fuel, which remains radioactive for centuries and may be a terrorism target to boot. In addition, the worst-case-scenario accidents are very nasty, particularly in a supremely ill-designed and badly managed facility like Chornobyl in Ukraine, which exploded on 26 April 1986, causing tens of directly attributable deaths, probably also thousands of additional cancers, and forcing the evacuation of over 300,000 people. In addition, nuclear power plants are very expensive in countries that build them as “one-offs”, such as the United States. France has a more “cookie-cutter” approach to replicating nuclear power plants which helps to contain costs, but even there the plants are fairly expensive to build. There are a variety of reactor designs that vary in the degree to which the uranium fuel is enriched to favour the U-235 isotope amid the more abundant U-238, in the moderators and coolants used, et cetera. Perhaps the pros and cons of particular reactor types and nuclear powerplant designs can be the subject of a future post. For now, one significant fact is that nuclear power plants do not tolerate having their power ramped up and down. In short, nuclear power is not readily dispatchable – it is suitable for base loading only, not for peak loading.
In general, using any generation facility to track peak loads causes stress that wastes energy and shortens the lifespan of the equipment, and therefore raises costs. It’s just that some technologies suffer more than others from this treatment. Utilities have varying strategies for dealing with peak loading depending on the mix of technologies available to them and the degree of peaking that they experience with their customer base. If nuclear power is in the mix, it is always used for base loading. Thermal and hydro plants can both be dispatched reasonably well. Such strategizing is not a new idea. Here is an example diagram addressing ways to balance hydro and steam (i.e. thermal) resources, from the 1953 edition of a textbook first published in 1932. The strategy advocated here is to use hydro for base loading if the water flow is adequate.
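The dispatching strategies discussed here can be caricatured as a merit-order loop: run the least-dispatchable plants flat out for base load, then work up the stack as demand rises. The plant names, capacities, and ordering below are hypothetical, purely to show the mechanism:

```python
# A toy dispatch sketch. Capacities in MW; the ordering reflects the text:
# nuclear serves base load only, hydro and thermal cover the variable part.
merit_order = [
    ("nuclear", 400),   # base load: always run fully when demand allows
    ("hydro",   300),   # cheap and flexible, water permitting
    ("thermal", 500),   # dispatched last; fuel is the dominant cost
]

def dispatch(demand_mw):
    schedule, remaining = {}, demand_mw
    for name, capacity in merit_order:
        output = min(capacity, remaining)  # run each plant up to its capacity
        schedule[name] = output
        remaining -= output
    if remaining > 0:
        # Demand exceeds the fleet: imports or load shedding would follow.
        schedule["unserved"] = remaining
    return schedule

print(dispatch(950))   # → {'nuclear': 400, 'hydro': 300, 'thermal': 250}
```

Real dispatch is far more constrained (ramp rates, water availability, maintenance outages), but the basic shape — a fixed base layer topped by flexible capacity — is the same.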
It should be clear that if all the generating capacity on a given grid is owned by one utility, that organization will use the dispatching strategy that suits it best at a given time. If it’s a private-sector utility, that means doing what is most profitable in the long term. Usually, one would opt to use some mix of hydro and thermal plants for peak loading, with nuclear reserved for base loading. Through the interconnection of local grids into regional grids and regional grids into national or even transnational grids, it is possible to optimise the matching of supply and demand by trading electricity across grid boundaries. These days, this is a routinely used feature and usually works well. On rare occasions, interconnection enables grids to destabilize each other on a massive scale, such as the blackout that began in the US northeast and Canada’s province of Ontario on August 14, 2003. There is yet another way of dealing with peaks: load management. That is, the utility may have an arrangement under which it can ask customers to shut down, or even cut them off unilaterally, during demand peaks. In the past, this applied only to large customers, and usually the customer received a discount on the power as compensation for the disruption. Future “smart grids” may bring this kind of disruption to individual homeowners with no compensation.
The story becomes more politically complex in those jurisdictions where the electric power system is not vertically integrated. Suppose there is one regulator, one distribution grid operator, but several generating companies (and of course many customers). Who among the multiple generating companies gets the benefit of contributing base load capacity, and who carries the heavy financial burden of handling peak loads? Who gets the blame if the system crashes and a blackout occurs? These matters have to be worked out before a grid is “opened” to competing generating companies.
Economic theory holds that true market conditions exist when there is a large enough number of sellers and buyers offering comparable product, such that the entry or exit of any one of them from the market doesn’t fundamentally change the situation. For the reasons described above, true market conditions cannot apply to the electricity grid. It is always a distribution monopoly with at most some “managed” competition in generation. Diehard capitalists will be disappointed by the reality that governments always have a hand in electricity grids. Sometimes those governments do a good job, and at other times they botch the regulation of the grid rather terribly.
What happens when “alternative” electricity generation technologies are added to the mix, such as wind and solar? These systems are not dispatchable and therefore are not suitable for peak loading, nor are they even suitable for base loading. Instead, they operate on an opportunistic basis, producing when the wind blows or the sun shines, and their contribution of power to the grid has to be “covered” by a dispatchable technology which has to be paid for too. How is this hidden cost of the alternative energy sources to be accounted for? All these issues are food for future discussion.
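One way to see the hidden cost: if a wind farm delivers energy only a fraction of the time, dispatchable backup capacity must still be built and kept on standby, and its fixed cost can be spread over the wind energy actually delivered. All figures below are hypothetical assumptions, purely to show the shape of the calculation:

```python
# Hypothetical wind farm and backup figures (illustrative only).
wind_capacity_mw = 100
capacity_factor = 0.30        # assumed time-averaged output as a fraction of capacity
hours_per_year = 8760

# Energy the wind farm actually delivers in a year:
wind_energy_mwh = wind_capacity_mw * capacity_factor * hours_per_year

# Assumed fixed cost of keeping a dispatchable plant on standby, per MW per year:
backup_fixed_cost = 50_000    # dollars per MW-year
backup_cost = wind_capacity_mw * backup_fixed_cost

print(f"Wind energy delivered: {wind_energy_mwh:,.0f} MWh/year")
print(f"Backup standby cost: ${backup_cost / wind_energy_mwh:.2f} per MWh of wind")
```

Under these assumed numbers, the standby cost of the covering capacity adds roughly $19 to every megawatt-hour of wind energy — a cost that appears on someone’s books even though it never shows up on the wind farm’s own balance sheet.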
F.A. Hamill, “Sayano-Shushenskaya Accident – Presenting a Possible Direct Cause”, International Water Power & Dam Construction, November 2010, pp. 30-37
Frederick T. Morse, Power Plant Engineering, 3rd ed., D. Van Nostrand Company (NY: 1953), p. 52