
Lunacy from the Journal of Power Sources: Just Build More Renewables

A study published in the Journal of Power Sources indicated that by 2030 the grid could be powered almost entirely by a mix of wind (both onshore and offshore), solar, and grid-scale energy storage, and that this grid would be both affordable and reliable. It would require building solar and wind capacity equal to as much as three times the grid's actual load across a large geographical area. The concept challenges the notion that renewables need back-up power from fossil fuels for base-load generation.[i]

The study claims that up to 99.9 percent of load hours can be met by wind and solar technologies by ramping renewable generation capacity up to around 290 percent of peak load and adding 9 to 72 hours of storage. This amounts to the grid using fossil fuels to generate electricity in just one hour out of every 41 days. According to the analysis, wind power by itself could meet all of the grid's power needs 90 percent of the time if utilities overbuilt wind generation capacity to about 180 percent of peak load.[ii] And, at 2030 technology costs (about half of today's capital costs, according to the authors), 90 percent of load hours could be met at prices below current electricity costs.
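The 41-day figure follows from simple arithmetic on the 99.9 percent claim; here is a quick back-of-the-envelope check (ours, not the study's):

```python
# Back-of-the-envelope check of the "one hour out of every 41 days" figure.
# If renewables plus storage cover 99.9 percent of hours, fossil backup
# runs during the remaining 0.1 percent of hours.
HOURS_PER_YEAR = 8760

fossil_hours_per_year = 0.001 * HOURS_PER_YEAR        # ~8.8 hours per year
days_per_fossil_hour = 365 / fossil_hours_per_year    # ~41.7 days

print(f"Fossil backup hours per year: {fossil_hours_per_year:.1f}")
print(f"Roughly one fossil hour every {days_per_fossil_hour:.0f} days")
```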

This article describes the problems with the study's assumptions and methodology, which together cast doubt on its results.

Methodology

The authors developed a computer model, the Regional Renewable Electricity Economic Optimization Model (RREEOM), which analyzed four years of hourly data on weather and electricity consumption from the PJM Interconnection, the regional transmission organization for a 13-state region in the eastern United States. The data covered the years 1999 to 2002, when PJM managed 72 gigawatts of total generation with an average load of 31.5 gigawatts.

The model requires that electrical load be satisfied entirely from renewable generation and storage, and it finds the least-cost generation mix that meets that constraint. Because the model may build only renewable technologies and storage, it overbuilds renewables in an attempt to meet capacity requirements; that is, the methodology forces the construction of renewable technologies by eliminating the option of meeting capacity requirements with fossil energy. Storage, the most costly build option, fills in supply gaps and absorbs excess production, buffering rapid changes in wind or solar output.

The model does allow existing fossil plants to provide backup power, generating to cover any shortfall not met by renewables and storage. Since only existing fossil plants are used, the authors assume their only costs are fuel and operations and maintenance; no new fossil plant investment is made. Backup power may come from existing fossil technologies only: hydroelectric and nuclear power are excluded because only small amounts of hydroelectric power are available in PJM, and nuclear power cannot be ramped up and down quickly and has capital costs too high to be economical for infrequent use.

The model seeks the optimal mix of solar PV, onshore wind, and offshore wind, between zero and the maximum resource limit of each technology in PJM. It considers three storage technologies: centralized hydrogen, centralized batteries, and grid-integrated vehicles (GIV), the latter using plug-in vehicle batteries for grid storage when the vehicles are not being driven. If renewable generation is insufficient to meet the hourly load, storage is drawn down before existing fossil fuel plants are allowed to generate. If excess renewable generation exists, storage is filled first and any remaining surplus is used to displace natural gas. When load, storage, and gas needs are all met, the remaining excess electricity is "spilled" at zero value, e.g., by feathering turbine blades.
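To make that dispatch order concrete, a minimal sketch of the hourly logic is shown below. The function name, the one-hour time step, and the lossless treatment of storage are our illustrative assumptions, not the study's actual model code, and the intermediate step of displacing natural gas with surplus power is omitted for brevity.

```python
# Minimal sketch of the hourly dispatch priority described above.
# Illustrative assumptions: one-hour steps, lossless storage, and no
# natural-gas displacement step; this is not the study's actual code.

def dispatch_hour(load_mw, renewable_mw, storage_mwh, storage_cap_mwh):
    """Return (fossil_mw, spilled_mw, new_storage_mwh) for one hour."""
    if renewable_mw >= load_mw:
        # Surplus: fill storage first, then spill the rest at zero value.
        surplus = renewable_mw - load_mw
        charge = min(surplus, storage_cap_mwh - storage_mwh)
        return 0.0, surplus - charge, storage_mwh + charge
    # Deficit: draw down storage before any existing fossil plant runs.
    deficit = load_mw - renewable_mw
    discharge = min(deficit, storage_mwh)
    return deficit - discharge, 0.0, storage_mwh - discharge

# Example hour: 30 GW load, 24 GW renewable output, 10 GWh in storage.
print(dispatch_hour(30_000, 24_000, 10_000, 50_000))  # (0.0, 0.0, 4000)
```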

Cost Assumptions

The costs of financing, building, and operating solar, wind, and storage technologies, expressed in cents per kWh, are used to find the least-cost solution. According to the authors, the costs of renewable energy and storage technologies come from published information for 2008 and published projections for 2030 (in 2010 dollars). For example, projected 2030 capital costs for wind and solar are roughly half of today's, while projected operations and maintenance (O&M) costs are about the same. The 2030 costs are lower because the authors assume that, as more of these technologies are built, economies of scale will bring costs down; the projections assume continuing technical improvement and scale-up, but no breakthroughs in renewable generation or storage. State and federal subsidies are not included in the renewable technology costs, but externality costs are added for fossil technologies. While externality costs are not part of today's market price, the model adds them to the price of electricity to represent the modelers' view of the cost of pollution.
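For readers unfamiliar with how capital and O&M figures become cents per kWh, a generic levelized-cost calculation is sketched below; the discount rate, plant life, and cost numbers are placeholder assumptions of ours, not the study's inputs.

```python
# Generic levelized-cost calculation of the kind used to express capital
# and O&M costs in cents per kWh. All numbers are placeholder assumptions
# for illustration, not figures from the study.

def capital_recovery_factor(rate, years):
    """Annualize an upfront capital cost over the plant's life."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe_cents_per_kwh(capex_per_kw, fixed_om_per_kw_yr, capacity_factor,
                       rate=0.08, years=25):
    annual_kwh_per_kw = 8760 * capacity_factor
    annual_cost = (capex_per_kw * capital_recovery_factor(rate, years)
                   + fixed_om_per_kw_yr)
    return 100 * annual_cost / annual_kwh_per_kw

# Illustrative onshore wind plant: $1,600/kW capex, $40/kW-yr O&M, 35% CF.
print(f"{lcoe_cents_per_kwh(1600, 40, 0.35):.1f} cents/kWh")  # ~6.2
```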

The authors use more optimistic cost assumptions for wind technologies than does the Energy Information Administration (EIA). But while the assumed lower technology costs affect the total electricity price, they are not the driving force behind the results; that force is the assumption that only wind, solar, and storage can be built to meet demand. For EIA's capital cost assumptions, click here[iii]; for its levelized cost assumptions, click here[iv].

Issues with the Methodology

There are several issues with this approach and the interpretation of its results. First, the methodology does not represent the probability that wind or solar energy will be available to meet peak demand, because the authors rely on a single, arbitrarily chosen four-year sample of hourly weather data. Hourly data also cannot capture problems caused by very short voltage drops from intermittent power. For example, when voltage sagged for a millisecond at an industrial plant in Germany, precision equipment was damaged, and firms purchased emergency equipment and generators rather than face further damage, at a cost of tens of thousands of dollars. The sample may also miss extremely low-wind periods, which have occurred and been recorded, for example, in Texas and California. Further, the probabilistic aspects of system reliability are not represented: the authors do not estimate reliability against a reliability target or a reserve margin, but simply by meeting load. And the 1999 to 2002 data period omits the very hot year of 1998.

For example, early in the summer of 2006, as on later occasions, California faced record heat that strained its ability to meet a peak demand of 50,000 megawatts. Its resources at the time included 2,323 megawatts of wind capacity, but wind's average on-peak contribution over the month of June was only 256 megawatts, about 11 percent of the nominal amount.[v] The example shows that installed wind capacity is of little or no value in predicting the actual power the system can draw from it at peak times.

In August 2012, the California Independent System Operator (CAISO) issued a "flex alert" calling for reduced use of lights, air conditioning, and appliances, i.e., for electrical conservation to avoid blackouts. California then had 4,297 megawatts of installed wind capacity, but less than 100 megawatts were operating at 11 a.m. on August 9, 2012, a negligible share of electricity demand. Solar was contributing more than wind at 11 a.m., but by 5 p.m., when demand was at its highest, solar output had waned and wind output, though rising, was not enough to meet demand. Wind's more sizable generation does not arrive until late night or very early morning, when it is least needed. CAISO has repeatedly expressed concern about maintaining reliability under a 33 percent renewable portfolio standard for 2020, which will require a tripling of wind and solar power production.[vi]

Similarly, the Electric Reliability Council of Texas (ERCOT) is responsible for dispatching the state's generation, administering its energy markets, and monitoring the adequacy of resources to meet growing demand. For planning purposes, ERCOT treats a megawatt of wind capacity as equivalent to only 8.7 percent of a megawatt of dispatchable fossil fuel capacity.[vii] In other words, ERCOT counts only 8.7 percent of wind's nameplate capacity as dependable capacity at peak periods of electricity demand.
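That planning value translates into simple arithmetic; a sketch for a hypothetical 1,000-megawatt wind farm (the farm size is our assumption for illustration):

```python
# What ERCOT's 8.7 percent planning value means for a hypothetical
# 1,000 MW wind farm.
nameplate_mw = 1_000
CAPACITY_CREDIT = 0.087  # ERCOT planning value cited above

dependable_mw = nameplate_mw * CAPACITY_CREDIT
print(f"Counted as dependable at peak: {dependable_mw:.0f} MW of {nameplate_mw} MW")
# Counted as dependable at peak: 87 MW of 1000 MW
```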

Another problem is the implicit assumption that adding more intermittent capacity, as the model is designed to do, adds proportionately more firm capacity. That is, the authors assume the nth wind farm added contributes about as much firm capacity in any given period as the first. In reality, capacity value declines at higher penetration levels, and eventually the marginal plant's capacity value approaches zero. It is difficult to ensure system reliability by adding intermittent technologies to cover 100 percent of energy load without also adding major amounts of storage; with a great deal of storage, overbuilding wind would be unnecessary, but the result would be far more costly than the authors predict.
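A toy reliability calculation can illustrate the decline. In the sketch below, the load series, wind profile, and fleet size are synthetic assumptions of ours; the metric is an effective-load-carrying-capability (ELCC) style measure, i.e., the extra load the system can serve at unchanged reliability after wind is added. Each successive wind tranche adds less dependable capacity per megawatt than the one before.

```python
import random

# Toy illustration of declining marginal capacity value. The load series,
# wind profile, and fleet size are synthetic assumptions; every added
# wind megawatt shares one regional hourly output profile.
random.seed(7)
HOURS = 8760
load = [random.uniform(20_000, 40_000) for _ in range(HOURS)]
wind_cf = [random.betavariate(1.2, 2.5) for _ in range(HOURS)]
FIRM_MW = 39_000  # dispatchable fleet; leaves a few hundred tight hours

def shortfall_hours(extra_load_mw, wind_mw):
    """Hours in which load plus added load exceeds available supply."""
    return sum(l + extra_load_mw > FIRM_MW + wind_mw * cf
               for l, cf in zip(load, wind_cf))

def effective_capacity(wind_mw, tol_mw=10.0):
    """Extra load servable at the wind-free system's reliability level."""
    target = shortfall_hours(0.0, 0.0)
    lo, hi = 0.0, float(wind_mw)
    while hi - lo > tol_mw:  # bisect on the amount of added load
        mid = (lo + hi) / 2
        if shortfall_hours(mid, wind_mw) <= target:
            lo = mid
        else:
            hi = mid
    return lo

prev_cap, prev_mw = 0.0, 0
for wind_mw in (5_000, 10_000, 20_000, 40_000):
    cap = effective_capacity(wind_mw)
    marginal = (cap - prev_cap) / (wind_mw - prev_mw)
    print(f"{wind_mw:>6,} MW wind: marginal capacity value {marginal:.0%}")
    prev_cap, prev_mw = cap, wind_mw
```

Because every tranche shares the same hourly profile, the hours that end up binding on reliability are increasingly the low-wind, high-load hours, so the marginal capacity value printed for each tranche falls toward zero.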

As the authors indicate, one of the factors determining how quickly capacity value decays to zero is the geographic dispersion of the wind resource. A larger region with a well-dispersed wind resource has a much slower decay rate for capacity value than a compact region where all the wind is in the same place. But even if the entire United States were treated as a single balancing region, seasonal and diurnal wind patterns would make it very difficult to count on any incremental capacity value from wind at high penetration levels.

Further, the prime wind areas in the continental U.S. are already largely taken. Many remaining areas will support wind installations, but their wind quality (both speed and consistency) is lower, making new plants less profitable to build. Many windy areas are also remote and require transmission to consuming areas; the authors did not take transmission into consideration, which would further increase costs. In addition, areas with a greater density of wind turbines have lower-quality wind: an operating turbine propagates turbulence that reduces a downwind unit's ability to produce steadily.

The paper's conclusions were challenged in a study that evaluated operational data from 21 large wind farms connected to the eastern Australian grid, the largest, most widely dispersed single interconnected grid in the world. That study examined data at 5-minute intervals, a much finer time resolution, and found that connecting such a widely dispersed set of wind farms poses significant security and reliability concerns for the eastern Australian grid, and similarly for grids worldwide.[viii]

Conclusion

To represent a largely renewable generation system in PJM, the authors modeled its electricity system without allowing new fossil fuel or nuclear technologies to be built. The technologies that could compete were solar PV, onshore wind, offshore wind, and storage. The authors found that renewables could satisfy demand by building roughly three times the capacity otherwise required and adding a modest amount of storage.

IER found a number of flaws in the methodology, including its failure to accurately represent the reserve margin and the concept of firm capacity, and its understatement of the costs imposed by the uncertainty of wind and solar output. The study assumes that adding more intermittent capacity keeps adding proportionately more firm capacity, which is not the case in reality. If you buy a car that works only one-third of the time, buying two more just like it does not guarantee that one will be available when you need it.
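The analogy can be quantified. Even under the charitable assumption that the three cars fail independently, availability is far from assured; and wind farm outputs are correlated rather than independent, so the real situation is worse:

```python
# Three cars, each independently available one-third of the time.
# Wind farms are correlated, not independent, so reality is worse.
p_one_unavailable = 2 / 3
p_none_available = p_one_unavailable ** 3
print(f"Chance that no car works when needed: {p_none_available:.0%}")  # 30%
```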


[i] Journal of Power Sources (via ScienceDirect), Cost-minimized combinations of wind power, solar power and electrochemical storage, powering the grid up to 99.9% of the time, March 1, 2013, http://www.sciencedirect.com/science/article/pii/S0378775312014759

[ii] Midwest Energy News, In Iowa, another view on how to solve wind’s variability, March 26, 2013, http://www.midwestenergynews.com/2013/03/26/in-iowa-another-view-on-how-to-solve-winds-variability/

[iii] Energy Information Administration, Updated Capital Cost Estimates for Utility Scale Electricity Generating Plants, April 12, 2013, http://www.eia.gov/forecasts/capitalcost/

[iv] Energy Information Administration, Levelized Cost of New Generation Resources in the Annual Energy Outlook 2013, January 28, 2013, http://www.eia.gov/forecasts/aeo/er/electricity_generation.cfm

[v] Robert J. Michaels, “Run of the Mill, or Maybe Not,” New Power Executive, July 28, 2006, 2. The calculation used unpublished operating data from the California Independent System Operator

[vi] California Independent System Operator, Reliable Power for a Renewable Future, 2012-2016 Strategic Plan.  http://www.caiso.com/Documents/2012-2016StrategicPlan.pdf

[vii] Lawrence Risman and Joan Ward, “Winds of Change Freshen Resource Adequacy,” Public Utilities Fortnightly, May 2007, 14-18, at 18; ERCOT, Transmission Issues Associated with Renewable Energy in Texas, Informal White Paper for the Texas Legislature, Mar. 28, 2005, http://www.ercot.com/news/presentations/2006/RenewablesTransmissi.pdf

[viii] Multi-Science, Wind Farms in Eastern Australia: Recent Lessons, January 8, 2013, http://multi-science.metapress.com/content/f1734hj8j458n4j7/
