
Results of the 2012 Loss of Load Study


Presentation Transcript


  1. Results of the 2012 Loss of Load Study Generation Adequacy Task Force

  2. Overview
  • A Loss-of-Load study is conducted to develop recommendations regarding changes to the ERCOT Target Reserve Margin (currently 13.75%) and the Effective Load-Carrying Capability (ELCC) of wind generation resources (currently 8.7%)
  • ERCOT completed the previous study in November 2010
  • Completion of this study was delayed due to incorrect wind generation data
  • Differences between the previous and current studies:
    • The new study uses 15 years (1997-2011) of load and wind data, with more detailed and consistent modeling that maintains the correlation between wind and load. The old study used 5 weather years with no correlation between wind patterns and load shapes.
    • Updated unit outage information (although significant gaps in the availability and quality of unit outage data remain)
    • Separate analyses of the ELCC of wind resources in coastal counties (Cameron, Kenedy, San Patricio, and Willacy) and the rest of ERCOT
  • The primary drivers of the likelihood of loss-of-load conditions in these studies are:
    • Reliability of the resource fleet
    • Variability of the weather-driven load

  3. Weather-Driven Load Variability
  • In this study, the increased variability of recent weather conditions has a significant impact on results. This chart shows the top 200 hours for seven of the 15 weather years used in this study (all years modeled with 2014 economic conditions). The year 2004 was the mildest year; the year 2011 was the most extreme.

  4. Agenda
  • Review of loss-of-load study methodology
  • Description of specific input data and methodology used in this study
  • Discussion of weather data used to develop hourly loads and the likelihood of recurrence
  • Discussion of study results

  5. Background: What’s the Purpose of a Loss-of-Load Study?
  • Used to define the likelihood of a rare occurrence – in this case, system conditions in which there is insufficient generation available to serve customer demand (called a loss-of-load event)
  • The more resources that are available compared to the expected load to be served, the less likely there will be loss-of-load events
  • Resources are typically quantified in terms of reserve margin – the percentage of resources above expected peak customer demand
    • If there are 60,000 MW of resources available and forecasted peak customer demand is 50,000 MW, then there are 10,000 MW of resources above expected peak customer demand, and 10,000/50,000 = 20% reserve margin
    • Reserve Margin = Reserve resources / Forecasted peak demand
  • We use a loss-of-load study to define the relationship between reserve margin and the likelihood of loss-of-load events
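The reserve margin arithmetic on this slide can be sketched in a few lines of Python (the function name is ours, not ERCOT's; the MW figures are the slide's example):

```python
def reserve_margin(resources_mw: float, peak_demand_mw: float) -> float:
    """Reserve margin: resources above forecast peak demand, as a fraction of peak."""
    return (resources_mw - peak_demand_mw) / peak_demand_mw

# The slide's example: 60,000 MW of resources against a 50,000 MW forecast peak.
margin = reserve_margin(60_000, 50_000)  # 0.20, i.e. a 20% reserve margin
```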

  6. Background: Probability Review
  • You can calculate the likelihood of rolling two ones with a pair of dice using probabilities: 1/6 x 1/6 = 1/36
  • But if someone handed you a pair of dice and you didn’t know whether they were fair (evenly weighted), you would have to start throwing them and seeing what results you got
    • After a few hundred throws, you could add up the occurrences of each number and divide by the number of throws to get the likelihood of each number
  • We can define the likelihood of an event by counting the number of occurrences and dividing by the number of attempts
    • If the dice are fair, you’ll get the same result as above
  • Using probability theory or an iterative process will lead to the same result
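The counting approach the slide describes is a Monte Carlo estimate. A minimal sketch (function name and trial count are ours): throw a simulated pair of fair dice repeatedly, count the double-ones, and divide by the number of throws.

```python
import random

def estimate_snake_eyes(trials: int, seed: int = 0) -> float:
    """Estimate P(two ones) by counting occurrences over many throws."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # One throw of each die; a "hit" is both dice showing 1
        if rng.randint(1, 6) == 1 and rng.randint(1, 6) == 1:
            hits += 1
    return hits / trials

# The estimate converges toward the exact value 1/6 x 1/6 = 1/36 ≈ 0.0278
estimate = estimate_snake_eyes(100_000)
```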

  7. Background: Probability Review II
  • What if the system were more complex – like how many times will you win if you play several state lottery games every week?
  • If you can program how the system operates (in this case, all the rules and probabilities of the lottery games) into a computer program, you can run the simulation for hundreds or even thousands of iterations. In each iteration, the program’s random number generator will give a different result.
  • If you count up the number of times you win as reported by the computer program, and divide by the number of iterations, you will get the likelihood of winning, the same as with rolling the dice.

  8. Background: Loss-of-Load Studies
  • We can use a computer model to simulate the hourly operation of the ERCOT system, with the model selecting available resources in each hour to serve forecasted customer demand. The model would include:
    • The impact of weather conditions on customer loads – consistent with recent weather data
    • The impact of the same weather conditions on variable (wind) generation
    • Generation outages using unit-specific outage rates (as available)
  • If we run the computer simulation for thousands of iterations, we can add up the number of loss-of-load events, divide by the total number of iterations, and define the likelihood of loss-of-load events
  • The trick is to set the model up so that it accurately represents the ERCOT system
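A heavily simplified sketch of this kind of simulation, under stated assumptions: hypothetical unit list and load shape, independent hourly outage draws, and no grouping of consecutive deficit hours into single events (a real adequacy model such as ERCOT's handles outage durations, maintenance schedules, and event grouping).

```python
import random

def loss_of_load_hours(hourly_load, units, iterations=1000, seed=0):
    """Average number of hours per iteration with insufficient capacity.

    hourly_load: hourly MW demand for one weather year
    units: (capacity_mw, forced_outage_rate) pairs
    """
    rng = random.Random(seed)
    deficit_hours = 0
    for _ in range(iterations):
        for load in hourly_load:
            # Draw each unit's availability this hour from its outage rate
            available = sum(cap for cap, efor in units if rng.random() >= efor)
            if available < load:
                deficit_hours += 1
    return deficit_hours / iterations
```

Running many iterations and dividing the count of shortfalls by the number of iterations is exactly the counting logic from the dice example, applied to a fleet of generators.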

  9. Background: Developing a Target Reserve Margin
  • A loss-of-load study establishes the relationship between reserve margin and the expected number of loss-of-load events. The following chart shows this relationship from the previous ERCOT study:
  • In order to establish a Target Reserve Margin, a reliability criterion has to be established. ERCOT has traditionally used a 1-event-in-10-years, or 0.1 event per year, standard.

  10. Study Process
  • Inputs: Load Forecast (Hourly Loads), Variable Generation Data (Hourly Energy Output), Resource Capability and Outage Data
  • Hourly System Simulation Model
  • Output Data (Count of Outage Events)
  • This process is implemented for every weather year, for a range of reserve margins, to develop a curve like the one shown on the previous slide

  11. Major Input Assumptions
  • Outage Data (Source: NERC Generation Availability Data System [GADS] database or directly from Resource Owner)
  • Hourly Load Shapes for 15 Weather Years (Source: ERCOT Load Forecasting Team)
  • Hourly Wind Shapes by Wind Farm for 15 Weather Years (Source: AWS TruePower)
  • Generator Data (e.g., Seasonal Capability) (Source: ERCOT Model Database and RARF entries)

  12. Study Inputs
  • Generation fleet used in study, by fuel type (Summer MW):
    • Coal: 19,126
    • Natural Gas: 43,743
    • Nuclear: 5,157
    • Biomass: 212
    • Other (e.g., landfill): 17
    • PUNs: 8,541
    • Wind (West): 10,340
    • Wind (Coastal): 1,915
  • Added 4,639 MW of flat load representing additional PUN load not included in the ERCOT load forecast
  • The Coastal wind is contained in Cameron, Kenedy, San Patricio, and Willacy counties
  • The 2014 expected (average weather year) load is 74,928 MW
  • The highest peak load is 80,821 MW, from the 2011 load profile
  • The lowest peak load is 69,972 MW, from the 2004 load profile
  • Outage data used in study:
    • 34% of the data was unit-specific
    • 66% of the data was generic
  • System fleet average Effective Forced Outage Rate (EFORd):
    • 2010 Study: 4.45%
    • 2012 Study: 5.47%

  13. Overview of Modeling Process in 2012 LOL Study
  • 400 iterations for each of the 15 weather years were used
    • A total of 6,000 iterations at each reserve margin level for calculating the expected number of loss-of-load events and ELCCs
  • Each weather year for load was paired with the correlated weather year for wind generation output
    • For instance, the 1997 weather-year load was paired with the 1997 weather-year wind generation
  • The Monte Carlo draws of the partial and full forced outages were the only variation within each weather year (1997 to 2011)

  14. LOLEV Output Analysis
  • Step 1: Run the system simulation model, which counts loss-of-load events by iteration, using load, wind generation, and resource capability input data
  • Step 2: The probability of each weather year is multiplied by the LOLEV for that weather year to give a probability-weighted LOLEV
  • Step 3: Probability-weighted LOLEV results for each weather year (and other statistics) are summed by installed capacity level to determine the total number of events across all weather years
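Steps 2 and 3 amount to a probability-weighted sum over weather years. A minimal sketch (function name and the toy numbers below are ours, not from the study):

```python
def weighted_lolev(lolev_by_year, prob_by_year):
    """Sum of probability-weighted LOLEV across weather years (Steps 2 and 3)."""
    return sum(prob_by_year[year] * lolev_by_year[year] for year in lolev_by_year)

# Hypothetical two-year illustration: a severe year (LOLEV 0.4 events/yr,
# weight 10%) and a mild year (LOLEV 0.05 events/yr, weight 90%).
expected_events = weighted_lolev({"severe": 0.4, "mild": 0.05},
                                 {"severe": 0.1, "mild": 0.9})
```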

  15. Effective Load Carrying Capability Analysis
  • Step 1: Run the system simulation model, which counts loss-of-load events by iteration, using load, wind generation, and resource capability input data
  • Step 2: Calculate the level of reliability as shown on the previous slide
  • Step 3: Repeat Steps 1 and 2 with Coastal and other Wind resources removed from the model
  • Step 4: Add conventional generation needed to achieve the same reliability level seen in Step 2
  • Step 5: Divide the amount of conventional generation capacity by the amount of wind generation capacity to determine the equivalence percentage
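Step 5 reduces to a single ratio. A sketch with hypothetical round numbers (not the study's actual intermediate results):

```python
def elcc_percent(conventional_mw_added: float, wind_mw_removed: float) -> float:
    """ELCC: conventional capacity that restores reliability, per MW of wind removed."""
    return 100.0 * conventional_mw_added / wind_mw_removed

# Hypothetical illustration: if removing 10,000 MW of wind required adding
# 1,420 MW of conventional capacity to restore the same LOLEV, ELCC = 14.2%.
elcc = elcc_percent(1_420, 10_000)
```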

  16. Probability Weightings for Weather Years
  • The Loss of Load Study uses 15 different weather years. The likelihood of recurrence of each of these years in the study assumptions has a major impact on results.
  • This chart shows the top 200 hours for seven of the 15 weather years used in this study. The year 2004 was the mildest year; the year 2011 was the most extreme.

  17. Probability Weightings for Weather Years - Analysis
  • It is common in loss-of-load studies to assign specific probabilities to each historical weather year included in the study
  • In the LOLP Study completed in 2010, the following probabilities were assigned to the five weather years included in the study:
    • Extreme weather (2010 weather year): 10%
    • Hotter than normal weather (2000 weather year): 23.3%
    • Average weather (1999 weather year): 33.3%
    • Cooler than average weather (2003 weather year): 23.3%
    • Much cooler than average (2007 weather year): 10%
  • With 15 weather years, as in the current study, we could set all years to be equally likely, or use some other criteria

  18. Probability Weightings for Weather Years - Analysis
  • An analysis of historical weather conditions was conducted in order to quantify the likelihood of recurrence of the weather conditions of the past 15 years
  • The analysis was performed only for the summer season (June – August). The summer season was selected due to the readily available probability-of-exceedance weather analysis from the Climate Prediction Center.
  • Link to climate data: http://www.cpc.ncep.noaa.gov/products/predictions/long_range/poe_graph_index.php?lead=6&climdiv=61&var=t

  19. Probability of Exceedance

  20. Probability of Exceedance
  For the summer in the North Central Weather Zone:
  • The mean temperature is 82.24 degrees F
  • The standard deviation is 1.57 degrees F
  Using the data on this chart, the probability of exceedance can be calculated for any year
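Assuming summer mean temperatures are approximately normally distributed (the assumption behind a probability-of-exceedance chart like this one), the calculation is a standard normal tail probability:

```python
import math

def prob_exceedance(x: float, mean: float, std: float) -> float:
    """P(T > x) for a normal distribution, via the complementary error function."""
    z = (x - mean) / std
    return 0.5 * math.erfc(z / math.sqrt(2))

# North Central summer statistics from the slide: mean 82.24 F, std 1.57 F.
# The chance of a summer warmer than 84.25 F comes out to roughly 10%,
# matching the "top 10 percent of warmest summers" threshold two slides later.
p = prob_exceedance(84.25, 82.24, 1.57)
```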

  21. Probability of Exceedance

  22. Probability of Exceedance
  For the summer in the North Central Weather Zone:
  • The top 10 percent of warmest summers have a mean temperature > 84.25 degrees F
  • 8 of the last 15 years have exceeded 84.25 degrees F

  23. Limitations on Probability of Exceedance Analysis
  • Limitations/Cautions/Warnings:
    • Mean temperatures have been increasing:
      • over the last couple of NOAA 30-year normal calculations
      • using 15-year rolling average temperatures (ERCOT)
    • There has been an increase in the magnitude and frequency of extreme temperatures, which impacts the calculation of the standard deviation
    • Since the distribution of temperatures is changing, assigning probabilities to historical weather years is extremely difficult
  • ERCOT used the probability-of-exceedance data from the CPC and judgment in determining probabilities for individual years

  24. Probability Weightings for Weather Years
  • Weather year probabilities based on NOAA data:
    • Extreme summer weather, with a 5% probability: composed of 2011
    • Warmer than average, with a 15% probability: composed of 2010
    • Average weather, with a 50% probability: composed of 25% for 2006 and 25% for 2009
    • Cooler than average, with a 25% probability: composed of 3.5714% each for 1998 – 2000, 2003, 2005, 2007, and 2008
    • Much cooler than average, with a 5% probability: composed of 1.25% each for 1997, 2001, 2002, and 2004
  • The impact of the likelihood of 2011 weather is seen in the graph on the next slide.
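Expanding the slide's weightings into a per-year table makes it easy to confirm that all 15 weather years are covered and the weights sum to 1 (up to the rounding in 25%/7 ≈ 3.5714%):

```python
# Weather-year probability weightings as listed on the slide
weights = {2011: 0.05, 2010: 0.15, 2006: 0.25, 2009: 0.25}
# Cooler than average: seven years sharing 25%
weights.update({y: 0.035714 for y in (1998, 1999, 2000, 2003, 2005, 2007, 2008)})
# Much cooler than average: four years sharing 5%
weights.update({y: 0.0125 for y in (1997, 2001, 2002, 2004)})

total = sum(weights.values())  # ≈ 1.0, off only by the 3.5714% rounding
```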

  25. Study Results • Study results are dependent on selection of probabilities for the 15 weather years, as shown in the following graph:

  26. Study Findings
  • Determination of the relationship between reserve margin and likelihood of loss-of-load events is heavily dependent on the assumed likelihood of 2011 weather conditions
  • Using a 1-event-in-10-years loss-of-load criterion leads to a target reserve margin of ~13.7% to ~18.9%, depending on weather year assumptions
  • Study results indicate an effective load-carrying capability of 14.2% for non-coastal wind resources, and 32.9% for coastal wind resources

  27. QUESTIONS / COMMENTS???
