EMS Outages and Lessons Learned QSE

Presentation Transcript


  1. EMS Outages and Lessons Learned QSE • 2014 ERCOT Operations Training Seminar • Texas Reliability Entity • Jagan Mandavilli, Bob Collins, Mark Henry

  2. Objectives Upon completing this course of instruction, you will: • Recognize the typical causes and failure modes of Energy Management Systems (EMS) and their tools • Identify the importance of some of the tools QSEs use • Identify the EMS applications critical to your operation • Recognize the QSE operator’s role in identifying problems and reporting EMS failures • Identify the components of the procedures for operation of the system during EMS failures

  3. Content • EMS Failures • Communication and Control (EMS) Failures • Inter-Control Center Communications Protocol (ICCP) failures • Remote Terminal Unit (RTU) issues • EMS Applications Failures • Automatic Generation Control (AGC) Failure • SCADA failures • Backup control center operation • Loss of Operator User Interface • EMS failures due to database updates • Training and Live EMS Screens on same display • Analysis of Restorations • Contributing & Root causes with examples • Common themes with examples

  4. Definitions • SCADA – Supervisory Control And Data Acquisition • EMS – Energy Management System • AGC – Automatic Generation Control • LFC – Load Frequency Control • ICCP – Inter-Control Center Communications Protocol • RTU – Remote Terminal Unit • EAS – Event Analysis Subcommittee • EMSTF – Energy Management Systems Task Force • SCED – Security Constrained Economic Dispatch

  5. Tools and Their Importance • SCADA • AGC/LFC • ICCP • SCED

  6. ERCOT EMS Overview

  7. EMS Reliability • EMS platforms are extremely reliable • Extremely high industry-wide availability • Systems usually have redundancy • Multiple systems are common, with on-the-fly failover • Backup centers, sometimes manned • Communications circuits on highly redundant ring networks • Data handling has built-in error detection and correction • Support staff available 24x7

  8. What do EMS Problems Look Like? • Trends flatline • Data no longer updates • Color changes • Alarms • Strange application results • Lockup of applications • Loss of Visibility
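The "trends flatline" symptom above can be detected mechanically. Below is a minimal sketch in Python (the point values, window size, and threshold are illustrative assumptions, not any vendor's actual EMS logic) that flags a telemetry point whose recent scans never change:

```python
# Minimal sketch: flag a telemetry point whose last N scan values are identical.
# The window size and sample values are illustrative only.
def is_flatlined(samples, window=10) -> bool:
    """Return True if the most recent `window` samples never change."""
    recent = samples[-window:]
    return len(recent) == window and len(set(recent)) == 1

# Example: a tie-line MW reading stuck at the same value for the last 10 scans
readings = [312.4, 312.6, 311.9] + [310.0] * 10
print("Flatline suspected:", is_flatlined(readings))  # True
```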

  9. NERC EMS Failure Event Analysis • NERC and Regional Entity personnel examined events • 81 Category 2b events reported (Oct 26, 2010 – Sep 3, 2013) • 64 events thoroughly analyzed and reviewed • 54 entities reporting – 20 entities experiencing multiple outages • Restoration time for partial outages: 18 to 411 min • Restoration time for complete outages: 12 to 253 min • Vendor diagnostic failures – software and hardware issues • Several noticeable themes

  10. NERC Lessons Learned from EMS Events #1 • Remote Terminal Units Not on DC Sources • The power supply to an RTU for a High Voltage Direct Current (HVDC) converter station was not designed to be fed from station batteries, resulting in a loss of the RTU when all AC feeds to the substation were lost due to an event. • Lesson Learned • While the availability of multiple AC sources provides a high degree of reliability for RTUs, entities should evaluate the practicality and feasibility of powering RTUs needed for control, situation awareness, system restoration and/or post-event analysis from the station batteries.

  11. NERC Lessons Learned from EMS Events #2 • EMS System Outage and Effects on System Operations • An entity’s EMS began to lose data necessary for visibility of portions of its transmission network, causing functionality and/or solution interruptions for some of its EMS operational tools. No loss of load occurred during this event, and it was quickly determined not to be a cyber-security event. • Lessons Learned • All entities should have a procedure, such as “Conservative Operations,” which provides possible steps they may have to take to ensure reliability. Training should be conducted routinely on all procedures, especially those related to low-probability, high-impact events, regardless of how often the procedures are used.

  12. NERC Lessons Learned from EMS Events #3 • EMS Loss of Operator’s User Interface Application • A control center experienced a loss of control and monitoring functionality of the EMS due to the loss of the operator’s user interface application between its primary EMS computer/host server and the system operator consoles. • Lessons Learned • Create a ‘save case’ of settings before and after any change to the system is made. The ‘save case’ will aid in supplying the necessary documentation needed to perform comparisons (a minimal comparison sketch follows). • Analyze EMS performance on a periodic basis and evaluate whether the system is meeting the needs as designed and intended.
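As a rough illustration of the 'save case' comparison above, the sketch below snapshots settings before and after a change and reports what differs. The setting names and values are made up for the example:

```python
# Hypothetical "save case" comparison: snapshot settings before and after a
# change, then report what differs. Setting names/values are illustrative only.
def diff_settings(before: dict, after: dict) -> dict:
    """Return {setting: (old, new)} for every setting whose value changed."""
    keys = set(before) | set(after)
    return {k: (before.get(k), after.get(k))
            for k in sorted(keys) if before.get(k) != after.get(k)}

before = {"agc_cycle_s": 4, "scada_scan_s": 2, "alarm_limit_mw": 100}
after  = {"agc_cycle_s": 4, "scada_scan_s": 2, "alarm_limit_mw": 120}
print(diff_settings(before, after))   # {'alarm_limit_mw': (100, 120)}
```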

  13. NERC Lessons Learned from EMS Events #4 • SCADA Failure Resulting in Loss of Monitoring Function • A Transmission Owner (TO)’s control center experienced a SCADA failure which resulted in a loss of monitoring functionality for more than thirty minutes. • Lessons Learned • It is beneficial that Transmission Operators (TOP) and TOs install a “heartbeat monitor” alarm to detect stale or stagnant data. • A periodic evaluation of the mismatch thresholds should be conducted for state estimator alarming specific to each operating area, such that it will allow for the optimum sensitivity while minimizing false mismatch alarms.
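A "heartbeat monitor" of the kind recommended above can be as simple as checking how long it has been since each point last updated. The sketch below is illustrative only; the point names, 60-second threshold, and alarm hook are assumptions, not part of any specific SCADA product:

```python
import time

STALE_AFTER_SECONDS = 60  # illustrative threshold: alarm if no update within 60 s

def stale_points(last_update: dict) -> list:
    """Return the telemetry points whose last update is older than the threshold."""
    now = time.time()
    return [point for point, ts in last_update.items()
            if now - ts > STALE_AFTER_SECONDS]

# Example: two points updated recently, one has not updated for five minutes
last_seen = {"BUS_A_MW": time.time() - 5,
             "BUS_B_MW": time.time() - 12,
             "TIE_1_MW": time.time() - 300}
stale = stale_points(last_seen)
if stale:
    print("STALE DATA ALARM:", stale)  # hand off to the operator alarm list
```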

  14. NERC Lessons Learned from EMS Events #5 • Failure of EMS Due to Over-Utilization of Disk Storage • Loss of control functionality due to the hard disk on the SCADA server being fully utilized. • Lessons Learned • SCADA equipment monitoring should include monitoring of hard disk storage utilization. Purging processes need to be set up to perform periodic clean up of disk space.
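As a rough illustration of the monitoring and purging this lesson calls for, the sketch below checks disk utilization and removes archived files older than a retention window. The 85% threshold, 30-day retention, and archive path are hypothetical assumptions:

```python
import shutil
import time
from pathlib import Path

ALARM_THRESHOLD = 0.85        # hypothetical: alarm when the disk is 85% full
RETENTION_DAYS = 30           # hypothetical: keep archived files for 30 days

def disk_usage_fraction(path="/") -> float:
    """Fraction of the filesystem holding `path` that is in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def purge_old_files(archive_dir, older_than_days=RETENTION_DAYS) -> int:
    """Delete archived files older than the retention window; return count removed."""
    cutoff = time.time() - older_than_days * 86400
    removed = 0
    for f in Path(archive_dir).glob("*"):
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()
            removed += 1
    return removed

if disk_usage_fraction("/") > ALARM_THRESHOLD:
    print("DISK ALARM: SCADA server storage above threshold")
    purge_old_files("/var/scada/archive")  # hypothetical archive location
```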

  15. NERC Lessons Learned from EMS Events #6 • Indistinguishable Screens during a Database Update Led to Loss of SCADA Monitoring and Control • During a planned database update and failover, an EMS Operations Analyst inadvertently changed an online SCADA server database mode from “remote” (online) to “local” (local offline copy), which caused a loss of SCADA monitoring and control of Bulk Electric System (BES) facilities. • Lessons Learned • Changing the database mode on a server is not recommended. A future release of EMS software should eliminate the ability to switch database modes on a server.
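One way to reduce the "indistinguishable screens" risk is a confirmation guard that states exactly which server and role a mode change will touch before it is applied. This is a hypothetical sketch, not a feature of any particular EMS; the hostname, roles, and modes are made up:

```python
def confirm_mode_change(hostname, role, current_mode, new_mode) -> bool:
    """Require the analyst to re-type the hostname before a database mode change."""
    print(f"Server: {hostname}   Role: {role}")
    print(f"Requested mode change: {current_mode} -> {new_mode}")
    if role.lower() == "online":
        print("WARNING: this is the ONLINE server; the change affects live SCADA.")
    answer = input(f"Type the hostname '{hostname}' to confirm: ")
    return answer.strip() == hostname

# Usage: proceed only when the exact hostname is re-typed
if confirm_mode_change("scada-prd-01", "online", "remote", "local"):
    print("Confirmed; the mode change would be applied here.")
else:
    print("Mode change aborted.")
```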

  16. NERC Lessons Learned from EMS Events #7 • Inappropriate System Privileges Causes Loss of SCADA Monitoring • An entity experienced a loss of SCADA telemetry – specifically a loss of the channel status indicators – for 76% of its transmission system. This problem occurred during the implementation of a scheduled SCADA database update that caused one of the front-end processors to be in an abnormal state. An incorrect command was used to remedy the situation, which resulted in the channel status indicators being set to a failed state. • Lessons Learned • Entities should consider: • Reviewing the training with respect to change management to ensure that it includes a checklist of steps required; and • Educating SCADA support staff on the global impact of commands on the entire SCADA system.

  17. NERC Lessons Learned from EMS Events #8 • Loss of EMS – IT Communications Disabled • Transmission System Operators lost the ability to authenticate to the EMS system, resulting in a loss of monitoring and control functionality for more than 30 minutes. • Lessons Learned • EMS network design should include, where possible, a redundant local authentication server on the same internal network as the primary local authentication server.
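The redundant-authentication-server idea can be sketched as a simple "try the primary, then fall back to the redundant local server" reachability check. Hostnames and the port below are hypothetical placeholders, not a real deployment:

```python
import socket

AUTH_SERVERS = [("auth-primary.ems.local", 636),   # assumed primary
                ("auth-backup.ems.local", 636)]    # assumed redundant local server

def reachable(host, port, timeout=3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_auth_server():
    """Return the first reachable authentication server, or None if all are down."""
    for host, port in AUTH_SERVERS:
        if reachable(host, port):
            return (host, port)
    return None

server = pick_auth_server()
print("Authenticating via:", server if server else "NO AUTH SERVER REACHABLE")
```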

  18. NERC Lessons Learned from EMS Events #9 • SCADA Failure Resulting in Reduced Monitoring Functionality • An entity’s primary control center SCADA Management Platform (SMP) servers became unresponsive, which resulted in a partial loss of monitoring and control functions for more than 30 minutes. Because this loss of functionality was a result of a conflict between security software configuration changes and core operating system functions, a cyber-security event was quickly ruled out, and no loss of load occurred during this event. • Lessons Learned • Registered entities should consider a “multi-site hosting” configuration. This configuration provides flexibility and convenience for rapid recovery capability of EMS and SCADA functions.

  19. NERC Lessons Learned from EMS Events #10 • Failure of Energy Management System While Performing Database Update • There was a failure of the EMS while performing a database update. • Lessons Learned • When the EMS was purchased, the vulnerability of an integrated system architecture was unknown. To eliminate this now-exposed vulnerability, it is recommended that functional separation of the primary control center (PCC) from the alternate control center (ACC) be implemented.

  20. Number of Reports October 26, 2010 – September 3, 2013

  21. Characteristics of EMS Outages

  22. Root Causes by Category

  23. Contributing Causes by Category

  24. Top Root/Contributing Causes (in order; LTA = less than adequate) • Software Failure (A2B6C07) • Design output scope LTA (A1B2C01) • Inadequate vendor support of change (A4B5C03) • Testing of Design/Installation LTA (A1B4C02) • Defective or failed part (A2B6C01) • System Interactions not considered (A4B5C05) • Inadequate risk assessment of change (A4B5C04) • Insufficient Job scoping (A4B3C08) • Post Modification Testing LTA (A2B3C03) • Inspection/Testing LTA (A2B3C02) • Attention given to wrong issues (A3B3C01) • Untimely corrective actions to known issue (A4B1C08)

  25. Common Themes • Software Failures • Software Configuration/Installation/Maintenance • Hardware Failures • Hardware Configuration/Installation/Maintenance • Failover Testing Weaknesses • Testing Inadequacies

  26. Software Failures – What is Affected? • Application Software Bug/Defect • Base System – Alarms/Health Check/Syncing etc. • Front End Processing • Supervisory Control Applications (SCADA) • AGC • ICCP • User Interface (UI) • Relational Database Management Systems (RDBMS) • Build Process Scripts • Miscellaneous Scripts • Communication Equipment Firmware/Software Bug/Defect • RTUs • Switches • Modems • Routers • Firewalls • Operating System Software Bug/Defect • Unix/Linux/Windows

  27. Hardware Failures • Application Servers/Nodes • Network Interface cards • Server hard drive control board • Aux Power regulator control • Communication Equipment • RTU • Switches • Routers • Firewalls • Fiber Optic Cables • Time source • Power Sources • Uninterruptible Power Supply (UPS) • External Generators • Power Cables

  28. Failover Testing Weaknesses • Improper settings preventing the failover • Improper procedure to fail over • System setup issues preventing failover • Improper patch management between primary/spare/backup servers • Primary server issues reflected on spare/backup as well – no isolation • Improper failover configuration settings • Improper network device configuration settings for failover • Design requirements not considering failover

  29. Testing Inadequacies • Inadequate testing • Improper procedures to test • Incomplete scope • Not engaging all the parties involved

  30. Software and Hardware Categories and Restoration Times

  31. Historical Failure Restoration Data • Mean Complete Outage Restoration Time: 56 minutes • Mean Partial Outage Restoration Time: 43 minutes • Mean Total Outage Restoration Time: 99 minutes

  32. Lessons Learned • Publish information about problems and solutions • NERC continues review of events with a working group of stakeholders and Regional personnel • Situational Awareness workshop held in June 2013 with future workshops planned • Dialogue with vendors to inform and improve

  33. Reporting Requirements – NERC Standard EOP-004-2 • Complete loss of voice communication capability affecting a Bulk Electric System (BES) control center for 30 continuous minutes or more (same as Category 2a of the NERC Events Analysis Process) • Complete loss of monitoring capability affecting a BES control center for 30 continuous minutes or more such that analysis capability (i.e., State Estimator or Contingency Analysis) is rendered inoperable (similar to Category 2b of the NERC Events Analysis Process) • Report to ERCOT, TRE, NERC, and DOE per the TRE web link: http://www.texasre.org/Reliability/EOP-004disturbancereports/Pages/Default.aspx

  34. Reporting Requirements – NERC Events Analysis • Category 1f – Unplanned evacuation from a control center facility with Bulk Power System (BPS) SCADA functionality for 30 minutes or more • Category 1h – Loss of monitoring or control, at a control center, such that it significantly affects the entity’s ability to make operating decisions for 30 continuous minutes or more. Examples include, but are not limited to, the following: • Loss of operator ability to remotely monitor, control BES elements, or both • Loss of communications from SCADA RTUs • Unavailability of ICCP links reducing BES visibility • Loss of the ability to remotely monitor and control generating units via AGC • Unacceptable State Estimator or Contingency Analysis solutions

  35. What Can Operators Do? • Watch for failures and unexpected situations • Determine the criticality and impact to the reliability of the grid • Promptly report the failures • Log the date/time of the failure, a description of alarms/events, and the time of system/function restoration (a minimal log-entry sketch follows) • Expect EMS failures and be prepared to react • Have the necessary backup procedures in place and be familiar with them
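As a small illustration of the logging step above, a structured log entry might capture the failure time, a description of the alarms seen, the functions affected, and the restoration time. The field names and example values are illustrative, not a required format:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class EMSFailureLogEntry:
    failure_time: datetime                       # when the failure was first observed
    description: str                             # alarms/events seen by the operator
    functions_affected: List[str] = field(default_factory=list)
    restoration_time: Optional[datetime] = None  # filled in when service is restored

# Example entry for a loss of SCADA trends and an ICCP link
entry = EMSFailureLogEntry(
    failure_time=datetime(2014, 3, 4, 14, 32),
    description="SCADA trends flatlined; ICCP link status alarms",
    functions_affected=["SCADA", "ICCP"],
)
entry.restoration_time = datetime(2014, 3, 4, 15, 10)
print(entry)
```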

  36. ERCOT Procedures • Failover procedure • Loss of AGC – Operation Guides • Constant Frequency operation • Loss of ICCP

  37. Real Time Operating Procedure – Section 3.3 System Failures

  38. Real Time Operating Procedure – Section 3.3 System Failures (continued)

  39. Generating Unit Operations During Complete Loss of Communications • Excerpt from the 2014 DRAFT NERC guideline for units without voice or data links to their QSE but able to generate and monitor frequency

  40. Draft NERC Reliability Guideline – Frequency Chart for the ERCOT Region

  41. References • ERCOT Nodal Protocols, Sect 3.10 • ERCOT Nodal Operating Guides, Sect 7 • ERCOT State Estimator Standards • ERCOT Telemetry Standards • ERCOT Operating Procedure Manual, Shift Supervisor Desk, Sect 10 • NERC Events Analysis Process • NERC Standard EOP-004-2 • NERC EMS Task Force • DRAFT NERC Reliability Guideline: Generating Unit Operations During Complete Loss of Communications

  42. Credits Much of the information contained in this presentation was previously published by the North American Electric Reliability Corporation (NERC) in a variety of publications. It is the result of an extensive review of actual power system events over a two-year period by the EMS Event Task Force. Questions?

  43. Exam Please turn your iClicker on and answer each of the following questions.

  44. Which of the following Operator tools can lead to EMS failures? • SCADA • ICCP • AGC • All of the above

  45. What is the top root/contributing cause of EMS failures? • Inadequate vendor support • Hardware failure • Inadequate testing • Software failure

  46. What action should ERCOT take for the loss of its LFC? • Monitor frequency and hope for the best • Place a large QSE on constant frequency • OOME Up units • RUC units off line

  47. Which of the following steps should an Operator take during an EMS failure? • Promptly report the failures • Determine the criticality and impact of the failure to the reliability of the grid • Log the date/time of the failure • Implement backup procedures • All of the above

  48. What is the NERC Standard that requires reporting of EMS failures? • NERC Events Analysis Process • NERC Standard TOP-001-1 • NERC Standard EOP-004-2 • NERC EMS Task Force
