ECE 530 – Analysis Techniques for Large-Scale Electrical Systems

  1. ECE 530 – Analysis Techniques for Large-Scale Electrical Systems Lecture 10: Advanced Power Flow Topics and Sparsity for Power Systems Prof. Tom Overbye Dept. of Electrical and Computer Engineering University of Illinois at Urbana-Champaign overbye@illinois.edu

  2. Announcements • HW 3 is due Oct 4 • HW 4 is due Oct 10 • Midterm exam is October 24 in class; closed book and notes, but one 8.5 by 11 inch note sheet and simple calculators allowed

  3. Topology Processing Algorithm • Since topology processing is performed often, it must be quick (order n ln(n))! • A simple, yet quick, topology processing algorithm (a Python sketch follows below): • Set all buses as being in their own island (island number equal to bus number) • Repeat • Set ChangeInIslandStatus ← false • Go through all the in-service lines, setting the island for each of the line's buses to the smaller island number; if the island numbers are different, set ChangeInIslandStatus ← true • Until ChangeInIslandStatus = false • Determine which islands are viable, assigning a slack bus as necessary
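
A minimal sketch of this island-merging loop, assuming buses numbered 0 to n-1 and lines given as (from bus, to bus, in-service) tuples; the function and variable names are illustrative, not from the lecture:

```python
def find_islands(n_buses, lines):
    """Label-propagation island detection: merge toward the smaller island number."""
    island = list(range(n_buses))          # each bus starts in its own island
    change_in_island_status = True
    while change_in_island_status:         # repeat until no labels change
        change_in_island_status = False
        for from_bus, to_bus, in_service in lines:
            if not in_service:
                continue
            smaller = min(island[from_bus], island[to_bus])
            if island[from_bus] != smaller or island[to_bus] != smaller:
                island[from_bus] = island[to_bus] = smaller
                change_in_island_status = True
    return island

# Buses 0 and 1 are connected; the line to bus 2 is out of service.
print(find_islands(3, [(0, 1, True), (1, 2, False)]))   # -> [0, 0, 2]
```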

  4. Topology Processing Algorithm • What is the computational order of this algorithm? • What would be a best-case system? What would be a worst-case system? • Can the algorithm still be used even if, under some situations, it may not perform well? • Use b7flat_island to demonstrate island formation • In PowerWorld this requires setting the “Dynamically add/remove slack buses as topology is changed” option on the Simulator Options, Power Flow Solution, Advanced Options page

  5. Bus Branch versus Node Breaker • Due to a variety of issues during the 1970s and 1980s, the real-time operations and planning sides of power systems adopted different modeling approaches • Real-Time Operations: detailed node/breaker model; EMS system as a set of integrated applications and processes; real-time operating system; real-time databases • Planning: simplified bus/branch model; PC approach; use of files; stand-alone applications • Entire data sets and software tools developed around these two distinct power system models

  6. Circuit Breakers and Disconnects • Circuit breakers are devices designed to clear fault current, which can be many times normal operating current • AC circuit breakers take advantage of the current going through zero twice per cycle • Transmission faults can usually be cleared in less than three cycles • Disconnects cannot clear fault current, and usually cannot interrupt normal current either; they provide a visual indication that the line is open, and can be manual or motorized

  7. Google View of a 345 kV Substation

  8. Substation Configurations • Several different substation breaker/disconnect configurations are common: • Single bus: simple, but a fault anywhere requires taking out the entire substation; also, doing breaker or disconnect maintenance requires taking out the associated line Source: http://www.skm-eleksys.com/2011/09/substation-bus-schemes.html

  9. Substation Configurations, cont. • Main and Transfer Bus: now the breakers can be taken out for maintenance without taking out a line, but protection is more difficult, and a fault on one line will take out at least two • Double Bus Breaker: now each line is fully protected when a breaker is out, so high reliability, but more costly Source: http://www.skm-eleksys.com/2011/09/substation-bus-schemes.html

  10. Substation Configurations, cont. • Ring Bus: any single breaker can be taken out for maintenance; a line fault will trip two breakers, but the others will stay in service • Breaker and a Half: has two main buses that are normally energized; any of the breakers, or even a bus, can be taken out for maintenance; bus faults do not open the lines; high reliability Source: http://www.skm-eleksys.com/2011/09/substation-bus-schemes.html

  11. EMS and Planning Models [Figure: matching one-line diagrams of the EMS (node/breaker) and planning (bus/branch) models, showing identical MW/Mvar flows in both] • EMS Model: used for real-time operations; call this the full-topology model; has node/breaker detail • Planning Model: used for off-line analysis; call this the consolidated model; has bus/branch detail

  12. Integrated Topology Processing • Full-topology model and superbuses • Dynamic device relocation • Consolidated planning case

  13. Power Flow Optimal Multiplier • The classic reference on the power flow optimal multiplier is S. Iwamoto, Y. Tamura, “A Load Flow Calculation Method for Ill-Conditioned Power Systems,” IEEE Trans. Power App. and Syst., April 1981 • One UIUC paper is J.E. Tate, T.J. Overbye, “A Comparison of the Optimal Multiplier in Polar and Rectangular Coordinates,” IEEE Trans. Power Systems, Nov. 2005 • Key idea: once the NR method has selected a direction, we can analytically determine the distance to move in that direction to minimize the norm of the mismatch

  14. Power Flow with Optimal Multiplier • Consider an n-bus power system where S is the vector of the constant real and reactive power load minus generation at all buses except the slack, x is the vector of the bus voltages in rectangular coordinates (Vi = ei + jfi), and f is the function expressing the power balance constraints (a reconstruction of the slide's equations follows below)
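
The equation images on this slide did not survive extraction; a hedged reconstruction of the standard formulation, consistent with the rectangular-coordinate setup described above, is:

$$
f(x) = S, \qquad F(x) \equiv f(x) - S = 0,
$$

where $x = [e_1, f_1, \ldots]^T$ collects the rectangular voltage components at the non-slack buses and F is the mismatch vector driven to zero by the iteration.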

  15. Power Flow with Optimal Multiplier • With a standard NR approach we would get the iteration shown below • If we are close enough to the solution the iteration converges quickly, but if the system is heavily loaded it can diverge
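
The update equation here was also an image; the standard NR iteration it refers to is:

$$
x^{(k+1)} = x^{(k)} - \left[J\big(x^{(k)}\big)\right]^{-1} F\big(x^{(k)}\big), \qquad J(x) = \frac{\partial F(x)}{\partial x}.
$$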

  16. Power Flow with Optimal Multiplier • The optimal multiplier approach modifies the iteration as shown below • The scalar m is chosen to minimize the norm of the mismatch F in the NR direction Δx • The 1981 paper by Iwamoto and Tamura shows m can be computed analytically, with little additional calculation, when rectangular voltages are used
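
A reconstruction of the modified iteration (using the slide's scalar m):

$$
x^{(k+1)} = x^{(k)} + m\,\Delta x^{(k)}, \qquad \Delta x^{(k)} = -\left[J\big(x^{(k)}\big)\right]^{-1} F\big(x^{(k)}\big).
$$

Because F is quadratic in the rectangular voltage coordinates, $\lVert F(x^{(k)} + m\,\Delta x^{(k)})\rVert^2$ is a quartic polynomial in m, and setting its derivative to zero yields the cubic equation discussed on the next slide.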

  17. Power Flow with Optimal Multiplier • Determination of m involves solving a cubic equation, which gives either three real solutions or one real and two complex solutions • A 1989 PICA paper by Iba (“A Method for Finding a Pair of Multiple Load Flow Solutions in Bulk Power Systems”) showed that NR tends to converge along the line joining the high-voltage solution and a low-voltage solution

  18. Refer to Hao’s Lecture 8 on Gaussian Elimination

  19. Computational Complexity • Knowing the computational complexity of a problem can help to determine whether it can be solved (at least using a particular method) • Scaling factors do not affect the computational complexity • an algorithm that takes n^3/2 operations has the same computational complexity as one that takes n^3/10 operations (though obviously the second one is faster!) • With O(n^3) factoring, a full matrix becomes computationally intractable quickly! • A 100 by 100 matrix takes a million operations (give or take) • A 1000 by 1000 matrix takes a billion operations • A 10,000 by 10,000 matrix takes a trillion operations!

  20. Sparse Systems • The material presented so far applies to an arbitrary linear system • The next step is to see what happens when we apply triangular factorization to a sparse matrix • For a sparse system, only the nonzero elements need to be stored in the computer, since no arithmetic operations are performed on the zeros • The triangularization scheme is adapted to solve sparse systems in such a way as to preserve the sparsity as much as possible

  21. Sparse Matrix History • A nice overview of sparse matrix history is by Iain Duff at http://www.siam.org/meetings/la09/talks/duff.pdf • Sparse matrix methods developed simultaneously in several different disciplines in the early 1960s, with power systems definitely one of the key players (Bill Tinney from BPA) • In addition to the 1967 IEEE Proc. paper by Tinney, see N. Sato, W.F. Tinney, “Techniques for Exploiting the Sparsity of the Network Admittance Matrix,” IEEE Trans. Power App. and Syst., pp. 944-950, December 1963 • Different disciplines claim credit since they didn't necessarily know what was going on in the others

  22. Sparse Matrix Computational Order • The computational order of factoring a sparse matrix, or of doing a forward/backward substitution, depends on the matrix structure • A full matrix is O(n^3) • A diagonal matrix is O(n); that is, just invert each element • For power system problems the classic paper is F.L. Alvarado, “Computational Complexity in Power Systems,” IEEE Trans. Power App. and Syst., May/June 1976 • Roughly O(n^1.4) for factoring and O(n^1.2) for forward/backward substitution • For a 100,000 by 100,000 matrix this changes the factoring computation from about a quadrillion (10^15) operations to about 10 million (10^7); a quick check follows below
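
A quick order-of-magnitude check (not from the slides) of the operation counts quoted above:

```python
# Compare full O(n^3) factoring with the O(n^1.4) sparse estimate.
n = 100_000
print(f"n^3   = {n**3:.0e}")    # ~1e+15, a quadrillion operations
print(f"n^1.4 = {n**1.4:.0e}")  # ~1e+07, about ten million
```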

  23. Inverse of a Sparse Matrix • The inverse of a sparse matrix is NOT, in general, a sparse matrix • We never (or at least very, very, very seldom) explicitly invert a sparse matrix • Individual columns of the inverse can be obtained by solving Ax = b with b set to all zeros except for a single one in the position of the desired column (see the sketch below) • If only a few elements of the inverse are needed (such as the diagonal values) they can usually be computed quite efficiently using sparse vector methods (a topic we'll be considering soon) • We can't invert a singular matrix (sparse or not)
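
A hedged sketch of this column-extraction idea, assuming SciPy's sparse solver is available; the matrix here is illustrative:

```python
# Extract one column of A^-1 without forming the full inverse,
# via a single sparse solve against a unit vector.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

A = csr_matrix(np.array([[4.0, -1.0,  0.0],
                         [-1.0, 4.0, -1.0],
                         [0.0, -1.0,  4.0]]))

j = 1                       # desired column of A^-1
e = np.zeros(A.shape[0])
e[j] = 1.0                  # unit vector selecting column j
col_j = spsolve(A, e)       # solves A x = e_j, so x is column j of A^-1
print(col_j)
```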

  24. Computer Architecture Impacts • With modern computers the processor speed is many times faster than the time it takes to access data in main memory • Some instructions can be processed in parallel • Caches are used to provide quicker access to more commonly used data • Caches are smaller than main memory • Different cache levels are used; the quicker caches, such as L1, have faster speeds but smaller sizes (L1 might be 64K, whereas the slower L2 might be 1M) • Data structures can have a significant impact on sparse matrix computation

  25. ECE 530 Sparsity Limitations • Sparse matrices arise in many areas and can have domain-specific structures • Symmetric matrices • Structurally symmetric matrices • Tridiagonal matrices • Banded matrices • ECE 530 is focused on problems that arise in electric power systems; it is not a general sparse matrix course

  26. Sparsity References • A good (and free) book on sparse matrices is www-users.cs.umn.edu/~saad/IterMethBook_2ndEd.pdf • Also Sparsity in Large-Scale Network Computation by Alvarado, Tinney, and Enns (1993) • W.F. Tinney, J.W. Walker, “Direct Solutions of Sparse Network Equations by Optimally Ordered Triangular Factorization,” Proc. IEEE, November 1967, pp. 1801-1809 • Note that Tinney provides timing values for an 839-node system

  27. Full Matrix versus Sparse Matrix Storage • Full matrices are easily stored in arrays, with just one variable needed to store each value since the value's row and column are implicitly available from its matrix position • With sparse matrices, two or three elements are needed to store each value: the value itself, its row number, and its column number • The zero values are not explicitly stored • Storage can be reduced by storing all the elements in a particular row or column together • Because large matrices are often quite sparse, the total storage is still substantially reduced

  28. Sparse Matrix Usage Can Determine the Optimal Storage • How a sparse matrix is used can determine the best storage scheme: row versus column access, and whether the structure changes • Is the matrix essentially used only once? That is, are its structure and values assumed new each time it is used? • Is the matrix structure constant, with only its values changing? • This is common in the N-R power flow, in which the structure doesn't change between iterations but the values do • Are the matrix structure and values constant, with just the b vector in Ax = b changing? • Quite common in transient stability solutions (see the sketch below)
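
For the last case (fixed structure and values, changing b), a sketch assuming SciPy: factor once with splu, then reuse the LU factors for many cheap solves. The matrix and right-hand sides are illustrative:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

A = csc_matrix(np.array([[4.0, -1.0,  0.0],
                         [-1.0, 4.0, -1.0],
                         [0.0, -1.0,  4.0]]))
lu = splu(A)                          # factorization done once

for step in range(3):                 # e.g., time steps in transient stability
    b = np.array([1.0, 0.0, 0.0]) * (step + 1)
    x = lu.solve(b)                   # cheap forward/backward substitution
    print(step, x)
```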

  29. Numerical Precision • The required numerical precision determines the type of variables used to represent numbers • Specified as a number of bytes, and whether signed or not • For integers: • One byte is either 0 to 255 or -128 to 127 • Two bytes is either smallint (-32,768 to 32,767) or word (0 to 65,535) • Four bytes is either Integer (-2,147,483,648 to 2,147,483,647) or Cardinal (0 to 4,294,967,295) • This is usually sufficient for power system row/column numbers • Eight bytes (Int64) if four bytes is not enough

  30. Numerical Precision, cont. • For floating point values the usual choice is between four bytes (single precision) and eight bytes (double precision); extended precision has ten bytes • Single precision allows for 6 to 7 significant digits • Double precision allows for 15 to 17 significant digits • Extended allows for about 18 significant digits • More bytes require more storage • Computational impacts depend on the underlying device; on PCs there isn't much impact, but GPUs can be 3 to 8 times slower for double precision • For most power problems double precision is best (a quick check of these digit counts follows below)
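
The significant-digit counts above can be checked with NumPy (assuming NumPy is available; extended precision support varies by platform):

```python
import numpy as np

for dtype in (np.float32, np.float64):
    info = np.finfo(dtype)
    # info.precision is the approximate number of reliable decimal digits
    print(dtype.__name__, "digits:", info.precision, "eps:", info.eps)

# Integer ranges from the previous slide can be checked the same way:
print(np.iinfo(np.int32).min, np.iinfo(np.int32).max)
```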

  31. General Sparse Matrix Storage • A general approach to storing a sparse matrix uses three vectors, each dimensioned to the number of nonzero elements • AA: stores the values, usually in power system analysis as double precision (8 bytes) • JR: stores the row number, for power problems usually as an integer (4 bytes) • JC: stores the column number, again as an integer • If unsorted, then both row and column are needed • New elements can easily be added, but deleting elements is costly • An unordered approach doesn't make for good computation, since elements used next computationally aren't necessarily nearby • Usually ordered, either by row or by column (see the sketch below)
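
A minimal sketch of this three-vector scheme, using the slide's AA/JR/JC names; the matrix is illustrative:

```python
import numpy as np

# The sparse matrix being stored (zeros shown only for illustration):
# [[5.0, 0.0, 0.0],
#  [0.0, 4.0, 2.0],
#  [0.0, 2.0, 3.0]]
AA = np.array([5.0, 4.0, 2.0, 2.0, 3.0])   # nonzero values, in row order
JR = np.array([0, 1, 1, 2, 2])             # row of each value
JC = np.array([0, 1, 2, 1, 2])             # column of each value

# Only the 5 nonzeros are stored instead of all 9 entries.
for v, r, c in zip(AA, JR, JC):
    print(f"A[{r}][{c}] = {v}")
```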

  32. Sparse Storage Example • [The slide shows an example matrix and its corresponding AA, JR, and JC vectors] • Note this example is a symmetric matrix, but the technique is general

  33. Compressed Sparse Row Storage • If elements are ordered (as was the case in the previous example) storage can be further reduced by noting we do not need to continually store each row number • A common method for storing sparse matrices is known as the Compressed Sparse Row (CSR) format • Values are stored row by row • Has three vector arrays: • AA: stores the values as before • JA: stores the column index (done by JC in the previous example) • IA: stores the pointer to the index in AA where each row begins • A sketch follows below
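
A sketch of CSR using SciPy, whose csr_matrix exposes the slide's AA, JA, and IA vectors as .data, .indices, and .indptr; the matrix is the same illustrative one used above:

```python
import numpy as np
from scipy.sparse import csr_matrix

M = csr_matrix(np.array([[5.0, 0.0, 0.0],
                         [0.0, 4.0, 2.0],
                         [0.0, 2.0, 3.0]]))

print("AA (values):        ", M.data)     # [5. 4. 2. 2. 3.]
print("JA (column indices):", M.indices)  # [0 1 2 1 2]
print("IA (row pointers):  ", M.indptr)   # [0 1 3 5]; row i is AA[IA[i]:IA[i+1]]
```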

  34. CSR Format Example • [The slide shows the same example matrix as before, together with its AA, JA, and IA vectors in CSR form]

  35. CSR Comments • The CSR format reduces the storage requirements by taking advantage of needing only one row pointer per row • The CSR format has good advantages for computation when using cache since (as we shall see) during matrix operations we are often sequentially going through the vectors • An alternative approach is Compressed Sparse Column (CSC), which is identical except that values are stored by column • A drawback of these compressed formats is that it is difficult to add values • We'll mostly use the linked list approach here, which makes matrix manipulation simpler (a sketch follows below)
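
A minimal, assumed sketch of a linked-list row structure (the course's actual implementation may differ): each row is a sorted linked list of (column, value) nodes, so inserting a new element is a simple pointer splice rather than an array shift:

```python
class Node:
    def __init__(self, col, value, nxt=None):
        self.col, self.value, self.next = col, value, nxt

def insert(head, col, value):
    """Insert (col, value) into a row's sorted linked list; return the new head."""
    if head is None or col < head.col:
        return Node(col, value, head)        # new first element
    cur = head
    while cur.next is not None and cur.next.col < col:
        cur = cur.next
    if cur.col == col:
        cur.value = value                    # overwrite an existing entry
    else:
        cur.next = Node(col, value, cur.next)  # splice in the new node
    return head

# Build the row [5.0, 0, 2.0] by inserting elements out of order:
row = insert(insert(None, 2, 2.0), 0, 5.0)
while row:
    print(row.col, row.value)
    row = row.next
```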
