Stochastic optimization of energy systems

Cosmin Petra

LANS@MCS

Argonne National Laboratory

- Real-time optimization (power dispatch and unit commitment) of power grid in the presence of uncertainty (renewable energy, smart grid, weather)
- Stochastic formulations reduce both short-term (production) and long-term (reserve) costs, stabilize prices, and increase reliability.
- LANS@ANL team: Mihai Anitescu, Cosmin Petra, Miles Lubin (algorithms and implementation), Victor Zavala and Emil Constantinescu (modeling and data)
- Funding: DOE Applied Math (2009-2012), DOE ASCR MMICC center (2012-2017)
- DOE INCITE Award (2012-2013): 10 million core-hours for 2012.

- What does the application do, and how?
- Stochastic optimization = decisions taken now must account for future random conditions (multiple scenarios)
- Unit Commitment: determine the optimal on/off schedule of thermal (coal, natural gas, nuclear) generators; sets day-ahead market prices. (solved hourly)
- Economic Dispatch: sets real-time market prices. (solved every 5-10 min.)
- Scenario-based parallelization
- The “now” (first-stage) decisions couple the scenarios
- PIPS suite (PIPS-IPM, PIPS-S): parallel implementations that exploit the stochastic structure at the linear algebra level.

- MPI + OpenMP
- Scenario computations accelerated with OpenMP (sparse linear algebra)
- Inter-scenario communication with MPI
- Distributed dense linear algebra for the coupling (done with Elemental)

- C++
- CMake build system
- Runs on the “Fusion” cluster and the “Intrepid” BG/P
- Asynchronous implementation may require new programming model (X+SMP).
- Yeah, I know … 99.99% X will be MPI

- Standard interior-point method (PIPS-IPM) and dual simplex (PIPS-S)
- In-house parallel linear algebra
- Linear algebra kernels
- Sparse: MA57, WSMP, PARDISO.
- Dense: LAPACK, Elemental

- Next: PIPS-L – Lagrangian decomposition for integer problems
- “Dual decomposition” method
- Based on multi-threaded integer programming kernels (CBC, SCIP) and PIPS-IPM

- Asynchronous master-worker framework to handle load imbalance across scenarios

- I/O requirements are minimal: one file per MPI process at startup.
- The final output is the optimal cost (a double) and the decision variables (vectors of relatively small size)
- Restarting done by saving the intermediate iterates (vectors)
- Future plans: Parallel algebraic specification of the problem
- Generating the input data IN PARALLEL given an algebraic/mathematical description of the problem (AMPL-like script)
- Currently done in serial

- Output is small, no special analysis required
- less

- Bottlenecks to better performance?
- SMP sparse kernels (PIPS-IPM)
- memory bandwidth (PIPS-S)

- Bottlenecks to better scaling?
- Dense kernels (PIPS-IPM)
- load imbalance (PIPS-S, PIPS-L)

- Collaboration with Olaf Schenk (PARDISO): SMP sparse right-hand sides
- PIPS-L – asynchronous optimization algorithms

- How do you debug your code?
- cerr, cout

- PIPS-IPM scaling
- Efficiency likely to decrease with faster SMP scenario computations
- Factors that adversely affect scalability
- Serial bottlenecks: dense linear algebra for the “now” decisions
- Using Elemental improves scaling for some problems

- PIPS-S scaling efficiency is
- 31% on Fusion from 1 to 256 cores
- 35% on Intrepid from 2048 to 8192 cores

- Factors that adversely affect scalability
- Serial bottleneck (“now” decisions)
- Communication (10 collectives per iteration; cost of one iteration = O(ms))
- Load imbalance

- Intended to be used on up to a few hundred cores
- PIPS-S is the first HPC implementation of the simplex method

- 2 years from now?
- Solve grid optimization models with
- Better resolution and larger time horizon
- Larger network: continental US grid
- More uncertainty
- Integer variables