
Addressing Complexity in Emerging Cyber-Ecosystems – Exploring the Role of Autonomics in E-Science


Presentation Transcript


  1. Addressing Complexity in Emerging Cyber-Ecosystems – Exploring the Role of Autonomics in E-Science Manish Parashar Center for Autonomic Computing The Applied Software Systems Laboratory Rutgers, The State University of New Jersey & Office of Cyberinfrastructure National Science Foundation

  2. Outline of My Presentation • Computational Ecosystems • Unprecedented opportunities, challenges • Autonomic computing – A pragmatic approach for addressing complexity! • Experiments with autonomics for science and engineering • Concluding Remarks

  3. Cyberinfrastructure => Cyber-Ecosystems • 21st Century Science and Engineering: New Paradigms & Practices • Transformed by CI • End-to-end – seamless access, aggregation, interactions • Fundamentally collaborative & data-driven/data-intensive • Unprecedented opportunities • New requirements, challenges • New thinking in/approaches to computational science • How can it benefit current applications? • How can it enable new thinking in science?

  4. The Instrumented Oil Field (with UT-CSM, UT-IG, OSU, UMD, ANL) • Detect and track changes in data during production • Invert data for reservoir properties • Detect and track reservoir changes • Assimilate data & reservoir properties into the evolving reservoir model • Use simulation and optimization to guide future production [Figure: the data-driven/model-driven feedback loop]

  5. Many Application Areas … • Hazard prevention, mitigation and response • Earthquakes, hurricanes, tornadoes, wild fires, floods, landslides, tsunamis, terrorist attacks • Critical infrastructure systems • Condition monitoring and prediction of future capability • Transportation of humans and goods • Safe, speedy, and cost-effective transportation networks and vehicles (air, ground, space) • Energy and environment • Safe and efficient power grids; safe and efficient operation of regional collections of buildings • Health • Reliable and cost-effective health care systems with improved outcomes • Enterprise-wide decision making • Coordination of dynamic distributed decisions for supply chains under uncertainty • Next generation communication systems • Reliable wireless networks for homes and businesses • … • Report of the Workshop on Dynamic Data Driven Applications Systems, F. Darema et al., March 2006, www.dddas.org. Source: M. Rotea, NSF

  6. The Challenge: Managing Complexity, Uncertainty (I) • Increasing application, data/information, system complexity • Scale, heterogeneity, dynamism, unreliability, …, disruptive trends, … • New application formulations, practices • Data intensive and data driven, coupled, multiple physics/scales/resolution, adaptive, compositional, workflows, etc. • Complexity/uncertainty must be simultaneously addressed at multiple levels • Algorithms/Application formulations • Asynchronous/chaotic, failure tolerant, … • Abstractions/Programming systems • Adaptive, application/system aware, proactive, … • Infrastructure/Systems • Decoupled, self-managing, resilient, …

  7. The Challenge: Managing Complexity, Uncertainty (II) • The ability of scientists to realize the potential of computational ecosystems is severely hampered by the increased complexity and dynamism of applications and computing environments. • To be productive, scientists often have to comprehend and manage complex computing configurations, software tools and libraries, as well as application parameters and behaviors. • Autonomics and self-* can help? (with the “plumbing” for starters…)

  8. Outline of My Presentation • Computational Ecosystems • Unprecedented opportunities, challenges • Autonomic computing – A pragmatic approach for addressing complexity! • Experiments with autonomics for science and engineering • Concluding Remarks

  9. The Autonomic Computing Metaphor • Current paradigms, mechanisms, and management tools are inadequate to handle the scale, complexity, dynamism and heterogeneity of emerging systems and applications • Nature has evolved to cope with scale, complexity, heterogeneity, dynamism and unpredictability, and a lack of guarantees • Self-configuring, self-adapting, self-optimizing, self-healing, self-protecting, highly decentralized, heterogeneous architectures that work! • The goal of autonomic computing is to enable self-managing systems/applications that address these challenges using high-level guidance • Unlike AI, duplication of human thought is not the ultimate goal! “Autonomic Computing: An Overview,” M. Parashar and S. Hariri, Hot Topics, Lecture Notes in Computer Science, Springer Verlag, Vol. 3566, pp. 247-259, 2005.

  10. Motivations for Autonomic Computing • 2/27/07: Dow fell 546 points; since the worst plunge took place after 2:30 pm, trading limits were not activated • 8/3/07: EPA estimates that by 2011 datacenter energy use will cost $7.4B and require roughly 15 additional power plants (about 15 GW of peak capacity) • 8/1/06: UK NHS hit with a massive computer outage; 72 primary care trusts and 8 acute hospital trusts affected • 8/12/07: 20K people and 60 planes held at LAX after a computer failure prevented customs from screening arrivals • Key challenge: current levels of scale, complexity and dynamism make it infeasible for humans to effectively manage and control systems and applications. Sources: http://www.almaden.ibm.com/almaden/talks/Morris_AC_10-02.pdf; IDC, 2006

  11. Autonomic Computing – A Pragmatic Approach • Separation + Integration + Automation! • Separation of knowledge, policies and mechanisms for adaptation • The integration of self-configuration, self-healing, self-protection, self-optimization, … • Self-* behaviors build on automation concepts and mechanisms • Increased productivity, reduced operational costs, timely and effective response • System/application self-management is more than the sum of the self-management of its individual components. M. Parashar and S. Hariri, Autonomic Computing: Concepts, Infrastructure, and Applications, CRC Press, Taylor & Francis Group, ISBN 0-8493-9367-1, 2007.
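The separation of knowledge and policies from mechanisms on this slide is commonly realized as a closed monitor-analyze-plan-execute loop over a shared knowledge base (the MAPE-K pattern). A minimal sketch in Python; all class, sensor, and actuator names here are invented for illustration, not from the talk:

```python
# Minimal MAPE-K style autonomic loop: knowledge (policies) is kept
# separate from the sensing/actuation mechanisms that it drives.
# All names below are illustrative.

class AutonomicManager:
    def __init__(self, sensors, actuators, policies):
        self.sensors = sensors      # name -> callable returning an observation
        self.actuators = actuators  # name -> callable applying an action
        self.policies = policies    # knowledge: list of (condition, action, args)

    def monitor(self):
        """Collect the current state from all sensors."""
        return {name: read() for name, read in self.sensors.items()}

    def analyze_and_plan(self, state):
        """Pick the first policy whose condition holds for the observed state."""
        for condition, action, args in self.policies:
            if condition(state):
                return action, args
        return None, None

    def step(self):
        """One pass of the monitor-analyze-plan-execute loop."""
        state = self.monitor()
        action, args = self.analyze_and_plan(state)
        if action is not None:
            self.actuators[action](**args)

# Example policies: a self-healing rule first, then self-optimization rules.
policies = [
    (lambda s: not s["service_alive"], "restart_service", {}),
    (lambda s: s["queue_length"] > 100, "scale_workers", {"delta": +2}),
    (lambda s: s["queue_length"] < 10, "scale_workers", {"delta": -1}),
]
```

The point of the pattern is the one the slide makes: the adaptation logic lives in a replaceable policy list rather than in the mechanisms, so self-healing and self-optimizing behaviors can be integrated in a single loop.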

  12. Autonomic Computing Theory • Integrates and advances several fields • Distributed computing • Algorithms and architectures • Artificial intelligence • Models to characterize, predict and mine data and behaviors • Security and reliability • Designs and models of robust systems • Systems and software architecture • Designs and models of components at different IT layers • Control theory • Feedback-based control and estimation • Systems and signal processing theory • System and data models and optimization methods • Requires experimental validation (From S. Dobson et al., ACM Tr. on Autonomous & Adaptive Systems, Vol. 1, No. 2, Dec. 2006.)

  13. Autonomics for Science and Engineering ? • Manage application/information/system complexity • not just hide it! • Enabling new thinking, formulations • how do I think about/formalize my problem differently?

  14. Existing Autonomic Practices in Computational Science (GMAC 09, SOAR 09, with S. Jha and O. Rana) Autonomic tuning of the application Autonomic tuning by the application

  15. Spatial, Temporal and Computational Heterogeneity and Dynamics in SAMR [Figure: temperature and OH-profile snapshots from a structured adaptive mesh refinement (SAMR) simulation of combustion (H2-air mixture; ignition via 3 hot-spots), illustrating spatial and temporal heterogeneity. Courtesy: Sandia National Lab]

  16. Autonomics in SAMR • Tuning by the application • Application level: when and where to refine • Runtime/middleware level: when, where, and how to partition and load balance • Resource level: allocate/de-allocate resources • Tuning of the application and runtime • When/where to refine • Latency-aware ghost synchronization • Heterogeneity/load-aware partitioning and load balancing • Checkpoint frequency • Asynchronous formulations • …
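To make the runtime-level tuning concrete, here is a sketch of a load-aware repartitioning trigger of the kind this slide describes. The imbalance metric, thresholds, and cost model are illustrative assumptions, not the actual SAMR runtime:

```python
# Repartition only when it pays off: the imbalance must be large enough
# that the projected savings over the coming timesteps outweigh the
# one-time cost of repartitioning. All thresholds are assumptions.

def load_imbalance(loads):
    """Max-to-mean ratio of per-processor load; 1.0 means perfectly balanced."""
    mean = sum(loads) / len(loads)
    return max(loads) / mean if mean > 0 else 1.0

def should_repartition(loads, imbalance_threshold=1.25,
                       repartition_cost=5.0, steps_to_amortize=20):
    if load_imbalance(loads) < imbalance_threshold:
        return False
    mean = sum(loads) / len(loads)
    savings_per_step = max(loads) - mean  # time saved per step if flattened
    return savings_per_step * steps_to_amortize > repartition_cost

# e.g., after a regridding step:
per_proc_work = [120.0, 95.0, 180.0, 88.0]
if should_repartition(per_proc_work):
    pass  # invoke the partitioner on the new grid hierarchy
```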

  17. Outline of My Presentation • Computational Ecosystems • Unprecedented opportunities, challenges • Autonomic computing – A pragmatic approach for addressing complexity! • Experiments with autonomics for science and engineering • Concluding Remarks

  18. Autonomics for Science and Engineering – Application-level Examples • Autonomics to address complexity in science and engineering • Autonomics as a paradigm for science and engineering • Some examples: • Autonomic runtime management – multiphysics, adaptive mesh refinement • Autonomic data streaming and in-network data processing – coupled simulations • Autonomic deployment/scheduling – HPC Grid/Cloud integration • Autonomic workflows – simulation-based optimization (Many system-level examples not presented here …)

  19. Coupled Fusion Simulations: A Data Intensive Workflow

  20. Autonomic Data Streaming and In-Transit Processing for Data-Intensive Workflows • Workflow with coupled simulation codes, i.e., the edge turbulence particle-in-cell (PIC) code (GTC) and the microscopic MHD code (M3D), run simultaneously on separate HPC resources • Data is streamed and processed en route – e.g., data from the PIC code is filtered through “noise detection” processes before it can be coupled with the MHD code • Efficient data streaming between live simulations – data should arrive just-in-time: if it arrives too early, time and resources are wasted buffering it, and if it arrives too late, the application wastes resources waiting for the data to come in • Opportunistic use of in-transit resources “A Self-Managing Wide-Area Data Streaming Service,” V. Bhat, M. Parashar, H. Liu, M. Khandekar, N. Kandasamy, S. Klasky, and S. Abdelwahed, Cluster Computing: The Journal of Networks, Software Tools, and Applications, Volume 10, Issue 7, pp. 365-383, December 2007.

  21. Autonomic Data Streaming & In-Transit Processing • Application level: “proactive” management • Proactive QoS management strategies using a model-based LLC controller • Constraints for in-transit processing captured using a slack metric • In-transit level: “reactive” management • Opportunistic data processing using a dynamic in-transit resource overlay • Adaptive run-time management at in-transit nodes, based on the slack metric generated at the application level • Adaptive buffer management and forwarding [Figure: data flows from the coupled simulations through in-transit nodes to the sink; a slack-metric generator and LLC controller at the application level feed budget estimation and slack-metric correctors at the in-transit nodes]
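A rough sketch of how a slack metric can drive the reactive management at an in-transit node: each data item carries a time budget set at the application level, and a node spends at most its share of that budget on optional processing before forwarding. The field names and the proportional slack split are assumptions, not the published design:

```python
from dataclasses import dataclass

@dataclass
class DataItem:
    payload: bytes
    slack_ms: float      # remaining time budget, set by the application level
    hops_to_sink: int    # in-transit nodes remaining before the sink

def process_at_node(item, stages, stage_cost_ms, forward):
    """Run as many optional processing stages as this node's slack share
    permits; unfinished stages are left for downstream nodes."""
    budget = item.slack_ms / max(item.hops_to_sink, 1)  # this node's share
    spent = 0.0
    remaining = list(stages)
    while remaining and spent + stage_cost_ms[remaining[0]] <= budget:
        stage = remaining.pop(0)
        item.payload = stage(item.payload)
        spent += stage_cost_ms[stage]
    item.slack_ms -= spent
    item.hops_to_sink -= 1
    forward(item, remaining)  # hand off the item plus any unprocessed stages
```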

  22. Autonomic Streaming: Implementation/Deployment • Simulations execute on leadership-class machines at ORNL and NERSC; in-transit nodes are located at PPPL and Rutgers University; data consumers and archiving sit at the sink • Services in the simulation workflow: SS = Simulation Service (GTC), ADSS = Autonomic Data Streaming Service, CBMS = LLC-controller-based Buffer Management Service, DTS = Data Transfer Service, DAS = Data Analysis Service (e.g., FFT, sort, scale), SLAMS = Slack Manager Service, PS = Processing Service, BMS = Buffer Management Service, ArchS = Archiving Service (archives data at the sink)

  23. Adaptive Data Transfer • Intervals 1-9, no congestion: data is transferred over the WAN • Intervals 9-19, congestion: the controller recognizes the congestion and advises the Element Manager, which in turn adapts the DTS to transfer data to local storage (LAN) • The adaptation continues until the network is no longer congested; the data sent to local storage by the DTS falls to zero at the 19th controller interval
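The WAN-to-LAN switch described above can be sketched as a threshold rule with hysteresis, so the service does not flap on small bandwidth fluctuations. The bandwidth-ratio test and thresholds are illustrative assumptions:

```python
class AdaptiveTransfer:
    """Choose between WAN transfer and local (LAN) buffering, with
    hysteresis: switch to the LAN below one threshold, but return to
    the WAN only above a higher one. Thresholds are fractions of the
    required streaming rate and are assumptions for illustration."""

    def __init__(self, required_mbps, enter_lan_below=0.9, return_wan_above=1.2):
        self.required = required_mbps
        self.enter_lan_below = enter_lan_below
        self.return_wan_above = return_wan_above
        self.dest = "wan"

    def update(self, measured_wan_mbps):
        ratio = measured_wan_mbps / self.required
        if self.dest == "wan" and ratio < self.enter_lan_below:
            self.dest = "lan_storage"   # congested: buffer to local storage
        elif self.dest == "lan_storage" and ratio > self.return_wan_above:
            self.dest = "wan"           # recovered: resume WAN streaming
        return self.dest

# e.g., once per controller interval:
dts = AdaptiveTransfer(required_mbps=100)
dest = dts.update(measured_wan_mbps=42)   # -> "lan_storage"
```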

  24. Exploring Hybrid HPC-Grid/Cloud Usage Modes [eScience’09] • Production computational infrastructures will be (are) hybrid, integrating HPC Grids and Clouds • What are appropriate usage modes for hybrid infrastructure? • Acceleration • Clouds can be used as accelerators to improve the application time-to-completion • To alleviate the impact of queue wait times • “Strategically offload” appropriate tasks to Cloud resources • All while respecting budget constraints • Conservation • Clouds can be used to conserve HPC Grid allocations, given appropriate runtime and budget constraints • Resilience • Clouds can be used to handle: • General: response to dynamic execution environments • Specific: unanticipated HPC Grid downtime, inadequate allocations, or unexpected queue delays/QoS changes
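A toy placement policy capturing the three usage modes above. The mode names follow the talk, but the cost/time model and thresholds are simplified assumptions, not the CometCloud autonomic scheduler:

```python
def place_task(task_core_minutes, mode, grid_queue_wait_min,
               grid_allocation_left, cloud_budget_left, cloud_cost_per_min):
    """Return "grid", "cloud", or "reject" for a single task."""
    cloud_cost = task_core_minutes * cloud_cost_per_min
    cloud_ok = cloud_budget_left >= cloud_cost
    grid_ok = grid_allocation_left >= task_core_minutes

    if mode == "conservation" and cloud_ok:
        # Preserve the fixed Grid allocation: prefer the cloud while budget lasts.
        return "cloud"
    if mode == "acceleration" and cloud_ok \
            and grid_queue_wait_min > task_core_minutes:
        # Offload when the expected queue wait dominates the task's runtime.
        return "cloud"
    if mode == "resilience" and not grid_ok and cloud_ok:
        # Fall back to the cloud when the Grid allocation disappears.
        return "cloud"

    if grid_ok:
        return "grid"
    return "cloud" if cloud_ok else "reject"
```

All three modes reduce to the same decision loop with different priorities, which is what makes a single autonomic scheduler able to serve them.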

  25. Reservoir Characterization: EnKF-based History Matching (with S. Jha) • Black Oil Reservoir Simulator • Simulates the movement of oil and gas in subsurface formations • Ensemble Kalman Filter • Computes the Kalman gain matrix and updates the model parameters of the ensemble members • Heterogeneous, dynamic workflows • Based on Cactus, PETSc
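For reference, the core of the EnKF analysis step: the Kalman gain is built from ensemble covariances and applied to each member's observation mismatch. This is the textbook perturbed-observation form, not code from the reservoir workflow itself:

```python
import numpy as np

def enkf_update(X, HX, d_obs, R):
    """X:     (n_params, n_ens) ensemble of model parameters
       HX:    (n_obs, n_ens) simulated observations per member
       d_obs: (n_obs,) measured data
       R:     (n_obs, n_obs) observation error covariance"""
    n_ens = X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)      # parameter anomalies
    HXp = HX - HX.mean(axis=1, keepdims=True)   # observation anomalies

    C_xy = Xp @ HXp.T / (n_ens - 1)             # cross-covariance
    C_yy = HXp @ HXp.T / (n_ens - 1)            # observation covariance
    K = C_xy @ np.linalg.solve(C_yy + R, np.eye(R.shape[0]))  # Kalman gain

    # Perturbed observations, one draw per ensemble member
    D = d_obs[:, None] + np.random.multivariate_normal(
        np.zeros(R.shape[0]), R, size=n_ens).T
    return X + K @ (D - HX)                     # updated ensemble
```

The autonomic part of the workflow is deciding where each ensemble member's forward simulation runs; the analysis step itself is a fixed linear-algebra kernel.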

  26. Exploring Hybrid HPC-Grid/Cloud Usage Modes using CometCloud [Figure: the EnKF application drives an adaptivity manager (application adaptivity) and a workflow manager comprising a monitor, analysis, runtime estimator, and autonomic scheduler (infrastructure adaptivity), built on CometCloud; the scheduler pushes tasks and management information to Grid and Cloud agents, which pull tasks for the HPC Grid and Cloud resources]

  27. Objective I: Using Clouds as Accelerators for HPC Grids (2/2) • The TTC and TCC for Objective I with 16 TG CPUs and queuing times set to 5 and 10 minutes • As expected, the more VMs that are made available, the greater the acceleration, i.e., the lower the TTC • The reduction in TTC is roughly linear, but not perfectly so, because of a complex interplay between the tasks in the workload and resource availability

  28. Objective II: Using Clouds for Conserving CPU-Time on the TeraGrid • Explore how to conserve a fixed allocation of CPU hours by offloading tasks that perhaps don’t need the specialized capabilities of the HPC Grid • Distribution of tasks across EC2 and TG, and the resulting TTC and TCC, as the CPU-minute allocation on the TG is increased

  29. Objective III: Response to Changing Operating Conditions (Resilience) (2/4) • Allocation of tasks to TG CPUs and EC2 nodes for usage mode III • As the 16 allocated TG CPUs become unavailable after only 70 minutes, rather than the planned 800 minutes, the bulk of the tasks are completed by EC2 nodes

  30. Objective III: Response to Changing Operating Conditions (Resilience) (3/4) • Number of TG cores and EC2 nodes as a function of time for usage mode III • Note that the TG CPU allocation goes to zero after about 70 minutes, causing the autonomic scheduler to increase the number of EC2 nodes by 8

  31. The Instrumented Oil Field • Production of oil and gas can take advantage of installed sensors that monitor the reservoir’s state as fluids are extracted • Knowledge of the reservoir’s state during production can result in better engineering decisions • Economic evaluation; physical characteristics (bypassed oil, high-pressure zones); production techniques for safe operating conditions in complex and difficult areas • Detect and track changes in data during production • Invert data for reservoir properties • Detect and track reservoir changes • Assimilate data & reservoir properties into the evolving reservoir model • Use simulation and optimization to guide future production and the future data-acquisition strategy “Application of Grid-Enabled Technologies for Solving Optimization Problems in Data-Driven Reservoir Studies,” M. Parashar, H. Klie, U. Catalyurek, T. Kurc, V. Matossian, J. Saltz and M. Wheeler, Future Generation Computer Systems (FGCS): The International Journal of Grid Computing: Theory, Methods and Applications, Elsevier Science Publishers, Vol. 21, Issue 1, pp. 19-26, 2005.

  32. Effective Oil Reservoir Management: Well Placement/Configuration • Why is it important? • Better utilization/cost-effectiveness of existing reservoirs • Minimizing adverse effects on the environment [Figure: bad management leaves much bypassed oil; better management leaves less]

  33. Autonomic Reservoir Management: “Closing the Loop” using Optimization • Improve knowledge of the subsurface to reduce uncertainty; improve the numerical model • Optimize economic revenue, environmental hazard, …, based on the present subsurface knowledge and numerical model • A dynamic decision system coupled with dynamic data-driven assimilation: experimental design/plan optimal data acquisition → acquire remote sensing data → data assimilation/update knowledge of the model → subsurface characterization → management decision → repeat • Built on autonomic Grid middleware, processing middleware, and Grid data management

  34. Autonomic Formulations/Programming

  35. An Autonomic Well Placement/Configuration Workflow [Figure: sensor/context data, history/archived data, and external information (oil prices, weather, etc.) feed a well placement/configuration workflow built on the AutoMate programming system and Grid middleware]

  36. Autonomic Oil Well Placement/Configuration (VFSA) “An Autonomic Reservoir Framework for the Stochastic Optimization of Well Placement,” V. Matossian, M. Parashar, W. Bangerth, H. Klie, M.F. Wheeler, Cluster Computing: The Journal of Networks, Software Tools, and Applications, Kluwer Academic Publishers, Vol. 8, No. 4, pp. 255-269, 2005. “Autonomic Oil Reservoir Optimization on the Grid,” V. Matossian, V. Bhat, M. Parashar, M. Peszynska, M. Sen, P. Stoffa and M. F. Wheeler, Concurrency and Computation: Practice and Experience, John Wiley and Sons, Volume 17, Issue 1, pp. 1-26, 2005.
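VFSA (very fast simulated annealing) is the stochastic optimizer behind this workflow: each iteration, a reservoir simulation scores a candidate well location, and the Cauchy-like VFSA move distribution narrows as the temperature cools. A compact sketch of the generating distribution, cooling schedule, and Metropolis acceptance; the objective here is a stand-in for the black-oil simulator:

```python
import math, random

def vfsa_step(x, T, lo, hi):
    """Draw a new candidate from the VFSA generating distribution."""
    new = []
    for xi, l, h in zip(x, lo, hi):
        u = random.random()
        y = math.copysign(T * ((1 + 1/T) ** abs(2*u - 1) - 1), u - 0.5)
        new.append(min(max(xi + y * (h - l), l), h))  # clamp to bounds
    return new

def vfsa(objective, x0, lo, hi, T0=1.0, c=1.0, iters=500):
    x, fx, best, fbest = x0, objective(x0), x0, objective(x0)
    for k in range(iters):
        T = T0 * math.exp(-c * k ** (1.0 / len(x)))   # VFSA cooling schedule
        cand = vfsa_step(x, T, lo, hi)
        fc = objective(cand)
        # Metropolis acceptance: always accept improvements, sometimes worse moves
        if fc < fx or random.random() < math.exp((fx - fc) / max(T, 1e-12)):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand, fc
    return best, fbest

# e.g., minimize bypassed oil over (x, y) well coordinates on a 100x100 grid,
# where `simulate` would invoke the reservoir simulator:
# best_xy, cost = vfsa(simulate, [50, 50], lo=[0, 0], hi=[100, 100])
```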

  37. Summary • CI and emerging computational ecosystems • Unprecedented opportunity • new thinking, practices in science and engineering • Unprecedented research challenges • scale, complexity, heterogeneity, dynamism, reliability, uncertainty, … • Autonomic Computing can address complexity and uncertainty • Separation + Integration + Automation • Experiments with Autonomics for science and engineering • Autonomic data streaming and in-transit data manipulation, Autonomic Workflows, Autonomic Runtime Management, … • However, there are implications • Added uncertainty • Correctness, predictability, repeatability • Validation • New formulations necessary….

  38. Thank You! Email: parashar@rutgers.edu
