Visions for Data Management and Remote Collaboration on ITER. M. Greenwald, D. Schissel, J. Burruss, T. Fredian, J. Lister, J. Stillerman (MIT, GA, CRPP). Presented by Martin Greenwald, MIT – Plasma Science & Fusion Center. CRPP, Lausanne, 2005.

Presentation Transcript


  1. Visions for Data Management and Remote Collaboration on ITER. M. Greenwald, D. Schissel, J. Burruss, T. Fredian, J. Lister, J. Stillerman (MIT, GA, CRPP). Presented by Martin Greenwald, MIT – Plasma Science & Fusion Center. CRPP, Lausanne, 2005

  2. ITER is Clearly the Next Big Thing in Magnetic Fusion Research • It will be the largest and most expensive scientific instrument ever built for fusion research • Built and operated as an international collaboration • To ensure its scientific productivity, systems for data management and remote collaboration must be done well.

  3. What Challenges Will ITER Present? • Fusion experiments require extensive “near” real-time data visualization and analysis in support of between-shot decision making. • For ITER, shots are: • ~400 seconds each, maybe 1 hour apart • 2,000 per year for 15 years • Average cost per shot is very high (order $1M) • Today, teams of ~30-100 work together closely during operation of experiments. • Real-time remote participation is standard operating procedure.

  4. Challenges: Experimental Fusion Science is a Demanding Real-Time Activity • Run-time goals: • Optimize fusion performance • Ensure conditions are fully documented before moving on • These goals drive the need to assimilate, analyze and visualize a large quantity of data between shots.

  5. Challenge: Long Pulse Length • Requires concurrent writing, reading, analysis • (We don’t think this will be too hard) • Data sets will be larger than they are today • Perhaps 1 TB per shot, > 1 PB per year • (We think this will be manageable when needed) • More challenging – integration across time scales • Data will span a range > 10^9 in significant time scales, from the fluctuation time scale up to the pulse length • Will require efficient tools • To browse very long records • To locate and describe specific events or intervals
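
To make these numbers concrete, a minimal sketch in Python (the figures are taken from the bullets above; the threshold-based interval finder is purely illustrative, not a proposed ITER tool):

    import numpy as np

    # Back-of-envelope numbers taken from the bullets above (assumptions, not ITER specifications).
    PULSE_LENGTH_S = 400.0        # ~400 second shots
    DATA_PER_SHOT_TB = 1.0        # "perhaps 1 TB per shot"
    SHOTS_PER_YEAR = 2000
    print("average rate during a shot: %.1f GB/s" % (DATA_PER_SHOT_TB * 1e3 / PULSE_LENGTH_S))
    print("yearly volume: %.1f PB" % (DATA_PER_SHOT_TB * SHOTS_PER_YEAR / 1e3))

    def find_intervals(signal, times, threshold):
        """Return (start, end) times of intervals where |signal| exceeds threshold --
        the kind of primitive needed to locate and describe events in a very long record."""
        above = np.abs(signal) > threshold
        padded = np.concatenate(([False], above, [False]))
        edges = np.diff(padded.astype(int))
        starts = np.flatnonzero(edges == 1)        # indices where an excursion begins
        ends = np.flatnonzero(edges == -1) - 1     # indices where it ends
        return [(times[s], times[e]) for s, e in zip(starts, ends)]

    t = np.linspace(0.0, PULSE_LENGTH_S, 400_000)                     # coarse stand-in for a long record
    x = np.sin(2 * np.pi * 0.01 * t) + 0.05 * np.random.randn(t.size)
    print(find_intervals(x, t, 0.9)[:3])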

  6. Challenge: Long Life of Project • 10 years construction; 15+ years operation • Systems must adapt to decades of information technology evolution • Software, protocols, hardware will all change • Think back 25 years! • We need to anticipate a complete changeover in the workforce • Backward compatibility must be maintained

  7. Challenges: International, Remote Participation • Scientists will want to participate in experiments from their home institutions dispersed around the world. • View and analyze data during operations • Manage ITER diagnostics • Lead experimental sessions • Participate in international task forces • Collaborations span many administrative domains (more on this later) • Cyber-security must be maintained, plant security must be inviolable.

  10. We Are Beginning the Dialogue About How to Proceed • This is not yet an “official” ITER activity • What follows is our vision for data management and remote participation systems • Opinions expressed here are the authors’ alone.

  11. Strategy: Design, Prototype and Demo • With 10 years before first operation, it is too early to choose specific implementations – software or hardware • Begin now on enduring features • Define requirements, scope of effort, approach • Decide on general principles and features of architecture • Within 2 years: start on prototypes – part of conceptual design • Within 4 years: demonstrate • Test, especially on current facilities • Simulation codes could provide a testbed for long-pulse features • In 6 years: proven implementations expanded and elaborated to meet requirements

  16. General Features • Extensible, flexible, scalable • We won’t be able to predict all future needs • Capable of continuous and incremental improvement • Requires a robust underlying abstraction • Data accessible from a wide range of languages, software frameworks and hardware platforms • The international collaboration will be heterogeneous • Built-in security • Must protect the plant without endangering the science mission

  17. Proposed Top Level Data Architecture [diagram] • Components: data acquisition systems, data acquisition control, a service-oriented API, analysis applications, visualization applications, a main repository, and a relational database • The relational database contains data searchable by their contents • The main repository contains multi-dimensional data indexed by their independent parameters
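
A minimal sketch of how the two stores in this architecture are used differently, with sqlite3 standing in for the relational database and a stub standing in for the repository behind the service-oriented API (all names are invented for illustration):

    import sqlite3

    # Relational side: data searchable by their contents (illustrative schema).
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE shots (shot INTEGER, ip_ma REAL, duration_s REAL)")
    db.execute("INSERT INTO shots VALUES (1001, 15.0, 420.0)")
    matches = db.execute(
        "SELECT shot FROM shots WHERE ip_ma > 10 AND duration_s > 400").fetchall()

    # Repository side: multi-dimensional data indexed by their independent parameters.
    class Repository:
        def fetch(self, shot, name, t_start, t_end):
            """Return (times, values) for a named signal over [t_start, t_end]."""
            raise NotImplementedError   # stands in for the service-oriented API

    # An application combines the two: search by content, then read the indexed data.
    # for (shot,) in matches:
    #     times, values = Repository().fetch(shot, "electron_temperature", 100.0, 200.0)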

  18. Data System – Contents & Structure • Provide a coherent, complete, integrated, self-descriptive view of all data through simple interfaces • All raw, processed and analyzed data, configuration, geometry, calibrations, data acquisition setup, code parameters, labels, comments… • No data in applications or private files • Metadata stored for each data element • Logical relationships and associations among data elements are made explicit by structure (probably multiple hierarchies) • Data structures can be traversed independently of reading data • Powerful data directories (10^5 – 10^6 named items)
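
A minimal sketch in Python of a self-descriptive hierarchy of the kind described on this slide: every element carries metadata, relationships are explicit in the structure, and the structure can be traversed without reading any bulk data (the node names and the data_ref convention are invented for illustration):

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        metadata: dict = field(default_factory=dict)    # units, comments, calibration info, ...
        children: list = field(default_factory=list)    # explicit logical structure
        data_ref: str = ""                              # pointer into bulk storage, not the data itself

        def walk(self, path=""):
            """Traverse the structure without reading any bulk data."""
            full = path + "/" + self.name
            yield full, self.metadata
            for child in self.children:
                yield from child.walk(full)

    root = Node("diagnostics", children=[
        Node("thomson", children=[
            Node("te_profile", {"units": "keV", "comment": "electron temperature profile"},
                 data_ref="bulk-store://thomson/te_profile")])])

    for path, meta in root.walk():
        print(path, meta)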

  24. Data System – Service Oriented Architectures • Service oriented • Loosely coupled applications, running on distributed servers • Interfaces simple and generic, implementation details hidden • Transparency and ease-of-use are crucial • Applications specify what is to be done, not how • Data structures shared • Service discovery supported • We’re already headed in this direction • MDSplus • TRANSP “FusionGrid” service
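
MDSplus, mentioned here, already works this way: a thin network client evaluates expressions on the server, so applications state what they want, not how it is stored. A minimal sketch using the MDSplus Python package (the server, tree, and node names are hypothetical):

    from MDSplus import Connection

    conn = Connection("mdsplus.example.org")     # hypothetical data server
    conn.openTree("iter_proto", 1001)            # hypothetical tree name and shot number
    te = conn.get("\\ELECTRON_TEMPERATURE")      # expression is evaluated server-side
    times = conn.get("dim_of(\\ELECTRON_TEMPERATURE)")
    print(te.data().shape, times.data().shape)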

  25. Resources Accessible Via Network Services • Resources = Computers, Codes, Data, Analysis Routines, Visualization tools, Experimental Status and Operations • Access is stressed rather than portability • Users are shielded from implementation details. • Transparency and ease-of-use are crucial elements • Shared toolset enables collaboration between sites and across sub-disciplines. • Knowledge of relevant physics is still required of course.

  26. Case Study: TRANSP Code – “FusionGrid Service” • Over 20 years of development by PPPL (+ others) • >1,000,000 lines of Fortran, C, C++ • >3,000 program modules • 10,000s of lines of supporting script code: Perl, Python, shell script • Used internationally for most tokamak experiments • Local maintenance has been very manpower-intensive • Now fully integrated with the MDSplus data system • Standard data “trees” developed for MIT, GA, PPPL, JET, … • Standard toolset for run preparation, visualization

  27. [Diagram: production TRANSP system at PPPL, an experimental site, and a user (anywhere); an authorization server may be consulted at each stage]

  28. TRANSP Service Has Had Immediate Payoff • Remote sites avoid costly installation and code maintenance • Was ~1 man-month per year, per site • Users always have access to the latest code version • PPPL maintains and supports a single production version of the code on a well-characterized platform • Improves user support at reduced cost • Users have access to a high-performance production system • 16-processor Linux cluster • Dedicated PBS queue • Tools developed for job submission, cancellation, monitoring
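
The job submission, cancellation and monitoring tools mentioned above can be as simple as thin wrappers around the batch system; a minimal sketch assuming standard PBS commands (qsub/qstat/qdel are real PBS utilities; the queue name and run script are hypothetical, and this is not the actual FusionGrid tooling):

    import subprocess

    QUEUE = "transp"   # hypothetical name of the dedicated PBS queue

    def submit(run_script):
        """Submit a run to the dedicated queue; returns the PBS job id."""
        out = subprocess.run(["qsub", "-q", QUEUE, run_script],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def status(job_id):
        """Ask PBS for the current state of a job."""
        return subprocess.run(["qstat", job_id], capture_output=True, text=True).stdout

    def cancel(job_id):
        """Remove a queued or running job."""
        subprocess.run(["qdel", job_id], check=True)

    # job = submit("run_transp_1001.sh")   # hypothetical run script
    # print(status(job))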

  29. TRANSP Jobs Tracked by Fusion Grid Monitor • Java servlet derived from the GA Data Analysis Monitor • User is presented with a dynamic web display • Sits on top of a relational database – can feed an accounting database • Provides information on the state of jobs, servers, logs, etc.
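
Because the monitor sits on top of a relational database, the same state can be queried directly for displays or accounting; a minimal sketch with sqlite3 standing in for the production database (table and column names are invented for illustration):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE jobs (
        job_id TEXT, code TEXT, username TEXT, server TEXT,
        state TEXT, submitted TEXT)""")
    db.execute("INSERT INTO jobs VALUES ('42', 'TRANSP', 'alice', 'pppl-cluster',"
               " 'RUNNING', '2005-07-01 09:30')")

    # The kind of query the web display (or an accounting report) would issue:
    for row in db.execute(
            "SELECT job_id, username, state FROM jobs WHERE code = 'TRANSP' ORDER BY submitted"):
        print(row)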

  30. Usage Continues to Grow • As of July: 5,800 runs from 10 different experiments

  31. This Is Related To, But Not The Same As “Grid” Computing • Traditional computational grids • Arrays of heterogeneous servers • Machines can arrive and leave • Adaptive discovery – problems find resources • Workload balancing and cycle scavenging • Bandwidth diversity – not all machines are well connected • This model is not especially suited to our problems • Instead, we are aiming to move high-performance distributed computing out onto the wide-area network • We are not focusing on “traditional” grid applications – cycle scavenging and dynamically configured server farms

  32. Putting Distributed Computing Applications out on the Wide Area Network Presents Significant Challenges • Crosses administrative boundaries • Increased concerns and complexity for security model (authentication, authorization, resource management) • Resources not owned by a single project or program • Distributed control of resources by owners is essential • Needs for end-to-end application performance and problem resolution • Resource monitoring, management and troubleshooting are not straightforward • Higher latency challenges network throughput, interactivity • People are not in one place for easy communication

  33. Data Driven Applications • Data driven • All parameters in the database, not embedded in applications • Data structure, relations and associations are themselves data • Processing “tree” maintained as data • Enables “generic” applications • Processing can be sensitive to data relationships and to the position of data within the structure • Scope of applications can grow without modifying code
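
A minimal sketch of the data-driven idea: the generic application below contains no physics parameters and no fixed processing order; both live in the (here simulated) data store, so the scope of the application can grow without touching its code. All names and routines are invented for illustration:

    # The processing "tree" and its parameters are themselves stored as data.
    processing_tree = [
        {"node": "density_profile", "routine": "fit_profile", "params": {"order": 3}},
        {"node": "stored_energy",   "routine": "integrate",   "params": {"dt": 0.001}},
    ]

    def fit_profile(raw, order):
        return "fit(%s, order=%d)" % (raw, order)     # placeholder for a real analysis routine

    def integrate(raw, dt):
        return "integral(%s, dt=%g)" % (raw, dt)

    ROUTINES = {"fit_profile": fit_profile, "integrate": integrate}

    def run_generic(tree, read, write):
        """Generic application: process whatever tree the data store currently holds."""
        for step in tree:
            result = ROUTINES[step["routine"]](read(step["node"]), **step["params"])
            write(step["node"], result)

    store = {}
    run_generic(processing_tree,
                read=lambda node: "raw:" + node,
                write=lambda node, value: store.update({node: value}))
    print(store)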

  34. Data System - Higher Level Organization • All part of database • All indexed into main data repository • High level physics analysis • Scalar and profile databases • Event identification, logging & tracking • Integrated and shared workspaces • Electronic logbook • Summaries and status • Runs • Task groups • Campaigns • Presentations & publications
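
A minimal schema sketch of the kind of higher-level records listed here: scalar summaries, identified events, and logbook entries, all indexed back to shots in the main repository (the table layout is invented for illustration, with sqlite3 standing in for the production database):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE scalar_summary (      -- one row of high-level physics results per shot
        shot INTEGER PRIMARY KEY, ip_ma REAL, b_t REAL, fusion_power_mw REAL);
    CREATE TABLE events (              -- identified events, pointing back into the raw data
        shot INTEGER, name TEXT, t_start REAL, t_end REAL);
    CREATE TABLE logbook (             -- electronic logbook, linked to runs and campaigns
        entry_id INTEGER PRIMARY KEY, shot INTEGER, run TEXT, author TEXT, body TEXT,
        created TEXT DEFAULT CURRENT_TIMESTAMP);
    """)
    db.execute("INSERT INTO logbook (shot, run, author, body) VALUES (1001, 'R-1', 'alice', 'good H-mode')")
    print(db.execute("SELECT author, body FROM logbook WHERE shot = 1001").fetchall())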

  35. Remote Participation: Creating an Environment Which Is Equally Productive for Local and Remote Researchers • Transparent remote access to data • Secure and timely • Real-time info • Machine status • Shot cycle • Data acquisition and analysis monitoring • Announcements • Shared applications • Provision for ad hoc interpersonal communications • Provision for structured communications
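
One simple way to push machine status and shot-cycle information to remote participants is a publish/subscribe channel; the toy sketch below only illustrates the pattern and is not a specific ITER, MDSplus or SIP interface:

    from collections import defaultdict

    class ShotCycleBus:
        """Toy publish/subscribe channel for shot-cycle and status announcements."""
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, event, callback):
            self._subscribers[event].append(callback)

        def publish(self, event, **info):
            for callback in self._subscribers[event]:
                callback(**info)

    bus = ShotCycleBus()
    bus.subscribe("shot_start", lambda shot: print("remote display: shot %d starting" % shot))
    bus.subscribe("analysis_done", lambda shot, node: print("%s ready for shot %d" % (node, shot)))

    # The control system and analysis chain publish; remote tools just listen.
    bus.publish("shot_start", shot=1001)
    bus.publish("analysis_done", shot=1001, node="density_profile")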

  36. Remote is Easy, Distributed is Hard • Informal interactions in the control room are a crucial part of the research • We must extend this into remote and distributed operations • Fully engaging remote participants is challenging • (Fortunately, we already have substantial experience)

  37. Remote Participation: Ad Hoc Communications • Exploit convergence of telecom and internet technologies (e.g. SIP) • Deploy integrated communications • Voice • Video • Messaging • E-mail • Data streaming • Advanced directory services • Identification, location, scheduling • “Presence” • Support for “roles”

  38. Cyber-Security Needs to Be Built In • Must protect plant without endangering science mission • Employ best features of identity-based, application and perimeter security models • Strong authentication mechanisms • Single sign-on – a must if there are many distributed services • Distributed authorization and resource management • Allows stakeholders to control their own resources. • Facility owners can protect computers, data and experiments • Code developers can control intellectual property • Fair use of shared resources can be demonstrated and controlled.
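
A minimal sketch of the separation argued for on this slide: authentication establishes a single identity once, while authorization rules stay under the control of each resource owner. This is purely illustrative; the production FusionGrid security mechanisms are not reproduced here and all names are invented:

    # Each stakeholder (facility, code owner, project) maintains its own authorization table.
    AUTHORIZATION = {
        "transp_service": {"alice": {"submit", "cancel"}, "bob": {"submit"}},
        "raw_data_store": {"alice": {"read"}},
    }

    def authorize(identity, resource, action):
        """Single sign-on supplies 'identity' once; owners decide per-resource rights."""
        return action in AUTHORIZATION.get(resource, {}).get(identity, set())

    assert authorize("alice", "transp_service", "submit")
    assert not authorize("alice", "raw_data_store", "write")   # never granted by the owner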

  43. Summary • While ITER operation is many years in the future, work on the systems for data management and remote participation should begin now. We propose: • All data into a single, coherent, self-descriptive structure • Service-oriented access • All applications data driven • Remote participation fully supported • Transparent, secure, timely remote data access • Support for ad hoc interpersonal communications • Shared applications enabled
