
The Search for Exotic Mesons – The Critical Role of Computing in Hall D






Presentation Transcript


  1. The Search for Exotic Mesons – The Critical Role of Computing in Hall D L. Dennis, FSU

  2. Hall D Collaboration Map

  3. Production of Mesons and Gluonic Excitations Using 6-12 GeV Photons • Fundamental physics: the role of “glue” in strong QCD. • Experimental goal: unambiguous identification of gluonic excitations, starting with exotic hybrids. • Experimental requirements: hybrids are expected to exist precisely where we have almost no experimental information – photoproduction – which requires 6–12 GeV photon beam energies.

  4. Formation of Flux Tubes

  5. Hybrids

  6. Looking for Hybrids • S = 1: use a probe with the quark spins aligned – the photon – where we have essentially no data. • S = 0: pion and kaon probes, where most of our data exist. • We should observe exotic hybrids precisely where we have no data: PHOTOPRODUCTION.

  7. Predicted Meson Spectrum (Meson Map) • Predictions for exotic mesons come from lattice QCD and flux-tube models. • In the flux-tube picture, the gluons in hadrons are confined to flux tubes: conventional mesons arise when the flux tube is in its ground state, and hybrid mesons arise when the flux tube is in an excited state.
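As a reminder of what “exotic” means here (standard quantum-number counting, not spelled out on the slide): a conventional quark–antiquark meson with orbital angular momentum L and total quark spin S can only reach certain J^PC combinations, so a state observed with one of the forbidden combinations must contain something beyond a quark–antiquark pair – for example, excited glue.

```latex
P = (-1)^{L+1}, \qquad C = (-1)^{L+S}
\;\;\Longrightarrow\;\;
J^{PC}_{\mathrm{exotic}} = 0^{--},\ 0^{+-},\ 1^{-+},\ 2^{+-},\ \dots
```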

  8. CEBAF provides us with a tremendous scientific opportunity for understanding one of the fundamental forces of nature. Hall D Online Data Acquisition: 900 MB/s off the detector, reduced to 75 MB/s by the trigger.

  9. Critical Role for Computing in Hall D. The quality of Hall D science depends critically upon the collaboration’s ability to conduct its computing tasks.

  10. The Challenge • Minimize the effort required to perform computing • Data Intensive Application • Compute Intensive Applications • Information Intensive Analysis • Research Application – methods and algorithms are not fully defined.

  11. Trigger Rates for Hall D • Detector rate: 180 k events/s. • Trigger output: 15 k events/s at 5 kB/event = 75 MB/s. • The trigger requires ~100 CPUs*. • Full reconstruction: 5 CPU-ms/event (full CLAS reconstruction takes 50 ms/event today). • Full simulation: 100 CPU-ms/event (full CLAS simulation takes 1–3 s/event today). • Assumed detector & accelerator efficiency: 1/3. *Assumes a factor of 10 improvement over existing CPUs.

  12. Required Sustained Reconstruction Rate: [15 k events/s (raw rate)] × [1/3 (equipment duty factor)] × [2 (duplication factor)] = 10 k events/s. 10 k events/s × 5 CPU-ms/event = 50 CPUs.

  13. Required Sustained Simulation Rate: [15 k events/s (raw rate)] × [1/3 (equipment duty factor)] × [10 (systematics studies)] × [1/10 (good-event fraction)] = 5 k events/s. 5 k events/s × 100 CPU-ms/event = 500 CPUs. The PWA error is determined by one’s knowledge of the systematic errors; this requires extensive simulations, but not all simulated events are accepted events.
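The two CPU counts above are straight products of the slides’ numbers. A minimal sketch (class and variable names are mine; the inputs are taken from slides 12–13) just repeats the arithmetic:

```java
// Back-of-envelope Hall D CPU estimates, repeating the arithmetic of slides 12-13.
public class HallDCpuEstimate {
    public static void main(String[] args) {
        double rawRate = 15_000;            // triggered events/s
        double dutyFactor = 1.0 / 3.0;      // assumed detector & accelerator efficiency

        // Reconstruction: duplication factor 2, 5 CPU-ms per event
        double recoRate = rawRate * dutyFactor * 2.0;        // 10,000 events/s
        double recoCpus = recoRate * 0.005;                   // 5 CPU-ms/event -> 50 CPUs

        // Simulation: x10 for systematics studies, x1/10 good-event fraction, 100 CPU-ms/event
        double simRate = rawRate * dutyFactor * 10.0 * 0.1;   // 5,000 events/s
        double simCpus = simRate * 0.100;                      // 100 CPU-ms/event -> 500 CPUs

        System.out.printf("Reconstruction: %.0f CPUs, Simulation: %.0f CPUs%n", recoCpus, simCpus);
    }
}
```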

  14. Annual Data Rate to Archive • Raw data: 75 MB/s × (3 × 10⁷ s/yr) × (1/3) = 0.75 PB/yr. • Simulation data: 25 MB/s × (3 × 10⁷ s/yr) = 0.75 PB/yr. • Reconstructed data: 50 MB/s × (3 × 10⁷ s/yr) = 1.50 PB/yr. • Total rate to archive ≈ 3 PB/yr.
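The archive volume follows the same style of estimate; this sketch (names mine, numbers the slide’s) reproduces it:

```java
// Annual Hall D archive volume, repeating the arithmetic of slide 14.
public class HallDArchiveEstimate {
    public static void main(String[] args) {
        final double secondsPerYear = 3e7;      // assumed running time per year
        final double mbPerPB = 1e9;             // 1 PB = 1e9 MB

        double rawPB  = 75.0 * secondsPerYear * (1.0 / 3.0) / mbPerPB;  // 0.75 PB/yr
        double simPB  = 25.0 * secondsPerYear / mbPerPB;                // 0.75 PB/yr
        double recoPB = 50.0 * secondsPerYear / mbPerPB;                // 1.50 PB/yr

        System.out.printf("Total to archive: %.1f PB/yr%n", rawPB + simPB + recoPB); // ~3 PB/yr
    }
}
```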

  15. Requirements Summary

  16. Annual Data Rates

  17. CPU Requirements

  18. Hall D Computing Tasks: planning, acquisition, monitoring, slow controls, calibrations, first-pass analysis, data archival, data mining, simulation, partial wave analysis, physics analysis, publication.

  19. Initial Estimate of Software Tasks & Timeline

  20. Meeting the Hall D Computational Challenges • Moore’s Law: computer performance increases by a factor of 2 every 18 months. • Gilder’s Law: network bandwidth triples every 12 months. • Dennis’ Law: neither Moore’s Law nor Gilder’s Law will solve our computing problems. Solving the information management problems requires people working on the software and developing a workable computing environment.

  21. “Chaos of Analysis” • Problem: It is impossible to efficiently complete our computing in a single large, common, democratic computer facility. • Solution: Provide several sites with the resources required to complete specific tasks. Choose those sites which seek to become lead institutions in specific efforts, such as simulations, calibrations or partial wave analysis.

  22. Hall D Grid

  23. Grid Computing Advantages • Common access for physicists everywhere. • Utilizing all intellectual resources: JLab, universities and remote sites; scientists and students. • Maximize total funding resources while meeting the total computing need. • Reduce system complexity: partitioning of facility tasks to manage and focus resources. • Optimization of computing resources to solve the problem: a tier-n or “Grid” model. • Reduce long-term computational management problems.

  24. Hall D Offline Data Flow

  25. Digital Hall D Ground Rules • Distributed objects: define all programs and data as objects. • Define (or wrap) everything in XML. • Implement in the object model du jour (CORBA, Java, COM, SOAP, …). • This does not require that we use an object database, or that we use relational databases inappropriately. • Move and query metadata rather than data whenever possible. • Move the applications to the data. • Assume everybody has wireless access to the “Digital Hall D” through hand-held and conventional computers.
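A minimal sketch of what “wrap everything in XML” and “move and query metadata rather than data” could look like in practice – the element and attribute names below are invented for the example, not a defined Hall D schema:

```java
// Illustrative only: wrapping a large data file in a small XML metadata record,
// so the grid can move and query the metadata instead of the data itself.
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.*;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class MetadataWrapper {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                                             .newDocumentBuilder()
                                             .newDocument();
        Element obj = doc.createElement("halldObject");         // hypothetical schema
        obj.setAttribute("type", "rawDataFile");
        obj.setAttribute("run", "1042");
        obj.setAttribute("sizeBytes", "2147483648");
        obj.setAttribute("location", "jlab:/mss/halld/raw/run1042.evt"); // placeholder path
        doc.appendChild(obj);

        // Serialize the metadata record; only this small document travels between sites.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes");
        t.transform(new DOMSource(doc), new StreamResult(System.out));
    }
}
```

The multi-gigabyte file the record describes stays put; applications are moved to it only when the metadata query says it is actually needed.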

  26. Digital Hall D Technologies: the Hall D Grid • Globus provides the infrastructure to access computer resources around the world – the Hall D Grid. • Structure access to Digital Hall D as a portal – myHallD.org. • Use a multi-tier software architecture separating resources, servers/brokers, display engines and display devices. • Do not write any HTML – use XML and convert. • Program in C++ or Java.
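“Use XML and convert” can be done with a stylesheet transform. A minimal sketch using the standard Java XSLT API, with placeholder file names (the stylesheet and record are assumptions for the example):

```java
// Render an XML record to HTML with an XSLT stylesheet, so no HTML is hand-written.
import javax.xml.transform.*;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.File;

public class XmlToHtml {
    public static void main(String[] args) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("runSummary.xsl"))); // stylesheet
        t.transform(new StreamSource(new File("runSummary.xml")),              // data record
                    new StreamResult(new File("runSummary.html")));            // display tier
    }
}
```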

  27. Hall D Grid

  28. Vision for the Grid Environment • Work toward a Grid-based operating system. • A standard toolkit for manipulating objects – for example: copy, find, create, delete, … • Standards for developing additional complex Grid-based tools – for example, a tool that builds an acceptance function from available GEANT simulations whose results are stored in several locations. • Tools to share intermediate results of large computations. Many of these tools exist; what remains is to select the appropriate ones and wrap them in standardized interfaces so they can work with Hall D objects (a sketch of such an interface follows).
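The sketch below shows the kind of standardized interface meant here: a small set of object operations behind which existing grid tools could be wrapped. The interface and method names are illustrative only, not an agreed Hall D API:

```java
// Hypothetical facade over existing grid tools, expressed as object operations.
import java.util.List;

public interface HallDObjectStore {
    String create(String type, byte[] content);           // register a new object, return its id
    byte[] read(String objectId);                          // fetch an object's content
    void copy(String objectId, String destinationSite);    // e.g. stage data to where the application runs
    List<String> find(String metadataQuery);               // query metadata, not data
    void delete(String objectId);
}
```

A tool such as the acceptance-function builder mentioned above could then be composed from find (locate the relevant GEANT results at all sites) plus read, without caring which underlying grid service serves each file.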

  29. Foundations for Grid Sites: interactive services, batch services, grid services, compute services, data services, information services. These need very reliable, easy-to-install software at remote sites, and very reliable hardware & software at remote sites.

  30. Hall D Grid

  31. [Portal screenshot: Logout, Select and Configure controls.]

  32. Hierarchy of Portals and Their Technology • Portal-building tools and frameworks: XUL, Ninja, iPlanet, E-Speak, Portlets, WebSphere, www.desktop.com. • Generic services: collaboration, universal access, security, information services, databases, … • Generic portals: user customization, component libraries, fixed channels. • On top of these sit enterprise portals, education & training portals (K-12, university) and science portals (chemistry, engineering, biology), adding education and compute services.

  33. Collaborative Objects • Digital objects shared by more than one person. • Asynchronous sharing: you create/modify an object; others access/modify it at a later time. • Synchronous collaboration: real-time access/modification of objects by several people in distributed locations.
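One way to picture the asynchronous/synchronous distinction, as a hedged sketch (all names invented): the same object keeps a version history for later readers and pushes changes to connected listeners in real time.

```java
// Illustrative collaborative object: versions for asynchronous sharing,
// listener notification for synchronous collaboration.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class CollaborativeObject<T> {
    private final List<T> versions = new ArrayList<>();            // history for later readers
    private final List<Consumer<T>> listeners = new ArrayList<>(); // currently connected collaborators

    public synchronized void update(T newValue) {
        versions.add(newValue);                     // asynchronous: others pick it up later
        listeners.forEach(l -> l.accept(newValue)); // synchronous: push to connected peers now
    }

    public synchronized T latest() { return versions.get(versions.size() - 1); }

    public synchronized void onChange(Consumer<T> listener) { listeners.add(listener); }
}
```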

  34. Virtual Experimental Control Room • Could be a big win as (unexpected) real-time decisions need “experts-on-demand.” • Model being considered by NASA for remote spacecraft mission control and real-time scientific analysis of earthquakes. • Need collaborative decision making (vote?) and planning tools. • Needs shared streaming data and shared read-outs of experimental monitors (output of all devices must be distributed objects which can be shared). • Needs to support experts caught on the beach with poor connectivity or in their car with just a cell phone and a PDA.

  35. Building Computer Science & Physics Teams for Computing System Development. Physicists and computer scientists are each driven by their own funding ($’s), prestige and tradition; both must be brought together to build the computing environment we need to be successful.

  36. Conclusions • Hall D provides tremendous opportunities for new physics. • Requires unprecedented computing. • Grid and portal technology provide a unique new method of involving distributed intellectual resources in this important problem. • The resources required to create those solutions are not yet in place.

  37. Collaboration Computing Organization • Attracting physicists to work on software is difficult. • Perceived importance is based on capital “$’s” spent: accelerator → detector → computing. • Once it works, they have nothing they can show to their dean and say, “I built that!” • “Everyone” thinks it is easy. • One good way to have a really positive impact on the science. • Helps train and attract students for a variety of careers.

  38. Collaboration Computing Organization • Attracting computer scientists to work on physics software is difficult. • Perceived importance is based on computer science research, not computer science applications. • Physics publications don’t help computer scientists get tenure. • “Everyone” thinks it is easy. • A good way to actually test computer science theory. • Science requires experimental testing to progress. • Real world training ground for students.

  39. Obtaining Optimum System Performance. [Tape-staging diagram showing the start-up, equilibrium and shut-down phases of job scheduling.] With 2 tape drives and a 4/1 ratio of processing to I/O per tape, 1.2 TBytes of disk are required.

  40. Estimated System Efficiency

  41. Efficient Information Access is Key to Using the Hall D Grid. Hall D experimental information comes from data acquisition (raw data and experimental conditions), simulations, calibrations, data reduction, physics analysis, PWA, and information from researchers.

  42. Focus: Accurate, Timely Analysis • Provide people with the information and resources they need to conduct their analysis. • Provide it reliably. • Provide it in the way scientists need it. • Provide it efficiently (speed, effort). • Provide flexibility for other applications.

  43. Hall D Portal: MyHallD • What’s Involved in MyHallD? • Probably needs some money, but < $30.9442 M, • Commitment to use the “HallD Digital Object Framework”. • Basic functions are available in existing commercial systems. • Start to use these. • Prototype some of the special capabilities needed. • What is involved in making HallD objects collaborative? • First use objects! • Then we have choices – which vary in ease of use and functionality.

  44. MyHallD: The Portal Door to: • Experiment Control Room • Simulation Farms & Data • Calibration Farm & Data • Reconstruction Farm • Analysis Farms & Data • Board Room & Archive • Personalized Electronic Logbook • Hall D Education and Outreach Area

  45. Collaborative Computing Organization • Clearly establishes responsibility for software subsystems. • Gives University groups working on software something to show for their efforts. • Helps to attract people and resources to the computing efforts. • Can leverage other University and National resources. • Infrastructure, personnel, funding, NSF & DOE ITR initiatives. • Eases the creation of customized (Grid) computing systems. • Establishes new capabilities within the JLab/NP community. • These capabilities allow JLab to take advantage of new opportunities.

  46. Critical Software Issues • Early creation of a “core group” of software developers. • Creation of key design elements. • Commitment to key design goals. • Key Software Problems. • Simulations. • Software organization and management. • Data formats for raw and derived data. • Software for defining and accessing raw and derived data. • Event visualization. • Using available software. • Developing & maintaining high-quality software.

  47. Computing Organization Issues • Recommendations. • Online database – rely totally on automated methods. • Offline database – rely totally on automated methods. • Integrated online/offline/simulation database. • Event Analysis – do it at Jefferson Lab. • Calibrations – possible to do elsewhere. • Physics Analysis – possible to do elsewhere. • Simulations – possible to do elsewhere. • PWA – possible to do elsewhere.

  48. Computing Organization Issues (continued) • Recommendations. • Develop infrastructure to easily share computing resources and information. • Develop customized computing approach to Hall D computing. • Provides clear lines of responsibility for software and computing tasks. • These are social decisions – not technical or financial decisions.

  49. Collaboration Computing Organization • The job is too big to be managed without databases. • Databases provide wider access to experimental information. • Databases are optimized for managing large data sets; we will create 5–10 M files every year. • Database use can be organized to minimize its impact on time-critical applications.
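To make the scale argument concrete, here is a minimal, hypothetical file-catalog table; the schema is only an illustration of the kind of bookkeeping needed for 5–10 million files per year, not a Hall D design:

```java
// Illustrative file-catalog schema; the Connection comes from whatever
// database the site provides.
import java.sql.Connection;
import java.sql.Statement;

public class FileCatalog {
    public static void createSchema(Connection c) throws Exception {
        try (Statement s = c.createStatement()) {
            s.executeUpdate("CREATE TABLE file_catalog ("
                    + " file_id BIGINT PRIMARY KEY,"
                    + " run_number INT,"
                    + " data_type VARCHAR(16),"   // raw, reconstructed, simulated
                    + " site VARCHAR(64),"        // where the file currently lives
                    + " path VARCHAR(512))");
        }
    }
}
```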

  50. [Experiments database schema diagram: Run, Detector Configuration, Calibration, Simulation and Analysis entities linked by one-to-many (1/M) and many-to-many (M/M) relationships.]
