
Prospects on Texas High Energy Physics Network Needs



Presentation Transcript


  1. Prospects on Texas High Energy Physics Network Needs. Jae Yu, Univ. of Texas, Arlington. LEARN Strategy Meeting, University of Texas at El Paso, Dec. 9, 2004

  2. Outline • High Energy Physics • The challenges • HEP in Texas • Network Needs for HEP • Conclusions

  3. High Energy Physics • Definition: the field of physics that pursues the fundamental constituents of matter and the basic principles of the interactions between them → How was the universe created, and how does it work? • Uses large particle accelerators • Uses large particle detectors

  4. Fermilab Tevatron (Chicago) [ring diagram with the CDF and DØ detectors on the p and p̄ beams] • World's highest-energy proton-antiproton collider • Ecm = 1.96 TeV (= 6.3x10^-7 J/p → 13 MJ on 10^-4 m^2) • Equivalent to the kinetic energy of a 20 t truck at 80 mi/hr
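As a quick sanity check on the slide's comparison, a short Python sketch using the slide's own figures (the only inputs are the 20 t mass, taken as metric tons, and the 80 mi/hr speed) reproduces the ~13 MJ kinetic energy:

```python
# Sanity check of slide 4's energy comparison, using the slide's own numbers.
m_truck = 20_000.0                 # kg: a 20 t truck (assuming metric tons)
v_truck = 80 * 1609.344 / 3600     # 80 mi/hr converted to m/s (~35.8 m/s)

ke_truck = 0.5 * m_truck * v_truck**2
print(f"Truck kinetic energy: {ke_truck/1e6:.1f} MJ")  # ~12.8 MJ, i.e. ~13 MJ

# Implied number of stored particles at the slide's 6.3e-7 J per particle
n_particles = ke_truck / 6.3e-7
print(f"Implied stored particles: {n_particles:.1e}")  # ~2e13
```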

  5. High Energy Physics • Definition: the field of physics that pursues the fundamental constituents of matter and the basic principles of the interactions between them → How was the universe created, and how does it work? • Uses large particle accelerators • Uses large particle detectors • Large, distributed collaborations • ~600 per experiment for currently operating experiments • ~2000 per experiment for future experiments • The WWW grew out of HEP to expedite communication between collaborators

  6. Typical HEP Collaboration at Present: ~700 collaborators, ~80 institutions, 18 countries

  7. Large Hadron Collider (LHC), CERN, Geneva: 2007 start • pp, √s = 14 TeV, L = 10^34 cm^-2 s^-1 • 27 km tunnel spanning Switzerland and France • Experiments: ATLAS, CMS, TOTEM, ALICE (heavy ions), LHCb (B physics) • 5000+ physicists, 250+ institutes, 60+ countries • First beams: Summer 2007; physics runs from Fall 2007 (H. Newman)
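To see why these machine parameters imply extreme data rates, a rough estimate follows; the ~100 mb inelastic pp cross-section is a standard figure assumed here, not one given on the slide:

```python
# Rough LHC interaction-rate estimate from the slide's design luminosity.
L_design = 1e34        # cm^-2 s^-1, design luminosity from the slide
sigma_inel = 1e-25     # cm^2 (~100 mb inelastic pp cross-section; assumed)

rate = L_design * sigma_inel
print(f"~{rate:.0e} pp interactions per second")  # ~1e9 per second
```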

  8. High Energy Physics • Definition: the field of physics that pursues the fundamental constituents of matter and the basic principles of the interactions between them → How was the universe created, and how does it work? • Uses large particle accelerators • Uses large particle detectors • Large, distributed collaborations • The WWW grew out of HEP to expedite communication between collaborators • ~600 per experiment for currently operating experiments • ~2000 per experiment for future experiments • Multi-petabyte data volumes • Present experiments: ~10 PB by 2008 • Future experiments: tens of PB by 2008 and ~exabytes by 2015 • HEP shares many of these challenges with other fields • Grid computing has been adopted to address these challenges

  9. LHC Data Grid Hierarchy (H. Newman) • CERN/outside resource ratio ~1:2; Tier0 : ΣTier1 : ΣTier2 ~1:1:1 • Online system → Tier 0+1 (CERN Center: PBs of disk, tape robot) at ~100-1500 MB/s, from an experiment raw rate of ~PB/s • Tier 0/1 → Tier 1 centers (FNAL, IN2P3, INFN, RAL) at 10-40 Gbps • Tier 1 → Tier 2 centers at ~10 Gbps • Tier 2 → Tier 3 (institutes) at ~1-10 Gbps • Tier 3 → Tier 4 (workstations), physics data cache at 1-10 Gbps • Tens of petabytes by 2007-8; an exabyte ~5-7 years later
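The tier bandwidths can be put in perspective with a small bulk-transfer-time sketch (decimal units assumed throughout; a real transfer would also lose some fraction to protocol overhead, which the efficiency parameter stands in for):

```python
# How long bulk data movement takes at the tier link speeds quoted above.
def transfer_days(n_bytes: float, link_gbps: float, efficiency: float = 1.0) -> float:
    """Days to move n_bytes over a link of link_gbps (decimal Gb/s)."""
    seconds = n_bytes * 8 / (link_gbps * 1e9 * efficiency)
    return seconds / 86400

PB = 1e15  # one petabyte, decimal
for gbps in (1, 10, 40):
    print(f"1 PB over {gbps:>2} Gbps: {transfer_days(PB, gbps):5.1f} days")
# 1 Gbps: ~92.6 days; 10 Gbps: ~9.3 days; 40 Gbps: ~2.3 days
```

At petabyte scale, even a dedicated 10 Gbps link takes over a week per petabyte, which is why the hierarchy pushes data toward many tiered centers in parallel.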

  10. Grid3 Deployment Map (Jan. 2004 and Sep. 2004 snapshots) • 30 sites, multi-VO, shared resources • ~3000 CPUs • Over 100 users • ~100% utilization, with continued increase • GriPhyN → Grid3 → OSG (A. Sill)

  11. High Energy Physics in Texas • 12 Universities in Texas are involved in HEP • UTA, UT, UH, Rice, TTU, TAMU, UTB, UTEP, SMU, UTD, UTPA, and PVAMU • Many different research facilities used • US: Fermi National Accelerator Laboratory, Jefferson Lab, Brookhaven National Lab, SLAC and Cornell • Europe: CERN in Switzerland and DESY in Germany • Asia: KEK in Japan and BES in China • Natural sources of particle beams • Sizable community, variety of experiments and needs

  12. Universities in Texas with Active HEP Programs

  13. HEP Experiment Involvements and Activities • Presently operating experiments • DØ: UTA, Rice, SMU • UTA has the only DØ Regional Analysis Center • JY at UTA playing a leadership role in DØ grid computing • CDF: TTU, TAMU, Baylor • A. Sill from TTU is the CDF grid computing coordinator • BaBar: UH, UTD • MINOS: UT • LHC experiments • ATLAS: UTA, SMU, UTD, UTPA • K. De at UTA a grid computing leader • UTA a candidate for a Tier 2 center • CMS: Rice, TTU • ALICE: UH • L. Pinsky at UH the computing coordinator of ALICE • UH is competing for a Tier 1 center for ALICE • LHCb: UT • Other current experiments • STAR (Rice, TAMU) • Belle (UT) • Beyond the next decade: Linear Collider • Texas HEP grid: THEGrid as part of HiPCAT

  14. UTA – RAC (DPCC) • 84 P4 Xeon 2.4 GHz CPUs = 202 GHz • 7.5 TB of disk space • 100 P4 Xeon 2.6 GHz CPUs = 260 GHz • 64 TB of disk space • Total CPU: 300k SI2000 • Total disk: 73 TB • Total memory: 168 GB • Network bandwidth: 68 Gb/s
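A trivial sketch cross-checks the itemized hardware against the quoted totals; the itemized disk sums to 71.5 TB, slightly under the quoted 73 TB total, so the total presumably includes storage not itemized on the slide:

```python
# Cross-checking the DPCC aggregate figures from the itemized hardware.
clusters = [
    {"cpus": 84,  "ghz": 2.4, "disk_tb": 7.5},   # first P4 Xeon cluster
    {"cpus": 100, "ghz": 2.6, "disk_tb": 64.0},  # second P4 Xeon cluster
]

total_ghz = sum(c["cpus"] * c["ghz"] for c in clusters)
total_disk = sum(c["disk_tb"] for c in clusters)

print(f"Aggregate CPU: {total_ghz:.1f} GHz")   # 461.6 GHz (= 202 + 260)
print(f"Itemized disk: {total_disk:.1f} TB")   # 71.5 TB vs the quoted 73 TB
```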

  15. [Plots] DØ and ATLAS production with DPCC online; network bandwidth usage at UTA

  16. Network Needs for HEP • For current experiments • DØ Regional Analysis Center resources: 200k SI2000 (UTA) → 125 Mbit/s peak and 30 Mbit/s average • Other experiments → 155 Mbit/s average • For future experiments • Anticipated needs to support all experiments in 2008 → optimal average bandwidth 622 Mbit/s • Additional needs if large hubs are located in Texas • Anticipated future ATLAS Tier 2 resources • 2005: 300k SI2000 • 2008 and beyond: 3000k SI2000 • An ALICE Tier 1 would add as much again as ATLAS • Optimal average: 1-2 Gbit/s
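One way to read these projections is as a roughly linear scaling of average bandwidth with compute capacity. The sketch below makes that assumption explicit (it is an interpretation, not stated on the slide) and anchors it to the DØ RAC figures above:

```python
# Hypothetical linear model: average bandwidth scales with SI2000 capacity,
# anchored to the DØ RAC figures (200k SI2000 -> ~30 Mbit/s average).
REF_SI2000 = 200_000
REF_AVG_MBPS = 30.0

def avg_mbps(si2000: float) -> float:
    """Projected average bandwidth under the linear-scaling assumption."""
    return REF_AVG_MBPS * si2000 / REF_SI2000

for label, capacity in [("ATLAS Tier 2 (2005)", 300_000),
                        ("ATLAS Tier 2 (2008+)", 3_000_000)]:
    print(f"{label}: ~{avg_mbps(capacity):,.0f} Mbit/s average")
# 2008+: ~450 Mbit/s average; doubled for a comparable ALICE Tier 1 and
# allowing for peak-to-average ratios like DØ's (125/30), this lands in
# the slide's 1-2 Gbit/s optimal-average range.
```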

  17. Conclusions • The Texas HEP community plays leadership roles in virtually all present and future experiments • Grid computing, born of necessity, promotes interdisciplinary research activities • Provides good, high-level workforce training • Attracts external funding • High bandwidth is a key piece of infrastructure for maintaining leadership in HEP and computing • LEARN's planned network, and its expeditious implementation to support HEP activities, is critical to this endeavor
