
Grid Applications for the ATLAS LHC Experiment & Beyond



  1. Grid Applications for the ATLAS LHC Experiment & Beyond

ATLAS is a general-purpose particle physics experiment that will study topics including the origin of mass, the processes that allowed an excess of matter over antimatter in the universe, and evidence for Supersymmetry and other new physics, including even micro black hole production! The experiment is being constructed by some 16000 scientists in ~150 institutes on 6 continents, and will be located at the 27 km circumference Large Hadron Collider at CERN in Geneva.

[Figure: The ATLAS detector]

Despite highly efficient filters acting on the raw data read by the detector, the 'good' events will still correspond to several Petabytes of data per year, which will require millions of SpecInt2k of processing power to process and analyse. Even now, to design the detector and to understand the physics, many millions of simulated events also have to be produced.

[Figure: A simulated micro black hole decay in the ATLAS detector]

Only a Grid can satisfy our requirements. ATLAS is a global collaboration with Grid testbeds already deployed worldwide. While building on generic middleware, we are required to develop several components, which may be reusable elsewhere. We are also required to build tools that can run tasks in a coherent way across several Grid deployments. These are being exercised and developed in the context of Data Challenges of increasing size and complexity. The first of these was performed at ~50 sites in 24 countries on 6 continents; it was a proof of principle for Grid-based production. The next Data Challenge will make more extensive use of Grid tools and extend Grid use to non-scheduled analysis activity. This will be done using 3 regional Grids on 2 continents.

[Figures: US ATLAS Testbed; EU DataGrid Testbed]
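The data volumes above can be made concrete with a back-of-the-envelope estimate. The event rate, event size and operating time below are illustrative assumptions (typical orders of magnitude for an LHC experiment), not figures taken from the slide:

```python
# Rough estimate of annual data volume from recorded ('good') events.
# All input figures are illustrative assumptions, not from the slide.
event_rate_hz = 200          # assumed event rate surviving the filters
event_size_mb = 1.6          # assumed size of one recorded event, in MB
seconds_per_year = 1e7       # a typical accelerator operating year

events_per_year = event_rate_hz * seconds_per_year
volume_pb = events_per_year * event_size_mb / 1e9   # MB -> PB

print(f"{events_per_year:.1e} events/year, ~{volume_pb:.1f} PB/year")
```

With these assumed inputs the estimate comes out at a few Petabytes per year, consistent with the "several Petabytes of data per year" quoted above.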

  2. [Diagram: GANGA/Grappa GUI connecting Grid services, JobOptions, Virtual Data, Algorithms, Histograms, Monitoring and Results to the Athena/GAUDI application]

The GANGA/GRAPPA project is working to produce an interface between the user, the Grid middleware and the experimental software framework. It is being developed jointly with the LHCb experiment, and because it uses component technologies it will allow reuse elsewhere.

The large number of Grid sites requires automated and scalable installation tools. Coherent rpms and tar files are created from CMT, exposing the package dependencies as PACMAN cache files. PACMAN can pull or push complete installations to remote sites. Scripts have been developed making the process semi-automatic.

• ATLAS UK integrates the EDG/LCG middleware. After resolving identified problems, a recent UK mini-production achieved a per-job success rate higher than 90%. Even analysis can now be run this way.
• The integration of the LCG-2 release is underway.
• User input has proven essential.
• Jobs must make few assumptions about the system.
• Configuration and integration takes as long as writing the middleware.

The ATLAS Distributed Analysis system supports distributed users, data and processing. This includes Monte Carlo production, data reconstruction and the extraction of summary data. A prototype system has been created based on the GANGA user interface and the ATLAS production and DIAL job management systems. It will incorporate ARDA middleware when it becomes available. In addition to the usual identification of middleware services, the system is further decomposed into client and high-level service layers. A key component of the system is the Analysis Job Definition Language (AJDL), which specifies the types of objects used to define the interfaces for the high-level services.
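The slide does not show AJDL itself; as a purely hypothetical sketch of the idea, a job definition language of this kind separates the dataset, the application and the task configuration into typed objects that the high-level services (scheduler, catalogue, analysis service) exchange. Every class and field name below is invented for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical AJDL-style typed objects; all names are invented
# for illustration and are not the real AJDL schema.
@dataclass
class Dataset:
    name: str
    files: list = field(default_factory=list)

@dataclass
class Application:
    name: str        # e.g. the experiment's executable
    version: str

@dataclass
class Task:
    application: Application
    config: dict     # job options, cuts, histogram definitions, ...

@dataclass
class Job:
    task: Task
    input: Dataset
    output: Dataset

# A high-level service would accept a Job object and return a new
# Dataset describing the results, regardless of which Grid ran it.
job = Job(
    task=Task(Application("athena", "x.y.z"), {"events": 1000}),
    input=Dataset("dc2.simul.example", ["file1.root"]),
    output=Dataset("dc2.simul.example.summary"),
)
print(job.task.application.name)
```

The point of the sketch is the decomposition: because each service interface is defined in terms of these shared object types, the same job description can be handed to different back ends.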
