
CompHEP development for distributed calculation of particle processes at LHC


Presentation Transcript


  1. CompHEP development for distributed calculation of particle processes at LHC
  A.Kryukov (kryukov@theory.sinp.msu.ru), L.Shamardin (shamardin@theory.sinp.msu.ru)
  Skobeltsyn Institute of Nuclear Physics, Moscow State University
  ACAT03, Tokyo, 12/4/03

  2. Outline
  • Motivation
  • Modification of CompHEP for distributed computation
  • Conclusions and ...
  • ... future development

  3. Motivation
  • A specific feature of hadron collider physics is the huge number of diagrams and the hundreds of subprocesses.
  • SM (Feynman gauge): p,p -> W+,W-, 2*jets
  • p = {u,U,d,D,c,C,s,S,b,B,G}
  • jet = {u,U,d,D,c,C,s,S,b,B,G}
  • QCD background (diagrams with virtual A,Z,W+,W- excluded)
  • Number of subprocesses is 775
  • Total number of diagrams is 16461

  4. Motivation (cont.)
  • A possible solution is to simplify the combinatorics of the model [Boos, Ilyin].
  • u#-d# model (Feynman gauge): p,p -> W+,W-, 2*jets
  • p = {u#,U#,d#,D#,b,B,G}
  • jet = {u#,U#,d#,D#,b,B,G}
  • QCD background (diagrams with virtual A,Z,W+,W- excluded)
  • Number of subprocesses is 69
  • Total number of diagrams is 845

  5. u#,U# -> d#,D#,W+,W-
  [Feynman diagrams for this subprocess]

  6. CompHEP calculation scheme
  Diagram generator -> Symbolic calculation -> C-code generator ->
  Compilation -> Subprocess selection -> MC calculation -> (next subprocess)
  • A single executable file is generated for all subprocesses.
  • No utilities exist to ease the tuning of the MC integration for different subprocesses.

  7. Modified scheme of calculation
  Diagram generator -> Symbolic calculation -> C-code generator ->
  Compilation of subprocess 1 ... Compilation of subprocess N (in parallel) ->
  MC calculation for each subprocess
  • A separate executable file for each subprocess (see the job-submission sketch below).
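  For illustration only, a minimal shell sketch of the per-subprocess fan-out this scheme enables under PBS; the directory layout (results/sub1 ... results/subN) and the job script name (run_subprocess.sh) are our own placeholders, not actual CompHEP names:

    # Submit one PBS job per subprocess directory; each job compiles
    # and runs the MC integration for its own subprocess independently.
    for d in results/sub*; do
        qsub -N "comphep_$(basename "$d")" -v SUBDIR="$d" run_subprocess.sh
    done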

  8. File structure
  Old file structure:
    Working Dir.
      Results
        F1.c
        F2.c
        ...
  Modified file structure:
    Working Dir.
      Results
        Sub1
          F1.c
          F2.c
          ...
        Sub2
          F43.c
          F44.c
          ...
        ...

  9. Results
  • Hardware: P4 (1.3 GHz), RAM = 256 MB
  • Process: p,p -> W+,W-, 2*jets
  • 69 subprocesses, 845 diagrams
  • u#-d# model

                              Standard            Distributed,   Distributed,
                              (per subprocess)    mean value     total
  Size of executable file     71M (1.02M)         2.9M           200.1M
  Compilation time            176m (153s)         133s           53m

  10. Results (cont.)
                              Standard            Distributed    Stnd/Dis.
                              (per subprocess)    (mean value)
  Cross section calc.         22m (19s)           15s            1.3
    Memory (Virt./RAM)        46.5M/1.8M          7.0M/2.0M      6.7/0.9
  Maximum search              60m (52s)           50s            1.1
    Memory (Virt./RAM)        46.5M/1.8M          6.8M/1.8M      6.7/1.0
  Generation of 1k events     106m (92s)          60s            1.5
    Memory (Virt./RAM)        46.7M/2.1M          7.0M/2.0M      6.7/1.1

  11. Specific features to support distributed calculation
  • Copy the session data of the selected subprocess to all other subprocesses (illustrated below).
  • Copy an individual parameter of the selected subprocess to all other subprocesses.
  • Copy session data from a specific subprocess to the selected subprocess.
  • Copy a specific parameter from the selected subprocess.
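  As an illustration, the first operation amounts to something like the following shell sketch; the directory layout (results/sub*) and the session file name (session.dat) are assumed placeholders, not the actual CompHEP file names:

    # Copy the session data of the selected subprocess (sub1 here)
    # into every other subprocess directory.
    for d in results/sub*; do
        [ "$d" = "results/sub1" ] && continue   # skip the source itself
        cp results/sub1/session.dat "$d/"
    done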

  12. Specific features to support distributed calculation (cont.)
  • Utility for job submission under PBS and GRID (LCG-1).
  • Modified utility that collects the event streams generated by the separate subprocess executables into a single event sample; typical invocations are shown below.
    d_comphep [-cC][P|G] path_to_results_dir
      -c  compilation only
      -C  collect data into single sample
      -P  PBS (default)
      -G  GRID
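  Typical invocations, as far as they can be inferred from the synopsis above (the results-directory name is illustrative, and the exact flag grouping may differ):

    d_comphep -c results     # compile the subprocess executables only
    d_comphep -P results     # compile and run under PBS (the default)
    d_comphep -G results     # compile and run under GRID (LCG-1)
    d_comphep -C results     # merge per-subprocess event streams into one sample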

  13. Conclusions and ...
  • Hadron collider physics requires a computer cluster and/or a GRID solution for calculations with more than 2 hard-interacting particles in the final state.
  • It is necessary to develop special tools to support such calculations.
  • Even a rather simple 2->4 process profits from distributed computation. Here we do not discuss the question of convenience for the user.

  14. Future Development
  In this work we have realized a rather straightforward approach to distributed computation in CompHEP. In the future we are going to consider a more sophisticated method that treats the phase space as a set of non-overlapping pieces. This approach permits dividing any task into as many independent (from the MC point of view) subtasks as necessary, as sketched in formulas below.
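  In formulas, this is simply a partition of the phase-space integral (our sketch of the stated idea, not notation from the talk):

    \sigma = \int_{\Omega} d\sigma = \sum_{i=1}^{n} \int_{\Omega_i} d\sigma,
    \qquad \Omega = \bigcup_{i=1}^{n} \Omega_i, \quad \Omega_i \cap \Omega_j = \emptyset \;\; (i \ne j),

  so each piece \Omega_i is an independent MC subtask, and the statistical errors of the pieces combine in quadrature, \Delta\sigma = \left( \sum_i (\Delta\sigma_i)^2 \right)^{1/2}.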
