
Experience on the Deployment of Geant4 on FKPPL VO



  1. Experience on the Deployment of Geant4 on FKPPL VO. November 28, 2008. Soonwook Hwang, Sunil Ahn, Namgyu Kim (KISTI e-Science Division); Jungwook Shin, Se Byeong Lee (National Cancer Center)

  2. Introduction to KISTI • Organization: 8 divisions, 3 centers, 3 branch offices • Personnel: about 340 regular staff members and 100 part-time workers • Annual revenue: about 100M USD, mostly funded by the government

  3. Outline • Introduction to EGEE, the world's largest Grid infrastructure • FKPPL VO Grid Testbed • Deployment of Geant4 on FKPPL VO • Demo

  4. EGEE (Enabling Grids for E-SciencE) • The largest multi-disciplinary grid infrastructure in the world • Objective: build a large-scale, production-quality grid infrastructure for e-Science, available to scientists 24/7 • EGEE grid infrastructure: 300 sites in 50 countries, 80,000 CPU cores, 20 PB of storage, 10,000 users

  5. KISTI ALICE Tier2 Center • 112 cores, 56 TB of storage, 13 services • Collaboration among KISTI, IN2P3, CERN and ASGC • About 1,000 participants from 86 institutes in 26 countries • 96.7% availability, 22,502 jobs processed [Slide shows a map of collaborating sites: IN2P3, FNAL, KISTI, INFN, San Diego, Taiwan]

  6. EGEE middleware: gLite • Framework for building grid applications • Taps into the power of distributed computing and storage resources [Slide shows the gLite service architecture: Access (CLI, API); Security (Authentication, Authorization, Auditing); Information & Monitoring (Information & Monitoring, Application Monitoring); Workload Management (Computing Element, Workload Management, Job Provenance, Package Manager, Accounting); Data Management (Storage Element, Data Movement, File & Replica Catalog, Metadata Catalog); Site Proxy]

  7. gLite main components • User Interface (UI): the machine where users log on to access the Grid • Resource Broker (RB) / Workload Management System (WMS): matches the user's requirements with the available resources on the Grid • Information System: characteristics and status of CEs and SEs • File and Replica Catalog: location of grid files and their replicas • Logging and Bookkeeping (LB): log information of jobs • Computing Element (CE): a batch queue on a site's computers where the user's job is executed • Storage Element (SE): provides (large-scale) storage for files

  8. Basic gLite use case: job submission • The user creates a proxy credential on the User Interface (checked against the VO Management Service, a DB of VO users) and submits a job (executable + small inputs) to the Resource Broker • The RB queries the Information System and the File and Replica Catalog, then submits the job to a matching Computing Element at site X • The job is processed on the CE, reading input file(s) from and registering output file(s) on a Storage Element, whose state is published to the Information System • Job status and logging are tracked by the Logging and Bookkeeping service; the user polls the job status and finally retrieves the (small) output files through the RB
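On the command line, this whole cycle reduces to a handful of gLite commands; a minimal sketch, using the FKPPL VO name and the JDL file introduced later in this talk:

  $ voms-proxy-init --voms fkppl.kisti.re.kr       # create a VOMS proxy (checked against the VO DB)
  $ glite-wms-job-submit -a -o jobid grid_run.jdl  # submit via the WMS; job ID saved into the file 'jobid'
  $ glite-wms-job-status -i jobid                  # poll status, as tracked by Logging and Bookkeeping
  $ glite-wms-job-output --dir myresult -i jobid   # retrieve the (small) output sandbox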

  9. FKPPL VO Testbed

  10. Goal • Background: collaborative work between KISTI and CC-IN2P3 in the area of Grid computing under the framework of FKPPL • Objective (short-term): provide a Grid testbed to the e-Science summer school participants, to keep drawing their attention to Grid computing and e-Science by allowing them to submit jobs and access data on the Grid • Objective (long-term): support the other FKPPL projects by providing a production-level Grid testbed for the development and deployment of their applications on the Grid • Target users: FKPPL members; 2008 Seoul e-Science Summer School participants

  11. FKPPL VO Testbed [Slide shows the testbed layout: the FKPPL VO spans IN2P3 and KISTI, with UI, VOMS, WMS, LFC and Wiki services, and a CE and SE at each site]

  12. VO Registration Detail • Official VO name: fkppl.kisti.re.kr • Description: VO dedicated to joint research projects of FKPPL (France-Korea Particle Physics Laboratory), under a scientific research programme in the fields of high-energy physics (notably LHC and ILC) and e-Science, including bioinformatics and related technologies • Information about the VO: https://cic.gridops.org/index.php?section=vo

  13. Progress • 2008.8.30: "fkppl.kisti.re.kr" VO registration done • 2008.9.15: UI and VOMS installation and configuration done • 2008.9.30: WMS/LB installation and configuration done • 2008.10.10: SE configuration done • 2008.10.15: FKPPL VO service opened; FKPPL Wiki site opened

  14. FKPPL VO Usage • Application porting support on FKPPL VO • Geant4: detector simulation toolkit, working with the National Cancer Center • WISDOM: the MD part of the WISDOM drug discovery pipeline, working with the WISDOM team • Support for FKPPL member projects • Grid testbed for the Seoul e-Science Summer School

  15. Seoul e-Science Summer School 2008 • Korea's first intensive (two-week) training course bringing together operators, developers and researchers • Lectures and hands-on sessions given directly by leading researchers from France, Switzerland, Taiwan and elsewhere • About 60 participants per day, mainly from Korean universities and research institutes • Included results of joint development between KISTI, IN2P3 (France) and CERN (Switzerland)

  16. How to access resources in the FKPPL VO Testbed • Get your certificate issued by the KISTI CA: http://ca.gridcenter.or.kr/request/certificte_request.php • Join the FKPPL VO: https://palpatine.kisti.re.kr:8443/voms/fkppl.kisti.re.kr • Get a user account on the UI node for the FKPPL VO: send an email to the system administrator at ssgyu@kisti.re.kr

  17. User Support • FKPPL VO Wiki site: http://anakin.kisti.re.kr/mediawiki/index.php/FKPPL_VO • User accounts on the UI machine: 17 user accounts have been created • FKPPL VO registration: 4 users have been registered to date

  18. Contact Information • Soonwook Hwang (KISTI), Dominique Boutigny (CC-IN2P3): responsible persons, hwang@kisti.re.kr, boutigny@in2p3.fr • Sunil Ahn (KISTI), Yonny Cardenas (CC-IN2P3): technical contacts, siahn@kisti.re.kr, cardenas@cc.in2p3.fr • Namgyu Kim: site administrator, ssgyu@kisti.re.kr • Sehoon Lee: user support, sehooi@kisti.re.kr

  19. Deployment of Geant4 on FKPPL VO

  20. Geant4 Installation • What are the two pieces of software required for building Geant4? • gcc 3.2.3 or 3.4.5 or later • CLHEP: base libraries providing vector/matrix manipulation, four-vector tools, etc. • Getting CLHEP: http://proj-clhep.web.cern.ch/proj-clhep/DISTRIBUTION/distributions/clhep-2.0.3.1.tgz
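Fetching and unpacking CLHEP from the URL above might look as follows; the unpacked directory layout is an assumption based on the 2.x distributions:

  $ wget http://proj-clhep.web.cern.ch/proj-clhep/DISTRIBUTION/distributions/clhep-2.0.3.1.tgz
  $ tar xzf clhep-2.0.3.1.tgz
  $ cd 2.0.3.1/CLHEP    # assumed top-level source directory of the tarball
  # the configure/make steps follow on the next slide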

  21. CLHEP Installation • To run Geant4 on the Grid, we recommend compiling CLHEP as a static library:
$ ./configure --prefix=${NCCAPPS}/clhep/2.0.3.1 --disable-shared
$ make; make install; make install-docs
$ ls ${NCCAPPS}/clhep/2.0.3.1/lib
libCLHEP-2.0.3.1.a                   libCLHEP-Matrix-2.0.3.1.a
libCLHEP.a                           libCLHEP-Random-2.0.3.1.a
libCLHEP-Cast-2.0.3.1.a              libCLHEP-RandomObjects-2.0.3.1.a
libCLHEP-Evaluator-2.0.3.1.a         libCLHEP-RefCount-2.0.3.1.a
libCLHEP-Exceptions-2.0.3.1.a        libCLHEP-Vector-2.0.3.1.a
libCLHEP-GenericFunctions-2.0.3.1.a  libCLHEP-Geometry-2.0.3.1.a
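With the static CLHEP installed, the Geant4 toolkit itself can be pointed at it through the usual build-time environment variables of the GNUmake-based build system of that era; a minimal sketch, where the Geant4 version and the source path are this sketch's assumptions:

  export CLHEP_BASE_DIR=${NCCAPPS}/clhep/2.0.3.1    # static CLHEP installed above
  export G4INSTALL=${NCCAPPS}/geant4/geant4.9.1.p03 # assumed Geant4 source location
  export G4SYSTEM=Linux-g++                         # target platform
  export G4LIB_BUILD_STATIC=1                       # build Geant4 libraries as static archives too
  cd $G4INSTALL/source && make                      # build the toolkit libraries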

  22. How to access Geant4 material/interaction data on the Grid? • Option 1: stage in all the necessary material data onto the grid node on which the Geant4 application is to run • Option 2: allow remote access to the material data from grid nodes using a global file system such as AFS • Advantage of Option 2: no need to modify the source code; when submitting G4 applications to the grid, all we need to do is point environment variables at the AFS directory, e.g. export G4DATA_HOME=/afs/in2p3.fr/grid/toolkit/fkppl.kisti.re.kr/geant4/data • We chose the second option on the FKPPL VO and put all the material/interaction data on AFS, under /afs/in2p3.fr/grid/toolkit/fkppl.kisti.re.kr/geant4/data
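Concretely, Geant4 locates each dataset through its own environment variable, so the job wrapper can derive them all from the AFS directory; a sketch, where the dataset subdirectory names (which vary by Geant4 release) are assumptions:

  export G4DATA_HOME=/afs/in2p3.fr/grid/toolkit/fkppl.kisti.re.kr/geant4/data
  export G4LEDATA=$G4DATA_HOME/G4EMLOW                   # low-energy EM data
  export G4LEVELGAMMADATA=$G4DATA_HOME/PhotonEvaporation # photon evaporation data
  export G4RADIOACTIVEDATA=$G4DATA_HOME/RadioactiveDecay # radioactive decay data
  export NeutronHPCrossSections=$G4DATA_HOME/G4NDL       # neutron data (variable name per the release in use)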

  23. How to access the ROOT I/O library on the grid? • We tried to use a static library for ROOT, but failed for some reason, so we currently use the shared ROOT library on the CERN AFS system • Location of the ROOT directory on CERN AFS: /afs/cern.ch/sw/lcg/release/ROOT/5.20.00/slc4_ia32_gcc34/root • Geant4 applications need to be compiled and linked against the shared ROOT library on AFS • When submitting jobs to the grid, we set an environment variable to the AFS directory: export ROOTSYS=/afs/cern.ch/sw/lcg/release/ROOT/5.20.00/slc4_ia32_gcc34/root
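The corresponding run-time setup in the job script is the standard ROOT environment; a minimal sketch:

  export ROOTSYS=/afs/cern.ch/sw/lcg/release/ROOT/5.20.00/slc4_ia32_gcc34/root
  export PATH=$ROOTSYS/bin:$PATH                        # root, root-config, etc.
  export LD_LIBRARY_PATH=$ROOTSYS/lib:$LD_LIBRARY_PATH  # shared ROOT libraries resolved at run time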

  24. Use Case: Running the GTR2_com code on the FKPPL VO

  25. Overview of GTR2_com • Application name: GTR2_com, a Geant4 application for proton therapy simulation developed by NCC • GTR2: Gantry Treatment Room #2; com: commissioning (the GTR2 simulation code is currently in its commissioning phase) • Libraries: Geant4, and ROOT (root.cern.ch) as the simulation output library • Example user macro:
/user/io/OpenFile root B6_1_1_0.root
/GTR2/SNT/type 250
/GTR2/SNT/aperture/rectangle open
# Geant4 kernel initialize
/run/initialize
/GTR2/FS/lollipops 9 5
/GTR2/SS/select 3
/GTR2/RM/track 5
/GTR2/RM/angle 80.26
/GTR2/VC/setVxVy cm 14.2 15.2
/beam/particle proton
/beam/energy E MeV 181.8 1.2
/beam/geometry mm 3 5
/beam/emittance G mm 1.5
/beam/current n 3000000
# SOBP
/beam/bcm TR2_B6_1 164
/beam/juseyo
/user/io/CloseFile
• The input to GTR2_com is a nozzle configuration: GTR2_com reads the macro file specifying this configuration and writes the resulting dose distribution of the proton beam as a 3D histogram in a ROOT file

  26. GTR2_com Code • G4 application Name • GTR2_com • User Input File • QA_11_65_8_*.mac • Output File for Analysis from Simulation Run • QA_11_65_8_*.root

  27. Execution of GTR2_com • On a local machine:
$ GTR2_com QA_11_65_8_24.mac > std_out 2> std_err
(wait for completion)
$ ls
std_out std_err QA_11_65_8_24.root
• On a local cluster:
$ vi cluster_run.sh   # write the batch script
$ qsub cluster_run.sh # submit the G4 job to the local scheduler
(wait for completion)
$ ls
std_out std_err QA_11_65_8_24.root
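The slide does not show cluster_run.sh itself; a minimal sketch of such a batch script for a PBS-style scheduler (job name and queue are assumptions):

  #!/bin/sh
  #PBS -N GTR2_com    # job name (assumed)
  #PBS -q batch       # queue name (assumed)
  cd $PBS_O_WORKDIR   # run in the directory the job was submitted from
  ./GTR2_com QA_11_65_8_24.mac > std_out 2> std_err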

  28. Execution of GTR2_com (cont'd) • On the Grid:
$ vi grid_run.jdl  # job description file (contents omitted here; see slide 31)
$ vi grid_run.sh   # shell script that runs on the grid (contents omitted here; see slide 30)
$ glite-wms-job-submit -a -o jobid grid_run.jdl
$ glite-wms-job-status -i jobid
$ glite-wms-job-output --dir myresult -i jobid
$ ls ./myresult
grid_run.out grid_run.err QA_11_65_8_24.root

  29. Example macro [Slide shows an example macro file as an image; the macro listing appears on slide 25]

  30. Shell script to be run on the Grid [Slide shows grid_run.sh as an image]
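Since the script is only shown as an image, here is a minimal sketch of what grid_run.sh plausibly contains, combining the AFS environment from slides 22-23 (taking the macro name as an optional argument is this sketch's addition):

  #!/bin/sh
  # Geant4 data and ROOT are served from AFS (see slides 22-23)
  export G4DATA_HOME=/afs/in2p3.fr/grid/toolkit/fkppl.kisti.re.kr/geant4/data
  export ROOTSYS=/afs/cern.ch/sw/lcg/release/ROOT/5.20.00/slc4_ia32_gcc34/root
  export LD_LIBRARY_PATH=$ROOTSYS/lib:$LD_LIBRARY_PATH
  MACRO=${1:-QA_11_65_8_24.mac}   # macro from the input sandbox; default per slide 28
  chmod +x ./GTR2_com             # the input sandbox does not preserve the execute bit
  ./GTR2_com $MACRO               # stdout/stderr are captured per the JDL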

  31. Example of JDL File
Type = "Job";
JobType = "Normal";
Executable = "/bin/sh";
Arguments = "grid_run.sh";
StdOutput = "grid_run.out";
StdError = "grid_run.err";
InputSandbox = {"grid_run.sh", "GTR2_com", "QA_11_65_8_24.mac"};   // QA_11_65_8_24.mac: NCC user macro
OutputSandbox = {"grid_run.err", "grid_run.out", "QA_11_65_8_24.root"};   // QA_11_65_8_24.root: ROOT output
ShallowRetryCount = 1;

  32. GTR2_com Execution on FKPPL VO • Log on to the UI node • Prepare the necessary files: GTR2_com, JDL file, macro file, shell script • Submit the job to the grid: glite-wms-job-submit -a -o jobid JDL_file • Check job status: glite-wms-job-status -i jobid • Get output: glite-wms-job-output --dir result_dir -i jobid

  33. Do I have to write thousands of JDL files to run thousands of my G4 applications on the Grid? • No: you can submit thousands of jobs to the grid with a single JDL file.
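The gLite WMS supports this through parametric jobs: one JDL with JobType "Parametric" expands into many subjobs, with the placeholder _PARAM_ substituted by each parameter value. A minimal sketch, assuming macro files named QA_11_65_8_0.mac through QA_11_65_8_999.mac (the macro naming follows this talk's examples but is this sketch's assumption):

  Type = "Job";
  JobType = "Parametric";
  Parameters = 1000;      // range upper bound: one subjob per value 0..999
  ParameterStart = 0;
  ParameterStep = 1;
  Executable = "/bin/sh";
  Arguments = "grid_run.sh QA_11_65_8__PARAM_.mac";
  StdOutput = "grid_run__PARAM_.out";
  StdError = "grid_run__PARAM_.err";
  InputSandbox = {"grid_run.sh", "GTR2_com", "QA_11_65_8__PARAM_.mac"};
  OutputSandbox = {"grid_run__PARAM_.err", "grid_run__PARAM_.out", "QA_11_65_8__PARAM_.root"};
  ShallowRetryCount = 1;

A single glite-wms-job-submit of this file creates one subjob per parameter value, and glite-wms-job-output then retrieves all the output sandboxes at once.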

  34. Demo

  35. Distribution of subjobs' completion times on FKPPL VO • The GTR2_com applications were submitted to the grid at 18:05 [Slide shows the completion-time distribution chart]

  36. 감사합니다 (Thank you for your attention)
