
CHEP2003, UC San Diego

Monday 24 March 2003

Construction Experience and Application of the HEP DataGrid in Korea

Bockjoo Kim (bockjoo@chep12.knu.ac.kr)

On behalf of the Korean HEP Data Grid Working Group

Outline
  • Korean Institutions and HEP Experiments
  • Development of Korean HEP Data Grid
  • Goals of Korean HEP Data Grid
  • Hardware and Software Resources
    • Network
    • CPUs
    • Storage
    • Grid Software – EDG testbed, SAMGrid
  • Achievements in Y2002
  • Prospects in Y2003
Korean Institutions and HEP Experiments
  • 12 Institutions are active HEP participants
  • Current Experiments:
    • Belle / KEK, K2K / KEK,
    • PHENIX / BNL, CDF / Fermilab
  • Near Future Experiments
    • AMS / ISS (MIT, NASA, CERN) : Y2005
    • CMS (CERN, Europe) : Y2007
    • Linear Collider Experiment(s)

[Diagram: Korea CHEP and the other Korean HEP institutions linked to the DataGrid-related experiments: the Space Station (AMS, 2005), KEK in Japan (K2K/Belle), CERN in Europe (CMS, 2007), BNL in the US (PHENIX), and FNAL in the US (CDF). Korean DataGrid related experiments only.]

Development of Korean HEP Data Grid
  • Grid Forum Korea (GFK) was formed in 2001, and as a result the Korean HEP Data Grid Working Group (KHEPDGWG) started
  • The Korean HEP Data Grid was approved by KISTI / MIC (GFK) on March 22, 2002
  • NCA supports CHEP with two international networking utilization projects closely related to the Korean HEP Data Grid: Europe and Japan/USA networking
  • KT/KOREN-NOC supports CHEP with PC clusters for networking
  • Companies such as IBM-Korea and CIES agreed to support CHEP (50 TB tape library and 1 TB disk plus servers) → INDUSTRY
  • CHEP itself supports the HEP Data Grid with its own research fund from the Ministry of Science and Technology (MOST) → MOST and CHEP
  • Kyungpook Nat'l Univ. supports CHEP with space for the KHEPDGWG
  • KOREN/APAN supports the Korean HEP DG with 1 Gbps bandwidth from CHEP to KOREN (2002)
  • λ (lambda) networking (one endpoint at CHEP, the other in Seoul) is under discussion
  • The Hyeonhae/Genkai APII (GbE) for HEP project (between Korea and Japan) is in progress
  • 1st International HEP Data Grid Workshop held in Nov 2002
Goals of Korean HEP Data Grid
  • Implementation of the Tier-1 Regional Data Center for the LHC-CMS (CERN) experiment in Asia. The Regional Data Center can also be used as a regional data center for other experiments (Belle, CDF, AMS, etc.)
  • Networking
    • Multi-level (Tier) hierarchy of distributed servers (both for data and for computation) to provide transparent access to both raw and derived data
    • Tier-0 (CERN) – Tier-1 (CHEP): ~Gbps via TEIN
    • Tier-1 (CHEP) – Tier-1 (US and Japan): ~Gbps via APII/Hyeonhae
    • Tier-1 (CHEP) – Tier-2 or 3 (participating institutions): 45 Mbps ~ 1 Gbps via KOREN
  • Computing (1,000-CPU Linux clusters)
  • Data Storage Capability
    • 1.1 PB RAID-type disk storage (Tier-1 + Tier-2)
    • Tape ~3.2 PB
    • HPSS servers
  • Software: contribute to grid application package development
Korean HEP Data Grid Network Configuration (2002)
  • Network bandwidth between institutions
    • CHEP-KOREN: 1 Gbps (ready for users)
    • SNU-KOREN: 1 Gbps (ready for tests)
    • CHEP-SNU: 1 Gbps (ready for tests)
    • SKKU-KOREN: 155 Mbps (not yet available to users)
    • Yonsei-KOREN: 155 Mbps (not yet available to users)
  • File transfer tests (an illustrative transfer command is sketched after this list):
    • KNU-SNU, KNU-SKKU: ~50 Mbps
    • KNU-KEK, KNU-Fermilab: 17 Mbps (on 155 Mbps / 45 Mbps links)
    • KNU-CERN: 8 Mbps (on a 10 Mbps link)
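
Such file transfer tests are typically driven with the GridFTP client shipped with Globus Toolkit 2; the following is a minimal sketch, with the source host, file path, and option values as assumptions for illustration:

grid-proxy-init                                   # create a short-lived Grid proxy from the user certificate
globus-url-copy -vb -p 4 -tcp-bs 1048576 \
    gsiftp://se.chep.knu.ac.kr/data/test1GB.dat \
    file:///tmp/test1GB.dat                       # parallel GridFTP transfer (host and path hypothetical)
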
Distributed PC-linux Farm
  • Distributed PC-Linux clusters (~206 CPUs so far)
  • 10 sites for testbed setup and/or tests
    • Center for High Energy Physics (CHEP): 142 CPUs
    • SNU: 6 CPUs
    • KOREN/NOC: ~40 CPUs
      • CHEP to KOREN: 1 GbE test established
    • Yonsei U, SKKU, Konkuk U, Chonnam Nat'l Univ, Ewha WU, Dongshin U: 1 CPU each
  • 4 sites outside of Korea (KEK, FNAL, CERN, and ETH): 18 CPUs

Storage and Network Equipment

[Photos: 48 TB storage (IBM 3494 tape library, L12 and S10 frames) and network equipment at CHEP/KNU]

Storage Specification
  • IBM Tape Library System, 48 TB (installed 13–18 Nov 2002)
    • 3494-L12: 7.6 TB
    • 3494-S10: 16 TB
    • 3494-L12: 7.6 TB
    • 3494-S10: 16 TB
  • RAID disks
    • FAStT200: 1 TB
    • Additional RAID disks: 1 TB
  • Disks on nodes (4.4 TB)
  • Software: TSM (HSM)
  • HSM server: S7A, 262 MHz, 8-way, 4 GB memory
Grid Software
  • All software is Globus 2 based
  • KNU and SNU each host an EDG testbed, both running at full scale within Korea (a minimal user-side sketch follows this list)
    • Application of the EDG testbed to currently running experiments is configured:
      • EDG testbed for CDF data analysis
      • EDG testbed for Belle data analysis (in progress)
  • Worker nodes for the SAM Grid (Fermilab, USA) are also installed at KNU for CDF data analysis
  • CHEP assigned a few CPUs for an iVDGL testbed setup (Feb 2003)
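
A minimal user-side sketch of how the EDG testbed is exercised from the UI: create a proxy and ask the resource broker which CEs/queues match a job. The JDL file name is an assumption for illustration, and option details may differ between EDG releases.

grid-proxy-init                          # create a Grid proxy from the user's certificate
edg-job-list-match --vo cdf myjob.jdl    # list the CEs/queues that can run the job described in myjob.jdl
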
EDG Testbeds

[Photos: EDG testbed at KNU and EDG testbed at SNU]

Configuration of EDG testbed in Korea

[Diagram: EDG testbed configuration in Korea. KNU/CHEP hosts the UI (real user), RB, CE, SE with a big fat disk, and WNs, connected via NFS and GSIFTP, with a GDMP client (with the new VO), grid-security mapping on disk with maximum security, and CDF CPUs in operation. SNU hosts the LDAP server, a GDMP server (with the new VO), and K2K CPUs in operation. SKKU worker nodes are in preparation. Web services: http://cluster29.knu.ac.kr/]

An Application of EDG testbed
  • The EDG testbed functionality is extended to include Korean CDF as a VO
  • The extension attaches existing CPUs with CDF software to the EDG testbed
  • A VO is added following the EDG discussion list
  • The CE in the EDG testbed is modified (a minimal sketch of the extra queue and mapfile entries is given after the references below)
    • Define a queue on a non-CE machine
    • grid-mapfile, grid-mapfile.que1_on_ce, grid-mapfile.que2_on_nonce (exclusive job submission)
    • ce-static.ldif.que1_on_ce, ce-static.ldif.que1_on_nonce
    • ce-globus.skel
    • globus-script-pbs-submit
    • globus-script-pbs-poll (for queues on the non-CE machine)
  • The experiment-specific machine (= the queue on the non-CE machine) is modified
    • Make a minimal WN configuration without greatly modifying the existing machine (PBS install/setup, pooled accounts, mounting of the security area)
    • /etc/hosts.equiv so that pooled-account users can submit jobs on the non-CE queue
  • References

[1] http://www.gridpp.ac.uk/tb-support/existing/

[2] http://neutrino.snu.ac.kr/~bockjoo/EDG_testbed/contents/creating_queues_4_aVO.html
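
A minimal sketch, under assumed names, of the extra queue and mapfile entries described above; the queue name, user DN, and CE hostname are hypothetical.

# On the PBS server, create a dedicated queue that routes CDF VO jobs to the non-CE machine
qmgr -c "create queue cdfque queue_type=execution"
qmgr -c "set queue cdfque enabled = true"
qmgr -c "set queue cdfque started = true"
# Example entry in grid-mapfile.que2_on_nonce mapping a CDF user DN to the pooled accounts:
#   "/C=KR/O=CHEP/OU=KNU/CN=Example CDF User" .cdf
# /etc/hosts.equiv on the non-CE machine lists the CE host so pooled-account users can submit jobs there:
#   ce.chep.knu.ac.kr
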

Overview of the EDG application

[Diagram: overview of the EDG application. The modified CE runs queues for the standard EDG VOs plus a queue for the CDF VO. A local LDAP server holds the authorized users (dguser for the RB and VO users for CDF), while LDAP servers at .nl and .fr hold the VO users for CMS, LHCb, ATLAS, etc. The UI (real user), RB, SE (/flatfiles/SE00), and GDMP server/client (with the new VO) are connected via NFS and GSIFTP. The CDF VO queue feeds a new WN carrying the CDF Run 2 software (PBS server/client, /home, /etc/grid-security grid-mapfile), with a link to the CAF at the Feynman Center, Fermilab; the EDG VO queues feed the standard EDG WNs, including another site.]

Working Sample Files for CDF Job
  • JDL

Executable = "run_cdf_tofsim.sh";
StdOutput = "run_cdf_tofsim.out";
StdError = "run_cdf_tofsim.err";
InputSandbox = {"run_cdf_tofsim.sh"};
OutputSandbox = {"run_cdf_tofsim.out","run_cdf_tofsim.err",".BrokerInfo"};

  • Input Shell Script

#!/bin/sh
# Set up the CDF software environment
source ~cdfsoft/cdf2.shrc
setup cdfsoft2 4.9.0int1
# Create a test release based on 4.9.0int1 and add the TofSim package head
newrel -t 4.9.0int1 test1
cd test1
addpkg -h TofSim
# Build the TofSim library and test binaries
gmake TofSim.all
gmake TofSim.tbin
# Run the TOF simulation test with a Pythia b-bbar tcl configuration
./bin/Linux2-KCC_4_0/tofSim_test tofSim/test/tofsim_usr_pythia_bbar_dbfile.tcl
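
For context, a hedged sketch of how this JDL and script could be submitted from the EDG testbed UI; the JDL file name, VO name, and job-ID file are assumptions for illustration, and option details may vary between EDG releases.

grid-proxy-init                                        # authenticate with the user's Grid certificate
edg-job-submit --vo cdf -o jobids run_cdf_tofsim.jdl   # send the job to the resource broker
edg-job-status -i jobids                               # follow the job state
edg-job-get-output -i jobids                           # retrieve the OutputSandbox when the job is done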

Web Service for EDG testbed
  • To facilitate access to the EDG testbeds in Korea
  • The Mailman Python CGI wrapper is utilized
  • EDG job-related Python commands are modified for the web service
  • At the moment, login is possible through a proxy file
  • A logged-in user can see his or her own job IDs
  • Retrieved job output remains on the web server machine
Web Service for EDG testbed

[Screenshots: login by loading a proxy, job submission by loading a JDL, and the list of job IDs for retrieving output]

  • Job Submission Result Page
  • Job Status can be checked
  • Submitted Job can be cancelled
SAM Grid

[Screenshots: SAM Grid monitoring home page, and the DCAF (DeCentralized Analysis Farm) at KNU for the SAM Grid]

What KHEPDG Achieved in Y2002
  • Infrastructure
      • 206 CPUs / 6.5 TB disk / 48 TB tape library + networking infrastructure
      • HSM system for the tape library
  • KNU and SNU each host an EDG testbed, running at full scale within Korea and accessible via the web
  • KNU installed SAM Grid (a US Fermilab product) worker nodes (as demonstrated at SC2002)
  • CHEP started discussions on collaboration with iVDGL
  • SNU/KNU implemented an application of the EDG testbed for CDF, and the implementation is working
  • Network tests were performed between Korea-US, Korea-Japan, Korea-EU, and within Korea
  • 1st International HEP DataGrid Workshop held at CHEP
Prospects of KHEPDGWG for Y2003

  • More testbed setups (e.g., iVDGL's WorldGrid)
  • Extend the application of the EDG testbed with currently running experiments, e.g., to Belle
  • Cross-grid tests between EDG and iVDGL in Korea
  • Investigate the possibility of Globus 3
  • Full operation of HPSS (HSM) with Grid software
  • Increase the number of cluster CPUs to 400 or more
  • Increase storage to 100 TB
  • Participate in the CMS data challenge
  • 2nd HEP DataGrid Workshop to be held in August

Summary
  • The HEP Data Grid is being considered by most Korean HEP institutions
  • So far the HEP Data Grid project has received excellent support from government, industry, and research institutions
  • EDG testbeds and their applications are operational in Korea, and we will expand to other testbeds, e.g., the iVDGL WorldGrid
  • The 1st international workshop on the HEP Data Grid was held successfully in November 2002
  • CHEP will host the 2nd international workshop on the HEP Data Grid in August 2003