
iRODS performance test and SRB system at KEK






  1. iRODS performance test and SRB system at KEK Yoshimi Iida @ KEK Building data grids with iRODS 27 May 2008

  2. Outline
  • Performance measurement
    • Transfer test between CC-IN2P3 and KEK
    • Scaling test for ICAT
    • Concurrent test for ICAT
    • Comparison of iRODS and SRB
  • SRB/iRODS system at KEK

  3. Transfer between CC-IN2P3 and KEK
  [Map: route CC-IN2P3 (FR) – NY (USA) – KEK (JP)]
  • 1 GB data transfer between CC-IN2P3 and KEK
  • Comparison of iRODS and bbcp
  • The network route goes through the USA; the RTT is about 285 ms
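A back-of-the-envelope check (a sketch, not from the slides) shows why a large window and many parallel streams matter on this long-RTT path: steady-state TCP throughput per stream is bounded by window size divided by round-trip time.

```python
# Rough throughput bound for the Lyon-KEK path, using the values the
# slides report: RTT ~285 ms, 4 MB TCP window, 16 parallel streams.
RTT_S = 0.285             # round-trip time in seconds
WINDOW_BYTES = 4 * 2**20  # 4 MiB TCP window per stream
STREAMS = 16              # parallel TCP streams

# Steady-state TCP throughput per stream is at most window / RTT.
per_stream = WINDOW_BYTES / RTT_S   # bytes/s
aggregate = per_stream * STREAMS    # bytes/s

print(f"per stream: {per_stream / 2**20:.1f} MiB/s")
print(f"aggregate : {aggregate * 8 / 1e9:.2f} Gbit/s")
```

With a single default-sized window the same path would deliver only a tiny fraction of this, which is why both iput and bbcp are run with a 4 MB window and 16 streams.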

  4. System used
  • iRODS system at CC-IN2P3
    • ICAT-enabled iRODS server on Solaris 10
      • Thumper Sun X4500 (AMD processors)
      • Oracle 10g on a cluster of dedicated machines
    • Linux file resource from local disk system
    • iRODS 0.9
  • iRODS system at KEK
    • iRODS server on RHEL3
      • Intel Xeon 3.0 GHz ×4
    • Linux file resource from local disk system
    • iRODS 0.9

  5. From KEK to CC-IN2P3
  • 1 GB data transfers over 24 hours
  • window size 4 MB, 16 parallel streams
  • bbcp often fails to connect

  6. From CC-IN2P3 to KEK
  • 1 GB data transfers over 12 hours
  • window size 4 MB, 16 parallel streams
  • iput performs better than bbcp

  7. Scaling test
  • Data
    • Ingesting the same directory from the client machine repeatedly
    • 1000 files of 1000 bytes each
  • Measurement
    • Ingesting the directory and listing the collection
    • Performance measured for every directory operation (1 collection and 1000 files)
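The test data set is easy to reproduce; a minimal sketch (the directory name is illustrative, not from the slides):

```python
import os

def make_test_dir(path, n_files=1000, size=1000):
    """Create n_files files of `size` bytes each, as in the scaling test."""
    os.makedirs(path, exist_ok=True)
    for i in range(n_files):
        with open(os.path.join(path, f"file{i:04d}.dat"), "wb") as f:
            f.write(b"x" * size)

make_test_dir("scaling-test")
# The directory would then be ingested and listed repeatedly, e.g. with
# the standard icommands:  iput -r scaling-test   and   ils -r
```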

  8. System used
  • ICAT based on Oracle at CC-IN2P3
    • ICAT-enabled iRODS server on Solaris 10
      • Thumper Sun X4500 (AMD processors)
      • Oracle 10g on a cluster of dedicated machines
    • iRODS resource and client on SL4
      • Dual AMD Opteron 848
      • Linux file resource from local disk system
  • ICAT based on PostgreSQL at KEK
    • ICAT-enabled iRODS server on RHEL3
      • Dual Intel Xeon 2.8 GHz
      • PostgreSQL 8.2.5 running on the same machine
    • iRODS resource and client on RHEL3
      • Dual Intel Xeon 2.8 GHz
      • Linux file resource from local disk system

  9. Ingesting up to 1 million files
  [Chart; annotation: "Running the other process"]

  10. Nested collection test
  • Data
    • Registering the same directory from the client machine repeatedly
    • 100 files of 100 bytes each
  • Measurement
    • Ingesting the directory and listing the collection
    • Nesting one level deeper every 10 collections
    • Performance measured for every directory operation (1 collection and 100 files)
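Nesting one level deeper every 10 collections makes the absolute path grow linearly with the collection count, which is what eventually trips the path-length limit seen on the next slide. A sketch of the naming scheme (the "coll-"/"nest-" names are assumed from the error messages, not spelled out in the slides):

```python
# Build the nested collection paths: every 10th collection descends
# one level into a new "nest-<k>" subcollection.
paths = []
current = ""
for n in range(1, 1221):
    if n % 10 == 1 and n > 1:        # descend one level every 10 collections
        current = f"{current}/nest-{(n - 1) // 10}"
    paths.append(f"{current}/coll-{n}".lstrip("/"))

# By collection 1220 the path is over 120 components deep, so its
# absolute length can exceed iRODS's limit (USER_PATH_EXCEEDS_MAX).
print(len(paths[-1].split("/")))     # components in the path of coll-1220
```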

  11. Nested collection test
  [Chart; errors: coll-1220: USER_PATH_EXCEEDS_MAX, nest-121: OCI_ERROR]

  12. Concurrent test
  • Data
    • Ingesting the same directory from the client machine
    • 1000 files of 1000 bytes each
  • Measurement
    • Running multiple processes at the same time
      • read operations: ils and iget
      • write operations: iput and ireg
      • mixed operations: iput, ireg, ils and iget
    • Performance measured for every directory operation
  • DB setting
    • Maximum number of connections set to 200
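The concurrent load can be driven by a small harness like the following (a minimal sketch; the per-client task here is a placeholder standing in for the real iput/ireg/ils/iget invocations, and threads stand in for the separate client processes):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def client_task(client_id):
    """Placeholder for one client's operation (iput / ireg / ils / iget)."""
    time.sleep(0.01)                  # stands in for the iRODS call
    return client_id

def run_concurrent(n_clients):
    """Run n_clients tasks at the same time and report the wall time."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=n_clients) as pool:
        results = list(pool.map(client_task, range(n_clients)))
    return len(results), time.time() - start

done, elapsed = run_concurrent(10)
print(f"{done} clients finished in {elapsed:.2f} s")
```

Each concurrent client holds a catalog connection, so the database's 200-connection limit is a natural ceiling to watch when scaling the client count.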

  13. Concurrent test for reading metadata
  [Charts for 10, 100, 200 and 300 clients; the 300-client run fails with "Error: connectToRhost failed"]

  14. Concurrent test for writing metadata
  [Charts for 10, 100 and 200 clients]
  • iput, Oracle ICAT
  • Because of the inode limit, we could not put any more files

  15. Concurrent test for mix: PostgreSQL
  [Charts for 10, 100 and 200 clients]

  16. Concurrent test for mix: Oracle
  [Charts for 10, 100 and 200 clients]
  • iput, Oracle ICAT
  • Because of the inode limit, we could not put any more files

  17. Comparison of iRODS and SRB
  • For the scaling test
    • Data
      • The same directory on the client machine
      • 1000 files of 1000 bytes each
    • Measurement
      • Ingesting the directory and listing the collection
      • Performance measured for every directory operation
  • For the nested collection test
    • Data
      • Registering the same directory repeatedly
      • 100 files of 100 bytes each
    • Measurement
      • Ingesting the directory and listing the collection
      • Nesting one level deeper every 10 collections
      • Performance measured for every directory operation

  18. System used
  • iRODS system at KEK
    • ICAT-enabled iRODS server on RHEL3
      • Dual Intel Xeon 2.8 GHz
      • PostgreSQL 8.2.5 running on the same machine
      • iRODS 1.0
    • iRODS resource and client on RHEL3
      • Dual Intel Xeon 2.8 GHz
      • Linux file resource from local disk system
  • SRB system at KEK
    • MCAT-enabled SRB server on RHEL3
      • Dual Intel Xeon 2.8 GHz
      • PostgreSQL 8.2.5 running on the same machine
      • SRB 3.5.0
    • SRB resource and client on RHEL3
      • Dual Intel Xeon 2.8 GHz
      • Linux file resource from local disk system

  19. Scaling test: iRODS and SRB
  [Chart]

  20. Nested collection: iRODS and SRB
  [Chart; errors: coll-1220: USER_PATH_EXCEEDS_MAX, nest-47: Error Problem running command]

  21. SRB system for Belle at KEK
  [Diagram: Belle analysis users and LCG users at remote sites (Melbourne, KU, CYFRONET, Nagoya, ASGC, NCU) reach KEK over SINET / APAN / GEANT2; a GridFTP server with SRB-DSI (enhanced GridFTP service, pluggable extension) in the KEK DMZ fronts the SRB servers KEK-1 and KEK-2 behind the KEK firewall; inside the KEK LAN: MCAT, NFS, a 3.5 PB HSM, and SRB clients on an LSF computing farm; note: "Still not integrated with Grid"]
  • Both protocols are authorized via GSI
  • Setup on the GridFTP server:
    • grid-mapfile for SRB users
    • SRB configuration file for server and resource
    • Register LCG user DNs in MCAT
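The grid-mapfile mentioned above maps a user's GSI certificate subject to a local account; a minimal sketch of one entry (the DN and account name below are illustrative, not actual KEK values):

```text
# grid-mapfile: map an LCG user's certificate DN to a local SRB account
# (the DN and the account name are made-up examples)
"/C=JP/O=KEK/OU=CRC/CN=Belle User" srbuser
```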

  22. Plan for iRODS system at KEK
  [Diagram: Tokai and KEK, about 60 km apart]
  • Data transfer for the J-PARC project
  • Huge amounts of imaging data generated at Tokai, about 1 PB per year in total
  • Data is first stored at Tokai, then copied to KEK and distributed to collaborators
  • Storage at Tokai is recycled
  • Bandwidth between the two sites will be 10 Gbps
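The quoted numbers are self-consistent: moving 1 PB per year needs only a small fraction of a 10 Gbps link. A back-of-the-envelope check (a sketch assuming decimal units and sustained transfer):

```python
PB = 10**15                     # 1 PB in bytes (decimal)
SECONDS_PER_YEAR = 365 * 24 * 3600
LINK_BPS = 10e9                 # planned 10 Gbit/s Tokai-KEK link

# Average rate required to keep up with 1 PB/year:
avg_bps = PB * 8 / SECONDS_PER_YEAR
print(f"average rate needed: {avg_bps / 1e9:.3f} Gbit/s")

# Time to copy 1 PB flat-out at the full link rate:
days = PB * 8 / LINK_BPS / 86400
print(f"1 PB at 10 Gbit/s: {days:.1f} days")
```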

  23. Acknowledgements
  • Special thanks to Jean-Yves Nief of CC-IN2P3 for his help in setting up and supporting the iRODS system at CC-IN2P3
  • Thanks to Adil Hasan of RAL for his help with these tests

  24. Backup

  25. Bandwidth Lyon-KEK
  • iperf with the following options:
    • -w 4M : TCP window size [bytes]
    • -P 16 : number of parallel threads
    • -i 5 : interval between periodic bandwidth reports [sec]

  26. Summary of iRODS performance
  • Transfer from KEK to CC-IN2P3 is not stable, but iput is better than a simple transfer tool (bbcp)
  • iRODS can manage 1 million files stably
  • For deeply nested collections, the PostgreSQL ICAT takes a long time to register data
  • Better performance than SRB
  • The Oracle ICAT can handle more than 300 simultaneous clients, but response time grows with the number of clients
