
CrossGrid Testbed Node at ACC CYFRONET AGH




  1. CrossGrid Testbed Node at ACC CYFRONET AGH Andrzej Ozieblo, Krzysztof Gawel, Marek Pogoda 5 Nov 2001 CGW'01

  2. Summary • What is ACC CYFRONET AGH? • Present and future of ACC • Why ACC Cyfronet for CrossGrid Testbed? • First installations • Next steps


  4. What is ACC CYFRONET AGH? • Established in 1973 as an independent, non-profit organization • In 1999 it became a separate unit of the University of Mining and Metallurgy in Krakow • About 60 employees in several divisions • Main goals are to provide universities and research institutes in Krakow with: • computing power combined with a wide range of software • communication infrastructure and network services

  5. LAN in ACC CYFRONET; MAN of Krakow

  6. Present and future of ACC Cyfronet AGH Computers (parallel): • SGI 2800 - 128 R10000 and R12000 processors, 73.6 Gflops of peak performance, 40 GB of operating memory, 190 GB of disk storage • HP S2000 - 16 PA 8000 processors, 11.5 Gflops of peak performance, 4 GB of operating memory, 45 GB of disk storage • SPP1600/XA - 32 HP PA-RISC 7200 processors, 7.68 Gflops of peak performance, 2.5 GB of operating memory, 40 GB of disk storage • ???
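The peak-performance figures above can be normalized per processor for a rough comparison of the three machines (a simple division sketch; the numbers are taken directly from the slide):

```python
# Per-processor peak performance implied by the slide's figures.
machines = {
    "SGI 2800":   (73.6, 128),   # peak Gflops, processor count
    "HP S2000":   (11.5, 16),
    "SPP1600/XA": (7.68, 32),
}

for name, (gflops, cpus) in machines.items():
    # Convert Gflops/processor to Mflops/processor for readability.
    print(f"{name}: {gflops / cpus * 1000:.0f} Mflops/processor")
```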

  7. SGI 2800

  8. Data migration • Hierarchical data storage system: HP K400 server with UCFM 2.3 (UniTree) software • Disk cache connected to the Cyfronet LAN • SCSI-attached tape libraries (ATL) and magneto-optical disk library (HP) • Local access via FTP and/or NFS; remote access via FTP

  9. Hierarchical data storage system - features • Automatic data migration to magneto-optical disks and tapes • Each file stored in two copies • Trashcan - protection against accidental removal • Capacities: • disk cache: 110 GB • M-O library: 660 GB • tape libraries: 17 TB • Average file access time: • M-O library: 12 s + 30 s/100 MB • tape libraries: 3 min + 30 s/100 MB
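The access-time figures above amount to a fixed latency plus a transfer term of 30 s per 100 MB. A minimal sketch of the resulting retrieval-time estimate (the function name is ours; latencies and rate are taken from the slide):

```python
def retrieval_time_s(size_mb, latency_s):
    """Estimated retrieval time: media latency plus 30 s per 100 MB."""
    return latency_s + 30.0 * size_mb / 100.0

# A 500 MB file from the magneto-optical library (12 s latency):
mo_time = retrieval_time_s(500, 12)      # 12 + 150 = 162 s

# The same file from a tape library (3 min = 180 s latency):
tape_time = retrieval_time_s(500, 180)   # 180 + 150 = 330 s

print(mo_time, tape_time)
```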

  10. Present and future of ACC Network: • ACC Cyfronet is the leading unit of Krakow's MAN: • main network node in southern Poland • designer and owner of the fiber-optic infrastructure within the city (links several dozen institutions; approximately 70 km long; ATM and/or FDDI standard) • Provides access to interurban and international connections over three WANs (wide area networks): Ebone (3 Mb/s), POLPAK-T (5 Mb/s) and POL-34 (155 Mb/s) • POL-34 (ATM standard): 155 Mb/s --> 622 Mb/s soon


  12. Present and future of ACC Network: • The future is PIONIER (Polish Optical Internet) - a high-speed academic network in Poland with its own fiber-optic infrastructure; should be ready in two years • The supercomputing center PSNC in Poznan already has access to GEANT - the Gigabit (2.4 Gb/s) European Academic Network

  13. Why ACC Cyfronet for CrossGrid Testbed? • Good infrastructure: computer and network resources • High-speed connection with other testbed centers in Europe

  14. First installations • PC Linux cluster (8 processors + switch at the beginning) with a simple disk storage system • Switch: at least 8 FastEthernet ports + 1 GigabitEthernet port • e.g. BATM 48 FastEthernet + 2 GigabitEthernet, price: ~$7500 • e.g. BATM 24 FastEthernet + 1 GigabitEthernet, price: ~$4000 • Software installation (Globus, middleware, etc.)

  15. Further plans • Real-time calculations for HEP experiments (future LHC accelerator at CERN) • a data transfer speed of 1.5 Gb/s is needed • GigabitEthernet port for our Cisco Catalyst 8540 (connection to the POL-34 network); price: ~$17000; not necessary at the first stage • Heterogeneous CrossGrid node: • including a small number (e.g. 8) of SGI 2800 processors using the miser or mpset tools offered by IRIX 6.5 • incorporating our hierarchical storage system (UniTree + ATL)
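The 1.5 Gb/s requirement quoted above can be checked against the link speeds mentioned earlier in the slides, which is presumably why the GigabitEthernet port is "not necessary at the first stage" (a back-of-the-envelope sketch; variable names are ours):

```python
required_gbps = 1.5    # real-time HEP requirement quoted on the slide
gigabit_port  = 1.0    # a single GigabitEthernet port
pol34_gbps    = 0.622  # POL-34 after the planned 622 Mb/s upgrade

# Neither a single GbE port nor the upgraded WAN link meets 1.5 Gb/s:
print(gigabit_port >= required_gbps)   # False
print(pol34_gbps >= required_gbps)     # False: the WAN link is the tighter limit
```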
