
  1. How to deploy GPFS nodes massively using Diskless Remote Boot Linux
     Authors: Rock Kuo, Waue Chen, Che-Yuan Tu, Barz Hsu
     Collaborators / Supervisor: Steven Shiau, Jazz Wang
     DATE: 2014/8/26

  2. Outline
     - IBM GPFS
     - NCHC DRBL
     - Testbed Architecture
     - Demo Schedule
     - Reference
     - Q&A

  3. GPFS (General Parallel File System)
     IBM GPFS is a high-performance shared-disk file management solution. It provides fast, reliable access to a common set of file data, scaling from two computers to hundreds of systems.
     GPFS Architecture (figure from IBM)
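     For orientation, bringing up a small GPFS cluster takes only a few commands. The sketch below assumes GPFS 3.x-era syntax; the node name gpfs01, the descriptor files nodes.lst / disks.lst, and the file system name gpfs0 are illustrative assumptions, not taken from the slides:

        # Create the cluster from a node file (lines like "gpfs01:quorum-manager"),
        # using ssh/scp as the remote shell and copy commands.
        mmcrcluster -N nodes.lst -p gpfs01 -r /usr/bin/ssh -R /usr/bin/scp
        mmstartup -a                       # start the GPFS daemon on all nodes
        mmcrnsd -F disks.lst               # register raw disks as NSDs
        mmcrfs /gpfs gpfs0 -F disks.lst    # create the file system on those NSDs
        mmmount gpfs0 -a                   # mount it cluster-wide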

  4. NCHC DRBL
     Diskless Remote Boot in Linux (DRBL) provides a diskless or systemless environment for client machines.
     DRBL Logo (from DRBL)

  5. Why do we use DRBL to deploy GPFS?
     1. Fast and reconfigurable deployment of GPFS nodes: just install the software you need on the DRBL server, and the clients boot from the network using the image on the server.
     2. Use GPFS to utilize client disks effectively: since DRBL is diskless, the entire hard disk of each client is free for GPFS.
     3. Use DRBL commands to manage your GPFS-enabled storage cluster (a sketch follows below).
     4. Add and remove storage dynamically, with fault-tolerance support.
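     As a rough illustration of item 3, DRBL ships a drbl-doit utility for running a shell command on every client at once; the invocation below is an assumption (check drbl-doit's usage on your server), while mmgetstate is the standard GPFS state query:

        drbl-doit "uptime"                 # assumed usage: broadcast a command to all DRBL clients
        mmgetstate -a                      # report the GPFS daemon state of every node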

  6. Our Testbed
     Each node:
     - CPU: Intel(R) Core(TM)2 Quad Q6600 @ 2.40GHz
     - RAM: 2GB DDR2 667
     - Disk: sda -> 160G, sdb -> 320G
     - NIC: Intel Corporation 82566DM Gigabit Network Connection

  7. Our Testbed Architecture
     [Architecture diagram: the cluster before expansion (total disks: 2.5 T) and after adding disks (total disks: 3.1 T)]

  8. Demo Schedule
     Install DRBL and GPFS: drblsrv -i ; drblpush -i (install and deploy)
     http://trac.nchc.org.tw/grid/wiki/GPFS_DRBL
     Demo 1: Run GPFS in the DRBL environment
     - Run GPFS on 8 nodes
     - Use 11 disks (6*160G + 5*320G = 2.5T)
     Demo 2: Add new disks (test scalability)
     - Dynamically add 3 disks
     - GPFS will merge the new and old disks into one file system (total disk space: 3.1T); see the sketch below
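     Demo 2's scale-out step maps to GPFS's online disk addition. In this sketch the file system name gpfs0 and the descriptor file newdisks.lst are assumptions:

        mmcrnsd -F newdisks.lst            # describe the 3 new disks as NSDs
        mmadddisk gpfs0 -F newdisks.lst -r # add them online; -r restripes existing data
        mmdf gpfs0                         # confirm the new capacity (about 3.1T here)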

  9. Demo Schedule (cont.)
     Demo 3: Test fault tolerance
     - Use a DRBL command to shut down gpfs07 (GPFS will assume gpfs07 has crashed)
     - If the GPFS data-replication option is enabled, the GPFS disks remain available for further operation (a sketch of the replication settings follows)
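     The data-replication option is set when the file system is created (or changed later with mmchfs). A minimal sketch, assuming the disks in disks.lst are split across two failure groups so each block can keep two copies:

        # -m/-M: default/max metadata replicas, -r/-R: default/max data replicas
        mmcrfs /gpfs gpfs0 -F disks.lst -m 2 -M 2 -r 2 -R 2
        # After gpfs07 is shut down, its disks show as down but data stays readable:
        mmlsdisk gpfs0
        mmchdisk gpfs0 start -a            # resume the disks once the node is back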

  10. Future Work
     - Write boot scripts for DRBL clients to automatically add and remove GPFS nodes (a hypothetical sketch follows)
     - Integrate a GPFS deployment function into the DRBL management interface
     - Build a GPFS testbed to support the 3D Fly Circuit Image Database
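     The first item might be realized as a boot/shutdown hook on each client. Everything below is hypothetical (the script path, running the commands from the client itself); note that GPFS's mmaddnode/mmdelnode are normally issued on an existing cluster member:

        #!/bin/sh
        # /etc/init.d/gpfs-autojoin (hypothetical DRBL client hook)
        NODE=$(hostname)
        case "$1" in
          start)
            mmaddnode -N "$NODE"     # join this diskless client to the GPFS cluster
            mmstartup -N "$NODE"     # start the GPFS daemon on it
            ;;
          stop)
            mmshutdown -N "$NODE"    # stop GPFS before the node goes away
            mmdelnode -N "$NODE"     # remove it from the cluster cleanly
            ;;
        esac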

  11. Reference
     - DRBL: http://drbl.sourceforge.net/
     - GPFS: http://www-03.ibm.com/systems/clusters/software/gpfs/index.html

  12. Q&A
