NCSA Supercluster Administration


Presentation Transcript


  1. NCSA Supercluster Administration
     NT Cluster Group, Computing and Communications Division, NCSA
     Avneesh Pant, apant@ncsa.uiuc.edu

  2. System Goals
     • Provide a production level of service
     • Integrate the system into the current environment
     • Apply current supercomputer policies and procedures
       • Account management
       • Resource usage / allocation
     • Provide conveniences to the users
       • Develop an environment where users can prepare and run their own codes effectively
       • This requires advanced automated administration
     • Provide feedback to users
       • Job status to users via email
     • Provide common applications
       • Get an account, get your data, and run

  3. NCSA NT 320 Pentium® CPU Cluster
     • 256 CPUs - Parallel MPI
       • 64 HP Kayak XU systems: dual 550 MHz Pentium III Xeon, 1 GB RAM
       • 64 HP Kayak XU systems: dual 300 MHz Pentium II, 512 MB memory
     • 64 CPUs - Serial
       • 32 Compaq PWS 6000: dual 333 MHz Pentium II, 512 MB memory
     • [Photo: 64 dual 550 MHz Pentium III Xeon HP Kayaks mounted back-to-back]

  4. System Configuration
     • Software
       • Microsoft NT 4.0 Server
       • LSF from Platform for the queuing system (a job-submission sketch follows this slide)
       • MPI: HPVM from Chien’s CSAG group
     • Networking
       • Myrinet: MPI communication
       • Fast Ethernet: used for network file systems
       • Fibre Channel: storage networks
       • Giganet: testing environment
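The slides do not show a submission command, but with Platform LSF a batch job is handed to the scheduler with bsub. A minimal sketch, assuming a queue named "normal" and an executable my_mpi_app.exe staged on the shared file system (both names are placeholders, not taken from the slides):

      rem Hypothetical LSF submission from a front-end node: request 64 CPUs in a
      rem queue named "normal" and capture output in myjob.out. The queue name and
      rem my_mpi_app.exe are placeholders.
      bsub -q normal -n 64 -o myjob.out my_mpi_app.exe

      rem Check on running and pending jobs.
      bjobs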

  5. Alliance NT Supercluster, July 1999 (system diagram)
     • Front-end systems (ntsc-tsN.ncsa.uiuc.edu): application development and job submission
     • File servers / LSF master: 128 GB home, 200 GB scratch; FTP to mass storage; daily backups
     • LSF batch job scheduler
     • 128 compute nodes, 256 CPUs: Myrinet interconnect and HPVM; 64 dual 550 MHz systems and 64 dual 300 MHz systems
     • Serial nodes: Fast Ethernet only, no MPI; 24 dual 333 MHz systems
     • Fast Ethernet network and Internet connectivity

  6. Accessing the Cluster
     • Windows Terminal Server interactive nodes
       • Multiuser form of Windows NT
       • Surprisingly good performance
     • Access methods
       • Windows RDP client from Microsoft (Windows clients only)
       • Citrix ICA client
         • Available for most platforms
         • http://www.citrix.com to download the clients
         • A Java applet client is available
       • X Windows
       • Rsh daemon to start sessions (example below)
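For command-line access, a remote command can be run through the rsh daemon on a front-end node. A hypothetical example, with ntsc-ts1.ncsa.uiuc.edu standing in for one of the ntsc-tsN front ends and myuser for an account name:

      rem Hypothetical use of the front-end rsh daemon; host and user names are placeholders.
      rem lsload is the LSF command that reports per-host load.
      rsh ntsc-ts1.ncsa.uiuc.edu -l myuser lsload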

  7. Windows NT on a Web Page

  8. System Setup
     • System imaging
       • Initial setup from a network-enabled boot floppy
       • Clears the system and clones it using Drive Image Professional
       • Uses an image file on a network file server
       • Manually set hostname/IP in configuration files
       • Reboot and let it retrieve the NT image, change the Security ID, and configure itself
     • Small non-volatile DOS partition
       • Boots from this during subsequent imaging
       • Stores configuration information
     • Runs batch scripts from the server at every boot (a sketch follows this slide)
       • All systems can be updated by calling a single script
       • Scripts on the server contain the re-imaging commands
       • < 20 minutes to convert all systems to a new configuration
     • Simplifies administration
       • Systems are identical, so adverse behavior is usually hardware related
       • Add new systems or repair a broken system quickly
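The per-boot batch scripts are not reproduced in the slides; the sketch below only illustrates the idea under stated assumptions. The share \\fileserver\admin, the script name update.bat, and the flag files are placeholder names, not the cluster's actual layout.

      @echo off
      rem Hypothetical per-boot script pulled from the file server by every node.
      rem \\fileserver\admin, update.bat, and the .flag files are placeholder names.

      rem Run any incremental update the administrators have staged on the server.
      if exist \\fileserver\admin\update.bat call \\fileserver\admin\update.bat

      rem If the server has flagged this node for re-imaging, record that on the
      rem local DOS partition so the NT image is re-cloned on the next boot.
      if exist \\fileserver\admin\reimage\%COMPUTERNAME%.flag (
          copy \\fileserver\admin\reimage\%COMPUTERNAME%.flag C:\config\reimage.flag
          echo Re-image scheduled for %COMPUTERNAME% on next boot.
      )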

  9. Updating Software
     • Radical changes through re-imaging
       • Prepare a single system
       • Set configuration scripts to run at next boot
       • Boot to DOS and upload the image to the server
     • Incremental upgrades (sketched below)
       • Scripted using batch
       • Registry files are merged using RCMD, a Resource Kit remote command tool
     • The most common upgrade is LSF; the OS and MPI do not change often
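A hedged sketch of such an incremental upgrade, assuming the Resource Kit rcmd client/service pair is installed on the nodes; the node name, share paths, and lsf_update.reg file are placeholders:

      @echo off
      rem Hypothetical incremental upgrade pushed to one compute node with rcmd.
      rem ntsc-c01, C:\lsf, and the \\fileserver\updates paths are placeholder names.
      set NODE=ntsc-c01

      rem Merge the staged registry file silently on the remote node.
      rcmd \\%NODE% regedit /s \\fileserver\updates\lsf_update.reg

      rem Copy updated LSF binaries from the server share into the node's existing LSF directory.
      rcmd \\%NODE% xcopy /y /e \\fileserver\updates\lsf\*.* C:\lsf\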

  10. NT Cluster Monitoring
      • Scalable, reconfigurable grid display
      • Works well over a modem
      • Highlights troubled systems
      • Deviation from the expected load can be viewed
      • Shows in real time:
        • System status
        • Current load
        • Load by user name
        • Load by job ID
        • All running/pending jobs

  11. Node Administration
      • CRUN scripts (a minimal stand-in is sketched below)
        • Run scripts sequentially for ranges of machines
        • Used for rebooting, updating files, ...
        • Coupled with other tools such as Tlist (similar to ps) and kill
        • Can be used to find processes on hosts
      • RCMD
        • Provides interactive access to compute nodes
        • Useful in manual process management
        • Faster than using LSF’s lsrun
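CRUN itself is site software the slides do not show; the loop below is only a minimal stand-in for the idea, running one command sequentially on a numbered range of nodes via rcmd. The ntsc-cN naming scheme is a placeholder.

      @echo off
      rem Hypothetical CRUN-style sweep: run the same command on compute nodes 1-64 in turn.
      rem %1 %2 %3 are the remote command and its arguments; ntsc-cN names are placeholders.
      for /L %%N in (1,1,64) do (
          echo === ntsc-c%%N ===
          rcmd \\ntsc-c%%N %1 %2 %3
      )

Invoked as, say, "crun_sketch tlist", it would print the process list of every node in turn, which is how stray processes can be tracked down by hand.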

  12. Process Administration
      • Simply start and stop jobs? Not so simple:
        • Queuing system software may not be fault tolerant
        • Only some of the processes launch
        • Not all of the processes get terminated
      • Shepherding (an orphan-sweep sketch follows this slide)
        • Makes decisions about jobs and processes
        • Can kill jobs if processes do not start or quit
        • Can kill processes if jobs finish
        • Coupled with process-tracking software to find orphans
        • Uses a semi-intelligent Shepherd Agent
        • Also provides an interface for global administration
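The Shepherd Agent is custom software not detailed in the slides; the sketch below stands in for just one of its duties, sweeping the nodes for orphaned MPI processes after a job ends, using the Resource Kit tools already named above. The node names and the image name mpi_app.exe are placeholders.

      @echo off
      rem Hypothetical orphan sweep run after a job finishes: look for a leftover
      rem MPI process image on each node and kill it. rcmd, tlist, and kill are
      rem NT Resource Kit tools; ntsc-cN and mpi_app.exe are placeholder names.
      set IMAGE=mpi_app.exe
      for /L %%N in (1,1,128) do (
          rcmd \\ntsc-c%%N tlist | find /I "%IMAGE%" >nul
          if not errorlevel 1 (
              echo Orphaned %IMAGE% on ntsc-c%%N, killing...
              rcmd \\ntsc-c%%N kill -f %IMAGE%
          )
      )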

  13. Account Administration
      • Integrates into our current systems
        • Account creation/deletion occurs in our allocation division
        • Uses command-line utilities to manage accounts (see the sketch below)
        • Password management can be handled through this system
      • System usage accounting
        • Custom daemon created
        • Simple, dedicated CPU/memory accounting
        • Actual process CPU usage is not relevant due to our MPI: processes always use 100% of the CPU
        • Number of processes and time information collected by LSF
        • Existing accounting infrastructure used
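The slides do not name the specific utilities; a plausible sketch using the standard NT net commands, with the user name, initial password, and group name as placeholders:

      rem Hypothetical account creation on the domain; newuser, the password, and
      rem the ClusterUsers group are placeholder names.
      net user newuser In1tialPass /add /domain
      net group ClusterUsers newuser /add /domain

      rem Password management can go through the same utility later.
      net user newuser N3wPass /domain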

  14. Storage Administration
      • Storage systems
        • Storage Central Disk Advisor by W. Quinn for monitoring file system usage
      • Quota software
        • No quota software currently in use
        • Our scratch system is Windows 2000 and has quota software available
        • Quotas will be enforced when we switch to W2K
        • Home directories are on Windows NT 4.0
      • Security
        • Home space is readable by the user only (an ACL sketch follows this slide)
        • Upon request, administrators can gain access
        • Scratch space file access is maintained by the user
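As an illustration of the home-space security model, the built-in cacls tool can replace a directory's ACL so that only the owner and the administrators group have access. The path D:\home\jdoe and the account name are placeholders, and the production system may use different tooling:

      rem Hypothetical ACL setup for one home directory: full control for the owner
      rem and for Administrators, no access for anyone else. Names are placeholders.
      rem cacls prompts for confirmation when replacing an ACL, hence the piped "y".
      echo y| cacls D:\home\jdoe /T /G jdoe:F Administrators:F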

  15. Scalability Issues
      • Queuing system
        • LSF is currently working at a scale unexpected a few years ago
        • Where will difficulties arise?
          • The batch system falls behind more often as system size grows
          • Related to the speed and reliability of the network
        • Platform Computing's LSF has adapted in the past
      • Monitoring tools
        • Many command-line tools are already impractical
        • Visualization methods need to be researched
        • GLMon may not be effective for more than 1000 nodes
        • Detailed monitoring affects system scalability

  16. Future Directions
      • Better integration with the mass storage system
      • High-performance shared file systems
      • Improved reliability and process management
      • Advanced user support
      • Advancements in interconnects
        • Better scaling
        • Better performance
