High Performance Computing at TTU

  1. High Performance Computing at TTU • Philip Smith, Senior Director, HPCC, TTU

  2. Brief History • Founded in 1999 • First cluster in 2001 • TechGrid in 2002 • Statewide grid project proposed in 2004, funded in 2005 • Large cluster with InfiniBand (IB) in 2005

  3. Major Users • Molecular Dynamics (MD) • Chemistry, Physics, Chemical Engineering • Quantum Dynamics • Numerical Weather Prediction • Mechanics (FEM simulations) • Energy exploration (reservoir modeling) • Beginning to see bioinformatics applications

  4. Science at Texas Tech University

  5. Expansion in 2008 • $1.8M project to add a new computer room • Completion date 7/15/08 • Move-in 8/15/08 • Double or triple our compute capacity in September 2008, with expansion room for several years

  6. Current Resources • Hrothgar • Shared: 648 cores, ~2 GB per core • Community Cluster: 188 cores, ~2 GB per core • 12 TB shared storage • SDR InfiniBand + GigE management network

  7. Current Resources (cont) • Antaeus (shared grid resource) • 264 cores ~2GB/core • 6 TB shared storage, 44 TB dedicated storage • GigE network only • TechGrid • 850 lab machines • Distributed between 5+ sites on campus • 10/100 network

  8. Thank you • Questions?

  9. TechGrid • Cycle-scavenging grid • Computing labs in BA, Math, and the library • ~650 machines • Migrated from Avaki to Condor in 2007

  10. Limitations on TechGrid • Windows binaries only • Most machines available only 11pm-7am • Loosely coupled machines: no MPI • 1 GB of memory or less
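
To make the TechGrid constraints on slides 9 and 10 concrete, here is a minimal sketch of how a batch of independent tasks might be handed to Condor. Everything in it is illustrative: the executable name, memory figure, task count, and file names are assumptions rather than the HPCC's actual configuration, and the exact OpSys value in the requirements expression depends on the Condor version in use.

    # Hypothetical driver: write a Condor submit description that encodes the
    # slide-10 limits (Windows binary, modest memory, no MPI) and submit it.
    import subprocess

    N_TASKS = 100  # loosely coupled, independent tasks; MPI is not available

    submit_description = """\
    # Each task must be a Windows binary; no MPI, one index per task.
    universe     = vanilla
    executable   = sweep.exe
    arguments    = $(Process)
    # Match only Windows machines with enough memory (values are illustrative;
    # the exact OpSys string depends on the Condor version).
    requirements = (OpSys == "WINDOWS") && (Memory >= 512)
    output       = out.$(Process).txt
    error        = err.$(Process).txt
    log          = sweep.log
    queue {n}
    """.format(n=N_TASKS)

    with open("sweep.sub", "w") as f:
        f.write(submit_description)

    # One call queues the whole batch; Condor opportunistically matches tasks
    # to idle lab machines (mostly 11pm-7am) and reruns any that are preempted.
    subprocess.run(["condor_submit", "sweep.sub"], check=True)

The key point is that the work has to be expressed as many small, self-contained tasks; anything that needs tightly coupled communication between them belongs on the clusters instead.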

  11. General porting issues • Getting it to run: • Does it have a Windows binary? • Does it require modification to compile? • Production usage: • Data footprint (input/output) • Run time • Number of iterations • Interaction between iterations
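
Most of these porting questions come down to whether the work can be cut into short, independent pieces. The sketch below shows that pattern; it assumes, purely for illustration, a science kernel exposed as run_iteration(i) whose iterations do not interact, so each grid task owns a contiguous slice of iterations and leaves behind one small result file, keeping per-task run time and data footprint modest.

    # Hypothetical per-task wrapper: run one contiguous slice of independent
    # iterations so a single task fits the overnight window and writes only
    # a small output file.
    import json
    import sys

    TOTAL_ITERATIONS = 10000   # total work across the whole batch
    CHUNK = 100                # iterations per task, sized to the run-time budget

    def run_iteration(i):
        """Placeholder for the real science kernel (assumed independent per i)."""
        return i * i

    def main():
        task_id = int(sys.argv[1])          # e.g. Condor's $(Process) index
        start = task_id * CHUNK
        stop = min(start + CHUNK, TOTAL_ITERATIONS)
        results = [run_iteration(i) for i in range(start, stop)]
        # Small data footprint: one compact JSON file per task, merged later
        # on the submit host.
        with open("result.%d.json" % task_id, "w") as f:
            json.dump(results, f)

    if __name__ == "__main__":
        main()

If the iterations did interact, each slice would have to exchange data with the others between runs, which is exactly what a loosely coupled, non-MPI grid handles poorly.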

  12. Cluster job mix • 65% MD, 25% QD (quantum dynamics), and 10% other • Currently about 56% MD, with the balance QD • >90% parallel

  13. Campus grid job mix • <10% utilized • Open source applications currently used: Venus, R, SAS grid
