
Holland Computing Center



Presentation Transcript


  1. Holland Computing Center • David R. Swanson, Ph.D. • Director

  2. Computational and Data-Sharing Core • Store and Share documents • Store and Share data and databases • Computing resources • Expertise

  3. Who is HCC? • HPC provider for University of Nebraska • System-wide entity, evolved over last 11 years • Support from President, Chancellor, CIO, VCRED • 10 FTE, 6 students

  4. HCC Resources • Lincoln: • Tier-2 Machine Red (1500 cores, 400 TB) • Campus clusters PrairieFire, Sandhills (1500 cores, 25 TB) • Omaha: • Large IB cluster Firefly (4000 cores, 150 TB) • 10 Gb/s connection to Internet2 (DCN)

  5. Staff • Dr. Adam Caprez, Dr. Ashu Guru, Dr. Jun Wang • Tom Harvill, Josh Samuelson, John Thiltges • Dr. Brian Bockleman (OSG development, grid computing) • Dr. Carl Lundstedt, Garhan Attebury (CMS) • Derek Weitzel, Chen He, Kartik Vedelaveni (GRAs) • Carson Cartwright, Kirk Miller, Shashank Reddy (ugrads)

  6. HCC -- Schorr Center • 2200 sq. ft. machine room • 10 full-time staff • PrairieFire, Sandhills, Red and Merritt • 2100 TB storage • 10 Gb/s network

  7. Three Types of Machines • ff.unl.edu ::: large capacity cluster ... more coming soon • prairiefire.unl.edu // sandhills.unl.edu ::: special purpose cluster • merritt.unl.edu ::: shared memory machine • red.unl.edu ::: grid-enabled cluster for US CMS (OSG)

  8. prairiefire • 50 nodes from Sun • 2-socket, quad-core Opterons (400 cores) • 2 GB/core (800 GB) • Ethernet and SDR InfiniBand • SGE or Condor submission (sample SGE script below)
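
  A minimal SGE batch script for a serial job on a cluster like PrairieFire might look like the sketch below. The job name, runtime limit, executable, and input file are illustrative placeholders; actual queue and resource names are site-specific.

      #!/bin/bash
      # job name, working directory, and a one-hour wall-clock limit
      #$ -N test_job
      #$ -cwd
      #$ -l h_rt=01:00:00

      # hypothetical executable and input file
      ./my_program input.dat

  Submit with qsub job.sh (the file name is illustrative).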

  9. Sandhills • 46 fat nodes • 4-socket Opterons • 32 cores/node (128 GB/node) • 1504 cores total • QDR InfiniBand • Maui/Torque or Condor submission (sample Condor submit file below)
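
  For Condor submission, a minimal submit description file could look like the following sketch; the executable and file names are placeholders, and local policies (universe, resource requests) may differ.

      # minimal Condor submit description file (names are placeholders)
      universe   = vanilla
      executable = my_program
      arguments  = input.dat
      output     = job.$(Cluster).out
      error      = job.$(Cluster).err
      log        = job.log
      queue

  Submit with condor_submit job.sub (the file name is illustrative).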

  10. Merritt • 64 Itanium processors • 512 GB of shared RAM • NFS storage (/home, /work) • PBS only; interactive use for debugging only (example below)
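
  Since batch jobs on Merritt go through PBS and interactive use is reserved for debugging, a short interactive session can be requested roughly as follows; the time limit shown is illustrative, and any additional resource options are site-specific.

      # request an interactive shell with a half-hour limit for debugging
      qsub -I -l walltime=00:30:00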

  11. Red • Open Science Grid machine • part of US CMS project • 240 TB storage (dCache) • over 1100 compute cores • certificates required, no login accounts (grid submission sketch below)
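
  Because Red has no login accounts, work arrives through Open Science Grid interfaces using a grid certificate. A rough sketch of a test submission is shown below; the VO name, jobmanager suffix, and test command are illustrative, not confirmed HCC settings.

      # create a short-lived proxy from your grid certificate
      # (the VO name "cms" is only an example; use your own VO)
      voms-proxy-init -voms cms

      # run a trivial test job through the gatekeeper
      # (the jobmanager suffix here is a placeholder)
      globus-job-run red.unl.edu/jobmanager-fork /bin/hostname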

  12. HCC -- PKI • 1800 sq. ft. machine room (500 kVA UPS + generator) • 2 full-time staff • Firefly • 150 TB Panasas storage • 10 Gb/s network

  13. Firefly • 4000+ Opteron cores • 150 TB Panasas storage • Login or grid submissions • Maui (PBS) scheduling (sample MPI job script below) • InfiniBand, Force10 GigE
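
  A multi-node MPI job on a Maui/PBS cluster such as Firefly could be submitted with a script along these lines; the node count, walltime, and executable are placeholders, and the exact mpirun invocation depends on the local MPI stack.

      #!/bin/bash
      # 4 nodes x 8 cores and a two-hour wall-clock limit (illustrative values)
      #PBS -N mpi_test
      #PBS -l nodes=4:ppn=8
      #PBS -l walltime=02:00:00

      cd $PBS_O_WORKDIR
      # some MPI stacks also need "-machinefile $PBS_NODEFILE" here
      mpirun -np 32 ./my_mpi_program

  Submit with qsub job.pbs (the file name is illustrative).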

  14. TBD • 5800+ Opteron cores • 400 TB Lustre storage • Login or grid submissions • Maui (PBS) • QDR InfiniBand, GigE

  15. First Delivery...

  16. Last Year's Usage • Approaching 1 million CPU hours/week

  17. Resources & Expertise • Storage of large data sets (2100 TB) • High Performance Storage (Panasas) • High bandwidth transfers (9 Gb/s, ~50 TB/day; transfer example below) • 20 Gb/s between sites, 10 Gb/s to Internet2 • High Performance Computing: ~10,000 cores • Grid computing and High Throughput Computing
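
  High-bandwidth transfers of this kind are typically driven by GridFTP tools. A hedged example using globus-url-copy with parallel data streams is sketched below; the hostnames and paths are placeholders.

      # GridFTP transfer with 8 parallel data streams
      # (hostnames and paths are placeholders)
      globus-url-copy -p 8 \
          gsiftp://source.example.edu/data/bigfile \
          gsiftp://dest.example.edu/data/bigfile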

  18. Usage Options • Shared Access • Free • Opportunistic • Storage limited • Shell or Grid deployment • Priority Access

  19. Usage Options • Priority Access • Fee assessed • Reserved queue • Expandable Storage • Shell or Grid deployment

  20. Computational and Data-Sharing Core • Will meet computational demands with a combination of Priority Access, Shared, and Grid resources • Storage will include a similar mixture, but likely consist of more dedicated resources • Often a trade-off between Hardware, Personnel and Software • Commercial Software saves Personnel time • Dedicated Hardware requires less development (grid protocols)

  21. Computational and Data-Sharing Core • Resource organization at HCC • Per research group -- free to all NU faculty and staff • Associate quotas, fairshare or reserved portions of machines with these groups • /home/swanson/acaprez/ ... • accounting is straightforward

  22. Computational and Data-Sharing Core • Start now - facilities and staff already in place • It’s free - albeit shared • Complaints currently encouraged (!) • Iterations required

  23. More information • http://hcc.unl.edu • dswanson@cse.unl.edu • hcc-support@unl.edu • David Swanson: (402) 472-5006 • 118K Schorr Center /// 158H PKI /// Your Office • Tours /// Short Courses

  24. Sample Deployments • CPASS site (http://cpass.unl.edu) • DaliLite, Rosetta, OMSSA • LogicalDoc (https://hcc-ngndoc.unl.edu/logicaldoc/)
