
Stern Center for Research Computing Overview


Presentation Transcript


1. Stern Center for Research Computing Overview
Norman White
November 17, 2004

2. Outline of talk
• Background
• Current status and plans
• Feedback from faculty
• Demo of the grid engine for those interested

3. Background
• Stern research computing has had little attention since Stern signed the WRDS agreement.
• Several neglected areas:
  • Computationally intensive research
    • Wharton (WRDS) is not really appropriate
    • Eureka is very slow
    • The desktop is not appropriate
  • Rapidly growing demand
  • Desktop computing
    • Faculty offices are becoming mini computer centers
  • Software licensing issues

4. Initial Response
• Center for Digital Economy Research (CEDER)
  • Citigroup grant for a small cluster (grid)
• Salomon Center
  • Establishes a small staff and facilities for financial databases
• Collaboration between the Salomon Center and CEDER
• Equipment consolidation in the Copy Center
• Stern Center for Research Computing established

5. Mission
• Foster and support computation-based research at Stern
• Provide Stern with the ability to do cutting-edge research
• Leverage Stern's scale and scope

6. Immediate Goals
• Consolidate existing research computing facilities
• Replace Eureka
• Establish a grid computing architecture that integrates existing and new hardware
• Provide immediate improvements in capabilities (processing, disk, software, backups)
• Develop a platform for continued improvement
• Provide incentives for faculty to participate
• Support PhD research

7. Medium-Term Goals
• Extend the architecture to include:
  • Stern desktop support
    • Computation nodes
    • Data access from desktops
  • Labs
  • University facilities
    • Supercomputer on order
• Add additional processing
• Possibly tighter integration with WRDS

8. Long-Term Goals
• A global network of capabilities accessible from the faculty desktop.

9. The “Team”
• Faculty Director: Norman White
• “Virtual team”:
  • Scott Joens (IT and Salomon Center)
  • David Frederick (IT)
  • Dan Graham (IT)
  • Vadim Barkalov (student)
  • …

10. Current Status
• Hardware
  • Cluster of machines in the Copy Center: Miner, Research, Leda, Rnd (replacement on order)
  • Total processing power more than 10 times that of Eureka
  • Still to come: ODIN, TAQ, …
  • 3+ TB of disk, more on order
  • High-speed network backbone on order
  • Gigabit connection to the rest of Stern
• Software
  • Sun Grid Engine running on 2 machines, soon to be rolled out to all machines (see the status commands below)
  • Matlab license server with 14 licenses; Matlab can run on any node
  • SAS (Sun only)
  • Splus (Sun and Linux)
  • Cplex, GAUSS, Mathematica, …
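
Once the Grid Engine rollout is complete, any user should be able to inspect the state of the cluster from a submit host. A minimal sketch using the standard Grid Engine query commands (the output naturally depends on the local configuration):

    qhost        # list every execution host: architecture, CPUs, load, memory
    qstat -f     # show all queues and the jobs currently running in them
    qconf -sel   # list the execution hosts registered with the grid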

11. Grid Computing …
• Concept
  • View machines as computing nodes
  • High-speed network connecting the machines in a cluster together
  • Support for heterogeneous nodes, which differ in:
    • Speed
    • OS (Solaris, Linux)
    • Software (SAS, Matlab)
    • Disk (a job may need more than 4GB)
    • Memory (a job may need more than 256MB)
• Three types of host machines:
  • Submit host
  • Scheduling host (knows which nodes have which resources)
  • Execution host
(A sample job script requesting such resources appears below.)
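
As an illustration, here is a minimal sketch of a Grid Engine batch job script that requests resources along the dimensions listed above. The #$ lines are standard Grid Engine directives; the architecture string and the scratch complex are assumptions, since the exact resource names depend on the local grid configuration:

    #!/bin/sh
    #$ -N my_analysis        # job name (illustrative)
    #$ -cwd                  # run the job in the current working directory
    #$ -l arch=solaris64     # require a Solaris node (architecture string is site-specific)
    #$ -l mem_free=256M      # require a node with at least 256MB of free memory
    #$ -l scratch=4G         # hypothetical complex for nodes with >4GB of free disk
    sas my_program.sas       # SAS is Sun-only, hence the Solaris request above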

12. Grid Computing
• Submit host
  • Where users submit jobs (batch or interactive)
• Scheduling host(s)
  • Determine where to run each job based on the job's requirements and current loads
• Execution hosts
  • Actually run the jobs
(See the workflow sketch below.)
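
From the user's point of view, the three roles reduce to a short workflow on the submit host; a sketch (the script name is illustrative):

    qsub my_job.sh      # hand the job to the scheduling host
    qstat               # see where it stands: qw = waiting, r = running
    qstat -j <job_id>   # detailed view, including why a job is still waiting
    qdel <job_id>       # remove the job from the grid if necessary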

13. Advantages of Grid Computing
• The grid scheduler has intelligence:
  • Knows the load on all hosts
  • Knows each host's resources
  • Knows the availability of hosts
• Allows dynamic addition of nodes
• Execution hosts can die and the grid is unaffected
• Understands grid-wide resources (such as software licenses; see the configuration sketch below)
• Provides an architecture for continuous growth
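
The node-addition and grid-wide-resource points are administrative configuration rather than user commands. A sketch of both in Grid Engine; the resource name and the exact complex columns are assumptions, since they vary with the Grid Engine version and local setup:

    # New execution hosts can be registered while the grid keeps running:
    qconf -ae            # opens an editor to describe the new execution host

    # Declare "matlab" as a requestable, consumable integer resource.
    # qconf -mc opens the complex (resource) table in an editor; the entry
    # to add looks roughly like:
    #   matlab   ml   INT   <=   YES   YES   0   0
    qconf -mc

    # Tell the grid that 14 such licenses exist in total by setting
    # "complex_values matlab=14" in the global host configuration:
    qconf -me global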

14. Improvements over the Portable Batch System (used on Miner)
• License management
  • The grid knows how many Matlab licenses are available and will only schedule that many jobs.
• Interactive execution
  • Just type qtcsh to open a session on the most lightly loaded node.
• X11 support
  • Monitor the output of a job in an X11 window.
• Graphical user interface
  • No need to remember command options
  • Monitor job status
• Support for hierarchies of clusters
  • Expandable to NYU and other universities
(Examples of each appear below.)
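
Sketches of how each improvement looks in practice, assuming the matlab consumable from the previous sketch has been configured:

    # License management: request one Matlab license per job; the scheduler
    # starts only as many of these jobs as there are free licenses.
    qsub -l matlab=1 run_matlab.sh

    # Interactive execution: qtcsh is a Grid Engine-aware tcsh that runs
    # commands transparently on a lightly loaded node.
    qtcsh

    # X11 support: open an xterm on an execution host so a job's graphical
    # output can be watched in a local X11 window.
    qsh

    # Graphical user interface for submitting and monitoring jobs.
    qmon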

15. So what about desktop users??
• Two answers:
  • Is your desktop really the appropriate place to keep your data and do your computing, or are you doing it there because you have to? The new environment should make it more efficient and safer to do your computing.
  • If you need a Windows environment, we can still offer:
    • Software installation
    • Access to consulting
    • Data storage and backup

  16. Schedule

17. Schedule
• Now
  • Pilot users running on the grid engine
  • Miner in production
  • 3 independent grids
• Soon
  • Some nodes on Miner → Grid Engine
  • Research added to the grid
  • Pilot users on RND
• Winter recess
  • Miner converted to the Grid Engine
  • (New) RND and LEDA added to the grid
  • ODIN added to the grid
  • Users moved from Eureka to RND/Leda
  • More machines: TAQ, …
  • Tape backup
• Spring
  • Grid in full production
  • Web site for research users

18. Comments??
• What are your needs?
• What isn't covered here?
• Demo of the grid for interested users
