
CISL Organizational Structure


Presentation Transcript


  1. CISL Organizational Structure, FY 2008: $37.8M, 174 staff
  • Laboratory Directorate ($1.5M, 13 staff): Al Kellie, Associate Director of NCAR, 6 employees
    • Laboratory Administration & Outreach Services: Janice Kauvar, Administrator, 6 employees
    • NWSC Project Office: Krista Laursen, Director
  • Operations & Services ($24.8M, 97 staff): Tom Bettge, Director, 6 employees
    • Data Support: Steven Worley, 9 employees
    • Enterprise Services: Aaron Andersen, 38 employees
    • High-end Services: Gene Harano, 19 employees
    • Network Engineering & Telecommunications: Marla Meehl, 25 employees
  • Technology Development ($6.9M, 38 staff): Rich Loft, Director, 5 employees
    • Computer Science: Henry Tufo, 12 employees
    • Earth System Modeling Infrastructure: Cecelia DeLuca, 5 employees
    • Visualization & Enabling Technologies: Don Middleton, 16 employees
  • IMAGe ($4.6M, 26 staff): Doug Nychka, Director, 6 employees
    • Computational Mathematics: Doug Nychka, 5 employees
    • Data Assimilation Research: Jeff Anderson, 5 employees
    • Geophysical Statistics Project: Steve Sain, 5 employees
    • Geophysical Turbulence (Turbulence Numerics Team): Annick Pouquet, 5 employees
  Notes: includes all NSF base funds budgeted except indirect; includes UCAR Communications Pool & WEG funds; does not include Computing SPER; staff counts include indirect.

  2. Data Support Section (DSS)
  • What we do, in a nutshell:
  • Curate and steward the Research Data Archives (RDA)
  • Qualified staff: all hold MS or higher degrees in meteorology or oceanography
  • Provide access to the RDA and assist users and projects with data issues
  • Recap of the AMS BoF presentation:
  • Many new data assets: JRA-25, TIGGE, ERA-Interim, etc.
  • Accurate data discovery, built on metadata standards (a record sketch follows this list)
  • Strong, full project portfolio for the coming years
  • One example metric chart
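  A minimal sketch, in Python for concreteness, of the kind of standards-based discovery record that makes "accurate data discovery" possible. The field names and the dataset identifier are illustrative assumptions, not the actual RDA metadata schema.

  ```python
  # A sketch of a discovery-metadata record for one archived dataset.
  # Field names are illustrative, not the actual RDA schema, which builds
  # on community metadata standards.
  dataset_record = {
      "dataset_id": "ds000.0",          # hypothetical RDA-style identifier
      "title": "Example Reanalysis Archive",
      "temporal_coverage": ("1979-01-01", "2008-12-31"),
      "spatial_coverage": {"lat": (-90, 90), "lon": (0, 360)},
      "variables": ["air_temperature", "geopotential_height"],
      "format": "GRIB1",
  }

  def matches(record, variable, year):
      """Return True if the record covers the requested variable and year."""
      start, end = (int(d[:4]) for d in record["temporal_coverage"])
      return variable in record["variables"] and start <= year <= end

  print(matches(dataset_record, "air_temperature", 1995))  # True
  ```

  Because every dataset carries the same structured fields, a search service can answer "which datasets have this variable over this period?" without inspecting the data files themselves.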

  3. DSS Nuggets and Challenges
  • A few nuggets from under the hood:
  • Greatly improved data access, applied systematically across the 650 datasets in the RDA
  • Widespread use of databases has improved efficiency: metadata, user registration, user access, archiving
  • RDA data management adheres to long-lived archive principles, a great position for the future that represents NSF very well
  • Challenges:
  • Maintain the balance between adding valuable content (research datasets) and further improving access; both are beneficial
  • Minimize the impact of security firewalls on open data access
  • Manage the manager's (my) expectations so as not to overwhelm the staff

  4. Enterprise Services Section (ESS): High-Level Overview
  • Computer Production Group (CPG): 7x24 computer operations; first-level customer support
  • Distributed Systems Group (DSG): IT infrastructure that makes everything else work
  • Infrastructure Support Group (ISG): computer room infrastructure, cooling, electrical; co-location oversight (new task)
  • Workstation Support Team (WsST): user support for workstation-related issues
  • Web Engineering Group (WEG): software infrastructure for the UCAR/NCAR web presence
  • Database Support Team (DBST): computer allocations, metrics, reporting, decision support

  5. Enterprise Services Section (ESS): Challenges
  • Recruiting staff and planning for succession: a gap between the baby-boomer population and early-career staff; a Unix-heavy environment that is not taught in CS and MIS curricula
  • Computer room infrastructure capacity: co-location rooms; at any given time, equipment in proposals and similar requests may outstrip capacity
  • People data is complex and disorganized: groups, visitors, affiliates, computer users; authorization
  • Overall complexity of the computing environment and diversity of demands: developers, systems administrators, administrative staff

  6. High-End Services Section (HSS): Services
  • High Performance Computing
    – Flagship: 127-node (4,064-processor) IBM POWER6, ~80 TFLOPS peak (see the arithmetic sketch after this list)
    – Smaller Linux clusters, general-purpose and dedicated
    – General and priority computing resources (seasonal campaigns, AMPS, model development, capability/capacity runs)
  • Data Archive
    – 6.8 petabytes total data, 4.6 petabytes unique data, 52 million files
    – Long-term data preservation
    – New HPSS system for TeraGrid use and evaluation as a future archive system
  • Data Analysis and Visualization
    – VAPOR (Visualization and Analysis Platform for Ocean, Atmosphere, and Solar Researchers): design, development, customer support, outreach & training
    – DAV customer support for both NCAR and TeraGrid communities
    – DAV resources: DA cluster with visualization nodes; multi-TB shared filesystem for data sharing within the HPC environment
  • Consulting Services
    – Effective use of high performance computing resources
    – Program debugging
    – Porting assistance
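  A rough plausibility check on the figures above. The per-core rate of 4 floating-point operations per cycle and the 4.7 GHz clock are assumptions about the POWER6 configuration, not stated on the slide.

  ```python
  # Sanity-check the slide's headline numbers (assumptions noted inline).

  # Peak HPC throughput: 4,064 processors, assumed 4.7 GHz POWER6 clock and
  # 4 floating-point operations per cycle (2 fused multiply-add units).
  processors = 4064
  clock_hz = 4.7e9          # assumption: POWER6 clock rate
  flops_per_cycle = 4       # assumption: 2 FMA units x 2 flops each
  peak_tflops = processors * clock_hz * flops_per_cycle / 1e12
  print(f"Peak: {peak_tflops:.0f} TFLOPS")  # ~76, consistent with "~80 TFLOPS peak"

  # Archive: average file size implied by 6.8 PB spread across 52 million files.
  total_bytes = 6.8e15
  files = 52e6
  print(f"Mean file size: {total_bytes / files / 1e6:.0f} MB")  # ~131 MB per file
  ```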

  7. High-End Services Section (HSS): Challenges
  • High Performance Computing
    – Maintaining compatible levels of software and firmware
    – System upgrades
    – Debugging tools
  • Data Archive
    – Managing growth: the cost of doing business
    – Technology migration
    – Data integrity and availability
  • Data Analysis and Visualization
    – VAPOR: enhancements and outreach to additional scientific communities
    – DAV customer support outreach to additional scientific communities
    – Integration of DAV services with CISL, TeraGrid, and other external data services
  • Consulting Services (tailoring methods for:)
    – Petascale computing (scaling parallel programs)
    – New modeling paradigms such as data assimilation
    – Improving efficiency on distributed clusters
    – Balancing security risks with usability

  8. NETS Strategies
  • Ensure that network cabling across all UCAR campuses is capable of supporting Gigabit Ethernet (1000 Mb/s) to all UCAR workspaces by the end of FY2009
  • Renew one-third of all UCAR network equipment each year by replacement or upgrade (see the cadence sketch after this list)
  • Renew the UCAR network LAN cabling plant every ten years by replacement or upgrade
  • Renew the Voice over IP phones every five years
  • Research and deploy advanced services such as local area and metropolitan area wireless, unified communications, and optical Wave Division Multiplexing (WDM)
  • Manage and operate regional networks and related infrastructure: the Front Range GigaPoP (FRGP), UCAR Point of Presence (UPoP), Bi-State Optical Network (BiSON), Boulder Point of Presence (BPoP)
  • Design, implement, and maintain NWSC networking
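  A small sketch of the rolling renewal cadence these strategies imply: each cycle length fixes the fraction of the installed base refreshed per year. The fleet sizes are hypothetical illustrations, not actual UCAR inventory counts.

  ```python
  # Rolling renewal: an N-year cycle means 1/N of the base is refreshed yearly.
  cycles_years = {"network equipment": 3, "LAN cabling plant": 10, "VoIP phones": 5}
  fleet = {  # hypothetical unit counts (devices, cable drops, phones)
      "network equipment": 600,
      "LAN cabling plant": 20_000,
      "VoIP phones": 1_500,
  }

  for asset, years in cycles_years.items():
      fraction = 1 / years
      print(f"{asset}: {fraction:.0%} per year "
            f"(~{fleet[asset] * fraction:.0f} of {fleet[asset]} units)")
  ```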

  9. NETS Challenges
  • Next-generation network technology, such as that being explored by the NSF Global Environment for Network Innovations (GENI) project, NSF's emerging cyberinfrastructure initiatives, and others
  • Dynamic optical network switching
  • Wireless and sensor networks
  • Large-scale regional network aggregation (super-aggregation)
  • Security

  10. High-End System Procurement Processes
  • Procurement-specific Scientific & Technical Advisory Panels
    – Advise CISL on main requirements and user/application representation
    – Aid in the development of detailed technical requirements
  • Internal/external web presence and process archiving
  • Competitive best-value RFP process (a scoring sketch follows this list)
    – Partnership with the UCAR Contracts office
    – Technical evaluation criteria & spreadsheets
    – LTD and benchmark evaluations, intercomparisons
    – "BAFO" (best and final offer) process & final evaluation
  • Subcontract negotiations & CISL, NCAR, UCAR approvals & NSF approval
  • Delivery & installation oversight
  • Acceptance testing & evaluation against the subcontract
  • Subcontract monitoring (with CISL HSS, ReSET)
    – Technical requirements
    – Performance, reliability, and service delivery metrics
  • Maintain industry awareness & technical competence
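  A minimal sketch of how a best-value score might fall out of a technical evaluation spreadsheet: a weighted technical score divided by price. The criteria, weights, vendor scores, and prices here are all hypothetical; the actual evaluation criteria are defined per procurement.

  ```python
  # Hypothetical best-value scoring: weighted technical merit per unit price.
  weights = {"benchmarks": 0.40, "architecture": 0.25,
             "reliability": 0.20, "facilities_fit": 0.15}

  def best_value(tech_scores, price_musd):
      """Weighted technical score (0-10 scale) per million dollars of price."""
      technical = sum(weights[c] * s for c, s in tech_scores.items())
      return technical / price_musd

  vendor_a = best_value({"benchmarks": 8, "architecture": 7,
                         "reliability": 9, "facilities_fit": 6}, price_musd=20)
  vendor_b = best_value({"benchmarks": 9, "architecture": 6,
                         "reliability": 7, "facilities_fit": 8}, price_musd=24)
  print(f"A: {vendor_a:.3f}, B: {vendor_b:.3f}  (higher is better value)")
  ```

  In this toy case vendor B scores higher on raw technical merit, but vendor A delivers more merit per dollar, which is the point of a best-value rather than lowest-price evaluation.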

  11. Management of Computing Resources
  • Manage allocation processes: universities (CHAP), CSL (CSLAP), NCAR (NCAR Executive Committee)
  • Work with management and users to deliver resources based on allocations: use cut-offs, queue priorities, and allocation groupings (a sketch follows this list)
  • Resolve allocation/charging problems; set up special projects and campaigns
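  A minimal sketch of allocation accounting with a usage cut-off and queue-priority demotion. The charging formula (processors times wallclock) and the thresholds are hypothetical, not CISL's actual policy.

  ```python
  # Hypothetical allocation accounting: charge jobs against an award and
  # adjust queue priority as the project approaches its cut-off.
  from dataclasses import dataclass

  @dataclass
  class Project:
      name: str
      allocation_hours: float   # awarded processor-hours
      used_hours: float = 0.0

      def charge(self, wallclock_hours: float, processors: int):
          """Charge a job at processors x wallclock (hypothetical formula)."""
          self.used_hours += wallclock_hours * processors

      def queue_priority(self) -> str:
          used = self.used_hours / self.allocation_hours
          if used >= 1.0:
              return "blocked"   # cut-off: allocation exhausted
          return "normal" if used < 0.9 else "low"  # demote near the cut-off

  p = Project("hypothetical-univ-project", allocation_hours=100_000)
  p.charge(wallclock_hours=24, processors=4_064)   # 97,536 hours charged
  print(p.queue_priority())                        # "low": 97.5% of award used
  ```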

  12. Resource Management Challenges
  • Creating incentives for users to manage their mass storage growth, so that we do not have to resort to quotas or some other system that does not respect scientific priorities
  • Keeping the supercomputers busy throughout the year while providing reasonable turnaround for all users
  • Turning away university researchers without NSF funding
