High Performance Computing and Computational Science at AHPCC



High Performance Computing and Computational Science at AHPCC

Brian T. Smith

Professor, Department of Computer Science

Director, Albuquerque High Performance Computing Center (AHPCC)



High Performance Computing Education & Research Center

A UNM strategic center to initiate and focus activities in high-performance computing technology, research, and education.

The mission is accomplished through two centers:

AHPCC: Established in 1994 as a training and resource center for MHPCC; now a national supercomputing center within the NSF National Computational Science Alliance, serving as an academic center of excellence for research and education in computational science.

MHPCC: Established in 1994 under the auspices of the DoD Modernization Program, through a Cooperative Agreement between the University of New Mexico and the Air Force Research Laboratory. Provides production computing cycles for DoD researchers.



High Performance Computing Education & Research Center

Executive Director: Frank L. Gilfeather
Co-Directors: Brian T. Smith, John S. Sobolewski
Associate Director: Ernest D. Herrera

Maui High Performance Computing Center
Director: Eugene Bal
Associate Directors: Gary Jensen, Steve Karwoski, Margaret Lewis

Albuquerque High Performance Computing Center
Director: Brian T. Smith
Associate Directors: Susan R. Atlas, Robert A. Ballance, Ernest D. Herrera



Supercomputing Capabilities

AHPCC

  • Ranks in the top 5 US academic institutions in supercomputing power (effective 5/00)

  • A member of the NSF Alliance and a node on the National Technology Grid

  • 60 associated faculty, staff, postdocs and students

  • Computing systems

    • 512 processor IBM PIII Linux Supercluster (5/00)

    • 128 processor Alta PII Linux Supercluster

    • 32 processor VA Linux PIII Cluster

    • Vista Azul - advanced IBM hybrid system

    • 8 node SGI Origin 2000

    • 16 processor Alta PII Linux development cluster

    • Visualization laboratory

  • Over 500 academic, industry, and government users

MHPCC

  • One of the top 30 supercomputing centers in the world

  • A DoD Shared Center—a node on the National Technology Grid

  • 65 staff members

  • Computing systems

    • 699 node IBM SP

    • 400 GFLOPS computing power

    • 167 GB total memory

    • 2.1 TB internal disk storage

    • 1.3 TB external disk storage

    • 20 TB mass storage

    • Visualization laboratory

  • Over 1,100 government, industry, and academic users

Both centers support a significant number of users in academia and government, particularly the DoD and NSF, and are key players in the national supercomputer community.



LosLobos & Roadrunner Superclusters



Research Environment at the AHPCC

  • 38 Graduate Research Assistants

  • 16 Associated Faculty (Physics & Astronomy, Chemistry, Biology, Mechanical Engineering, Computer Science, EECE)

  • 6 Permanent Research Staff

  • 6 Visiting Scientists, Postdoctoral Fellows

  • Undergraduate Workstudy Students; NSF REU

  • Research Facilities: Supercomputers, High Performance Clusters, Workstations, Workshop Area, Seminar Room and Access Grid Studio

  • Educational Programs: SEC Program, Workshops, AHPCC Seminar Series, Alliance Activities, Native American Outreach, NSF AMO Summer School, UNM Course Laboratories



Computer Systems Research

To anticipate, develop, deploy, and support high-performance computing technology and systems.

  • Superclusters

  • Open computing tools

  • Grid-Based Computing

  • Visualization



Superclusters: Beyond Beowulf

  • System design and integration

    • Off-the-shelf symmetric multiprocessor subsystems

    • High-speed interconnects

    • Terabyte hierarchical mass storage systems

  • Research Areas

    • Networking: Portals

    • Hybrid (SMP) programming models

    • Cluster Management: Maui Scheduler, PBS

    • Condor high-throughput computing
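The slides name PBS and the Maui Scheduler for cluster management and mention hybrid (SMP) programming models. As a minimal sketch of how the two might meet in practice, here is a hypothetical PBS batch script; the job name, queue limits, process counts, and binary name are illustrative assumptions, not taken from the slides.

```shell
#!/bin/sh
# Hypothetical PBS batch script for a Linux supercluster.
# The #PBS lines are standard PBS directives read by the scheduler.
#PBS -N hybrid_job          # job name (placeholder)
#PBS -l nodes=4:ppn=2       # 4 dual-processor nodes, 8 CPUs total
#PBS -l walltime=01:00:00   # one-hour limit enforced by the scheduler
#PBS -j oe                  # merge stdout and stderr into one file

cd "$PBS_O_WORKDIR"         # PBS starts jobs in $HOME by default

# Hybrid model sketch: one MPI process per node, two OpenMP threads
# per process to use both CPUs of each SMP node.
OMP_NUM_THREADS=2 mpirun -np 4 ./my_hybrid_app
```

On such a system the Maui Scheduler would sit on top of PBS, deciding when queued jobs like this one actually start.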



Grid-Based Computing: Sharing Resources Across the Matrix

  • Computational Grid: People to Machines, Machines to Machines

    • Globus

    • Virtual Machine Room (VMR)

    • Wireless networking

  • Access Grid: People to People and Machines

    • Telemedicine

    • Visualization

    • Human Factors

    • Production Studio Deployment

  • Education & Training



TOUCH Telehealth Virtual Collaboratory
Dr. Dale Alverson (UNM), Dr. Richard Friedman (UH)

Access Grid multi-group Internet video conferencing for distance education

Virtual Reality training environment

3D image/model manipulation and simulation environment using large, remote datasets

Problem-based learning

Figure: A user and their “avatar” in the BioSIMMER environment (brain injury patient).




Scientific Visualization & Computational Environments

  • Visualization Laboratory – Homunculus Project

  • “Flatland” Virtual Reality Environment

  • Vista Azul Scalable Graphics Engine – parallel rendering

  • CoMeT Computational Mechanics Toolkit

  • Scientific Visualization Research



Science and Engineering Research

Development of advanced algorithms and parallel software for the application of high-performance computing technology to problems at the forefront of science and engineering.

  • Optics and Imaging

  • Computational Physics

  • Computational Fluid Dynamics

  • Ecological Modeling

  • Chemistry and Materials

  • Computational Biology



Quantum Optics • Optics & Imaging

  • Image Processing and Astrophysical Observation Techniques for Astronomy and Space Surveillance Applications (D. Tyler, S. Prasad, W. Junor, R. Plemmons, T. Schulz, J. Green, J. Seldin, P. Alsing)

  • Quantum Computing and Quantum Optics (I. Deutsch, C. Caves, P. Alsing, G. Brennan, J. Grondalski, S. Ghose, P. Jessen)

  • Optical Pulse Interactions with Nonlinear Materials (P. Bennett)



Quantum Computing

Quantum Optical Lattices

By shining counter-propagating laser beams, “crystals of light” can be formed (egg-crate structures) which can be used to trap neutral atoms, e.g. cesium. By changing the phase of the light, atoms can be brought together (shifting the egg-crate minima) and made to interact via an additional catalysis laser. The interacting atoms form qubits, and the shifting egg-crate potentials act as a computer bus.

Prof. Ivan Deutsch and Prof. Carl Caves (Physics and Astronomy); Dr. Paul Alsing (AHPCC)
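The standing-wave "egg crate" described above can be sketched numerically: two counter-propagating beams produce a potential proportional to cos²(kx), and shifting the relative phase translates the minima, carrying trapped atoms with them. The functional form is the standard optical-lattice result; the parameter values below are illustrative assumptions, not taken from the slide.

```python
import math

def lattice_potential(x, v0=1.0, k=1.0, phi=0.0):
    """Standing-wave potential from two counter-propagating beams:
    V(x) = V0 * cos^2(k*x + phi/2).  V0, k illustrative; shifting the
    relative phase phi translates the whole 'egg crate'."""
    return v0 * math.cos(k * x + phi / 2) ** 2

def nearest_minimum(phi, k=1.0):
    """Minima sit where cos(k*x + phi/2) = 0, i.e. x = (pi/2 - phi/2)/k."""
    return (math.pi / 2 - phi / 2) / k

# Changing the phase moves the trap minimum: this is the mechanism the
# slide calls a "computer bus", bringing neighboring atoms together.
x0 = nearest_minimum(0.0)        # minimum with no phase shift
x1 = nearest_minimum(math.pi)    # minimum after a phase shift of pi
```

Sweeping `phi` continuously would slide an atom from `x0` to `x1`, which is how two atoms in adjacent wells can be brought close enough to interact.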



Chemistry & Materials

  • Defect Centers in a-SiO2 Using Computational Chemistry Techniques (S.P. Karna, A.C. Pineda)

  • Defects in Al and Cu ULSI Interconnects — Materials/Solid State Physics (S.R. Atlas, S.M. Valone, L.A. Cano)

  • Electron Transfer in Dendrimers (T.S. Elicker, D.G. Evans)

  • Dynamics at Metal Surfaces (D. Xie, H. Guo)

  • Molecular Dynamics of Proteins in Solution (P. Alsing, E. Coutsias)

  • Atom-Ion Collisions (P. Alsing, M. Riley, A. Hira)



Figure: Schematic of a semiconductor device cross-section, showing the gate (Vgate), source, drain (Vdrain), the a-SiO2 gate dielectric, n-Si source/drain regions, p-Si substrate, and bias voltage (Vbias).

Defects in SiO2

Dr. Andrew Pineda, AHPCC

Dr. Shashi Karna, AFRL

  • Defects are detected experimentally via EPR.

  • Quantum mechanical (Hartree-Fock) calculations provide detailed information on candidate structures and formation mechanisms.

  • Same computational techniques are used to model active sites of biological molecules in rational drug design.

  • Computations involve hundreds of electrons and dozens of atoms: hundreds of CPU hours on 8–32 processors of a supercomputer.

a-SiO2 is the dielectric (insulator) material used in today’s semiconductor devices. Defect centers are created during manufacture and by irradiation. They are believed to be the primary charge traps in semiconductors, degrading current/voltage performance and sometimes destroying the device.
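A back-of-envelope sketch of why such Hartree-Fock runs need supercomputer time: the dominant cost is the two-electron repulsion integrals, whose count grows as roughly N⁴ in the number of basis functions N. The counting formula below is the standard 8-fold-symmetry result; the basis-set sizes are illustrative, not from the slide.

```python
def n_two_electron_integrals(n_basis):
    """Unique two-electron integrals (ij|kl) under 8-fold permutational
    symmetry: with M = N(N+1)/2 unique (ij) pairs, the count is
    M(M+1)/2, i.e. ~N^4/8 for N basis functions."""
    n_pair = n_basis * (n_basis + 1) // 2      # unique (ij) pairs
    return n_pair * (n_pair + 1) // 2          # unique pairs of pairs

# Doubling the basis set multiplies the integral count by ~16, which is
# why a defect cluster of dozens of atoms already demands parallel runs.
small = n_two_electron_integrals(100)   # illustrative basis size
large = n_two_electron_integrals(200)
ratio = large / small                   # approaches 16 for large N
```

This quartic growth is also why the same machinery, applied to active sites of biological molecules in drug design, scales poorly without parallelism.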



Molecular Dynamics simulation of the role of water in protein folding

Dr. Paul M. Alsing (AHPCC); Prof. Evangelos Coutsias (Mathematics & Statistics); Prof. Jack McIver (Physics and Astronomy)



Visualization of large data sets from molecular dynamics simulations in Flatland



Computational Genomics

  • Systems design and management

    • Storage and manipulation of large microarray and patient datasets

    • Database/annotation design

    • Firewall to protect patient privacy

    • Customized hierarchical mass storage system

  • Visualization

  • Mathematical and computational analysis

    • Molecular classification: clustering and neighborhood analysis

    • Identification of genetic correlations in microarray data

    • Collaboration between biologists, medical scientists, mathematicians, and computational scientists will be essential
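The "neighborhood analysis" mentioned above can be sketched in a few lines: rank genes by the correlation of their expression profiles with a query profile, and keep the nearest neighbors. The gene names and expression values below are toy placeholders, not real microarray data.

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation between two expression profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def neighborhood(profiles, query, k=2):
    """Rank genes by correlation with a query profile and return the
    k nearest neighbors (the core of neighborhood analysis)."""
    ranked = sorted(profiles, key=lambda g: pearson(profiles[g], query),
                    reverse=True)
    return ranked[:k]

# Toy microarray: 4 genes x 4 samples (values illustrative only).
genes = {
    "geneA": [1.0, 2.0, 3.0, 4.0],   # tracks the query exactly
    "geneB": [4.0, 3.0, 2.0, 1.0],   # anti-correlated
    "geneC": [1.1, 2.1, 2.9, 4.2],   # tracks the query with noise
    "geneD": [2.0, 2.0, 2.1, 2.0],   # essentially flat
}
query = [1.0, 2.0, 3.0, 4.0]
top = neighborhood(genes, query)     # ["geneA", "geneC"]
```

On real microarray data the query would typically encode a class distinction (e.g. tumor vs. normal samples), and the same ranking identifies the genes most correlated with it.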



Computer Science Research

  • Parallel Algorithms and Numerical Mathematics (D.A. Bader, P. Bennett, P. Alsing, B. Minhas)

  • Condor Flocking and Turing Cluster — High Throughput Computing (Z. Chen, B.T. Smith, X. Wang, M. Livny, C.D. Maestas)

  • Scalable Systems Lab (A.B. Maccabe)

  • Research Clusters: Black Bear, Vista Azul, Roadrunner (R. Ballance, P. Kovatch, J.R. Barnes, C. Maestas) — Programming Paradigms for SMP Architectures; Code Development and Optimization; Cluster Management


Activities

Research

  • High Performance Computing

  • Visualization

  • Modeling and Simulation

  • Image Processing

  • Computational Mechanics

  • Computational Physics

  • Computational Chemistry

  • Computational Biology

Providing, Developing and Implementing Services

  • Computing and visualization

  • Distributed computing scheduling

  • Collaborative interactive environments for researchers and training

  • Education and Outreach

    • Graduate-level certificate program for students and professionals at the federal labs

    • Native American education and training

    • Hawaiian schools

    • NCSA activities—educational toolkits

  • Training in High Performance Computing and Applications

    • Regional industry users

    • Federal lab users

    • Students and faculty—local and national



    R&D Projects

    Area                    Project
    Visualization           Flatland
    Clusters                SMP Programming
    Networking              Portals, NGIO
    Collaboration           Access Grid Tools
    Computational Modeling  CoMeT
    Cluster Management      Maui Scheduler



    Production Systems

    • Condor

      • Distributed Workstations

      • Remote Job Submission and Management

    • Roadrunner

      • Alliance Shared Computational Resource

        • Production Linux Cluster from Alta Technology Corporation

        • 64 Nodes, 128 Processors, Myrinet Networking
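The Condor entry above covers distributed workstations and remote job submission; a minimal submit-description sketch of that workflow is shown below. The executable and file names are hypothetical placeholders, while the commands themselves (`universe`, `queue`, the `$(Process)` macro) are standard Condor submit-file syntax.

```
# Hypothetical Condor submit description file: queue ten instances of a
# job, one per input file, to run on idle workstations in the pool.
universe    = vanilla
executable  = analyze            # placeholder program name
arguments   = input.$(Process)   # $(Process) expands to 0..9
output      = out.$(Process)
error       = err.$(Process)
log         = analyze.log
queue 10
```

Condor then matches each queued instance to an idle machine, which is what makes the "flocking" of jobs across distributed workstations a high-throughput resource.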



    Research Systems

    • Black Bear

      • Linux Cluster Provided by VA Linux Systems

        • 16 Nodes, 32 Processors, Myrinet Network

    • Vista Azul

      • Hybrid IBM Linux/SP with in situ Graphics

        • Linux: 8 Nodes, 32 Processors, Graphics-Enabled

        • SP: 8 Nodes, 32 Processors

        • 360 GB Storage, Shared Graphics Framebuffer

