
NAMD Development Goals


Presentation Transcript


  1. NAMD Development Goals
  L.V. (Sanjay) Kale, Professor, Dept. of Computer Science
  http://www.ks.uiuc.edu/Research/namd/

  2. NAMD Vision
  • Make NAMD a widely used MD program:
    • for large molecular systems,
    • scaling from PCs and clusters to large parallel machines,
    • for interactive molecular dynamics.
  • Goals:
    • High performance
    • Ease of use: configuration and running
    • Ease of modification (for us and for advanced users)
    • Maximize reuse of communication and control patterns
    • Push parallel complexity down into the Charm++ runtime
    • Incorporation of features needed by scientists

  3. NAMD 3 New Features
  • Software goal:
    • Modular architecture to permit reuse and extensibility
  • Scientific/numeric modules:
    • Implicit solvent models (e.g., generalized Born)
    • Replica exchange (e.g., 10 replicas on 16 processors)
    • Self-consistent polarizability with a (sequential) CPU penalty of less than 100%
    • Hybrid quantum/classical mechanics
    • Fast nonperiodic (and periodic) electrostatics using multiple-grid methods
    • A Langevin integrator that permits larger time steps (by being exact for constant forces)
    • An integrator module that computes the shadow energy
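  As an aside on the replica-exchange module listed above, the following is a minimal C++ sketch of the standard Metropolis swap criterion used in temperature replica exchange; the function name and argument layout are hypothetical and are not taken from NAMD.

```cpp
#include <cmath>
#include <random>

// Minimal sketch of the Metropolis acceptance test used in temperature
// replica exchange. beta = 1/(kB*T); energy is the current potential
// energy of each replica. Names are illustrative only.
bool acceptSwap(double betaI, double energyI,
                double betaJ, double energyJ,
                std::mt19937 &rng) {
    // Detailed-balance criterion: accept with probability
    // min(1, exp((betaI - betaJ) * (energyI - energyJ))).
    const double delta = (betaI - betaJ) * (energyI - energyJ);
    if (delta >= 0.0) return true;              // always accept favorable swaps
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < std::exp(delta);            // accept unfavorable swaps stochastically
}
```

  Each pair of neighboring replicas would apply this test periodically and exchange temperatures (or configurations) when the swap is accepted.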

  4. Design
  • NAMD 3 will be a major rewrite of NAMD
  • Incorporate lessons learned in the past years
  • Use modern features of Charm++
  • Refactor the software for modularity
  • Restructure to support planned features
  • Algorithms that scale to even larger machines

  5. Programmability
  • NAMD 3 scientific modules:
    • Forces, integration, steering, analysis
    • Keep code with a common goal together
    • Add new features without touching old code
  • Parallel decomposition framework:
    • Support common scientific algorithm patterns
    • Avoid duplicating services for each algorithm
    • Start with the NAMD 2 architecture (but not its code)

  6. [Architecture diagram] Layers, from top to bottom:
  • MDAPI
  • New science modules: replica exchange, QM, implicit solvents, polarizable force field
  • Force calculation and integration: bonded force calculation, pairwise force calculation, PME, integration
  • NAMD core
  • Charm++ modules: FFT, fault tolerance, grid scheduling, collective communication, load balancer
  • Core Charm++
  • Machines: clusters, Lemieux, …, TeraGrid

  7. MDAPI Modular Interface
  • Separate the "front end" from the modular "engine"
    • Same program, or connected over a network or grid
  • Dynamic discovery of engine capabilities; no limitations imposed by the interface
  • Front ends: NAMD 2, NAMD 3, Amber, CHARMM, VMD
  • Engines: NAMD 2, NAMD 3, MINDY
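  To make the front-end/engine split concrete, here is a minimal C++ sketch of what a capability-discovering engine interface could look like. The class and method names are hypothetical illustrations, not the actual MDAPI.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of an MDAPI-style engine interface: the front end
// (NAMD, VMD, CHARMM, ...) talks only to this abstract class, whether the
// engine runs in the same program or across a network/grid connection.
class MDEngine {
public:
    virtual ~MDEngine() = default;

    // Dynamic capability discovery: the front end asks what the engine
    // supports instead of the interface hard-coding a feature list.
    virtual std::vector<std::string> capabilities() const = 0;
    virtual bool supports(const std::string &feature) const = 0;

    // Core simulation control exposed to any front end.
    virtual void loadSystem(const std::string &structureFile,
                            const std::string &parameterFile) = 0;
    virtual void run(int numSteps) = 0;
    virtual std::vector<double> coordinates() const = 0;  // for analysis/steering
};
```

  A front end would query, say, supports("replica-exchange") before enabling the corresponding commands, so new engine features never require changes to the interface itself.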

  8. Terascale Biology and Resources
  • PSC LeMieux
  • TeraGrid
  • Cray X1
  • NCSA Tungsten
  • ASCI Purple
  • RIKEN MDGRAPE
  • Red Storm (Thor's Hammer)

  9. NAMD on Charm++
  • Active computer science collaboration (since 1992)
  • Object array: a collection of chares,
    • with a single global name for the collection, and
    • each member addressed by an index
  • Mapping of element objects to processors is handled by the system
  [Figure: the user's view of the array, A[0] A[1] A[2] A[3] A[..], versus the system view, in which elements such as A[0] and A[3] are placed on particular processors]
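  For readers unfamiliar with Charm++, the fragment below sketches a 1D chare array of the kind described above: one global proxy names the whole collection, elements are addressed by index, and the runtime decides which processor each element lives on. The module and method names are illustrative, not NAMD source; the matching .ci interface declaration is shown only as a comment.

```cpp
// Illustrative Charm++ chare array. The .ci interface file would declare:
//   mainmodule patch {
//     array [1D] Patch { entry Patch(); entry void compute(int step); };
//   };
#include "patch.decl.h"   // generated from the .ci file by charmc

class Patch : public CBase_Patch {
public:
    Patch() {
        // thisIndex identifies this element within the globally named array;
        // the runtime maps elements to processors and may migrate them.
        CkPrintf("Patch %d created on PE %d\n", thisIndex, CkMyPe());
    }
    Patch(CkMigrateMessage *m) {}   // required for migration

    void compute(int step) {
        // ... force evaluation for the atoms owned by this patch ...
    }
};

// Creation from a main chare: one proxy names the whole collection.
//   CProxy_Patch patches = CProxy_Patch::ckNew(numPatches);
//   patches[3].compute(0);   // address a single element by index
//   patches.compute(0);      // or broadcast to every element

#include "patch.def.h"        // generated definitions, included once per module
```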

  10. NAMD 3 Features Based on Charm++
  • Adaptive load balancing
  • Optimized communication
    • Persistent communication; optimized concurrent multicast/reduction
  • Flexible, tuned parallel FFT libraries
  • Automatic checkpointing
    • Ability to change the number of processors
    • Scheduling on the grid
  • Fault tolerance
    • Fully automated restart
    • Survive loss of a node
  • Scaling to large machines
    • Fine-grained parallelism for PME and for bonded and nonbonded force evaluations
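  The load-balancing and checkpoint/restart features above rely on Charm++'s ability to serialize and migrate objects. The sketch below shows the usual pattern: a pup() routine describing the object's state, plus AtSync()/ResumeFromSync() calls that hand control to the load balancer. It is a minimal sketch; the class, entry methods, and state are hypothetical, not NAMD code.

```cpp
#include "patch.decl.h"   // hypothetical module, as in the earlier sketch
#include "pup_stl.h"      // PUP support for STL containers
#include <vector>

class Patch : public CBase_Patch {
    std::vector<double> coords;   // per-patch state that must migrate intact
    int step;

public:
    Patch() : step(0) {
        usesAtSync = true;        // participate in measurement-based load balancing
    }
    Patch(CkMigrateMessage *m) {}

    // PUP (pack/unpack) describes the object's state once; the same routine
    // drives migration, checkpointing, and automated restart.
    void pup(PUP::er &p) {
        CBase_Patch::pup(p);      // pup the base array-element state
        p | coords;
        p | step;
    }

    void doStep() {
        // ... compute forces, integrate one step ...
        ++step;
        if (step % 100 == 0)
            AtSync();                        // pause; the runtime may migrate this object
        else
            thisProxy[thisIndex].doStep();   // continue message-driven execution
    }

    void ResumeFromSync() {
        thisProxy[thisIndex].doStep();       // called by the runtime after load balancing
    }
};
```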

  11. Efficient Parallelization for IMD
  • Characteristics:
    • Limited parallelism on small systems
    • Real-time response needed
  • Fine-grained parallelization:
    • Improve speedups on 4K-30K atom systems
  • Time/step goal:
    • Currently 0.2 s/step for BrH on a single processor (P4, 1.7 GHz)
    • Target: 0.003 s/step on 64 processors of a faster machine, i.e., 20 picoseconds per minute
  • Flexible use of clusters:
    • Migrating jobs (shrink/expand)
    • Better utilization when a machine is otherwise idle
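  The 20 ps/minute figure follows directly from the target step time, assuming the common 1 fs MD timestep (an assumption; the slide does not state the timestep):

$$
\frac{60\ \text{s/min}}{0.003\ \text{s/step}} = 20{,}000\ \text{steps/min},
\qquad
20{,}000\ \text{steps/min} \times 1\ \text{fs/step} = 20\ \text{ps/min}.
$$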

  12. Integration with CHARMM/Amber?
  • Goal: NAMD as the parallel simulation engine for CHARMM/Amber
  • Generate input files in CHARMM/Amber
    • NAMD must read the native file formats
  • Run with NAMD on a parallel computer
    • Need to use equivalent algorithms
  • Analyze the simulation in CHARMM/Amber
    • NAMD must generate native file formats

  13. Proud of Programmers
