
Cactus: A Framework for Numerical Relativity


Presentation Transcript


  1. Cactus: A Framework for Numerical Relativity. Gabrielle Allen, Max Planck Institute for Gravitational Physics (Albert Einstein Institute)

  2. What is Cactus? CACTUS is a freely available, modular, portable and manageable environment for collaboratively developing parallel, high-performance multi-dimensional simulations

  3. Cactus structure [diagram]: the core "Flesh" (ANSI C) provides parameters, grid variables, scheduling, error handling, the make system, input/output and extensible APIs; plug-in "Thorns" (modules in Fortran/C/C++) supply the driver, remote steering, equations of state, black holes, interpolation, boundary conditions, coordinates, the SOR solver, multigrid, and wave evolvers.

  4. Cactus in a Nutshell
  • Cactus acts as the "main" routine of your code: it takes care of e.g. parallelism, IO, checkpointing and parameter file parsing for you (if you want), and provides computational infrastructure such as reduction operators, interpolators, coordinates, elliptic solvers, …
  • Everything Cactus "does" is contained in thorns (modules), which you need to compile in. If you need to use interpolation, you need to find and add a thorn which does interpolation.
  • It is very extensible: you can add your own interpolators, IO methods etc.
  • Not all the computational infrastructure you need is necessarily there, but hopefully all of the APIs etc. are there to allow you to add anything which is missing.
  • We're trying to provide an easy-to-use environment for collaborative, high-performance computing, from easy compilation on any machine to easy visualization of your output data.
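  To make "everything lives in thorns" concrete, here is a minimal sketch (not from the talk) of a thorn routine in C; the thorn and routine names are made up, but the includes and macros are the standard flesh API:

        #include "cctk.h"
        #include "cctk_Arguments.h"
        #include "cctk_Parameters.h"

        /* Scheduled from the thorn's schedule.ccl; the flesh hands us
           grid variables, grid sizes and parameters via CCTK_ARGUMENTS. */
        void WaveMon_Check(CCTK_ARGUMENTS)
        {
          DECLARE_CCTK_ARGUMENTS;
          DECLARE_CCTK_PARAMETERS;

          CCTK_INFO("The flesh handled startup, parallelism and parameter parsing");
        }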

  5. Modularity: "Plug-and-play" Executables [diagram]
  Computational thorns: PUGH, PAGH, Carpet, CartGrid3D, Cartoon2D, Time, Boundary, EllSOR, EllBase, IOFlexIO, IOASCII, IOHDF5, IOJpeg, IOUtil, IOBasic, HTTPD, HTTPDExtra, TGRPETSc
  Numerical relativity thorns: ADMConstraint, IDAxiBrillBH, HLL, PsiKadelia, Zorro, AHFinder, Extract, Maximal, ADM, SimpleExcision, ADM_BSSN, FishEye, ConfHyp, IDAnalyticBH, BAM_Elliptic, LegoExcision, IDLinearWaves, IDBrillWaves
  Mix and match to build different executables: an ISCO run, ISCO with AMR??, ISCO with excision, a faster elliptic solver??

  6. Einstein Toolkit: CactusEinstein
  Infrastructure: ADMBase, StaticConformal, SpaceMask, ADMCoupling, ADMMacros, CoordGauge
  Initial data: IDSimple, IDAnalyticBH, IDAxiBrillBH, IDBrillData, IDLinearWaves
  Evolution: ADM, EvolSimple, Maximal
  Analysis: ADMConstraints, ADMAnalysis, Extract, AHFinder, TimeGeodesic, PsiKadelia, IOAHFinderHDF
  Other thorns are available from other groups/individuals, e.g. a few from AEI:
  Excision: LegoExcision, SimpleExcision
  AEIThorns: ADM_BSSN, BAM_Elliptic, BAM_VecLap
  PerturbedBH: DistortedBHIVP, IDAxiOddBrillBH, RotatingDBHIVP

  7. Computational Toolkit
  CactusBase: Boundary, IOUtil, IOBasic, CartGrid3D, IOASCII, Time
  CactusConnect: HTTPD, HTTPDExtra
  CactusElliptic: EllBase, EllPETSc, EllSOR
  CactusPUGH: PUGH, PUGHInterp, PUGHSlab, PUGHReduce
  CactusPUGHIO: IOFlexIO, IOHDF5Util, IOHDF5, IOStreamedHDF5, IsoSurfacer, IOPanda
  CactusIO: IOJpeg
  CactusExternal: FlexIO, jpeg6b
  CactusUtils: NaNChecker

  8. Computational Toolkit (2)
  CactusBench: BenchADM
  CactusTest: TestArrays, TestComplex, TestCoordinates, TestInclude1, TestInclude2, TestInterp, TestReduce, TestStrings, TestTimers
  CactusWave: IDScalarWave, IDScalarWaveC, IDScalarWaveCXX, WaveBinarySource, WaveToyC, WaveToyCXX, WaveToyF77, WaveToyF90, WaveToyFreeF90
  CactusExamples: HelloWorld, WaveToy1DF77, WaveToy2DF77, FleshInfo, TimerInfo

  9. What Numerical Relativists Need From Their Software …
  Primarily, it should enable the physics they want to do, and that means it must be:
  • Collaborative
  • Portable
  • Large scale! High throughput
  • Easy to understand and interpret results
  • Supported and developed
  • Able to produce believable results
  • Flexible
  • Reproducible
  • Equipped with generic computational toolkits
  • Able to incorporate other packages/technologies
  • Easy to use/program

  10. Large Scale Requirements
  • A typical run (but we want bigger!) needs 45 GB of memory: 171 grid functions on a 400x400x200 grid
  • A typical run makes 3000 iterations with 6000 flops per grid point: 600 teraflops!!
  • Output of just one grid function at just one time step is 256 MB (320 GB for 10 GF every 50 time steps)
  • One simulation takes longer than queue times: need 10-50 hours
  • Computing time is a valuable resource: one simulation costs 2500 to 12500 SUs, so we need to make each simulation count
  ⇒ Parallelism; optimization; parallel/fast IO, data management, visualization; checkpointing; interactive monitoring, steering, visualization, portals
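  These figures follow directly from the grid size (a quick check, not on the original slide):

        memory: 171 GFs x (400 x 400 x 200) points x 8 bytes   ≈ 43.8 GB  ≈ 45 GB
        work:   3000 iterations x 32x10^6 points x 6000 flops  ≈ 5.8x10^14 ≈ 600 tera-operations in total
        output: (400 x 400 x 200) points x 8 bytes             = 256 MB per grid function per time step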

  11. Produce Believable Results • Continually test with known/validated solutions • Code changes • Using new thorns • Different machines • Different numbers of processors • Open community: • The more people using your code, the better tested it will be • Open Source … not black boxes • Source code validates physical results which anyone can reproduce • Diverse applications: • Modular structure helps construct generic thorns for Black Holes, Neutron Stars, Waves, … • Other applications, …

  12. Portability
  Recent group resources: Origin 2000 (NCSA), Linux cluster (NCSA), Compaq Alpha (PSC), Linux cluster (AHPCC), Origin 2000 (AEI), Hitachi SR-8000 (LRZ), IBM SP2 (NERSC), institute workstations, Linux laptops, Ed's Mac: very different architectures, operating systems, compilers and MPI implementations
  • Develop and run on many different architectures (laptop, workstations, supercomputers)
  • Set up and get going quickly (new computer, visits, new job, wherever you get SUs)
  • Use/buy the most economical resource (e.g. our new supercomputer)
  • Make immediate use of free (friendly-user) resources (baldur, loslobos, tscini, posic)
  • Tools and infrastructure are also important
  • Portability is crucial for "Grid Computing"

  13. Easy to Use and Program • Program in favorite language (C,C++,F90,F77) • Hidden parallelism • Computational Toolkits • Good error, warning, info reporting • Modularity !! Transparent interfaces with other modules • Extensive parameter checking • Work in the same way on different machines • Interface with favorite visualization package • Documentation

  14. Cactus User Community: Using and Developing Physics Thorns
  Numerical relativity: AEI, Southampton, Wash U, RIKEN, Goddard, Penn State, Thessaloniki, Tuebingen, TAC, SISSA, Portsmouth, EU Astrophysics Network, NASA Neutron Star Grand Challenge, and many others who mail us at cactusmaint
  Other applications: chemical engineering (U. Kansas), climate modeling (NASA, +), bio-informatics (Canada), geophysics (Stanford), early universe (LBL), plasma physics (Princeton), astrophysics (Zeus)

  15. Using Cactus
  If your existing code has this kind of structure:
  • split into subroutines
  • clear argument lists
  it should be relatively straightforward to put into Cactus. Cactus will take care of the parameter file, and (hopefully) the coordinate system and IO, and if you're lucky you can take someone else's Analysis, Evolution, Initial Data, … modules.

        program YourCode
          call ReadParameterFile
          call SetUpCoordSystem
          call SetUpInitialData
          call OutputInitialData
          do it=1,niterations
            call EvolveMyData
            call AnalyseData
            call OutputData
          end do
        end
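  In Cactus the hand-written main loop disappears: each call becomes a routine registered in one of the flesh's schedule bins. A sketch of what the corresponding schedule.ccl might look like, reusing the routine names from the pseudocode above (CCTK_INITIAL, CCTK_EVOL and CCTK_ANALYSIS are standard bins; the thorn itself is hypothetical):

        schedule SetUpInitialData at CCTK_INITIAL
        {
          LANG: Fortran
        } "Set up initial data"

        schedule EvolveMyData at CCTK_EVOL
        {
          LANG: Fortran
        } "Evolve my data one time step"

        schedule AnalyseData at CCTK_ANALYSIS
        {
          LANG: Fortran
        } "Analyse the evolved data"

  Output then needs no routine of your own: the IO thorns handle it, driven by parameters (see slide 20).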

  16. Thorn Architecture
  A thorn (e.g. EvolveMyData) consists of: configuration files, source code (Fortran/C/C++ routines), make information, parameter files and test suites, and documentation!
  Main question: what is the best way to divide a code up into thorns?
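  For orientation, a thorn's standard on-disk layout (file names follow Cactus conventions; the arrangement name here is made up):

        arrangements/MyArrangement/EvolveMyData/
          interface.ccl          # grid variables, implements/inherits
          param.ccl              # parameters, ranges, defaults
          schedule.ccl           # when and how routines run
          src/                   # Fortran/C/C++ routines
          src/make.code.defn     # make information: which files to compile
          test/                  # parameter files and test suites
          doc/documentation.tex  # thorn documentation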

  17. ADMConstraints: interface.ccl

        # Interface definition for thorn ADMConstraints
        implements: admconstraints
        inherits: ADMBase, StaticConformal, SpaceMask, grid

        USES INCLUDE: CalcTmunu.inc
        USES INCLUDE: CalcTmunu_temps.inc
        USES INCLUDE: CalcTmunu_rfr.inc

        private:

        real hamiltonian type=GF
        {
          ham
        } "Hamiltonian constraint"

        real momentum type=GF
        {
          momx, momy, momz
        } "Momentum constraints"

  18. ADMConstraints: schedule.ccl

        schedule ADMConstraints_ParamCheck at CCTK_PARAMCHECK
        {
          LANG: C
        } "Check that we can deal with this metric_type and have enough conformal derivatives"

        schedule ADMConstraint_InitSymBound at CCTK_BASEGRID
        {
          LANG: Fortran
        } "Register GF symmetries for ADM Constraints"

        schedule ADMConstraints at CCTK_ANALYSIS
        {
          LANG: Fortran
          STORAGE: hamiltonian,momentum
          TRIGGERS: hamiltonian,momentum
        } "Evaluate ADM constraints"

  19. ADMConstraints: param.ccl

        # Parameter definitions for thorn ADMConstraints

        shares: ADMBase
        USES KEYWORD metric_type

        shares: StaticConformal
        USES KEYWORD conformal_storage

        private:

        BOOLEAN constraints_persist "Keep storage of ham and mom* around for use in special tricks?"
        {
        } "no"

        BOOLEAN constraint_communication "If yes synchronise constraints"
        {
        } "no"

        KEYWORD bound "Which boundary condition to apply"
        {
          "flat"   :: "Flat (copy) boundary condition"
          "static" :: "Static (don't do anything) boundary condition"
        } "flat"

        BOOLEAN cartoon "Cartoon BC"
        {
        } "no"

        BOOLEAN excise "Use excision?"
        {
        } "no"

  20. ADMConstraints: Using it

  MyRun.par:

        ActiveThorns = "… … ADMConstraints … …"

        IOASCII::out3d_every = 10
        IOASCII::out3d_vars  = "… ADMConstraints::hamiltonian …"

  21. What do you get with Cactus?
  • Parameters: file parser, parameter ranges and checking
  • Configurable make system
  • Parallelization: communications, reductions, IO, etc.
  • Checkpointing: checkpoint/restore across different machines and processor numbers
  • IO: different, highly configurable IO methods (ASCII, HDF5, FlexIO, JPEG) in 1D/2D/3D, plus geometrical objects
  • AMR when it is ready (PAGH/Carpet/FTT/Paramesh/Chombo)
  • New computational technologies as they arrive, e.g. new machines, grid computing, I/O
  • Use of our CVS server
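  Checkpointing, for example, is driven entirely from the parameter file; a sketch using the IOUtil thorn's parameters (names as we believe them to be; check the thorn documentation for your version):

        IO::checkpoint_every = 500            # write a checkpoint every 500 iterations
        IO::checkpoint_dir   = "checkpoints"
        IO::recover          = "auto"         # restart from the latest checkpoint, if any
        IO::recover_dir      = "checkpoints"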

  22. Myths
  • If you're not sure, just ask: email cactusmaint@cactuscode.org, users@cactuscode.org
  • "Cactus doesn't have periodic boundary conditions": it has always had periodic boundary conditions.
  • "Cactus doesn't run on a **?** machine": Cactus can run on anything with an ANSI C compiler (e.g. Windows, Mac, PlayStation2). You need MPI for parallelisation, and F90 for many of our numrel thorns.
  • "To compile Cactus you need to edit this file, tweak that file, comment out these lines, etc, etc": you shouldn't need to do this! Please tell us.
  • "If you use Cactus you have to let everyone have and use your code": of course not!
  • "Cactus gives different results on different numbers of processors": it shouldn't, check your code!!
  • "Cactus makes your computers explode"

  23. Cactus Support • Users Guide, Thorn Guides on web pages • FAQ • Pages for different architectures, configuration details for different machines. • Web pages describing different viz tools etc. • Different mailing lists (interface on web pages) … • cactusmaint@cactuscode.org • or users@cactuscode.org • or developers@cactuscode.org • or cactuseinstein@cactuscode.org

  24. Cactus (Infrastructure) Team at AEI • General : • Gabrielle Allen, David Rideout • GridLab: • Tom Goodale, Ian Kelley, Oliver Wehrens, Michael Russell, Jason Novotny, Kashif Rasul, Susana Calica, Kelly Davis • GriKSL: • Thomas Radke, Annabelle Roentgen, Ralf Kaehler • PhD Students: • Thomas Dramlitsch (Distributed Computing), Gerd Lanfermann (Grid Computing), Werner Benger (Visualization) • Extended Family: • John Shalf, Mark Miller, Greg Daues, Malcolm Tobias, Erik Schnetter, Jonathan Thornburg, Ruxandra Bonderescu

  25. Development Plans • See • www.cactuscode.org/Development/Current.html • Currently working on 4.0 Beta 12 (release end of May) • New Einstein thorns • New IO thorns, standardize all IO parameters • Release of Cactus 4.0 planned for July (2002) • We then want to add (4.1): • Support for unstructured meshes (plasma physics) • Support for multi-model (climate modeling) • Dynamic scheduler • Cactus Communication Infrastructure • Better elliptic solvers/infrastructure

  26. Cactus Developer Community: Developing Computational Infrastructure [diagram]
  Grants and projects: DFN TiKSL, DFN GriKSL, EU GridLab, NSF KDI ASC, NSF GrADS
  Around the AEI Cactus Group and the users: Argonne National Laboratory, TAC, Clemson, NCSA, U. Kansas, Global Grid Forum, Konrad-Zuse Zentrum, Wash U, Lawrence Berkeley Laboratory, U. Chicago, EGrid, Compaq, Sun, Intel, Microsoft, SGI
  Direct benefits: visualization, parallel I/O, remote computing, portal, optimization experts

  27. Other Development Projects • AMR/FMR • Grid Portal • Grid Computing • Visualization (inc. AMR Viz) • Parallel I/O • Data description and management • Generic optimization • Unstructured meshes • Multi-model

  28. What is the Grid? … … infrastructure enabling the integrated, collaborative use of high-end computers, networks, databases, and scientific instruments owned and managed by multiple organizations … … applications often involve large amounts of data and/or computing, secure resource sharing across organizational boundaries, not easily handled by today’s Internet and Web infrastructures …

  29. … and Why Bother With It?
  • The AEI Numerical Relativity Group has access to high-end resources in over ten centers in Europe/USA
  • They want:
  • Easier use of these resources
  • Bigger simulations, more simulations and faster throughput
  • Intuitive IO and analysis at the local workstation
  • No new systems/techniques to master!!
  • How to make best use of these resources?
  • Provide easier access … no one can remember ten usernames, passwords, batch systems, file systems, … a great start!!!
  • Combine resources for larger production runs (more resolution badly needed!)
  • Dynamic scenarios … automatically use what is available
  • Better working practices: remote/collaborative visualization, steering, monitoring
  • Many other motivations for Grid computing ... it opens up possibilities for a whole new way of thinking about applications and the environment they live in (seti@home, terascale desktops, etc)

  30. Remote Monitoring/Steering
  • A thorn which allows any simulation to act as its own web server
  • Connect to the simulation from any browser, anywhere … collaborate
  • Monitor the run: parameters, basic visualization, ...
  • Change steerable parameters
  • Running example at www.CactusCode.org
  • Wireless remote viz, monitoring and steering
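  Switching this on is a parameter-file matter; a sketch assuming the HTTPD thorn's port parameter (we believe the default is 5555; check the thorn documentation):

        ActiveThorns = "… HTTPD HTTPDExtra …"
        HTTPD::port  = 5555    # then point a browser at http://<host>:5555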

  31. VizLauncher
  • From a web browser connected to the simulation, output data (remote files/streamed data) is automatically launched into the appropriate local visualization client
  • Application-specific networks … shift vector fields, apparent horizons, particle geodesics, …

  32. Cactus ASC Portal: www.ascportal.org (Astrophysics Simulation Collaboratory)
  • Part of the NSF KDI project
  • Use any web browser!!
  • The portal (will) provide:
  • Single access to all resources
  • Locate/build executables
  • Central/collaborative parameter files, thorn lists etc.
  • Job submission/tracking
  • Access to new Grid technologies

  33. Remote Visualization [diagram]: grid functions are streamed as HDF5 to visualization clients such as OpenDX, Amira and LCAVision; contour plots (download), isosurfaces and geodesics.

  34. Remote Offline Visualization [diagram]: a visualization client (Amira, viz in Berlin) reads through the HDF5 VFD and DataGrid (Globus) over HTTP/FTP/DPSS from a remote data server (web server, FTP server, DPSS server; 4 TB at NCSA); downsampling and hyperslabs transfer only what is needed.

  35. Dynamic Adaptive Distributed Computation (T. Dramlitsch, with Argonne/U. Chicago) [diagram]
  • SDSC IBM SP: 1024 procs, 5x12x17 = 1020, GigE at 100 MB/sec
  • NCSA Origin array: 256+128+128 procs, 5x12x(4+2+2) = 480, joined by an OC-12 line (but only 2.5 MB/sec)
  • Dynamic adaptation: number of ghostzones, compression, …
  • These experiments: Einstein equations (but could be any Cactus application)
  • Achieved: first runs 15% scaling; with new techniques 70-85% scaling, ~250 GF
  • The paper describing this is a finalist for the "Gordon Bell Prize" (Supercomputing 2001, Denver)

  36. Dynamic Grid Computing [diagram]: a Brill wave run migrates among RZG, SDSC, LRZ and NCSA: find the best resources; add more resources; queue time over, find a new machine; free CPUs!!; physicist has a new idea, clone the job with a steered parameter. Spawned tasks: S1 calculate/output invariants, S2 archive data to the LIGO public database, P1 found a horizon, try out excision, P2 calculate/output grav. waves and look for a horizon.

  37. Users View

  38. GridLab: www.gridlab.org (Enabling Dynamic Grid Applications)
  • EU project (under final negotiation with the EC)
  • AEI, ZIB, PSNC, Lecce, Athens, Cardiff, Amsterdam, SZTAKI, Brno, ISI, Argonne, Wisconsin, Sun, Compaq
  • Grid Application Toolkit for application developers and infrastructure (APIs/tools)
  • Develop new grid scenarios for 2 main apps: numerical relativity, grav wave data analysis

  39. GriKSL: www.griksl.org (Development of Grid-Based Simulation and Visualization Tools)
  • German DFN funded
  • AEI and ZIB
  • Follow-on to TiKSL
  • Grid awareness of applications
  • Description/management of large-scale distributed data sets
  • Tools for remote and distributed data visualization
