Performance Measures x.x, x.x, and x.x: Visualization and Analysis Activities
May 19, 2009. Hank Childs, VisIt Architect.
Outline: VisIt project overview; visualization and analysis highlights with the Nek code; why petascale computing will change the rules; summary & future work.
Terribly named!! Intended for more than just visualization!
Meshes: rectilinear, curvilinear, unstructured, point, AMR
Data: scalar, vector, tensor, material, species
Dimension: 1D, 2D, 3D, time varying
Rendering (~15): pseudocolor, volume rendering, hedgehogs, glyphs, mesh lines, etc.
Data manipulation (~40): slicing, contouring, clipping, thresholding, restrict to box, reflect, project, revolve, …
File formats (~85)
Derived quantities: >100 interoperable building blocks
+,-,*,/, gradient, mesh quality, if-then-else, and, or, not
Many general features: position lights, make movie, etc.
Queries (~50): ways to pull out quantitative information, debugging, comparative analysis
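These building blocks are scriptable. A minimal sketch in VisIt's Python CLI combining a derived quantity with a query (the file name "sim.silo" and variable "vel" are hypothetical placeholders):

    # Run inside "visit -cli"; database and variable names are made up.
    OpenDatabase("sim.silo")
    # Derived quantity: compose a new scalar from building blocks.
    DefineScalarExpression("speed", "magnitude(vel)")
    # Render the derived field.
    AddPlot("Pseudocolor", "speed")
    DrawPlots()
    # Query: pull out quantitative information.
    Query("MinMax")
    print(GetQueryOutputString())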
Client-server observations:
Good for remote visualization
Leverages available resources
No need to move data
Additional design considerations:
Multiple UIs: GUI (Qt), CLI (Python), more…
localhost – Linux, Windows, Mac
Graphics hardware
VisIt employs a parallelized client-server architecture.
Parallel vis resources
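As a sketch of this client-server usage, the Python CLI can attach a parallel compute engine on a remote resource (host name, processor count, and path below are hypothetical):

    # The local client drives a parallel engine on the remote machine;
    # the data never leaves that resource.
    OpenComputeEngine("vis-cluster.example.gov", ("-np", "32"))
    OpenDatabase("vis-cluster.example.gov:/path/to/data.silo")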
Slides from the VisIt class
Over 50 person-years of effort
Over one million lines of code
Partnership between the Department of Energy's Office of Nuclear Energy, Office of Science, and National Nuclear Security Administration, among others
[Timeline figure: VisIt project history. Callouts include: start of development in the repo; public SW repo; AWE enters the repo; CEA development in the VisIt repo; UC Davis & U. Utah entering the repo; VACET is funded, leveraging effort from LLNL, LBL, & ORNL; GNEP funds LLNL to support GNEP codes at Argonne; AWE & ASC.]
217-pin reactor
Run on ¼ of Argonne BG/P.
Tracing particles through the channels (work in progress):
Place 1000 particles in one channel
Observe which channels the particles pass through
Observe where the particles come out
White triangle shows the current channel
Michael Strayer (U.S. DoE Office of Science): “petascale is not business as usual”
Especially true for visualization and analysis!
Large scale data creates two incredible challenges: scale and complexity
Scale is not “business as usual”
Current trajectory for terascale postprocessing will be cost prohibitive at the petascale
We will need “smart” techniques in production environments
More resolution leads to more and more complexity
Will the “business as usual” techniques still suffice?
The complexity portion of this talk has been shortened: data analysis is key.
Current modes of terascale visualization and analysis (1):
Simulation writes to disk; vis. job reads from disk
SC and dedicated vis cluster share disk
Dedicated cluster has good I/O access
SC runs lightweight OS; dedicated cluster runs Linux
Graphics cards on dedicated cluster
Current modes of terascale visualization and analysis (2):
Portion of Purple used for vis & analysis
SC runs full OS (AIX)
No graphics cards
27 billion elements
Run on ASC BG/L
Visualized on gauss using VisIt
These modes of processing have worked well at the terascale:
No need to move data
But they will not scale up to huge numbers of cores
Current algorithm used by major vis tools (VisIt, EnSight, ParaView):
Read all the data from disk and keep it in primary memory ("pure parallelism").
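A minimal sketch of this pattern, assuming mpi4py and a hypothetical one-file-per-rank binary layout:

    # Pure parallelism: every core reads its partition into memory,
    # then processes it there. File names are made up.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank reads its own domain file entirely into primary memory.
    data = np.fromfile("dataset.%04d.bin" % rank, dtype=np.float32)

    # Example processing step: a global maximum across all ranks.
    global_max = comm.reduce(data.max(), op=MPI.MAX, root=0)
    if rank == 0:
        print("global max:", global_max)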
Why is pure parallelism so expensive?
Visualization and analysis is data-intensive: compute has become cheap; memory and I/O are still expensive.
Trend for the next generation of supercomputers is weaker and weaker relative I/O (~5 PF machines expected within 4 years)
To maintain performance, we need more I/O
So we will have to use more nodes to get more I/O
Recent series of hero runs for 1 trillion cell data set:
I/O = ~5 minutes, processing = ~10 seconds
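A back-of-envelope check of those numbers (the 4 bytes/cell figure is an assumption, not from the talk):

    # One trillion cells at an assumed 4 bytes each is ~4 TB;
    # reading that in ~5 minutes implies an aggregate rate of:
    cells, bytes_per_cell, seconds = 1e12, 4, 5 * 60
    print("%.0f GB/s" % (cells * bytes_per_cell / seconds / 1e9))  # ~13 GB/s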
Fundamentally, we are I/O-bound, not compute-bound; multi-core has limited value-added.
To increase I/O, we will need to use more of the machine
Use cases are “bursty” – do we want the supercomputer sitting idle while someone studies results?
Can we afford to devote a large portion of the SC to visualization and analysis?
Lightweight OS’s present challenges
Using a portion of the SC is also problematic at the petascale.
Time to write memory to disk
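To make this concrete, an illustrative estimate (both machine figures are assumptions, not from the talk):

    # Assume a petascale machine with 100 TB of memory and 50 GB/s of
    # aggregate disk bandwidth: writing memory once takes ~33 minutes.
    memory_bytes, disk_bw = 100e12, 50e9
    print("%.0f seconds" % (memory_bytes / disk_bw))  # ~2000 s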
Production visualization and analysis tools use “pure parallelism”.
Research has established "smart" techniques as viable, effective alternatives to pure parallelism:
Out-of-core processing
In situ processing
I won't dig into these here, but none is a panacea in isolation (a minimal in situ sketch follows below).
There are gaps here (production-readiness & more)
These techniques are difficult to implement.
Petascale computing makes them cost effective.
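As a flavor of the in situ approach, a minimal sketch in which analysis runs inside the simulation loop on in-memory data instead of post-processing files (everything here is hypothetical; production couplings use libraries such as VisIt's libsim):

    import numpy as np

    def analyze(step, field):
        # Placeholder analysis: report the field's extrema in place.
        print("step %d: min=%g max=%g" % (step, field.min(), field.max()))

    field = np.random.rand(64, 64, 64)                # stand-in for simulation state
    for step in range(10):
        field += 0.01 * np.random.rand(*field.shape)  # fake time step
        if step % 5 == 0:                             # analyze every 5th step, no I/O
            analyze(step, field)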
Estimate: all visualization and analysis work would occupy only ~5% of the SC.