
Wakefield Computations at Extreme-scale for Ultra-short Bunches using Parallel hp-Refinement


Presentation Transcript


  1. Wakefield Computations at Extreme-scale for Ultra-short Bunches using Parallel hp-Refinement Lie-Quan Lee, Cho Ng, Arno Candel, Liling Xiao, Greg Schussman, Lixin Ge, Zenghai Li, Andreas Kabel, Vineet Rawat, and Kwok Ko, SLAC National Accelerator Laboratory. SciDAC 2010 Conference, Chattanooga, TN, July 11-15, 2010

  2. Overview • Parallel computing for accelerator modeling • Finite-element time-domain analysis • Moving window technique with the finite-element method • Wakefield calculations with p-refinement • Wakefield calculations with online hp-refinement • Benchmarking and results • PEP-X undulator taper structure • PEP-X wiggler taper structure • Cornell ERL vacuum chamber • ILC/Project X coupler

  3. Advanced Computing for Accelerators • Particle accelerators are billion-dollar class facilities • Proposed International Linear Collider (~30 km): 6.75 billion dollars • Large Hadron Collider (27 km): 6.5 billion dollars • Advanced computing enables virtual prototyping • Cost savings from design optimization through computing can be significant

  4. SciDAC for Accelerator Modeling • ComPASS Accelerator Project (2007-2012) • TOPS/LBL (linear solvers and eigensolvers): E Ng, X Li, I Yamazaki • ITAPS/RPI, LLNL, Sandia (parallel curvilinear meshing and adaptation): Q Lu, M Shephard, L Diachin • Ultra-scale Visualization Institute/Sandia, UC Davis: K Moreland, K Ma • ITAPS/CSCAPES/Sandia (load balancing): K Devine, E Boman • Collaborations on algorithmic aspects

  5. Parallel Finite Element Code Suite ACE3P Over more than a decade, SLAC has developed ACE3P, a conformal, higher-order, C++/MPI-based parallel finite-element suite of electromagnetic codes, under the support of the AST SciDAC1 and ComPASS SciDAC2 projects. ACE3P modules – accelerator physics applications: • Frequency Domain: Omega3P – eigensolver (nonlinear, damping); S3P – S-parameters • Time Domain: T3P – wakefields and transients • Particle Tracking: Track3P – multipacting and dark current • EM Particle-in-Cell: Pic3P – RF gun simulation • Visualization: ParaView – meshes, fields and particles Aiming at the virtual prototyping of accelerator structures.

  6. INCITE Program • INCITE Award: Petascale Computing for Tera-eV Accelerators (2008-2010) • ORNL (Jaguar / Lens / HPSS) • Kitware (for ParaView visualization and analysis software) • Center for Scalable Application Development Software (CScADS) • Provides major computing resources and support for using them efficiently to tackle challenging accelerator modeling problems

  7. SLAC’s INCITE Allocation Usage at NCCS • Total allocation: 4.5 million in 2008, 8 million in 2009, and 12 million in 2010 (2010 usage shown up to 07/2010) • ~60% of the total time was used by jobs running on 10% to 50% of the total resources (22k to 120k cores) [Chart: allocation time used vs. number of cores, annotated with average job sizes of 911, 1834, 2005, 6349, 10335, and 24835 cores]

  8. Solving CEBAF BBU Using Shape Uncertainty Quantification Method SciDAC Success as a Collaboration between Accelerator Simulation, Computational Science and Experiment – Beam Breakup (BBU) instabilities well below the designed beam current were observed in the CEBAF 12 GeV upgrade at Jefferson Lab (TJNAF), in which Higher Order Modes (HOM) with exceptionally high quality factor (Q) were measured. Using the shape uncertainty quantification tool developed under SciDAC, the problem was traced to a deformation of the cavity shape caused by fabrication errors. This discovery was a team effort between SLAC, TOPS, and JLab, which underscores the importance of the SciDAC multidisciplinary approach in tackling challenging applications. Method of Solution – Using the measured cavity parameters as inputs, the deformed cavity shape was recovered by solving the inverse problem through an optimization method. The calculations showed that the cavity was 8 mm shorter than designed, which was subsequently confirmed by measurements. The result explains why the troublesome modes have high Qs: in the deformed cavity, the fields shift away from the HOM coupler where they could otherwise be damped. This shows that quality control in cavity fabrication can play an important role in accelerator performance. [Figures: field profiles of high-Q modes in the deformed cavity near the HOM coupler; cavity shape – ideal in silver vs. deformed in gold]

  9. Finite Element Discretization • Curvilinear tetrahedral elements: high-fidelity modeling • High-order hierarchical vector basis functions (H(curl)) • Provide the tangential continuity required by the physics • Easily set the boundary conditions • Significantly reduce phase error [Figures: shape functions; example geometry with a 0.5 mm gap in a 200 mm structure]

  10. Time-Domain Analysis Compute transient and wakefield effects of beams inside cavities. Formulation: the second-order vector wave equation for the electric field, driven by the beam current.
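  In the standard form used by finite-element time-domain codes of this type (a reconstruction, not necessarily the exact notation on the slide), with permittivity ε, permeability μ, and beam current density J, the driven wave equation reads
  \[
    \frac{\partial}{\partial t}\!\left(\varepsilon\,\frac{\partial \mathbf{E}}{\partial t}\right)
    + \nabla\times\!\left(\frac{1}{\mu}\,\nabla\times\mathbf{E}\right)
    = -\,\frac{\partial \mathbf{J}}{\partial t},
  \]
  subject to perfect-conductor and waveguide-port boundary conditions on the cavity surface.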

  11. Finite-Element Time-Domain Analysis The field is expanded in H(curl)-conforming element basis functions N_i (discretization in space), which yields an ODE system in matrix and vector form (see below).
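  A sketch of the semi-discretization (assuming the standard Galerkin form; e denotes the vector of time-dependent field coefficients): expanding E(x,t) = Σ_i e_i(t) N_i(x) and testing with the same basis functions gives
  \[
    M\,\ddot{e}(t) + K\,e(t) = f(t), \qquad
    M_{ij} = \int_\Omega \varepsilon\,\mathbf{N}_i\cdot\mathbf{N}_j\,d\Omega, \qquad
    K_{ij} = \int_\Omega \frac{1}{\mu}\,(\nabla\times\mathbf{N}_i)\cdot(\nabla\times\mathbf{N}_j)\,d\Omega, \qquad
    f_i = -\int_\Omega \mathbf{N}_i\cdot\frac{\partial \mathbf{J}}{\partial t}\,d\Omega .
  \]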

  12. Newmark- Scheme for Time Stepping • Unconditionally stable*when  > 0.25 • A linear system Ax=b needs to be solved for each time step • Matrix in the linear system is symmetric positive definite • Conjugate gradient + block Jacobi / incomplete Cholesky *Gedney & Navsariwala, An unconditionally stable finite element time-domain solution of the vector wave equation, IEEE microwave and guided wave letters, vol. 5, pp. 332-334, 1995

  13. Wakefield Calculations • At any given time, the electric field and magnetic flux density can be computed from the solution vector • Wake potential: evaluated from the fields along the beam path (standard form given below) [Figure: cavity for the International Linear Collider, coupler region]
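  The wake potential formula itself is not reproduced here; the standard longitudinal wake potential of a bunch with total charge q moving at the speed of light along z is (up to a sign convention)
  \[
    W_\parallel(s) = \frac{1}{q} \int E_z\big(x, y, z,\; t = (z+s)/c\big)\, dz ,
  \]
  where s is the distance behind the driving bunch and the integral runs along the beam path through the structure.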

  14. Parallel Finite-Element Time-Domain Code: T3P Workflow: CAD model → mesh generation → NetCDF mesh file + input parameters → mesh partitioning → matrix assembly → solver / time stepping → postprocessing of E/B → analysis & visualization. Strong scaling study: 256 million elements, p=2, NDOFs = 1.6 billion, on up to 131,072 cores. • Ongoing work: • Preconditioners and multiple right-hand sides • Fine-tuning of mesh partitioning

  15. 3D PEP-X Undulator Taper • Need to compute the short-range wakefield due to a 100 mm smooth transition from a 50 mm × 6 mm to a 75 mm × 25 mm elliptical pipe • Beam bunch length is 0.5 mm in a 300 mm long structure [Figure: computer model of the undulator taper]

  16. Moving Window with p-Refinement • Use a window to limit the computational domain • Fields to the left of the window cannot catch up as the beam moves at the speed of light • Fields to the right of the window should be zero • Use the finite-element basis function order p to implement the window • p is nonzero for elements inside the window • p = 0 for elements outside the window • Move the window when the beam gets close to the right boundary of the window • Prepare the mesh, repartition the mesh, assemble matrices, transfer the solution, … (a sketch of the order assignment follows below) [Diagram: p=0 / p=2 / p=0 regions along the beam path, with window parameters d, f (front margin) and b (back margin)]
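  Below is a minimal sketch of the window logic (hypothetical names and layout, not the actual ACE3P/T3P data structures): elements inside the window get the working order p, everything else gets p = 0 and contributes no unknowns.

    #include <vector>

    // Sketch: per-element basis order for the moving window (hypothetical types).
    struct Element {
      double zmin, zmax;  // axial extent of the element along the beam direction
      int    order;       // basis-function order p assigned to this element
    };

    // Window [zback, zfront] follows the bunch; elements outside get p = 0 and
    // therefore contribute no degrees of freedom to the assembled system.
    void assignWindowOrders(std::vector<Element>& elems,
                            double zback, double zfront, int pInside) {
      for (Element& e : elems) {
        const bool inside = (e.zmax >= zback) && (e.zmin <= zfront);
        e.order = inside ? pInside : 0;
      }
    }

    // Called every step: move the window once the bunch nears its front margin f.
    bool windowNeedsMove(double zBunch, double zfront, double frontMargin) {
      return (zfront - zBunch) < frontMargin;
    }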

  17. Solution Transfer when the Window Moves • Scatter x_n to each element • Keep values in the overlapping region • Drop values to the left of the new window • Set zeros to the right of the overlapping region • Redistribute to different processes according to the new partitioning • Gather values to form the new x_n (see the sketch below)
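  A sketch of that keep/drop/zero logic (hypothetical containers; the MPI redistribution that follows the new mesh partitioning is only indicated by a comment):

    #include <cstddef>
    #include <unordered_map>
    #include <vector>

    // Element-local DOF values keyed by global element id (hypothetical layout).
    using ElemDofs = std::unordered_map<long, std::vector<double>>;

    // Build the solution on the new window from the solution on the old one.
    ElemDofs transferWindowSolution(const ElemDofs& oldDofs,
                                    const std::vector<long>& newWindowElems,
                                    std::size_t dofsPerElem) {
      ElemDofs newDofs;
      for (long eid : newWindowElems) {
        auto it = oldDofs.find(eid);
        if (it != oldDofs.end())
          newDofs[eid] = it->second;              // overlapping region: keep values
        else
          newDofs[eid].assign(dofsPerElem, 0.0);  // newly exposed region: set zeros
        // Elements only in the old window (left of the new one) are simply dropped.
      }
      // Redistribution to other processes according to the new partitioning, and the
      // gather into the new global vector x_n, would follow here.
      return newDofs;
    }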

  18. Benchmarking with a 4 mm Bunch • Run on XT5 with 5 nodes (60 cores) • Case 1: the whole structure/mesh with 2 mm mesh size: 21.5 minutes • Case 2: moving window with p-refinement (same mesh): 11 minutes • Identical wakefields within the beam (40 mm) and its trailing zone (10 mm) • The moving window significantly reduces the required computational resources • Additional benefits: • Smaller core count • Faster execution time • Ability to solve larger problems [Plot: wake potential]

  19. Movie of Wakefields in PEP-X Undulator Taper

  20. Wakefields of the Undulator Taper with a 0.5 mm Bunch • 60 mm trailing area with a 0.5 mm beam bunch size • 0.25 mm mesh size, 19 million quadratic tetrahedral elements • p=2 inside the window, p=0 outside • 15 windows (90 mm) • Maximum NDOFs only 54.9 million; maximum number of elements 8.6 million [Plot: reconstructed wake of a 3 mm bunch]

  21. PEP-X Wiggler Taper Model: 400 mm transition (10 × taper height) between the rectangular beam pipe (45 mm × 15 mm) and the circular beam pipe (radius 48 mm). Calculate: wakefields for a 0.5 mm bunch size with a 60 mm tail. [Figures: wiggler taper and mesh; for comparison, the undulator taper is a 100 mm transition (10 × taper height) from 50 mm × 6 mm to 75 mm × 25 mm]

  22. Challenges in Wakefield Calculation • The number of mesh elements would be roughly 4 to 5 times larger than for the undulator taper • The number of time steps is about 3 to 4 times larger • Meshing problem: cannot easily generate such a large mesh (~100 million curvilinear elements) • The mesh generator is serial (parallel mesh generation is under development at ITAPS) • Would need a computer with a lot of memory (128 GB) • Inefficient use of computer resources • The mesh in each new window needs to be read in from file • Or stored in memory in parallel • Can we do online mesh refinement in the window region? • Prepare mesh, repartition, assemble matrices, transfer solution, …

  23. Mesh Refinement: Edge Splitting • There are many different ways to split a tetrahedron • Dividing all 6 edges is better suited for resolving a short beam bunch • Generates 8 smaller tetrahedral elements (connectivity sketch below) [Figures: splitting a linear tetrahedron; splitting a quadratic tetrahedron]
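  A sketch of the resulting connectivity (one common choice of the central-octahedron diagonal; the diagonal actually used in the code may differ): with corner vertices 0-3 and edge midpoints 4-9, the eight children are

    #include <array>

    // Sketch: 1:8 uniform subdivision of a tetrahedron by splitting all 6 edges.
    // Local numbering: 0-3 are the corners, 4-9 are edge midpoints:
    //   4 = m01, 5 = m02, 6 = m03, 7 = m12, 8 = m13, 9 = m23  (mij = midpoint of edge i-j)
    using Tet = std::array<int, 4>;

    std::array<Tet, 8> splitTet() {
      return {{
        {0, 4, 5, 6},  // corner child at vertex 0
        {1, 4, 7, 8},  // corner child at vertex 1
        {2, 5, 7, 9},  // corner child at vertex 2
        {3, 6, 8, 9},  // corner child at vertex 3
        // central octahedron {4,5,6,7,8,9}, cut along the diagonal 4-9 (m01-m23):
        {4, 5, 6, 9},
        {4, 6, 8, 9},
        {4, 8, 7, 9},
        {4, 7, 5, 9}
      }};
    }
    // For curvilinear (quadratic) tetrahedra the new mid-edge nodes are placed on the
    // curved geometry rather than at straight-line midpoints; element orientations are
    // not normalized in this sketch.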

  24. Online Parallel Mesh Refinement • The midpoints of the 6 edges become new vertices • Need a coordinated way to number the new vertices on process boundaries • Similar to numbering the DOFs of 1st-order edge elements in our electromagnetic simulation • Regard each new vertex as a DOF • Run our parallel partitioning and numbering routines to obtain IDs that are unique across processors • Form the refined mesh by local edge splitting of each element owned by the processor • Only ~500 lines of additional code (conceptual sketch below)
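  A conceptual stand-in for that numbering step (the real code reuses ACE3P's parallel partitioning/numbering machinery; this serial map is only meant to show the idea of one new vertex id per unique edge, like one DOF per edge for a 1st-order edge element):

    #include <cstdint>
    #include <map>
    #include <utility>

    // Each unique edge, identified by its two global vertex ids, receives exactly
    // one new global vertex id for its midpoint.
    struct EdgeMidpointNumbering {
      std::map<std::pair<std::int64_t, std::int64_t>, std::int64_t> ids;
      std::int64_t next;  // first id after the existing vertices

      explicit EdgeMidpointNumbering(std::int64_t firstNewId) : next(firstNewId) {}

      std::int64_t midpointId(std::int64_t a, std::int64_t b) {
        if (a > b) std::swap(a, b);          // edge key is orientation-independent
        auto [it, inserted] = ids.try_emplace({a, b}, next);
        if (inserted) ++next;
        return it->second;
      }
    };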

  25. Solution Transfer When the Mesh Changes [Diagram: p=0 / p=2 / p=0 window regions; the vectors x_n, x_n-1, f_n, f_n-1 must be carried over to the refined mesh] A Projection Method for Discretized Electromagnetic Fields on Unstructured Meshes, Lie-Quan Lee, CSE09, Miami, Florida, March 2-6, 2009; Tech. pub. SLAC-PUB-13333

  26. Solution Transfer between Meshes • The vectors x_n, x_n-1, f_n and f_n-1 need to be transferred onto the new mesh • For x_n and x_n-1, a projection method is used (sketched below) • Scalable parallel projection is a challenge • The vectors f_n and f_n-1 are recalculated on the new mesh
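  A standard Galerkin (L2-type) projection of the kind referenced above can be sketched as follows (the cited SLAC-PUB-13333 may use a different norm or additional constraints): with E^old = Σ_j x_j^old N_j^old the field represented on the old mesh, the new coefficients solve
  \[
    M^{\mathrm{new}} x^{\mathrm{new}} = b, \qquad
    M^{\mathrm{new}}_{ij} = \int_\Omega \mathbf{N}^{\mathrm{new}}_i \cdot \mathbf{N}^{\mathrm{new}}_j \, d\Omega, \qquad
    b_i = \int_\Omega \mathbf{N}^{\mathrm{new}}_i \cdot \mathbf{E}^{\mathrm{old}} \, d\Omega ,
  \]
  where the right-hand side requires integrating old-mesh fields against new-mesh basis functions over non-matching elements, which is where the parallel scalability challenge arises.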

  27. Wakefield Benchmarking • 20 mm bunch size; window size 0.5 m (b = 0.1 m, f = 0.2 m) on XT5 with 5 nodes (60 cores) • Case 1: 10 mm mesh size with hp-refinement: 36.5 minutes • Case 2: 5 mm mesh size with p-refinement: 23 minutes • Identical results for s < 0.3 m • The longer time in Case 1 is due to the overhead associated with mesh refinement • The hp-refinement case uses less memory

  28. Wakefields of a 1 mm Bunch in the PEP-X Wiggler Taper • 11 windows (170 mm) with a 60 mm trailing area • Maximum NDOFs: 74.7 million • Maximum number of elements: 11.8 million [Plot: reconstructed wake of a 3 mm bunch]

  29. Wakefields in the Cornell ERL Vacuum Chamber • Energy Recovery Linac (ERL) aluminum vacuum chamber • Moving window with online mesh refinement; 18,000 cores for ~5 hours on Jaguarpf at ORNL • Not possible without the algorithmic advances and INCITE-scale resources for calculating the wakefields of such a short bunch • For a 0.6 mm bunch length, the loss factor is 0.413 V/pC [Figure: computer model of an ERL vacuum chamber device]

  30. Movie of Wakefields in ERL Vacuum Chamber

  31. Preliminary Results of Wakefields in the ILC Coupler • Without moving windows: • Would require 256 million elements with 1.6 billion DOFs • 100k cores at 6 s per step • With moving windows: • ~30 million elements inside each window • 12k cores at 6 s per step (p=2) • 12k cores at 0.4 s per step (p=1) • Project X shares the same cavity design [Plot: wakefield for a σ = 0.3 mm bunch]

  32. Summary • Calculating the wakefields of ultra-short bunches requires efficient computational methods: • Moving window with p-refinement • Moving window with hp-refinement • Parallel online mesh refinement • Parallel solution transfer through projection • We are making a major impact on accelerator design through extreme-scale computing • This is only possible through • SciDAC support for algorithmic advancement, and • INCITE allocations for large computing resources
