Parallel finite difference time domain computations aided by modal decomposition


Parallel Finite-Difference Time-Domain Computations Aided by Modal Decomposition

Dmitry A. Gorodetsky

Philip A. Wilsey


Outline

  • Introduction

    • FDTD

    • Distributed Computation

  • Model Order Reduction

  • Conclusion

  • References


Introduction

  • FDTD is a time-marching algorithm: it solves Maxwell's equations by evolving the field values on a discrete grid, one time step at a time.

  • Some typical problems:

    • Aircraft Radar Cross Section

    • Microwave ICs, High Speed Electronic Circuits

    • Optical Pulse Propagation

    • Antennas

    • Bioelectromagnetic Systems (retina, EM hyperthermia cancer therapy)

    • Bodies of Revolution

Computed surface electric currents induced on a prototype military jet fighter plane by a radar beam at 100 MHz. The incident plane wave propagates from left to right head-on to the airplane. The surface currents re-emit electromagnetic energy which can be used to create RCS plots [1].

Simulation Complexity

  • Example of 2nd order FDTD evolution:
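The update equation itself did not survive extraction from the slide. As a representative reconstruction (not necessarily the exact form shown), the standard second-order Yee update for the E_z component of a 2D TM grid is:

```latex
E_z^{\,n}(i,j) = E_z^{\,n-1}(i,j)
  + \frac{\Delta t}{\varepsilon}\left[
      \frac{H_y^{\,n-1/2}(i+\tfrac12,j) - H_y^{\,n-1/2}(i-\tfrac12,j)}{\Delta x}
    - \frac{H_x^{\,n-1/2}(i,j+\tfrac12) - H_x^{\,n-1/2}(i,j-\tfrac12)}{\Delta y}
  \right]
```

Each field component is updated explicitly from its own previous value and the spatial differences of the neighboring dual-grid components.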


  • Grid size as well as the number of time steps can make the simulation prohibitively expensive.

  • The computational burden grows as ~N^(4/3) [1].

A single FDTD cell
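As a sketch of what the time-marching means in practice, here is a minimal 1D free-space FDTD loop. The normalized units and the soft Gaussian source are illustrative choices, not taken from the slides:

```python
# Minimal 1D free-space FDTD marching loop. Normalized units (c = 1, dx = 1)
# and the soft Gaussian source are illustrative, not from the slides.
import math

N_CELLS = 200
N_STEPS = 300
COURANT = 0.5  # dt/dx; must satisfy the 1D stability limit (<= 1)

ez = [0.0] * N_CELLS        # E field, sampled at integer grid points
hy = [0.0] * (N_CELLS - 1)  # H field, sampled at half-integer points

for n in range(N_STEPS):
    # Half step: update H from the spatial difference of E
    for i in range(N_CELLS - 1):
        hy[i] += COURANT * (ez[i + 1] - ez[i])
    # Half step: update interior E from the spatial difference of H
    for i in range(1, N_CELLS - 1):
        ez[i] += COURANT * (hy[i] - hy[i - 1])
    # Soft source: inject a Gaussian pulse at the center of the grid
    ez[N_CELLS // 2] += math.exp(-((n - 30) / 10.0) ** 2)
```

Each step touches every cell once, which is where the per-step cost comes from; the number of time steps multiplies it.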

Reducing Simulation Time

  • Methods to Improve Simulation Time:

    • Distributed Computation [1-3]

      • Domain Decomposition

      • Synchronization

      • Load Balancing

    • Model Order Reduction

      • State Transition Matrix – Modal Approach [4-7] - Exact

        • Entire Domain

        • Sub Domain

      • Linear Estimation Methods [1, 8] - Approximate

        • Prony’s Method (complex exponentials)

        • System Identification Technique

Figure 1. Speedup Efficiency of parallel FDTD [9]

Distributed Computation

  • FDTD requires the state of adjacent grid points to update the current point.

  • Hence it exhibits fine-grained parallelism, and its speedup is limited by the surface-to-volume ratio of the partitions.

  • The surface-to-volume ratio of the FDTD partitions is, in effect, the communication-to-computation ratio.
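The surface/volume point can be illustrated with a toy halo exchange: each subdomain only needs one ghost cell per boundary per step, no matter how large its interior is. This sketch uses a three-point stencil as a stand-in for an FDTD update and a direct copy in place of message passing:

```python
# Toy domain decomposition with halo (ghost-cell) exchange. The three-point
# stencil stands in for an FDTD update; the "communication" is a direct copy
# standing in for a message, so no MPI is involved.
def step_serial(u, c=0.25):
    """One explicit stencil step; the two boundary values are held fixed."""
    return ([u[0]]
            + [u[i] + c * (u[i + 1] - 2 * u[i] + u[i - 1])
               for i in range(1, len(u) - 1)]
            + [u[-1]])

def step_partitioned(u, c=0.25, steps=1):
    """Same update, split across two subdomains with one ghost cell each."""
    mid = len(u) // 2
    left, right = list(u[:mid]), list(u[mid:])
    for _ in range(steps):
        lg, rg = right[0], left[-1]              # exchange one value per side
        left = step_serial(left + [lg], c)[:-1]   # update left, drop ghost
        right = step_serial([rg] + right, c)[1:]  # update right, drop ghost
    return left + right

u0 = [0.0] * 8 + [1.0] + [0.0] * 7
serial = u0
for _ in range(5):
    serial = step_serial(serial)
parallel = step_partitioned(u0, steps=5)
```

The data exchanged per step is one cell per boundary (the "surface"), while the arithmetic grows with the subdomain size (the "volume") — hence the ratio governs scalability.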


  • Introduction

  • Model Order Reduction

    • State Transition Matrix (exact)

      • Entire Domain

        • Expensive Setup

        • Cheap Iteration

        • Setup Parallelization

      • Sub Domain (Macromodel)

  • Conclusion

  • References

Entire Domain

  • Following Chen [10], we can express the FDTD update equations as:

    E(n) = D1H(n-1/2) + G1E(n-1)

    H(n+1/2) = D2E(n) + G2H(n-1/2)

  • With these equations, the state transition matrix becomes:
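The matrix itself was lost in extraction. Substituting the E update into the H update gives the block form consistent with the two equations above (a reconstruction, with Q collecting E and H):

```latex
\begin{bmatrix} E(n) \\ H(n+\tfrac12) \end{bmatrix}
=
\underbrace{\begin{bmatrix}
  G_1 & D_1 \\
  D_2 G_1 & G_2 + D_2 D_1
\end{bmatrix}}_{A}
\begin{bmatrix} E(n-1) \\ H(n-\tfrac12) \end{bmatrix}
```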

Entire Domain (2)

  • FDTD can then be expressed as:

    Q(n) = A Q(n-1) = A^n Q(0)    (1)

    where Q represents the present state.

  • Every step takes N^2 multiplications.

  • If we assume that the system starts out from Q(0) = a1v1 + a2v2 + … + aNvN, then (1) can be written as:

    Q(n) = a1λ1^n v1 + a2λ2^n v2 + … + aNλN^n vN    (2)

    where the vi are the eigenvectors and the λi are the eigenvalues of A.
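A small sketch of eq. (2) versus direct marching, on a toy 2×2 matrix with hand-picked eigenpairs (a real FDTD state transition matrix is far larger and its modes must be computed):

```python
# Modal time-stepping vs. direct marching on a toy 2x2 "A" matrix.
# The matrix and its eigenpairs are hand-picked for clarity.
A = [[0.5, 0.25],
     [0.25, 0.5]]
v1, lam1 = (1.0, 1.0), 0.75    # A v1 = 0.75 v1
v2, lam2 = (1.0, -1.0), 0.25   # A v2 = 0.25 v2

q0 = (2.0, 0.0)                # initial state: q0 = 1*v1 + 1*v2
a1, a2 = 1.0, 1.0              # expansion coefficients

def q_modal(n):
    """Jump straight to step n: q(n) = a1*lam1^n*v1 + a2*lam2^n*v2."""
    return tuple(a1 * lam1 ** n * x + a2 * lam2 ** n * y
                 for x, y in zip(v1, v2))

def q_direct(n):
    """Conventional marching: apply A to the state n times."""
    q = q0
    for _ in range(n):
        q = (A[0][0] * q[0] + A[0][1] * q[1],
             A[1][0] * q[0] + A[1][1] * q[1])
    return q
```

`q_modal(n)` costs the same for any n, so different time steps can be handed to different processors with no communication between them.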

Entire Domain (3): Cheap Iteration

  • The advantage of the modal method for FDTD is that the time steps are decoupled (see eq. 2).

  • The solution can be obtained at any time step without knowledge of the previous time step.

  • Time-stepping can therefore be parallelized and does not require communication.

Entire Domain (4): Expensive Setup

  • The matrix A is sparse, diagonally dominant, and banded.

  • With standard techniques (LAPACK), getting the eigendecomposition of A is an O(N3) process.

  • LAPACK uses QR iteration to obtain the Schur form and hence is not easy to parallelize.

Entire Domain (5): Setup Parallelization

  • We can take advantage of the modal make-up of the A matrix because in practice we do not need all the modes [10,11].

  • One alternative method is spectral divide and conquer (SDC) [12].

  • SDC computes sign(A − bI), where b is the x-coordinate of a vertical splitting line in the complex plane; the sign function separates the eigenvalues lying on either side of that line.
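A minimal sketch of the Newton iteration commonly used to evaluate the matrix sign function, on a 2×2 toy with real eigenvalues of mixed sign (actual SDC applies this to A − bI, with eigenvalues near the splitting line slowing convergence):

```python
# Newton iteration for the matrix sign function, the kernel of spectral
# divide and conquer. 2x2 toy with real eigenvalues 2 and -1.
def inv2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def sign_newton(m, iters=50):
    """X_{k+1} = (X_k + X_k^{-1}) / 2 drives every eigenvalue to +1 or -1."""
    x = [row[:] for row in m]
    for _ in range(iters):
        xi = inv2(x)
        x = [[(x[i][j] + xi[i][j]) / 2 for j in range(2)] for i in range(2)]
    return x

S = sign_newton([[2.0, 1.0], [0.0, -1.0]])
# (I + S)/2 and (I - S)/2 are then projectors onto the invariant subspaces
# for the eigenvalues to the right and left of the splitting line.
```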

Entire Domain (6): Setup Parallelization


  • Advantages:

    • Compute only needed eigenvalues.

    • Easy to parallelize.

    • Computation time is kN^3, but k depends on the number of eigenvalues.

  • Disadvantages:

    • Requires several iterations before sign function converges.

    • Requires knowledge of where eigenvalues do not lie otherwise sign function may not converge quickly.

Entire Domain (7): Setup Parallelization

Alternatives: Iterative Techniques

Simultaneous Iteration, Arnoldi, and Lanczos [12,13]

  • Advantages:

    • Exploit sparsity.

    • Can be parallelized.

  • Disadvantages:

    • Require computation of all eigenvalues.
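For intuition, the simplest member of this family is power iteration, sketched below on a small matrix; Arnoldi and Lanczos refine the idea by orthogonalizing an entire Krylov basis rather than keeping only the latest vector (illustrative, not the authors' implementation):

```python
# Power iteration: the simplest iterative eigensolver. Repeatedly applying
# the matrix amplifies the dominant eigendirection; normalization keeps the
# iterate bounded. Krylov methods (Arnoldi/Lanczos) generalize this.
def power_iteration(a, v, iters=100):
    n = len(v)
    for _ in range(iters):
        w = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
        scale = max(abs(x) for x in w)
        v = [x / scale for x in w]
    # Rayleigh-quotient estimate of the dominant eigenvalue
    av = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(av[i] * v[i] for i in range(n)) / sum(x * x for x in v)
    return lam, v

lam, vec = power_iteration([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```

Note that only matrix-vector products are needed, which is why these methods exploit the sparsity of A.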


Conclusion

  • The setup time of this method is expensive, but for good reason.

  • Accuracy remains very good even after eigenmodes are discarded.

  • Setup and time-stepping can be parallelized, and need not be limited by communication as conventional FDTD is.

  • Improvement = function(#steps × #CPUs)


References

  • A. Taflove, Computational Electrodynamics: The Finite-Difference Time-Domain Method, Norwood, MA: Artech House, 1995.

  • N. P. Chrisochoides, E. Houstis, and J. Rice, “Mapping algorithms and software environment for data parallel PDE iterative solvers,” special issue of the Journal of Parallel and Distributed Computing on Data-Parallel Algorithms and Programming, vol. 21, no. 1, pp. 75-95, April 1997.

  • N. P. Chrisochoides and J. R. Rice, “Partitioning heuristics for PDE computations based on parallel hardware and geometry characteristics,” in Advances in Computer Methods for Partial Differential Equations VII (R. Vichnevetsky, D. Knight, and G. Richter, eds.), IMACS, New Brunswick, NJ, pp. 127-133, 1992.

  • Z. Chen, “Analytic Johns matrix and its application in TLM diakoptics,” IEEE MTT-S Digest, vol. 2, pp. 777-780, 1995.

  • W. J. Hoefer, “The discrete time domain Green's function or Johns matrix – a new powerful concept in transmission line modeling (TLM),” Int. J. Num. Modeling, vol. 2, pp. 215-225, 1989.

  • P. B. Johns and K. Akhtarzad, “Time domain approximations in the solution of fields by time domain diakoptics,” Int. J. Num. Methods Eng., vol. 18, pp. 1361-1373, 1982.

  • P. B. Johns and K. Akhtarzad, “The use of time domain diakoptics in time discrete models of fields,” Int. J. Num. Methods Eng., vol. 17, pp. 1-14, 1981.

References (2)

  • W. Kumpel and I. Wolff, “Digital signal processing of time domain field simulation results using the system identification method,” IEEE Trans. Microwave Theory Tech., vol. 42, no. 4, pp. 667-671, 1994.

  • D. A. Gorodetsky and P. A. Wilsey, “Innovative approaches to parallelizing finite-difference time-domain computations,” IEEE Workshop on Direct and Inverse Problems in Electrodynamics, 2005.

  • Z. Chen and P. P. Silvester, “Analytic solutions for the finite-difference time-domain and transmission-line-matrix methods,” Microwave and Optical Technology Letters, vol. 7, no.1, pp. 5-8, 1994.

  • D. A. Gorodetsky and P. A. Wilsey, “Reduction of FDTD simulation time with modal methods,” Progress in Electromagnetics Research Symposium, 2006, in press.

  • J. W. Demmel, M. T. Heath, and H. A. van der Vorst, “Parallel numerical linear algebra,” in Acta Numerica 1993, Cambridge: Cambridge University Press, 1993, pp. 111-197.

  • Z. Bai, “Progress in the numerical solution of the nonsymmetric eigenvalue problem,” Journal of Numerical Linear Algebra with Applications, vol. 2, pp. 219-234, 1995.
