
Device Simulation for Single-Event Effects

Presentation Transcript


  1. Device Simulation for Single-Event Effects • Mark E. Law, Eric Dattoli, Dan Cummings • NCAA Basketball Champions - University of Florida • SWAMP Center

  2. Objectives • Provide SEE device simulation environment • Address SEE specific issues • Physics - strain • Numerics - automatic operation • Long term: • Simulate 1000’s of events to get statistics • With SEE appropriate physics • Without extensive human intervention

  3. Outline • Background - FLOODS Code • Numeric Issues and Enhancements • Grid Refinement • Parallel Computing Platforms • Physical Issues and Enhancements • Transient / Base Materials • Mobility • Coupling to MRED / GEANT

  4. FLOOPS / FLOODS • Object-oriented codes • Multi-dimensional • P = Process / D = Device; 90% of code shared • Scripting capability for PDEs - Alagator • Commercialized - ISE / Synopsys • Sentaurus Process is based on FLOOPS • Licensed at over 200 sites worldwide

  5. What is Alagator? • Scripting language for PDEs • Parsed into an expression tree • Assembled using finite-volume / finite-element (FV/FE) techniques • Stored in a hierarchical parameter database • Models are accessible and easily modified

  6. What is Alagator? • Example use of operators for the diffusion equation • Fick’s Second Law of Diffusion: ∂C(x,t)/∂t = D ∂²C(x,t)/∂x² • In Alagator: ddt(Boron) - 9.0e-16 * grad(Boron)
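To make the operator notation concrete, here is a minimal sketch (plain C++, not FLOODS/Alagator code) of the explicit finite-difference update that the same PDE reduces to on a uniform 1-D grid; the diffusivity, grid spacing, and boundary handling are illustrative assumptions:

```cpp
#include <vector>

// Explicit (forward-Euler) finite-difference sketch of Fick's second law,
// dC/dt = D * d2C/dx2, on a uniform 1-D grid. Stability of this scheme
// requires D*dt/dx^2 <= 0.5. The end points are held at a fixed
// concentration (Dirichlet) purely for simplicity.
void diffuse(std::vector<double>& C, double D, double dx, double dt, int steps) {
    const double r = D * dt / (dx * dx);
    std::vector<double> next(C);  // boundaries carried over unchanged
    for (int s = 0; s < steps; ++s) {
        for (std::size_t i = 1; i + 1 < C.size(); ++i)
            next[i] = C[i] + r * (C[i - 1] - 2.0 * C[i] + C[i + 1]);
        C = next;
    }
}
```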

  7. Basic Upgrades • FLOODS has been used for: • Bipolar devices (SiGe) • GaN-based heterostructures • MEMS • Coupled H diffusion to device operation - 4 equations: ψ, n, p, H • Noise simulations for RF bipolar devices • Enhancements for modern MOS: • More flexible contacting options (transients) • Accurate mobility - transverse field • Alternate channel materials

  8. Outline • Background - FLOODS Code • Numeric Issues and Enhancements • Grid Refinement • Parallel Computing Platforms • Physical Issues and Enhancements • Transient / Base Materials • Mobility • Coupling to MRED

  9. Adaptive Refinement • Charge deposition is not on grid lines • Charge spreads in time • Fine grid at zero time, coarser grid as time goes on • To simulate many hits, we can’t rely on a user-defined grid

  10. Object Oriented • Modular - Grid / Operators / Fields • Code written for elements works in all dimensions • Example - every element can compute Size [Class diagram: Element → Node, Edge, Face, Volume]
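A minimal C++ sketch of that hierarchy; the class names follow the slide, while the members and the Size() signature are illustrative assumptions rather than the actual FLOODS interface:

```cpp
#include <cmath>

// Every element can compute its Size (length, area, or volume), so code
// written against the Element base class works in any dimension.
struct Element {
    virtual ~Element() = default;
    virtual double Size() const = 0;
};

struct Edge : Element {                 // 1-D element: Size is a length
    double x0 = 0.0, x1 = 0.0;
    double Size() const override { return std::fabs(x1 - x0); }
};

struct Face : Element {                 // 2-D element: Size is an area
    double area = 0.0;
    double Size() const override { return area; }
};

struct Volume : Element {               // 3-D element: Size is a volume
    double volume = 0.0;
    double Size() const override { return volume; }
};
```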

  11. Example - Isotropic Refinement • Local Error Estimate - Bank-Weiser based • Removal • Replace an edge w/ a node • Dose stays constant (see the sketch below) • Position new node at optimal quality position • Addition • Subdivide an edge • Find affected volumes (Voronoi) • Centroidal positioning SRC Supported
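The "dose stays constant" constraint is worth spelling out. A sketch under assumed names and an assumed even split (the actual FLOODS redistribution rule is not given on the slide): when a node is removed during unrefinement, its dose (concentration × control volume) is pushed into the surviving neighbors so the grid total is unchanged:

```cpp
#include <vector>

// Dose-conserving node removal (1-D sketch): the removed node's dose
// C*V is split between the two neighbors and their control volumes are
// enlarged, so sum(C_i * V_i) over the grid is exactly preserved.
struct GridNode { double C; double V; };  // concentration, control volume

void removeNode(std::vector<GridNode>& nodes, std::size_t dead,
                std::size_t left, std::size_t right) {
    const double dose = nodes[dead].C * nodes[dead].V;
    nodes[left].C  = (nodes[left].C  * nodes[left].V  + 0.5 * dose)
                     / (nodes[left].V  + 0.5 * nodes[dead].V);
    nodes[right].C = (nodes[right].C * nodes[right].V + 0.5 * dose)
                     / (nodes[right].V + 0.5 * nodes[dead].V);
    nodes[left].V  += 0.5 * nodes[dead].V;
    nodes[right].V += 0.5 * nodes[dead].V;
    nodes.erase(nodes.begin() + dead);
}
```

Conserving Σ CᵢVᵢ during unrefinement is what keeps the total deposited charge from drifting as the grid coarsens over time.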

  12. Anisotropic Grid - Initial • Rectangular region created at the command line • Remainder of the silicon is smoothed • Silicon Elements: 478 • Joint Quality: 0.936 • Average Quality: 0.944 SRC Supported

  13. Anisotropic Grid • Refinement of both extension and deep source / drain • LevelSet Spacer • Note - etch onto rectangular regions • Silicon Elements: 1150 • Joint Quality: 0.937 • Average Quality: 0.961 • Improved Quality on Add! SRC Supported

  14. Good for Process Simulation • Device Simulation is Different! • Channel Needs Anisotropic refinement • Unrefinement difficult • Global Operations and Data Structures

  15. Device Simulation Driven Refinement • All brick elements (2D example) • Refine and terminate • Unrefinement easier to track • Glue elements together • Remove excess discretization nodes • Requires Multi-point Templates • 4, 5, and 6 point square discretization (2D) • Virtual functions in an Object Oriented Scheme

  16. Object Oriented • Derived Specific Geometry Elements • Working on refinement • Working on Discretization [Class diagram: Element → Node, Edge, Face, Volume; Edge → 2-Edge, 3-Edge; Face → Tri, Quad]

  17. Parallel Computing • 3D Transient is time consuming • What can be done to accelerate?

  18. Numerical Approximations • Discretization • Replace continuous functions w/ piecewise-linear approximations • Grid spacing, time step • Linearization • Reduce nonlinear terms using multi-dimensional Newton’s method • Mobility, statistics, … • Linear Matrix Problem • Matrix dimension is (number of PDEs × number of nodes), a square system • Direct Solver [Flow diagram: nonlinear set of PDEs (Poisson, carrier continuity, lattice temperature) → temporal and spatial discretization → nonlinear algebraic equations, e.g. Flux = (n1 - n2) / x12 → multi-dimensional Newton linearization → linear matrix problem]
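The Newton step itself has a simple shape: linearize the residual, solve the resulting linear problem, update, repeat. The sketch below collapses it to one unknown so the "linear matrix problem" is a scalar divide; in FLOODS it is the large sparse system described above:

```cpp
#include <cmath>
#include <functional>

// One-unknown Newton iteration: linearize f at x, solve the (here scalar)
// linear problem, update, and repeat until the residual is small.
double newton(const std::function<double(double)>& f,
              const std::function<double(double)>& df,
              double x, double tol = 1e-12, int maxIter = 50) {
    for (int it = 0; it < maxIter; ++it) {
        const double r = f(x);
        if (std::fabs(r) < tol) break;  // converged: residual below tolerance
        x -= r / df(x);                 // Newton update, i.e. solve J*dx = -r
    }
    return x;
}
```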

  19. CPU Effort and Time • Assembly of Matrix • Calculate the large, linear system • Lots of data read • Potential for overlapping writes • Lots of parallel potential • Linear in number of elements • Solution of Matrix • Large sparse system • Established means for parallel solve • Leverage Argonne Nat’l Lab code • Scales as a low power of the number of equations, ~n^1.5

  20. Alagator Assembly • Equations are split • Edge pieces (current, electric field) • Node pieces (recombination, time derivative) • Element pieces (perpendicular field) • Pieces are vectorized • 128 pieces in tight BLAS loops for performance • Operations are broken down in scripting • Overall CPU linear in # of pieces
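A sketch of the vectorized "pieces" idea (the function and array names are illustrative, not FLOODS internals): identical terms are gathered into fixed-size batches so the inner arithmetic runs over contiguous arrays, like the 128-piece loops described above. The edge flux follows slide 18, Flux = (n1 - n2) / x12:

```cpp
// Batched assembly of one kind of "piece" (edge fluxes). Processing in
// fixed-size blocks keeps the inner loop tight and cache-friendly,
// analogous to BLAS-style kernels.
constexpr int kBatch = 128;

void assembleEdgeFlux(const double* n1, const double* n2,
                      const double* invX12, double* flux, int count) {
    for (int base = 0; base < count; base += kBatch) {
        const int end = (base + kBatch < count) ? base + kBatch : count;
        for (int i = base; i < end; ++i)    // tight vectorizable inner loop
            flux[i] = (n1[i] - n2[i]) * invX12[i];
    }
}
```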

  21. Parallel Assembly • Two Options • High Level Parallel • Assemble Different PDEs on Different CPUs • Limited Parallel Speedup • Low Level Parallel • Split Grid, assemble pieces • Match to Linear Solve

  22. Parallel Assembly • Partition the work on different processors • Assemble pieces on the processor that will solve them

  23. Parallel Performance - Assembly • High Level Partition • Poisson on Node 1 • Electrons on Node 0

  24. Linear Solve Speedup - PETSc Package • Amdahl’s Law Clearly Visible
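The speedup ceiling visible in those measurements is Amdahl's law: if a fraction p of the solve parallelizes across N processors, the best possible speedup is

```latex
% Amdahl's law: best-case speedup with N processors when a fraction p
% of the work is parallelizable; the serial remainder (1 - p) dominates
% as N grows.
S(N) = \frac{1}{(1 - p) + p/N},
\qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}
```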

  25. Linear Solve Speedup - Options • Ordering Algorithms are not helpful • Some Parallel Methods increase solve time

  26. Outline • Background - FLOODS Code • Numeric Issues and Enhancements • Grid Refinement • Parallel Computing Platforms • Physical Issues and Enhancements • Transient / Base Materials • Mobility • Coupling to MRED

  27. Today’s Transistor • Scaled MOSFETs and alternate materials to extend Moore’s Law • Technology scaling is driven by cost per transistor • Channel length scaling is slowing in bulk planar devices • Limited by leakage current • Strained Si devices [S. Thompson et al., IEEE EDL, pp. 191-193, 2004; S. Thompson et al., IEDM Tech. Dig., pp. 61-64, 2003]

  28. Enable Transients for Devices • Added transient device command • Extended Contacts to allow switching • Contact Templates Available Now • Example: NMOS Switching Transient, Gate Ramped from 3 V to 0 V in 1 ps

  29. Enable Transients for Devices • 1D Diode • Charge added to depletion region at time 0 • Simplest possible SEE

  30. Mobility Modeling • Combination of terms • Ionized Dopants • Carrier-Carrier • Surface Roughness • Strain • Combined using Matthiessen’s rule
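For reference, Matthiessen's rule combines the listed scattering mechanisms as a reciprocal sum, so whichever mobility term is smallest dominates the total:

```latex
% Matthiessen's rule: the combined mobility is the reciprocal sum of
% the individual scattering terms listed on the slide.
\frac{1}{\mu_{\text{total}}} =
  \frac{1}{\mu_{\text{dopant}}} + \frac{1}{\mu_{\text{carrier}}} +
  \frac{1}{\mu_{\text{surface}}} + \frac{1}{\mu_{\text{strain}}}
```

This is why the carrier-carrier term matters so much in single-event simulation: in the dense track plasma it can become the smallest term and thus control the total.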

  31. Low-Field Mobility • Lots of models - implemented the Philips unified model • Includes • Dopant scattering (dependent on dopant type) • Carrier-carrier scattering • Minority carrier scattering

  32. Low-Field Mobility - Carrier-Carrier • In single-event simulation the dominant term can be carrier-carrier scattering • Serious mistakes result from ignoring these terms [Plot: mobility at a donor density of 10¹⁶ cm⁻³]

  33. Surface Scattering • Acoustic Phonons • Surface Roughness • Both depend on perpendicular field • Decay factor applies only in channel • Tuned to measured MOS results • In progress!

  34. Normal Field Computation • Requires element assembly • Increased computation • More complex matrix • Compute field perpendicular to an interface • Fixed geometry • Might interact w/ single event • Field perpendicular to current flow • Convergence difficulties at low current • Assumes current is perpendicular… • Make sure it doesn’t apply in bulk [Diagram: SiO2 interface showing current and field directions]

  35. Channel Materials • Heterostructure Boundaries • Fairly easy, since we had heterostructure experience in FLOODS before • Development of Ge channel simulations [Device schematic: 500 Å Ge channel, 30 Å gate nitride, poly gate, bias swept up, 0.1 μm channel length, ideal doping profiles; note the concentration discontinuity at the interface]

  36. Boundary Conditions • Commercial simulators only allow BCs at contacts • FLOODS has large flexibility at boundaries • Example - sink on the sides: pdbSetString ReflectLeft Equation "1.0e-3*(Elec-Doping)" • Simulations as a function of device size • Reflecting boundaries at the edges and back change the current collected at contacts Courtesy of Ron Schrimpf, Andrew Sternberg

  37. Finite Element Method Mechanics • Theory of Elasticity - linear elastic materials; silicon is modeled as an isotropic material for simplicity • Enhanced Alagator • Added elastic operator for displacement • Added source term operators • Elastic(displacement) + BodyStrain(Boron*k) SRC Supported
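For the isotropic linear elastic setting the slide describes, the standard stress-strain relation between σ and ε is Hooke's law with the Lamé constants λ and μ:

```latex
% Isotropic linear elasticity (Hooke's law); lambda and mu are the
% Lame constants and I is the identity tensor.
\sigma = \lambda \, \mathrm{tr}(\varepsilon) \, I + 2 \mu \, \varepsilon
```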

  38. Stress Contours [Contour plot from FLOOPS: stress (MPa) around Si0.83Ge0.17 source/drain regions and STI; values range from about -536 to 403 MPa; feature dimensions 30-140 nm; axes in μm]

  39. Future - Strain and SEU Upgrades • Anisotropic operators • Current direction, strain interaction • Mobility has an orientation • Density of States • Recombination • Driving Forces? • Connection to Thompson

  40. Trajectory Read • Trajectory Read Command

  41. Summary • Numerics • Started developing refinement appropriate to SEE • Parallel port, begun testing • Physics • Built some basic capability for SEE • Read tracks • Next Year • Demonstrate link, run demos on parallel machines
