
ABC An algorithm for resolving the Schrödinger equation for triatomic systems



  1. ABC: An algorithm for resolving the Schrödinger equation for triatomic systems DIMITRIS SKOUTERIS Department of Mathematics & Informatics University of Perugia

  2. The problem • The aim is to resolve the time-independent Schrödinger equation for scattering systems of the form A + BC (hence the name ABC). • A triatomic scattering system can undergo elastic, inelastic or (the interesting part) reactive processes. • Generally, coordinates which do well in describing the A + BC channel are awful for other arrangements such as AB + C. • If we want ‘equal footing’ for all three processes, a near-universal set of coordinates is needed.

  3. Jacobi coordinates • The typical coordinates used for describing the A + BC system are Jacobi coordinates. These consist of: • The B – C distance. • The distance between A and the BC centre of mass. • The angle between the two vectors. The advantage of the Jacobi coordinates is that the Hamiltonian is expressed very simply in terms of them. HOWEVER…
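As an illustration, the three Jacobi coordinates can be computed from Cartesian atom positions. This is a minimal sketch; the function name and interface are hypothetical, not taken from the ABC program:

```python
import numpy as np

def jacobi_coordinates(r_A, r_B, r_C, m_B, m_C):
    """Jacobi coordinates for an A + BC arrangement: the B-C distance,
    the distance from A to the BC centre of mass, and the angle
    between the two vectors."""
    r_vec = r_C - r_B                               # B -> C
    com_BC = (m_B * r_B + m_C * r_C) / (m_B + m_C)  # BC centre of mass
    R_vec = r_A - com_BC                            # centre of mass -> A
    r = np.linalg.norm(r_vec)
    R = np.linalg.norm(R_vec)
    cos_gamma = np.dot(r_vec, R_vec) / (r * R)
    gamma = np.arccos(np.clip(cos_gamma, -1.0, 1.0))
    return r, R, gamma
```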

  4. The coordinate problem • The Jacobi coordinates are very much dependent on the arrangement. Thus, the Jacobi coordinates for the A + BC arrangement do not do well in describing the AB + C arrangement. • For example, at large AB – C distances, the Jacobi angle for the A + BC arrangement can vary only within a very narrow range, so huge angular grids are needed. • Therefore, we turn our attention to a more universal set of coordinates…

  5. Hyperspherical coordinates • Even though they do not completely resolve the problem, these coordinates alleviate it considerably. • The system is imagined in spherical polar coordinates. The radius ρ corresponds to the ‘extent’ of the system (in the sense that ρ=0 means all three atoms at the same point in space, whereas infinite ρ implies infinite separation of a pair of atoms). • The hyperangle θ serves to indicate a continuous passage between ‘reactants’ and ‘products’.
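A minimal sketch of one common (Delves-type) definition, assuming the two mass-scaled Jacobi distances are given; the names are hypothetical:

```python
import numpy as np

def hyperspherical(r, R):
    """Delves-type hyperspherical coordinates from the two
    (mass-scaled) Jacobi distances.  rho = 0 only when both distances
    vanish (all three atoms at one point); rho grows without bound
    when a pair of atoms separates to infinity.  The hyperangle theta
    passes continuously between 'reactant-like' and 'product-like'
    geometries."""
    rho = np.hypot(r, R)
    theta = np.arctan2(r, R)
    return rho, theta
```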

  6. The Schrödinger equation • The equation has the usual familiar eigenvalue form: H ψ = E ψ where H is the Hamiltonian operator, ψ is the wavefunction and E is the total energy we are interested in. We know H, we choose E as a parameter and we need to calculate ψ. Actually, we do not even need the whole of ψ…

  7. Boundary conditions • At infinite ρ, the wavefunction has a well defined behaviour. It is a linear combination of components, each of which corresponds to a particular state. For our (chosen) initial state, the wavefunction has the form: ψ → e^(–ikR) – S_el e^(+ikR) (as ρ → ∞) consisting of an incoming wave (prepared by us) and an outgoing wave (the result of elastic scattering).

  8. Boundary conditions • On the other hand, for a state which is not our initial state (and is, therefore, the result of inelastic or reactive scattering) the relevant component of the wavefunction has the form at infinity: ψ → –S e^(+ikR) (as ρ → ∞) i.e. there is no incoming wave. • It is the coefficient S which is the ‘holy grail’ of any scattering calculation – it is called the ‘S matrix element’ and its square amplitude denotes the probability of scattering to a particular state.

  9. Boundary conditions • At ρ = 0, on the other hand, the wavefunction is equal to 0. Every single component of it has to vanish at the origin. • This provides us with a starting point for our propagation – our problem is an initial value problem. • The unfortunate fact is that we know the value of the wavefunction precisely at the opposite end of where we want it!

  10. The Hamiltonian • The Hamiltonian for a triatomic system, in hyperspherical coordinates, can be written as a sum of terms: H = Tρ + Tθ + Tj + TJ + V(ρ, θ, γ) In order of appearance, these terms correspond to the kinetic energy along the hyperradius, the hyperangle, diatomic rotation, overall rotation and potential energy. It is this Hamiltonian which is used to solve the Schrödinger equation.

  11. Propagation • Thus, we need to propagate the wavefunction from ρ=0 up to infinity, according to the Schrödinger equation. There is only a small obstacle – the equation is second order with respect to ρ. In other words, the value of the wavefunction is not enough to start – one has to know its derivative as well! The system is, as yet, under-determined. • On the other hand, some kind of under-determinacy is to be expected – after all, there has been no mention of the initial state in which our triatomic system finds itself at the start of the scattering process.

  12. Close-coupled representation • At each point of the propagation, the wavefunction is represented as a linear combination of basis functions: Ψ = ∑ ci (ρ) ψi (r ; ρ) where r represents the internal coordinates of the system. The basis functions are adiabatic eigenfunctions of the internal Hamiltonian at specific points (‘sectors’) of ρ. Thus the wavefunction is represented as a column vector. At ρ=0 (the start of the propagation) all coefficients ci are identically zero. But we know nothing about their derivatives…
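The projection defining the coefficients can be sketched numerically. The 1-D grid, the sine basis and the function name below are hypothetical stand-ins for the internal coordinates and the sector basis, used only to illustrate the expansion:

```python
import numpy as np

# Hypothetical 1-D stand-in for the internal coordinates r:
# a grid on [0, pi] with normalised sine functions as the basis.
x = np.linspace(0.0, np.pi, 2001)
dx = x[1] - x[0]
basis = [np.sqrt(2.0 / np.pi) * np.sin(n * x) for n in (1, 2, 3)]

def expansion_coefficients(psi_total, basis, dx):
    """Close-coupled coefficients c_i = <psi_i | Psi>, evaluated by
    simple quadrature over the internal-coordinate grid."""
    return np.array([dx * np.sum(np.conj(psi_i) * psi_total)
                     for psi_i in basis])

# A wavefunction assembled as 2*psi_1 + 3*psi_2 projects back onto
# the coefficient (column) vector (2, 3, 0).
Psi = 2.0 * basis[0] + 3.0 * basis[1]
c = expansion_coefficients(Psi, basis, dx)
```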

  13. Matrix propagation • Therefore we resort to a trick: instead of propagating a single N×1 column vector, we propagate an N×N matrix of vectors (denoted by C), whose derivative vectors are linearly independent. This way, effectively, we propagate all possible wavefunctions at once. Here N denotes the number of basis functions in our set (pre-specified by us). • We do not know which column vector is the correct one… … but IGNORANCE IS STRENGTH !
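The idea of propagating all possible wavefunctions at once can be sketched with a simple finite-difference integrator for coupled equations of the form C″(ρ) = W(ρ) C(ρ), starting from C = 0 with linearly independent derivative columns. This is a toy velocity-Verlet-style scheme on a uniform grid, not the propagator of the actual code:

```python
import numpy as np

def propagate_C(W, rho_grid):
    """Propagate the N x N matrix C outward from rho = 0 for the
    coupled equations C''(rho) = W(rho) C(rho), where W bundles the
    potential and coupling terms.  C(0) = 0 (the wavefunction vanishes
    at the origin) and the derivative columns start as the identity,
    i.e. linearly independent: all possible solutions at once."""
    N = W(rho_grid[0]).shape[0]
    h = rho_grid[1] - rho_grid[0]          # uniform grid assumed
    C = np.zeros((N, N))
    dC = np.eye(N)
    for rho in rho_grid[:-1]:              # velocity-Verlet style step
        dC_half = dC + 0.5 * h * (W(rho) @ C)
        C = C + h * dC_half
        dC = dC_half + 0.5 * h * (W(rho + h) @ C)
    return C, dC
```

For a 1×1 constant W = 1 this reproduces C(ρ) = sinh(ρ), the solution with C(0) = 0 and C′(0) = 1.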

  14. Sector-by-sector • At each value of ρ along a pre-specified grid, the internal Hamiltonian is diagonalised, producing the appropriate ψi basis functions for the current sector. (This is the slow step of the calculation, scaling as N³). • Our N×N C matrix undergoes a linear transformation (to adapt to the new basis set) and is then propagated along the sector. After that, it is ready for the next one (linear transformations and multiplications scale much more favourably).
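A sketch of the per-sector work, assuming the internal Hamiltonian is available as a matrix in some primitive basis (the function name is hypothetical):

```python
import numpy as np

def new_sector(H_int, C, U_old):
    """Enter a new sector: diagonalise its internal Hamiltonian (the
    slow O(N^3) step) and re-express the propagated matrix C in the
    new adiabatic basis via the overlap of the two eigenbases."""
    eps, U_new = np.linalg.eigh(H_int)     # adiabatic energies / basis
    overlap = U_new.T @ U_old              # basis-change matrix
    return eps, U_new, overlap @ C
```

Since both eigenbases are orthonormal, the physical object U C is unchanged by the transformation; only its representation is updated.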

  15. The energy factor • As you may remember, the total energy (E) enters explicitly into the Schrödinger equation. This implies that a separate propagation needs to be carried out (within each sector) for each energy value. • Thus, the CPU time needed for a calculation is proportional to the number of energy values needed. • The propagation goes on and on and on until we reach… … The Asymptotic Region

  16. Asymptotic matching • There is no strict definition of the asymptotic region – it is simply the region where the various arrangement channels are separated by such enormous barriers that any communication between them is precluded. • As soon as this region is reached, we can use the form that we want our wavefunction and its derivative to have asymptotically. Since we have propagated the most general form of the wavefunction, we can match it now to an asymptotic function corresponding to our desired initial state.

  17. Asymptotic matching • Matching our propagated matrix to the form ψ → e^(–ikR) – S e^(+ikR) (as ρ → ∞) we can calculate: • 1. The S matrix elements (whose square amplitudes give the reaction probabilities). • 2. The column vector of coefficients relating the propagated N×N matrix to the actual column vector for our initial state. But we do not really need 2.!!! Isn’t there a way we can lose this piece of information, simplifying the calculation? THERE IS!

  18. The log-derivative matrix • Instead of propagating the N×N matrix C(ρ) and its derivative, we propagate another matrix Y defined as Y = C′(ρ) C⁻¹(ρ) where the first factor is the derivative of C w.r.t. ρ and the second is its matrix inverse. • This is known as the log-derivative matrix. On asymptotic matching, we obtain only the S matrix elements. Thus, we lose a (useless) piece of information and we only need to propagate one matrix instead of two.
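Differentiating Y = C′C⁻¹ and using C″ = W C gives the matrix Riccati equation Y′ = W – Y², a single first-order equation for a single matrix. The sketch below uses a naive explicit Euler step, started slightly away from the origin where the boundary condition C(0) = 0 makes Y ≈ I/ρ; production log-derivative propagators use much stabler schemes:

```python
import numpy as np

def propagate_logderiv(W, rho0, rho_max, h):
    """Propagate the log-derivative matrix Y = C'(rho) C(rho)^{-1},
    which obeys the matrix Riccati equation Y' = W(rho) - Y @ Y when
    C'' = W C.  Since C vanishes at the origin, Y ~ I/rho there, so
    the propagation starts at a small rho0 > 0 with that value."""
    Y = np.eye(W(rho0).shape[0]) / rho0
    rho = rho0
    while rho < rho_max:                   # naive explicit Euler step
        Y = Y + h * (W(rho) - Y @ Y)
        rho += h
    return Y
```

For a 1×1 constant W = 1 the exact log-derivative is coth(ρ), which the sketch reproduces to a few parts in a thousand.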

  19. Results • Once the S matrix elements are obtained, one can take their square amplitudes in order to obtain detailed state-to-state probabilities. Alternatively, one can take coherent sums of S matrix elements to derive differential cross sections. • From the reaction probabilities one can obtain: • Integral cross sections. • Cumulative reaction probabilities (summing over reactant states). • Rate constants (what really interests a chemist!)
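Extracting the detailed probabilities from a computed S matrix is a one-line operation; as a sketch (function name hypothetical), with the flux-conservation check that each column of probabilities sums to 1 for a unitary S:

```python
import numpy as np

def state_to_state_probabilities(S):
    """Square amplitudes of the S matrix elements.  For a unitary S,
    flux is conserved: each column (one initial state) sums to 1."""
    return np.abs(S) ** 2
```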

  20. Acknowledgements • My bosses, Prof. A. Laganà and Prof. O. Gervasi • Alessandro Costantini • Nadia Balucani • Stefano Crocchianti • The University of Perugia (money!)
