
Computational Physics (Lecture 7)



Presentation Transcript


  1. Computational Physics (Lecture 7) PHY4370

  2. Eigenvalue Problems • Very important in physics. • Ax = λx • λ is the eigenvalue corresponding to the eigenvector x of the matrix A, determined from the secular equation • |A − λI| = 0 • I is the unit matrix. • An n × n matrix has a total of n eigenvalues. The eigenvalues need not all be distinct. • If two or more eigenstates share the same eigenvalue, they are degenerate. • The problem is general because the matrix can come from many different physical problems.
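As a quick numerical check of these definitions, here is a minimal sketch using NumPy; the 2 × 2 matrix A is an arbitrary example, not from the lecture:

```python
# Sketch: verify the definition A x = lambda x numerically.
# The example matrix A is an arbitrary symmetric 2x2 (an assumption).
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Roots of the secular equation |A - lambda I| = 0 for this matrix:
# (2 - lam)^2 - 1 = 0  ->  lam = 1 or lam = 3.
eigvals, eigvecs = np.linalg.eig(A)

# Each column of eigvecs is an eigenvector; check A x = lambda x.
for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)
```

The same check works for any diagonalizable matrix; `np.linalg.eig` returns the full set of n eigenvalues, degenerate or not.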

  3. An eigenvector v of a matrix B is a nonzero vector that does not rotate when B is applied to it. • The eigenvector v may change length or reverse its direction, • but it won't turn sideways. • Iterative methods often depend on applying the matrix to the vector over and over again. • If |λ| < 1, B^i v = λ^i v will vanish as i grows. • If |λ| > 1, B^i v = λ^i v will diverge.
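This repeated application of the matrix is the basis of the power method; a minimal sketch, assuming an arbitrary symmetric example matrix and an illustrative function name:

```python
# Sketch of plain power iteration: repeatedly applying B to a vector
# aligns it with the eigenvector of largest |lambda|.
import numpy as np

def power_iteration(B, num_iter=200):
    v = np.random.default_rng(0).standard_normal(B.shape[0])
    for _ in range(num_iter):
        v = B @ v
        v /= np.linalg.norm(v)  # renormalize so v neither vanishes nor diverges
    lam = v @ B @ v             # Rayleigh quotient estimate of the eigenvalue
    return lam, v

B = np.array([[4.0, 1.0],
              [1.0, 3.0]])      # eigenvalues (7 +/- sqrt(5)) / 2
lam, v = power_iteration(B)
```

The normalization at each step is exactly what keeps the |λ| < 1 / |λ| > 1 behavior noted above from destroying the iteration.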

  4. If the function involves the inverse of the matrix and an eigenvalue happens to be zero, we can always shift the matrix by a term ηI to remove the singularity. • The shifted matrix has the eigenvalue λ_i − η for the corresponding eigenvector x_i. • The eigenvalue of the original matrix is recovered by taking η → 0 after the problem is solved. • Based on this property of nondefective matrices, we can construct a recursion (an iterative method): x^(k) = N_k (A − μI)^{−1} x^(k−1) • to extract the eigenvalue that is closest to the parameter μ, where N_k is a normalization constant to ensure that x^(k) stays normalized.
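This recursion is the shifted inverse iteration; a minimal sketch, where the diagonal test matrix and the shift μ = 3.5 are illustrative assumptions:

```python
# Sketch of shifted inverse iteration: x_(k) = N_k (A - mu I)^{-1} x_(k-1).
# Converges to the eigenvector whose eigenvalue is closest to mu.
import numpy as np

def inverse_iteration(A, mu, num_iter=50):
    n = A.shape[0]
    x = np.ones(n)
    M = A - mu * np.eye(n)           # shifted matrix; add eta*I first if mu
                                     # happens to be exactly an eigenvalue
    for _ in range(num_iter):
        x = np.linalg.solve(M, x)    # apply (A - mu I)^{-1}
        x /= np.linalg.norm(x)       # the normalization constant N_k
    lam = x @ A @ x                  # Rayleigh quotient (A symmetric)
    return lam, x

A = np.diag([1.0, 4.0, 10.0])
lam, x = inverse_iteration(A, mu=3.5)   # nearest eigenvalue to 3.5 is 4
```

Solving the linear system each step avoids forming the inverse explicitly.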

  5. Eigenvalues of a Hermitian matrix • In many physical problems, the matrix is Hermitian: • the complex conjugate of the transpose equals the matrix itself, A† = A. • Three important properties: • the eigenvalues of a Hermitian matrix are all real; • the eigenvectors of a Hermitian matrix can be made orthonormal; • a Hermitian matrix can be transformed into a diagonal matrix with the same set of eigenvalues under a similarity transformation of a unitary matrix that contains all its eigenvectors.

  6. The eigenvalue problem of an n × n complex Hermitian matrix is equivalent to that of a 2n × 2n real symmetric matrix. • A = B + iC, • where B and C are the real and imaginary parts of A. • If A is Hermitian, • B_ij = B_ji • C_ij = −C_ji. • Writing the eigenvector z in a similar fashion, z = x + iy, we have • (B + iC)(x + iy) = λ(x + iy). • Separating real and imaginary parts gives Bx − Cy = λx and Cx + By = λy. • Therefore, we only need to solve a real symmetric eigenvalue problem if the matrix is Hermitian.
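One standard way to assemble the 2n × 2n real symmetric matrix is the block form [[B, −C], [C, B]]; a sketch checking the equivalence, with an arbitrary 2 × 2 Hermitian example matrix:

```python
# Sketch: map an n x n Hermitian eigenproblem A = B + iC onto a
# 2n x 2n real symmetric matrix. The test matrix A is an assumption.
import numpy as np

A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
B, C = A.real, A.imag            # B symmetric, C antisymmetric

M = np.block([[B, -C],
              [C,  B]])          # real and symmetric since C^T = -C

# Each eigenvalue of A appears twice among the eigenvalues of M,
# with eigenvectors (x, y) and (-y, x) for z = x + iy.
eig_A = np.sort(np.linalg.eigvalsh(A))
eig_M = np.sort(np.linalg.eigvalsh(M))
assert np.allclose(np.repeat(eig_A, 2), eig_M)
```

The doubling of each eigenvalue is the price paid for working entirely in real arithmetic.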

  7. (1) Use an orthogonal matrix to perform a similarity transformation of the real symmetric matrix into a real symmetric tridiagonal matrix: • a matrix that has nonzero elements only on the main diagonal and on the first diagonals directly above and below it. • (2) Solve the eigenvalue problem of the resulting tridiagonal matrix.

  8. The similarity transformation preserves the eigenvalues of the original matrix, • and the eigenvectors are the columns or rows of the orthogonal matrix used in the transformation. • Householder method. • Givens method. • The Householder method is the most commonly used for tridiagonalization. • It is achieved with a total of n − 2 consecutive transformations, each operating on a row and a column of the matrix.

  9. The transformations can be cast into a recursion: A^(k) = O_k A^(k−1) O_k^T • for k = 1, 2, . . . , n − 2, where O_k is an orthogonal matrix that works on the row elements with i = k + 2, . . . , n of the kth column and the column elements with j = k + 2, . . . , n of the kth row. The recursion begins with A^(0) = A. • The full tridiagonalization takes O(n^3) floating-point operations. • Storage advantages: each O_k can be stored compactly as a single vector.
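The procedure can be sketched directly; a minimal dense implementation for illustration (library routines exploit the compact storage instead of forming each O_k in full):

```python
# Sketch of Householder tridiagonalization: n-2 orthogonal similarity
# transformations, each zeroing one column below its subdiagonal.
import numpy as np

def householder_tridiagonalize(A):
    A = A.astype(float)
    n = A.shape[0]
    for k in range(n - 2):
        # Build the reflector from the part of column k below the subdiagonal.
        x = A[k + 1:, k]
        alpha = -np.copysign(np.linalg.norm(x), x[0])  # sign chosen for stability
        w = x.copy()
        w[0] -= alpha
        norm_w = np.linalg.norm(w)
        if norm_w < 1e-14:
            continue                      # column already in the desired form
        w /= norm_w
        H = np.eye(n)
        H[k + 1:, k + 1:] -= 2.0 * np.outer(w, w)   # O_k = I - 2 w w^T
        A = H @ A @ H                     # similarity transform: eigenvalues kept
    return A

A = np.array([[4.0, 1.0, 2.0, 2.0],
              [1.0, 3.0, 0.0, 1.0],
              [2.0, 0.0, 2.0, 1.0],
              [2.0, 1.0, 1.0, 5.0]])
T = householder_tridiagonalize(A)   # tridiagonal, same eigenvalues as A
```

Checking `T` confirms both claims above: the result is tridiagonal, and its eigenvalues match those of the original matrix.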

  10. Here O_k takes the standard Householder reflector form, O_k = I − 2 w_k w_k^T, • where the unit vector w_k (w_k^T w_k = 1) is built from the elements of the kth column of A^(k−1) below its subdiagonal.

  11. Provided in standard math libraries, so we don't have to reinvent the wheel here; just take the code from most standard math libraries. • After we obtain the tridiagonalized matrix, the eigenvalues can be found using one of the root-search routines available. • Note that the secular equation |A − λI| = 0 is equivalent to a polynomial equation p_n(λ) = 0. • Because of the simplicity of the symmetric tridiagonal matrix, the polynomial p_n(λ) can be generated recursively with: p_i(λ) = (a_i − λ) p_{i−1}(λ) − b_{i−1}^2 p_{i−2}(λ),

  12. where a_i = A_{ii} and b_i = A_{i,i+1} = A_{i+1,i}. • The polynomial p_i(λ) is the characteristic polynomial of the submatrix A_{jk} with j, k = 1, 2, . . . , i, with the starting value p_0(λ) = 1 • and p_1(λ) = a_1 − λ.
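The recursion and its starting values can be verified numerically; a sketch, with an arbitrary 3 × 3 tridiagonal test matrix:

```python
# Sketch: evaluate p_n(lambda) for a symmetric tridiagonal matrix via
# p_0 = 1, p_1 = a_1 - lambda,
# p_i = (a_i - lambda) p_{i-1} - b_{i-1}^2 p_{i-2}.
import numpy as np

def char_poly(a, b, lam):
    p_prev, p = 1.0, a[0] - lam              # p_0 and p_1
    for i in range(1, len(a)):
        p_prev, p = p, (a[i] - lam) * p - b[i - 1] ** 2 * p_prev
    return p

a = np.array([2.0, 3.0, 4.0])                # diagonal A_ii
b = np.array([1.0, 1.0])                     # off-diagonal A_{i,i+1}
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

# p_n(lambda) vanishes at the eigenvalues of T, as the secular equation says.
for lam in np.linalg.eigvalsh(T):
    assert abs(char_poly(a, b, lam)) < 1e-9
```

A root-search routine (e.g. bisection) applied to `char_poly` would recover the same eigenvalues without ever forming the full determinant.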

  13. In principle, we can use any of the root-search routines to find the eigenvalues from the secular equation • as soon as the polynomial is generated. • However, two properties associated with the zeros of p_n(λ) are useful in developing a fast and accurate routine to obtain the eigenvalues of a symmetric tridiagonal matrix. • They will not be covered in this lecture; if interested, you can read the book by T. Pang.

  14. The Faddeev–Leverrier method • A very interesting method developed for matrix inversion and eigenvalue problems. • The scheme was first discovered by Leverrier in the middle of the nineteenth century • and modified by Faddeev (Faddeev and Faddeeva, 1963).

  15. The characteristic polynomial of the matrix is given by p_n(λ) = |A − λI| = c_0 + c_1λ + · · · + c_nλ^n, • where c_n = (−1)^n. • We can introduce a set of supplementary matrices S_k associated with the expansion of the inverse (λI − A)^{−1}.

  16. If we multiply the above equation by (λI − A), we have an identity between polynomials in λ. • Comparing the coefficients of the same order λ^l for l = 0, 1, . . . , n on both sides of the equation, we obtain the recursion • for k = 1, 2, . . . , n. The recursion starts with S_0 = I and ends at c_0. • Setting λ = 0, we can show that the inverse A^{−1} is obtained from the last supplementary matrix and c_0.
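A sketch of the recursion, written for simplicity in the monic convention q(λ) = det(λI − A) = (−1)^n p_n(λ); the function name and the 2 × 2 test matrix are illustrative:

```python
# Sketch of the Faddeev-Leverrier recursion: S_0 = I, then for k = 1..n
#   q_{n-k} = -Tr(A S_{k-1}) / k,   S_k = A S_{k-1} + q_{n-k} I,
# in the monic convention q(lambda) = det(lambda I - A) = (-1)^n p_n(lambda).
import numpy as np

def faddeev_leverrier(A):
    n = A.shape[0]
    S = np.eye(n)                             # S_0 = I
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                           # coefficient of lambda^n
    for k in range(1, n + 1):
        AS = A @ S
        coeffs[n - k] = -np.trace(AS) / k     # q_{n-k}
        S_prev = S
        S = AS + coeffs[n - k] * np.eye(n)    # S_k; S_n = 0 by Cayley-Hamilton
    # Setting lambda = 0 recovers the inverse: A^{-1} = -S_{n-1} / q_0.
    A_inv = -S_prev / coeffs[0]
    return coeffs, A_inv

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
coeffs, A_inv = faddeev_leverrier(A)
# q(lambda) = lambda^2 - 5 lambda + 6, with roots 2 and 3 (the eigenvalues).
```

The same pass thus delivers both the characteristic-polynomial coefficients and, as a byproduct, the matrix inverse.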

  17. Because the Faddeev–Leverrier recursion also generates all the coefficients c_k of the characteristic polynomial p_n(λ), • we can use a root-search method to obtain all the eigenvalues from p_n(λ) = 0.

  18. After we have found all the eigenvalues λ_k, we can also obtain the corresponding eigenvectors with the availability of the supplementary matrices S_k.
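One way to do this (a sketch, using the standard fact that in the monic convention the adjugate adj(λ_k I − A) = Σ_j S_j λ_k^{n−1−j} has columns proportional to the eigenvector of λ_k; the 2 × 2 example matrix is an assumption):

```python
# Sketch: eigenvectors from the supplementary matrices S_j, once the
# eigenvalues lambda_k are known.
import numpy as np

def supplementary_matrices(A):
    # Faddeev-Leverrier pass in the monic convention q(lambda) = det(lambda I - A).
    n = A.shape[0]
    S = [np.eye(n)]
    for k in range(1, n):
        c = -np.trace(A @ S[-1]) / k
        S.append(A @ S[-1] + c * np.eye(n))
    return S

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # eigenvalues 1 and 3
S = supplementary_matrices(A)
n = A.shape[0]

for lam in (1.0, 3.0):
    # adj(lam*I - A) = sum_j S_j lam^{n-1-j}; any nonzero column is an eigenvector.
    adj = sum(S[j] * lam ** (n - 1 - j) for j in range(n))
    v = adj[:, 0]
    v /= np.linalg.norm(v)
    assert np.allclose(A @ v, lam * v)
```

This works for nondegenerate eigenvalues; for a degenerate λ_k the adjugate columns may vanish and a null-space computation is needed instead.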

  19. Electronic structures of atoms • The Schrödinger equation for a multielectron atom is given by HΨ = EΨ, with (in standard form) H = −Σ_i ħ²∇_i²/(2m) − Σ_i Ze²/(4πε_0 r_i) + Σ_{i<j} e²/(4πε_0 r_ij).

  20. Hartree–Fock approximations • The Born–Oppenheimer approximation is inherently assumed. • Typically, relativistic effects are completely neglected. • The variational solution is assumed to be a linear combination of a finite number of basis functions. • Each energy eigenfunction is assumed to be describable by a single Slater determinant. • The mean-field approximation is implied. Effects arising from deviations from this assumption, known as electron correlation, are completely neglected for electrons of opposite spin but are taken into account for electrons of parallel spin.

  21. The ground state is approximated by the Hartree–Fock ansatz, which can be cast into a determinant.

  22. To optimize (minimize) E_HF, we can perform the functional variation with respect to the single-particle orbitals, which yields the • Hartree–Fock equation. • V_H(r) (the HF Hartree potential) is given by the Coulomb integral over • ρ(r), the total density of the electrons at r. • The exchange term is given by a similar, but nonlocal, integral over the orbitals.

  23. The Hartree potential can also be obtained from the solution of the Poisson equation ∇²V_H(r) = −4πe²ρ(r) (Gaussian units). • The single-particle wavefunctions in atomic systems can be assumed to have the form ψ_nlm(r) = [u_nl(r)/r] Y_lm(θ, φ). • The H–F equation for given l can then be converted into a matrix eigenvalue problem for the radial function u_nl(r).

  24. We can easily apply the numerical schemes introduced in this lecture to solve this matrix eigenvalue problem.
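As an illustration of the kind of radial matrix eigenvalue problem that results, here is a sketch for a hydrogen-like atom in atomic units; it is a simplified stand-in that keeps only the nuclear Coulomb term (no Hartree or exchange potential), and the grid parameters are arbitrary choices:

```python
# Sketch: discretize the l = 0 radial equation -(1/2) u'' - (Z/r) u = E u
# on a uniform grid and solve the resulting symmetric tridiagonal
# eigenvalue problem (atomic units; grid parameters are illustrative).
import numpy as np

Z = 1.0
N, r_max = 1000, 40.0
h = r_max / (N + 1)
r = h * np.arange(1, N + 1)          # interior points; u(0) = u(r_max) = 0

# Three-point finite difference for u'' gives a symmetric tridiagonal matrix.
diag = 1.0 / h**2 - Z / r
off = -0.5 / h**2 * np.ones(N - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]
# Exact bound-state energies are -Z^2 / (2 n^2): -0.5, -0.125, ...
```

The same machinery (tridiagonalization plus a tridiagonal eigensolver) applies unchanged once V_H and the exchange term are added to the diagonal and the matrix is solved self-consistently.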
