
Computational Physics (Lecture 8)



  1. Computational Physics (Lecture 8) PHY4370

  2. Inverse of a matrix • The inverse of a matrix A can be obtained with the linear-equation approach: A^−1_ij = x_ij, • for i, j = 1, 2, . . . , n, • where x_ij is the i-th component of the solution of the equation Ax_j = b_j, with b_ij = δ_ij (i.e. b_j is the j-th column of the unit matrix). • If we expand b into a unit matrix in the program for solving a linear equation set, • the solution corresponding to each column of the unit matrix forms the corresponding column of A^−1.

  3. // An example of performing matrix inversion through the
     // partial-pivoting Gaussian elimination.

     import java.lang.*;

     public class Inverse {
       public static void main(String argv[]) {
         double a[][] = {{ 100, 100, 100}, {-100, 300, -100}, {-100, -100, 300}};
         int n = a.length;
         double d[][] = invert(a);
         for (int i=0; i<n; ++i)
           for (int j=0; j<n; ++j)
             System.out.println(d[i][j]);
       }

       public static double[][] invert(double a[][]) {
         int n = a.length;
         double x[][] = new double[n][n];
         double b[][] = new double[n][n];
         int index[] = new int[n];
         for (int i=0; i<n; ++i) b[i][i] = 1;

         // Transform the matrix into an upper triangle
         gaussian(a, index);

         // Update the matrix b[i][j] with the ratios stored
         for (int i=0; i<n-1; ++i)
           for (int j=i+1; j<n; ++j)
             for (int k=0; k<n; ++k)
               b[index[j]][k] -= a[index[j]][i]*b[index[i]][k];

         // Perform backward substitutions
         for (int i=0; i<n; ++i) {
           x[n-1][i] = b[index[n-1]][i]/a[index[n-1]][n-1];
           for (int j=n-2; j>=0; --j) {
             x[j][i] = b[index[j]][i];
             for (int k=j+1; k<n; ++k) {
               x[j][i] -= a[index[j]][k]*x[k][i];
             }
             x[j][i] /= a[index[j]][j];
           }
         }
         return x;
       }

       public static void gaussian(double a[][], int index[]) {...}
     }

  4. LU decomposition • A scheme related to the Gaussian elimination is called LU decomposition, • which splits a nonsingular matrix into the product of a lower-triangular and an upper-triangular matrix. • The Gaussian elimination corresponds to • a set of n − 1 matrix operations that can be represented by a matrix multiplication • A^(n−1) = MA, • where • M = M^(n−1)M^(n−2) · · · M^(1), • with each M^(r), for r = 1, 2, . . . , n − 1, completing one step of the elimination.

  5. It is easy to show that M is a lower-triangular matrix with all the diagonal elements being −1, and M_ij = −A^(j−1)_ij / A^(j−1)_jj for i > j. • So A = LU, where U = A^(n−1) is an upper-triangular matrix and L = M^−1 is a lower-triangular matrix.

  6. Furthermore, because M_ii = −1, we must have L_ii = 1 and L_ij = −M_ij = A^(j−1)_ij / A^(j−1)_jj for i > j, • which are exactly the ratios stored in the matrix in the method introduced earlier in this section to perform the partial-pivoting Gaussian elimination.

  7. The method performs the LU decomposition with the nonzero elements of U and the nonzero, nondiagonal elements of L stored in the returned matrix. • The choice of L_ii = 1 in the LU decomposition is called the Doolittle factorization. • An alternative is to choose U_ii = 1; this is called the Crout factorization. • The Crout factorization is equivalent to rescaling U_ij by U_ii for i < j and multiplying L_ij by U_jj for i > j in the Doolittle factorization.

  8. In general the elements in L and U can be obtained from a comparison of the elements of the product of L and U with the elements of A. • The result can be cast into a recursion: U_ij = A_ij − Σ_{k=1}^{i−1} L_ik U_kj for i ≤ j, and L_ij = (A_ij − Σ_{k=1}^{j−1} L_ik U_kj) / U_jj for i > j, • where L_i1 = A_i1/U_11 and U_1j = A_1j/L_11 are the starting values.
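A minimal Java sketch of this recursion (Doolittle convention L_ii = 1, no pivoting; the class name LUDemo and method name lu are illustrative, not part of the lecture code):

    // Doolittle LU decomposition without pivoting (illustrative sketch).
    // Returns an n x n matrix holding U on and above the diagonal and the
    // nondiagonal elements of L below it (L_ii = 1 is implied).
    public class LUDemo {
        public static double[][] lu(double a[][]) {
            int n = a.length;
            double lu[][] = new double[n][n];
            for (int i = 0; i < n; ++i) {
                for (int j = i; j < n; ++j) {          // U_ij for j >= i
                    double sum = a[i][j];
                    for (int k = 0; k < i; ++k) sum -= lu[i][k]*lu[k][j];
                    lu[i][j] = sum;
                }
                for (int j = i+1; j < n; ++j) {        // L_ji for j > i
                    double sum = a[j][i];
                    for (int k = 0; k < i; ++k) sum -= lu[j][k]*lu[k][i];
                    lu[j][i] = sum/lu[i][i];           // divide by the diagonal U_ii
                }
            }
            return lu;
        }
    }

With this storage convention the determinant of A (see slide 9) is simply the product of the diagonal elements lu[i][i].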

  9. We can apply the same pivoting technique and store the L and U values in the zero components of the matrix (the elements eliminated during the Gaussian elimination). • The determinant of A is given by the product of the diagonal elements of L and U, because |A| = |L||U|.

  10. As soon as we obtain L and U for a given matrix A, we can solve the linear equation set Ax = b with one set of forward substitutions and another set of backward substitutions. • The linear equation set can now be rewritten as • Ly = b, Ux = y. • If we compare the elements on both sides of these equations, we have y_i = (b_i − Σ_{j=1}^{i−1} L_ij y_j) / L_ii, • with y_1 = b_1/L_11 and i = 2, 3, . . . , n.

  11. We can obtain the solution of the original linear equation set from x_i = (y_i − Σ_{j=i+1}^{n} U_ij x_j) / U_ii, for i = n − 1, n − 2, . . . , 1, starting with x_n = y_n/U_nn.
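As a sketch of these two sweeps in Java (assuming the combined L/U storage of the LUDemo example above; with the Doolittle convention L_ii = 1 the division in the forward sweep drops out):

    // Solve A x = b from the combined LU matrix: forward substitution for
    // L y = b, then backward substitution for U x = y.
    public static double[] solve(double lu[][], double b[]) {
        int n = lu.length;
        double y[] = new double[n];
        for (int i = 0; i < n; ++i) {              // L y = b, with L_ii = 1
            double sum = b[i];
            for (int j = 0; j < i; ++j) sum -= lu[i][j]*y[j];
            y[i] = sum;
        }
        double x[] = new double[n];
        for (int i = n-1; i >= 0; --i) {           // U x = y
            double sum = y[i];
            for (int j = i+1; j < n; ++j) sum -= lu[i][j]*x[j];
            x[i] = sum/lu[i][i];
        }
        return x;
    }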

  12. Monte Carlo Simulations • Statistical mechanics of complex physical systems • Difficult to solve by analytical approaches • Numerical simulation: an indispensable tool • MD and MC (molecular dynamics and Monte Carlo) • One of the major numerical techniques developed in the last century • for evaluating multidimensional integrals • and solving integral equations

  13. Brief History of MC • The name: • from the resemblance of the technique, of playing and recording results, to gambling in a real casino • Comte de Buffon’s needle problem • Courant: random walks and PDEs • Fermi in the 1930s: random sampling to calculate the interaction between neutrons and condensed materials, and the diffusion and transport of neutrons • von Neumann, Fermi, Ulam and Metropolis in the 1940s and 1950s • Rapid progress in solving problems of statistical physics, transport, economic modeling, …

  14. Theory and application in statistical physics • Statistical physics: • systems with many degrees of freedom • A typical problem: • given the Hamiltonian, calculate the average macroscopic observables • Example: magnetic systems • Ising model • Ferromagnetic system • Anisotropic along different directions

  15. Here, on a lattice point i, the spin s_i points either up or down (s_i = ±1), and the Ising Hamiltonian is H = −J Σ_{<ij>} s_i s_j − h Σ_i s_i. • The exchange energy J acts only between neighbors. • The term −h Σ_i s_i is the Zeeman energy. • However, if the spin direction lies in the x-y plane, we obtain the XY model: H = −J Σ_{<ij>} (S_i^x S_j^x + S_i^y S_j^y).

  16. If the spin direction is fully isotropic, we obtain the Heisenberg model: H = −J Σ_{<ij>} S_i · S_j. • Many more complex models are possible.
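As a small illustration (a Java sketch not taken from the lecture; the square lattice and periodic boundaries are assumptions), the Ising energy of a given spin configuration can be evaluated directly from the Hamiltonian above:

    // Energy of an Ising configuration on an L x L square lattice with
    // periodic boundary conditions: E = -J sum_<ij> s_i s_j - h sum_i s_i.
    public static double isingEnergy(int s[][], double J, double h) {
        int L = s.length;
        double E = 0;
        for (int i = 0; i < L; ++i) {
            for (int j = 0; j < L; ++j) {
                // count each nearest-neighbor bond once (right and down neighbors)
                E -= J*s[i][j]*(s[(i+1)%L][j] + s[i][(j+1)%L]);
                E -= h*s[i][j];                    // Zeeman term
            }
        }
        return E;
    }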

  17. Statistical Average • Average energy and magnetization per degree of freedom, and more generally the thermal average of any observable A(x): ⟨A⟩ = ∫ A(x) e^{−H(x)/k_BT} dx / ∫ e^{−H(x)/k_BT} dx, • where x is a vector in the phase space.

  18. The normalized Boltzmann term P(x) = e^{−H(x)/k_BT} / ∫ e^{−H(x)/k_BT} dx is the probability density. • However, we do not know how to calculate this integral analytically: for a system with N degrees of freedom it is a high-dimensional integral.

  19. Simple Sampling • Equilibrium statistical mechanics: • Basic goal: • calculate the average of A(x) according to the distribution P(x). • Approximate it by using a characteristic subset {x_1, x_2, … , x_N} • of points selected at random from the phase space with uniform probability (‘simple sampling’).
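A minimal Java sketch of simple sampling for a toy one-dimensional “phase space” (the observable A(x) = x², the Boltzmann-like weight, and the interval [0, 1] are purely illustrative assumptions):

    import java.util.Random;

    // Simple sampling: draw x uniformly and estimate <A> as the ratio of
    // the sampled sums of A(x) exp(-H(x)/kT) and exp(-H(x)/kT).
    public class SimpleSampling {
        public static void main(String argv[]) {
            Random rng = new Random();
            int M = 1000000;
            double sumAP = 0, sumP = 0;
            for (int k = 0; k < M; ++k) {
                double x = rng.nextDouble();       // uniform point in [0,1]
                double w = Math.exp(-2.0*x);       // unnormalized Boltzmann factor
                sumAP += x*x*w;
                sumP  += w;
            }
            System.out.println("<A> = " + sumAP/sumP);
        }
    }

This works well in low dimensions, but for a system with many degrees of freedom almost all uniformly chosen points carry negligible Boltzmann weight, which motivates the Markov-chain sampling introduced in the later slides.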

  20. Random Walk • Models transport processes such as diffusion • Addition of vectors whose orientation is random • Walks can be generated both on lattices and in the continuum, • either with a uniform step length, • or with the step length drawn from a suitable distribution. • The latter is desirable if one wishes to consider • complicated geometries • or boundary conditions of the medium where the diffusion takes place. • It is straightforward to include competing processes. • For example, in a reactor, diffusion of neutrons in the moderator competes with the loss of neutrons due to nuclear reactions or radiation escaping to the outside, and with the gain of neutrons due to fission events. • This problem of reactor criticality (and related problems for nuclear weapons!) was the starting point for the first large-scale applications of Monte Carlo methods by Fermi, von Neumann, Ulam, and their coworkers!
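A minimal sketch of an unbiased lattice random walk in Java (the 2D square lattice, step number N, and sample size are illustrative choices, not values from the lecture):

    import java.util.Random;

    // Unbiased random walk on the 2D square lattice: average the squared
    // end-to-end distance over many walks; <R^2> grows linearly with N.
    public class RandomWalk {
        public static void main(String argv[]) {
            Random rng = new Random();
            int N = 100, walks = 10000;
            double r2sum = 0;
            for (int w = 0; w < walks; ++w) {
                int x = 0, y = 0;
                for (int step = 0; step < N; ++step) {
                    switch (rng.nextInt(4)) {      // pick one of the 4 lattice directions
                        case 0: ++x; break;
                        case 1: --x; break;
                        case 2: ++y; break;
                        default: --y; break;
                    }
                }
                r2sum += (double) x*x + (double) y*y;
            }
            System.out.println("<R^2> = " + r2sum/walks + " (expected about " + N + ")");
        }
    }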

  21. Self-avoiding walks (SAWs) • Widely studied as a simple model for the configurational statistics of polymer chains in good solvents. • Consider a square or simple cubic lattice with coordination number z. • For a random walk (RW) with N steps there are • Z_RW = z^N configurations, • but many of these random walks intersect themselves and thus are not self-avoiding. • For SAWs, • one only expects of the order of Z_SAW configurations, • where Z_SAW ∝ N^(γ−1) z_eff^N as N → ∞. • Here γ > 1 is a characteristic exponent, believed to be 43/32 in d = 2 dimensions (Nienhuis 1984), • while in d = 3 dimensions it is only known approximately, γ ≈ 1.16.

  22. z_eff is an ‘effective’ coordination number (also not known exactly). • It is obvious that an exact enumeration of all configurations would be possible only for rather small N, • while most questions of interest refer to the behavior for large N. • Hence the use of exact enumeration methods is fairly limited, and they are not discussed here further. • We are only concerned with Monte Carlo techniques to estimate quantities such as γ or z_eff, or other quantities of interest • such as the end-to-end distance of the SAW.

  23. For Monte Carlo simulation: • the estimate is based on a sample of M ≪ Z_SAW configurations. • In the simple-sampling generation of SAWs, the M configurations are statistically independent and hence standard error analysis applies. • Relative error: ΔA/⟨A⟩ = √[(⟨A²⟩ − ⟨A⟩²)/M] / ⟨A⟩, which decreases as 1/√M.
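A sketch of the simple-sampling generation of SAWs in Java (assumed approach: generate unrestricted walks and keep only the self-avoiding ones; N, the number of attempts, and the site-encoding trick are illustrative). The acceptance rate falls off rapidly with N, which is the practical limitation of simple sampling for SAWs:

    import java.util.HashSet;
    import java.util.Random;

    // Simple sampling of self-avoiding walks on the square lattice:
    // generate ordinary random walks and discard any that revisit a site.
    public class SAWSampling {
        public static void main(String argv[]) {
            Random rng = new Random();
            int N = 20, tries = 1000000, accepted = 0;
            double r2sum = 0;
            int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
            for (int t = 0; t < tries; ++t) {
                HashSet<Long> visited = new HashSet<Long>();
                int x = 0, y = 0;
                visited.add(0L);
                boolean ok = true;
                for (int step = 0; step < N && ok; ++step) {
                    int d = rng.nextInt(4);
                    x += dx[d];
                    y += dy[d];
                    ok = visited.add(10000L*x + y);   // false if the site was visited before
                }
                if (ok) {
                    ++accepted;
                    r2sum += (double) x*x + (double) y*y;
                }
            }
            System.out.println("acceptance = " + (double) accepted/tries
                               + ", <R^2> = " + r2sum/accepted);
        }
    }

Since the accepted configurations are statistically independent, the error of such an estimate indeed decreases as 1/√M, as stated above.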

  24. Random variables • Type 1: discrete random variable ξ: it takes the values x_1, x_2, … , x_n, … • with probabilities p_1, p_2, … , p_n, … . • {p_i} is the probability distribution of ξ. • The other type of random variable ξ is continuous: • the probability that ξ falls in [x, x+Δx] is • p(x ≤ ξ < x+Δx) ≈ f(x)Δx, • where f(x) is the probability density of ξ. • The probability that ξ lies in [a, b] is given by ∫_a^b f(x) dx.

  25. Normalization: • Σ_i p_i = 1 for discrete random variables, • ∫ f(x) dx = 1 for continuous random variables. • Distribution function: F(x) = P(ξ ≤ x) = ∫_{−∞}^{x} f(t) dt.

  26. Mathematical expectation value: • E(ξ) = Σ_i x_i p_i (discrete) • or E(ξ) = ∫ x f(x) dx (continuous). • Variance: • D(ξ) = E[(ξ − E(ξ))²] = Σ_i (x_i − E(ξ))² p_i (discrete) • or D(ξ) = ∫ (x − E(ξ))² f(x) dx (continuous).

  27. Two important theorems • Law of large numbers: suppose ξ_1, ξ_2, … , ξ_n, … is a sequence of independent • random variables with the same distribution and E(ξ_i) = a; then for any ε > 0 we have • lim_{n→∞} P( |(1/n) Σ_{i=1}^{n} ξ_i − a| ≥ ε ) = 0. • This theorem shows that, no matter what the distribution of the random variable is, • if n is large enough the arithmetic mean converges to the • expected value.

  28. Central limit theorem: • suppose ξ_1, ξ_2, … , ξ_n, … is a sequence of independent random variables with the same distribution, for which E(ξ_i) = a and D(ξ_i) = σ² exist. • Then the difference between the arithmetic mean and the expected value, scaled by √n/σ, is asymptotically normally distributed: • lim_{n→∞} P( √n [(1/n) Σ_{i=1}^{n} ξ_i − a]/σ ≤ x ) = Φ(x), the standard normal distribution function.
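A small Java illustration of both theorems, using uniform random numbers (for which a = 0.5 and σ² = 1/12; the sample sizes are arbitrary choices):

    import java.util.Random;

    // Law of large numbers / central limit theorem illustration: the sample
    // mean of n uniform random numbers approaches a = 0.5, with fluctuations
    // of order sigma/sqrt(n), where sigma^2 = 1/12.
    public class Averages {
        public static void main(String argv[]) {
            Random rng = new Random();
            for (int n = 100; n <= 1000000; n *= 100) {
                double sum = 0;
                for (int i = 0; i < n; ++i) sum += rng.nextDouble();
                System.out.println("n = " + n + ", mean = " + sum/n
                                   + ", expected spread = " + Math.sqrt(1.0/(12.0*n)));
            }
        }
    }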

  29. Markov Chain • Construct a process that, starting from any microstate of the system, migrates to a new state. • We use x_i to denote a microstate. If we start from x_0, the process creates a sequence of states: • x_1, x_2, … , x_i, … ; these states form a chain. • A Markov process is a process in which each state depends only on the previous state. • From state r to state s, the transition probability is • w(x_r → x_s). A Markov process creates a Markov chain, • also called a memoryless chain. • To realize the canonical (Boltzmann) distribution, we can construct the Markov chain so that, no matter which state we start from, there exists a large number M such that, after we discard the first M states, the remainder is distributed according to the canonical distribution.

  30. If w(x_r → x_s) satisfies the equilibrium (detailed balance) condition • P(x_r) w(x_r → x_s) = P(x_s) w(x_s → x_r), • where P(x) is the desired distribution (here the canonical Boltzmann distribution), the chain samples P(x). • To prove it, consider N parallel Markov chains; at a certain step, • N_r chains are in state r and N_s chains are in state s. • Therefore, the number of chains moving from r to s at the next step • is N_r w(x_r → x_s), • and from s to r it is N_s w(x_s → x_r). • The net number of transferred chains is N_r w(x_r → x_s) − N_s w(x_s → x_r).

  31. If w(x_r → x_s) satisfies the equilibrium condition, the net transfer vanishes exactly when N_r/N_s = P(x_r)/P(x_s). • This is a very important result: it indicates that if two states are • not yet populated according to the canonical distribution, the Markov process tends to drive them • toward the canonical distribution.
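A toy Java check of this argument for a hypothetical two-state system (the target probabilities and the Metropolis-like choice of w are illustrative assumptions, not values from the lecture):

    // Two-state illustration of detailed balance: with
    // w(r->s)/w(s->r) = P(s)/P(r), the populations N_r and N_s relax toward
    // the target distribution from any starting point.
    public class DetailedBalance {
        public static void main(String argv[]) {
            double Pr = 0.8, Ps = 0.2;        // target (Boltzmann) probabilities
            double wrs = Ps/Pr, wsr = 1.0;    // satisfies P_r w(r->s) = P_s w(s->r)
            double Nr = 0.0, Ns = 1.0;        // start far from the target
            for (int step = 0; step < 50; ++step) {
                double net = Nr*wrs - Ns*wsr; // net fraction transferred from r to s
                Nr -= net;
                Ns += net;
            }
            System.out.println("N_r = " + Nr + ", N_s = " + Ns);  // approaches 0.8, 0.2
        }
    }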

  32. Sampling the canonical distribution (importance sampling): • choose a transition probability that satisfies the equilibrium condition for the canonical distribution; • create a Markov chain and throw away the first M states; • calculate the physical quantities from the remaining states. • This algorithm was proposed by Metropolis in the 1950s: • the Metropolis algorithm. • Suppose the move from state r to state s has energy difference ΔE = E_s − E_r. • Then w(x_r → x_s) = exp(−ΔE/k_BT) if ΔE > 0, and w(x_r → x_s) = 1 if ΔE ≤ 0.

  33. Metropolis algorithm: • Another popular choice is, e.g., the heat-bath (Glauber) form w(x_r → x_s) = 1/[1 + exp(ΔE/k_BT)]. • The choice of w is not unique. • However, different choices of w converge very differently. How to choose a good • w is still at the frontier of Monte Carlo algorithms.
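Putting the pieces together, here is a compact Java sketch of the Metropolis algorithm for the 2D Ising model (zero field, J = 1; the lattice size, temperature, and sweep counts are illustrative choices, not values from the lecture):

    import java.util.Random;

    // Metropolis sampling of the 2D Ising model (J = 1, h = 0, periodic
    // boundaries). A spin flip is accepted with probability min(1, exp(-dE/T)).
    public class IsingMetropolis {
        public static void main(String argv[]) {
            int L = 16;
            double T = 2.27;                      // temperature in units of J/k_B
            int warmup = 1000, sweeps = 10000;    // discard the first M states, then measure
            Random rng = new Random();
            int s[][] = new int[L][L];
            for (int i = 0; i < L; ++i)
                for (int j = 0; j < L; ++j) s[i][j] = 1;

            double msum = 0;
            for (int sweep = 0; sweep < warmup + sweeps; ++sweep) {
                for (int k = 0; k < L*L; ++k) {
                    int i = rng.nextInt(L), j = rng.nextInt(L);
                    int nn = s[(i+1)%L][j] + s[(i-1+L)%L][j]
                           + s[i][(j+1)%L] + s[i][(j-1+L)%L];
                    double dE = 2.0*s[i][j]*nn;   // energy change for flipping s[i][j]
                    if (dE <= 0 || rng.nextDouble() < Math.exp(-dE/T)) s[i][j] = -s[i][j];
                }
                if (sweep >= warmup) {            // measure after the warm-up phase
                    int m = 0;
                    for (int i = 0; i < L; ++i)
                        for (int j = 0; j < L; ++j) m += s[i][j];
                    msum += Math.abs((double) m)/(L*L);
                }
            }
            System.out.println("<|m|> = " + msum/sweeps);
        }
    }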
