Chapter 5 Simultaneous Linear Equations

  • Many engineering and scientific problems can be formulated in terms of systems of simultaneous linear equations.

  • In a system consisting of only a few equations, a solution can be found analytically using the standard methods from algebra, such as substitution.

  • Example: Electrical Circuit Analysis

Kirchhoff’s law

I: current, R: resistance, V: voltage

Assume R1=2, R2=4, R3=5, V1=6, and V2=2. We get the following system of three linear simultaneous equations.

The solution to these three equations produces the current flows in the network.
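The actual equations depend on the circuit topology in the figure, which is not reproduced here. As an illustrative sketch only, a 3-by-3 system of this kind can be solved with NumPy's linear solver (the coefficient matrix below is hypothetical, not the one derived from the figure):

```python
import numpy as np

# Hypothetical coefficients -- NOT the actual circuit equations, which
# depend on the network topology shown in the figure.
A = np.array([[ 6.0, -4.0,  0.0],
              [-4.0,  9.0, -5.0],
              [ 0.0, -5.0,  5.0]])
C = np.array([6.0, 0.0, -2.0])

# Solve A I = C for the branch currents I1, I2, I3.
I = np.linalg.solve(A, C)
```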


General Form for a System of Equations

aij : known coefficient

Xj : unknown variable

Ci : known constant

  • Assume # of unknowns = # of equations

  • Assume the equations are linearly independent; that is, any one equation is not a linear combination of any of the other equations.

The linear system can be written in matrix-vector form:

Combining A and C into an augmented matrix, the system can be expressed as:



Solution of Two Equations

It can be solved by substitution.

With substitution, we get
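The substitution result for two equations can be written in closed form. A minimal sketch (the function name `solve_2x2` is my own, not from the slides):

```python
def solve_2x2(a11, a12, a21, a22, c1, c2):
    """Solve a11*x1 + a12*x2 = c1 and a21*x1 + a22*x2 = c2
    by the closed-form substitution result."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("no unique solution (determinant is zero)")
    x1 = (c1 * a22 - c2 * a12) / det
    x2 = (a11 * c2 - a21 * c1) / det
    return x1, x2
```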


Classification of Systems of Equations

  • Systems that have unique solutions


  • Systems without solutions (parallel lines)


  • Systems with an infinite number of solutions (same line)

  • A system that has a unique solution but ill-conditioned parameters, so that small changes in the coefficients produce large changes in the solution.


Permissible Operations

  • Rule 1: The solution is not changed if the order of the equations is changed.

  • Rule 2: Any one of the equations can be multiplied or divided by a nonzero constant without changing the solution.

  • Rule 3: The solution is not changed if two equations are added together and the resulting equation replaces either of the two original equations.


Gaussian Elimination

  • Gaussian elimination procedure:

    Phase 1: forward pass

    Phase 2: back substitution

  • The goal of the forward pass is to apply the three permissible operations to transform the original matrix into an upper-triangular matrix:

Written in terms of individual equations:

  • Back substitution:


Example: Gaussian Elimination Procedure

Represented in the matrix form:

  • In step 1 of the forward pass, we convert the element a11 (called the pivot for row 1) to 1 and eliminate, that is, set to zero, all the other elements in the first column.


  • Step 2:

  • Step 3:


  • Step 4:

    It represents:


  • Step 1 of backward substitution:

  • Step 2:


  • Step 3:

  • The solution is:

X1 = 1, X2 = 2, X3 = 3, X4 = 4

Gauss-Jordan Elimination

  • The Gaussian elimination procedure requires a forward pass to transform the coefficient matrix into upper-triangular form.

  • In Gauss-Jordan elimination, all coefficients in a column except for the pivot element are eliminated.

  • In Gauss-Jordan elimination, the solution is obtained directly after the forward pass; there is no back substitution phase.

  • The Gauss-Jordan method needs more computational effort than Gaussian elimination.
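The Gauss-Jordan idea can be sketched in a few lines of Python (a minimal version with no pivoting, assuming nonzero diagonal elements; the function name is my own):

```python
def gauss_jordan(A, C):
    """Solve AX = C by Gauss-Jordan elimination (no pivoting)."""
    n = len(C)
    # Work on an augmented matrix [A | C] in floats.
    M = [list(map(float, row)) + [float(c)] for row, c in zip(A, C)]
    for i in range(n):
        p = M[i][i]                       # pivot element
        M[i] = [v / p for v in M[i]]      # normalize the pivot row
        for k in range(n):                # eliminate column i in ALL other rows
            if k != i:
                f = M[k][i]
                M[k] = [a - f * b for a, b in zip(M[k], M[i])]
    return [row[n] for row in M]          # solution sits in the last column
```

Because every row is reduced, the solution appears directly after the forward pass, with no back substitution.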

Example: Gauss-Jordan Elimination

  • Step 1:

  • Step 2:


  • Step 3:

  • Step 4:


Accumulated Round-off Errors

  • Problems with round-off and truncation are most likely to occur when the coefficients in the equations differ by several orders of magnitude.

  • Round-off problems can be reduced by rearranging the equations such that the largest coefficient in each equation is placed on the principal diagonal of the matrix.

  • The equations should be ordered such that the equation having the largest pivot is reduced first, followed by the equation having the next largest, and so on.


Programming Gaussian Elimination

  • Forward pass:

    1. Loop over each row i, making each row i in turn the pivot row.

    2. Normalize the elements of the pivot row (row i) by dividing each element in the row by aii as follows:

    3. Loop over rows (i + 1) to n below the pivot row and reduce the elements in each row as follows:

  • Back substitution:

    1. For the last row n:

    2. For rows (n - 1) through 1,
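The steps above translate almost directly into Python. This is a minimal sketch with no pivoting (it assumes nonzero diagonal elements), and the function name is my own:

```python
def gaussian_elimination(A, C):
    """Solve AX = C: normalize each pivot row, reduce the rows
    below it, then back-substitute.  No pivoting."""
    n = len(C)
    a = [list(map(float, row)) for row in A]
    c = list(map(float, C))
    # Forward pass.
    for i in range(n):
        p = a[i][i]                        # pivot
        a[i] = [v / p for v in a[i]]       # normalize the pivot row
        c[i] /= p
        for k in range(i + 1, n):          # reduce rows below the pivot row
            f = a[k][i]
            a[k] = [v - f * w for v, w in zip(a[k], a[i])]
            c[k] -= f * c[i]
    # Back substitution (diagonal of a is now all 1s).
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = c[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))
    return x
```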

LU Decomposition

  • A matrix A can be decomposed into L and U, where L is a lower-triangular matrix and U is an upper-triangular matrix.



  • L and U can be determined as follows:



Since LUX = C, introducing the intermediate vector E = UX gives LE = C and UX = E.

  • To calculate LE=C (forward substitution)

  • To calculate UX=E (back substitution)
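The decomposition and the two substitution phases can be sketched as follows, using Crout's convention (U has a unit diagonal, matching the normalized upper-triangular matrix produced by the forward pass). No pivoting, and the function name is my own:

```python
def lu_solve(A, C):
    """Solve AX = C via Crout LU decomposition (unit diagonal on U)."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j, n):        # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        U[j][j] = 1.0
        for i in range(j + 1, n):    # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    # Forward substitution: LE = C.
    e = [0.0] * n
    for i in range(n):
        e[i] = (C[i] - sum(L[i][k] * e[k] for k in range(i))) / L[i][i]
    # Back substitution: UX = E.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = e[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x
```

Once L and U are known, new right-hand sides C can be solved with only the two substitution phases, without repeating the decomposition.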


Relation between LU Decomposition and Gaussian Elimination

  • In the LU decomposition, matrix U is equivalent to the upper triangular matrix obtained in the forward pass in Gaussian elimination.

  • The calculation of UX=E is equivalent to the back substitution in Gaussian elimination.

Example: LU Decomposition

Applying LU decomposition:


Thus, the L and U matrices are

Forward substitution:


Back substitution:


Cholesky Decomposition for Symmetric Matrices

  • A symmetric matrix A:

  • Cholesky decomposition for a symmetric matrix A


Matrix L can be computed as follows:
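The standard Cholesky formulas can be sketched like this (assumes A is symmetric positive definite, so that A = L Lᵀ exists; the function name is my own):

```python
import math

def cholesky(A):
    """Cholesky factor L of a symmetric positive-definite matrix A,
    so that A = L * L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)    # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]   # below-diagonal entry
    return L
```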


Example: Cholesky Decomposition

We can obtain the following:


Therefore, the L matrix is

The validity can be verified as

Iterative Methods

  • Elimination methods like the Gaussian elimination procedure are often called direct equation-solving methods. An iterative method, by contrast, is a trial-and-error procedure.

  • In iterative methods, we assume a solution, that is, a set of estimates for the unknowns, and successively refine the estimates through some set of rules.

  • A major advantage of iterative methods is that they can be used to solve nonlinear simultaneous equations, a task that is not possible using direct elimination methods.


Jacobi Iteration

  • Each equation is rearranged to produce an expression for a single unknown.
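A minimal sketch of the procedure (the function name and the fixed iteration count are my choices; convergence is not guaranteed for an arbitrary system):

```python
def jacobi(A, C, x0, iterations=25):
    """Jacobi iteration for AX = C: equation i is rearranged to give
    x_i, and every x_i is updated from the PREVIOUS iterate."""
    n = len(C)
    x = list(x0)
    for _ in range(iterations):
        # Build the whole new estimate from the old one at once.
        x = [(C[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```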


Example: Jacobi Iteration

Rearrange each equation as follows:


Assume an initial estimate for the solution: X1=X2=X3=1. First iteration:

Second iteration:

The solution is shown in the next table.


Table: Example of Jacobi Iteration


Gauss-Seidel Iteration

  • In the Jacobi iteration procedure, we always complete a full iteration cycle over all the equations before updating our solution estimates. In the Gauss-Seidel iteration procedure, we update each unknown as soon as a new estimate of that unknown is computed.
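A sketch that differs from the Jacobi version only in updating each unknown in place, so every new estimate is used immediately within the same sweep (names are my own):

```python
def gauss_seidel(A, C, x0, iterations=25):
    """Gauss-Seidel iteration for AX = C: same rearrangement as Jacobi,
    but each new estimate is used as soon as it is computed."""
    n = len(C)
    x = list(x0)
    for _ in range(iterations):
        for i in range(n):
            # x already holds the newest values for j < i.
            x[i] = (C[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x
```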

  • Example: Gauss-Seidel Iteration


Assume an initial solution estimate of X1=X2=X3=1.

First iteration:

Second iteration:

Table: Example of Gauss-Seidel Iteration

The Jacobi iteration method requires 13 iterations to reach three-decimal-place accuracy; the Gauss-Seidel method needs only 7.


Convergence Considerations of the Iterative Methods

  • Both the Jacobi and Gauss-Seidel iterative methods may diverge.

  • Interchange the order of equations in the above example, and solve it by the Gauss-Seidel method:

Table: Divergence of Gauss-Seidel Iteration

  • The iteration shown in the above table does not converge; that is, we cannot obtain a solution this way.

  • Divergence of the iterative calculation does not imply that the system has no solution, since interchanging the order of the equations is a permissible operation that does not change the solution.


Convergence and Divergence of Gauss-Seidel Iteration

If we solve both equations individually for X1 we get

  • The iteration will converge when the absolute value of the slope of f1 is less than the absolute value of the slope of f2. Thus the equations should be arranged such that X1 is expressed in terms of X2 with the following conditions:

  • For a system with more than 2 equations, we should select the equation with the largest coefficient as the first equation, the equation with the largest coefficient among the remaining equations as the second equation, and so on.
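One practical way to check such an arrangement is a diagonal-dominance test, a common sufficient (though not necessary) condition for Jacobi and Gauss-Seidel convergence. This helper is my own illustration, not from the slides:

```python
def is_diagonally_dominant(A):
    """True if, in every row, the diagonal entry outweighs the sum of
    the magnitudes of the other entries in that row."""
    return all(
        abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(A)
    )
```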

Cramer’s Rule

Cramer’s rule for obtaining Xi:

An example of |Ai| :
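The rule can be sketched with cofactor-expansion determinants, which is practical only for small systems since the cost grows factorially with n (helper names are my own):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum(
        (-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
        for j in range(len(M))
    )

def cramer(A, C):
    """Cramer's rule: X_i = |A_i| / |A|, where A_i is A with its
    column i replaced by C."""
    d = det(A)
    return [
        det([row[:i] + [C[k]] + row[i + 1:] for k, row in enumerate(A)]) / d
        for i in range(len(A))
    ]
```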


Example: Cramer’s Rule


The solution is

Matrix Inversion

  • The inverse of a matrix P is defined by the equation P P^-1 = P^-1 P = I,

    in which I is the identity or unit matrix and both P and I are square matrices.

  • The values of P^-1 can be computed by solving a set of n^2 simultaneous equations.



Let the elements of P^-1 and P be denoted as qij and pij, respectively.
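This column-by-column idea can be sketched with NumPy: column j of P^-1 is the solution of P q_j = e_j, where e_j is the j-th column of the identity matrix, giving n systems of n equations each (the function name is my own):

```python
import numpy as np

def inverse_by_columns(P):
    """Compute P^-1 one column at a time: column j solves P q_j = e_j."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    I = np.eye(n)
    # n solves of an n-by-n system -> n^2 unknowns in total.
    cols = [np.linalg.solve(P, I[:, j]) for j in range(n)]
    return np.column_stack(cols)
```

In practice one would factor P once (e.g. by LU decomposition) and reuse the factorization for all n right-hand sides, rather than solving each system from scratch.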
