Linear Equations

Numerical Errors

- Round-off Error
- Truncation Error

Linear Algebraic Equations

- In matrix form: [A][X] = [C]
- Solution: [X] = [A]⁻¹[C]

Cramer’s Rule

- M_ij is the determinant of the matrix A with the ith row and jth column removed.
- (-1)^(i+j) M_ij is called the cofactor of element a_ij.

Cramer’s Rule

- 3n² operations for the determinant.
- 3n³ operations for every unknown.
- Unstable for large matrices.
- Large error propagation.
- Good for small matrices (n < 20).
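As a concrete illustration, here is a minimal NumPy sketch of Cramer's Rule (the function name and test system are illustrative, not from the slides): each unknown x_i is the ratio det(A_i)/det(A), where A_i is A with column i replaced by the right-hand side.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's Rule: x_i = det(A_i) / det(A)."""
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("Matrix is singular; Cramer's Rule does not apply.")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                       # replace column i with the RHS
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))                  # [0.8 1.4]
```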

Gaussian Elimination

- Divide each row by its leading element.
- Subtract row 1 from all other rows.
- Move to the second row and continue the process.

Gaussian Elimination

- Back substitution
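A minimal sketch of naive Gaussian Elimination with back substitution, assuming no zero pivots are encountered (names are illustrative; partial pivoting is added later in the slides):

```python
import numpy as np

def gauss_solve(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: zero out the entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier for row i
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x
```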

Gaussian Elimination

- Division by zero: may occur during the forward elimination steps.
- Round-off error: the method is prone to accumulated round-off error.

Consider the system of equations:

Use five significant figures with chopping


At the end of Forward Elimination


Back Substitution

Compare the calculated values with the exact solution

Increasing the number of significant digits:

- Decreases round-off error
- Does not avoid division by zero

Gaussian Elimination with Partial Pivoting:

- Avoids division by zero
- Reduces round-off error

Gaussian Elimination with partial pivoting applies row switching to normal Gaussian Elimination.

How?

At the beginning of the kth step of forward elimination, find the maximum of

|a_kk|, |a_(k+1)k|, ..., |a_nk|

If the maximum of these values is |a_pk| in the pth row, then switch rows p and k.

What does it Mean?

Gaussian Elimination with Partial Pivoting ensures that each step of Forward Elimination is performed with the pivot element |a_kk| having the largest possible absolute value in its column.
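A sketch of the same elimination with partial pivoting added: before each elimination step, the row holding the largest |a_ik| in column k is swapped up to the pivot position (an assumed implementation, not the slides' own code):

```python
import numpy as np

def gauss_solve_pivot(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest |a_ik| in column k to row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x
```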

Consider the system of equations

In matrix form


Solve using Gaussian Elimination with Partial Pivoting, carrying five significant digits with chopping

Forward Elimination: Step 1

Examining the values of the first column:

|10|, |-3|, and |5|, or 10, 3, and 5

The largest absolute value is 10, which is already in row 1, so following the rules of Partial Pivoting no row interchange is needed (row 1 is switched with itself).

Performing Forward Elimination

Forward Elimination: Step 2

Examining the values of the second column below the diagonal:

|-0.001| and |2.5|, or 0.001 and 2.5

The largest absolute value is 2.5, so row 2 is switched with row 3

Performing the row swap

Compare the calculated and exact solution

The fact that they are equal is a coincidence, but it does illustrate the advantage of Partial Pivoting

Gauss Jordan Elimination

- Start with the system in augmented matrix form.
- Divide each row by its leading element.
- Subtract row 1 from all other rows.
- In addition to subtracting the row whose diagonal term has been made unity from all rows below it, also subtract it from the rows above it (see the sketch below).
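A minimal Gauss-Jordan sketch (assumed implementation): each pivot row is normalized to a leading 1 and then eliminated from all other rows, above and below, so [A] reduces to the identity and the augmented column becomes the solution:

```python
import numpy as np

def gauss_jordan_solve(A, b):
    Ab = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = len(b)
    for k in range(n):
        Ab[k] /= Ab[k, k]                  # make the diagonal term unity
        for i in range(n):
            if i != k:
                Ab[i] -= Ab[i, k] * Ab[k]  # eliminate above and below
    return Ab[:, -1]                       # [A] is now I; last column is x
```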

Matrix Factorization

- Assume A can be written as A = VU, where V is a lower triangular matrix and U is an upper triangular matrix.

Proof

If solving a set of linear equations [A][X] = [C]:

If [A] = [V][U], then [V][U][X] = [C]

Multiply by [V]⁻¹, which gives

[V]⁻¹[V][U][X] = [V]⁻¹[C]

Remember [V]⁻¹[V] = [I], which leads to

[I][U][X] = [V]⁻¹[C]

Now, if [I][U] = [U], then [U][X] = [V]⁻¹[C]

Now, let [V]⁻¹[C] = [Z], which ends with

[V][Z] = [C]   (1)

and

[U][X] = [Z]   (2)
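Equations (1) and (2) translate directly into two triangular sweeps. A minimal sketch, assuming [V] and [U] are already known (names are illustrative):

```python
import numpy as np

def vu_solve(V, U, c):
    n = len(c)
    # (1) Forward substitution: solve [V][Z] = [C] from the top down.
    z = np.empty(n)
    for i in range(n):
        z[i] = (c[i] - V[i, :i] @ z[:i]) / V[i, i]
    # (2) Back substitution: solve [U][X] = [Z] from the bottom up.
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
```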

How is this better or faster than Gauss Elimination? Let's look at computational time, with n = number of equations.

To decompose [A], time is proportional to n³/3.

To solve [V][Z] = [C] and [U][X] = [Z], time is proportional to n² + n² = 2n².

Therefore, the total computational time for VU Decomposition is proportional to n³/3 + 2n².

Gauss Elimination computation time is proportional to n³/3 + n².

How is this better? What about a situation where the [C] vector changes?

In VU factorization, the decomposition of [A] is independent of the [C] vector, so it only needs to be done once.

Let m = the number of times the [C] vector changes. The computational times are proportional to:

VU factorization: n³/3 + 2mn²
Gauss Elimination: m(n³/3 + n²)

Consider a set of 100 equations (n = 100) with 50 right-hand-side vectors (m = 50):

VU factorization ∝ 100³/3 + 2(50)(100)² ≈ 1.33 × 10⁶
Gauss Elimination ∝ 50(100³/3 + 100²) ≈ 1.72 × 10⁷
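A quick arithmetic check of these proportional costs (using the standard flop-count proportionalities assumed above):

```python
n, m = 100, 50
vu = n**3 / 3 + 2 * m * n**2     # factor once, then m pairs of triangular solves
gauss = m * (n**3 / 3 + n**2)    # full elimination repeated for every [C]
print(f"VU: {vu:.3g}, Gauss: {gauss:.3g}, ratio: {gauss / vu:.1f}x")
# VU: 1.33e+06, Gauss: 1.72e+07, ratio: 12.9x
```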

Another Advantage: Finding the Inverse of a Matrix

Inverting [A] amounts to solving [A][X] = [C] once for each of the n columns of the identity matrix, so the costs are proportional to:

VU Factorization: n³/3 + 2n(n²) = 7n³/3
Gauss Elimination: n(n³/3 + n²) = n⁴/3 + n³

For large values of n, the Gauss Elimination cost grows like n⁴ while the VU Factorization cost grows like n³.
Method: Decompose [A] to [V] and [U]

[U] is the same as the coefficient matrix at the end of the forward elimination step.

[V] is obtained using the multipliers that were used in the forward elimination process

Finding the [U] matrix

Using the Forward Elimination Procedure of Gauss Elimination

Finding the [V] matrix

Using the multipliers used during the Forward Elimination Procedure: each multiplier used to zero the entry in row i, column j becomes the (i, j) entry of [V] below the diagonal, and the diagonal entries of [V] are ones.

From the first step of forward elimination

From the second step of forward elimination

Does [V][U] = [A]?
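The procedure above can be sketched directly: [U] is what forward elimination leaves behind, and [V] stores the multipliers below a unit diagonal. The test matrix here is illustrative, and the final check answers the question (assumed implementation, no pivoting):

```python
import numpy as np

def vu_decompose(A):
    n = A.shape[0]
    U = A.astype(float).copy()
    V = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]   # multiplier from forward elimination
            V[i, k] = m             # stored below the diagonal of [V]
            U[i, k:] -= m * U[k, k:]
    return V, U

A = np.array([[25.0, 5.0, 1.0],
              [64.0, 8.0, 1.0],
              [144.0, 12.0, 1.0]])
V, U = vu_decompose(A)
print(np.allclose(V @ U, A))        # True: [V][U] does equal [A]
```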

Example: Solving simultaneous linear equations using VU factorization

Solve the following set of linear equations using VU Factorization:

- Using the procedure above, find the [V] and [U] matrices.
- Complete the forward substitution [V][Z] = [C] to solve for [Z].
- Set [U][X] = [Z] and solve for [X]; the 3 equations become a back-substitution chain.
- From the 3rd equation, solve for a3.
- Substituting in a3 and using the second equation, solve for a2.
- Substituting in a3 and a2 in the first equation, solve for a1.
- Hence the Solution Vector is [X] = [a1, a2, a3]ᵀ.

An iterative method

Basic Procedure:

- Algebraically solve each linear equation for xi.
- Assume an initial guess solution array.
- Solve for each xi and repeat.
- Use the absolute relative approximate error after each iteration to check whether the error is within a pre-specified tolerance.

Why?

The Gauss-Seidel Method allows the user to control round-off error.

Elimination methods such as Gaussian Elimination and VU Factorization are prone to round-off error.

Also: If the physics of the problem is understood, a close initial guess can be made, decreasing the number of iterations needed.

Algorithm

A set of n equations and n unknowns:

a11x1 + a12x2 + ... + a1nxn = c1
a21x1 + a22x2 + ... + a2nxn = c2
...
an1x1 + an2x2 + ... + annxn = cn

If the diagonal elements are non-zero, rewrite each equation solving for the corresponding unknown.

For example: solve the first equation for x1, the second equation for x2, and so on.

Rewriting each equation:

From equation 1: x1 = (c1 − a12x2 − a13x3 − ... − a1nxn) / a11
From equation 2: x2 = (c2 − a21x1 − a23x3 − ... − a2nxn) / a22
...
From equation n−1: xn−1 = (cn−1 − a(n−1)1x1 − ... − a(n−1)(n−2)xn−2 − a(n−1)nxn) / a(n−1)(n−1)
From equation n: xn = (cn − an1x1 − an2x2 − ... − an(n−1)xn−1) / ann

Solve for the unknowns

Use the rewritten equations to solve for each value of xi.

Important: remember to use the most recent values of xi; that is, apply the values already calculated in the current iteration to the calculations that remain in it.

Assume an initial guess for [X].

Calculate the absolute relative approximate error for each unknown after each iteration:

|ea|i = |(xi_new − xi_old) / xi_new| × 100

So when has the answer been found? The iterations are stopped when the absolute relative approximate error is less than a pre-specified tolerance for all unknowns.
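A minimal Gauss-Seidel sketch following this procedure (the function, tolerance, and test system are illustrative assumptions, not the slides' own code); note how each updated xi is used immediately within the same iteration:

```python
import numpy as np

def gauss_seidel(A, c, x0, tol=0.05, max_iter=100):
    """Iterate until the max absolute relative approximate error (%) < tol."""
    x = x0.astype(float).copy()
    n = len(c)
    for _ in range(max_iter):
        max_err = 0.0
        for i in range(n):
            old = x[i]
            # The most recent values of x are used within the current iteration.
            x[i] = (c[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            if x[i] != 0.0:
                max_err = max(max_err, abs((x[i] - old) / x[i]) * 100)
        if max_err < tol:
            break
    return x

# An illustrative diagonally dominant system with solution [1, 3, 4]:
A = np.array([[12.0, 3.0, -5.0], [1.0, 5.0, 3.0], [3.0, 7.0, 13.0]])
c = np.array([1.0, 28.0, 76.0])
print(gauss_seidel(A, c, np.array([1.0, 0.0, 1.0])))
```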

Gauss-Seidel Method

- Improved method

Diagonally dominant: for every row, the absolute value of the diagonal coefficient must be greater than or equal to the sum of the absolute values of the other coefficients in that row, and in at least one row the diagonal coefficient must be strictly greater than that sum:

|aii| ≥ Σ(j≠i) |aij| for all i, with strict inequality for at least one i.

Which coefficient matrix is diagonally dominant?

Most physical systems do result in simultaneous linear equations that have diagonally dominant coefficient matrices.
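A small helper (assumed, not from the slides) that checks the diagonal dominance condition stated above:

```python
import numpy as np

def is_diagonally_dominant(A):
    diag = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - diag   # row sums excluding the diagonal
    return bool(np.all(diag >= off) and np.any(diag > off))
```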

Given the system of equations

The coefficient matrix is:

With an initial guess of

Will the solution converge using the Gauss-Seidel method?

Checking if the coefficient matrix is diagonally dominant

The inequalities all hold, and in at least one row the diagonal coefficient is strictly greater than the sum of the others.

Therefore: The solution should converge using the Gauss-Seidel Method

After Iteration #1, the maximum absolute relative approximate error is 100%.

After Iteration #2

Substituting the x values into the equations

Iteration #2 absolute relative approximate error

The maximum absolute relative error after the second iteration is 240.62%.

This is much larger than the maximum absolute relative error obtained in iteration #1. Is this a problem?

Repeating more iterations, the following values are obtained

The solution obtained

is close to the exact solution of

Comparison of different methods

- The Gauss method converges slowly, but it is more stable.
- The Gauss-Seidel method may not be stable, but when it is, it converges quickly.
- The Hotelling-Bodewig method uses an improved iterative scheme for matrix inversion.
- In relaxation techniques, small corrections are applied to each element at each iteration.

Transformations and Eigenvalues

- Many problems of the form [A][X] = [C]
- Can be written as [S][X'] = [C']
- Where S is a diagonal matrix.
- Where the primed vectors represent the original vectors in the new space (see the sketch below).
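A sketch of the diagonalizing transformation this describes, under the assumption that [S] is obtained from the eigenvectors of [A] (the matrix and vector here are illustrative): with eigenvector matrix P, S = P⁻¹AP is diagonal, and the primed vectors are the originals expressed in the eigenvector basis.

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
c = np.array([1.0, 1.0])

eigvals, P = np.linalg.eig(A)       # columns of P are eigenvectors of A
S = np.linalg.inv(P) @ A @ P        # diagonal (up to round-off)

c_prime = np.linalg.inv(P) @ c      # [C'] in the new space
x_prime = c_prime / np.diag(S)      # the diagonal system [S][X'] = [C'] is trivial
x = P @ x_prime                     # transform back to the original space
print(np.allclose(A @ x, c))        # True
```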
