Linear Equations

PSCI 702

September 21, 2005

Numerical Errors
  • Round-off Error
  • Truncation Error
Linear Algebraic Equations
  • In matrix format: [A][X] = [C]
  • Solution: [X] = [A]⁻¹[C]
Cramer's Rule
  • Mij is the minor: the determinant of the matrix A with the ith row and jth column removed.
  • (−1)^(i+j)·Mij is called the cofactor of element aij.
Cramer's Rule
  • 3n² operations for the determinant.
  • 3n³ operations for every unknown.
  • Unstable for large matrices.
  • Large error propagation.
  • Good for small matrices (n < 20).
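Cramer's rule can be sketched in a few lines. This is an illustrative pure-Python version (not from the slides; the function names `det` and `cramer` are mine), with the determinant computed by cofactor expansion along the first row, which is exactly the kind of cost that limits the method to small matrices:

```python
def det(m):
    """Determinant by cofactor expansion along row 0."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        # Minor M_0j: remove row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cramer(a, c):
    """Solve a x = c: x_j = det(A_j) / det(A), where A_j is A with
    column j replaced by c."""
    d = det(a)
    x = []
    for j in range(len(a)):
        aj = [row[:] for row in a]
        for i in range(len(a)):
            aj[i][j] = c[i]
        x.append(det(aj) / d)
    return x

print(cramer([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # [1.0, 3.0]
```

Each unknown requires a fresh determinant, so the work multiplies with n, matching the operation counts quoted above.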
Gaussian Elimination
  • Divide each row by its leading element.
  • Subtract row 1 from all other rows.
  • Move to the second row and continue the process.
Gaussian Elimination
  • Back substitution
Gaussian Elimination
  • Division by zero: may occur in the forward elimination steps.
  • Round-off error: the method is prone to accumulating round-off error.
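The forward elimination and back substitution steps can be sketched as follows. This is a minimal illustrative implementation (not from the slides) of naive Gaussian elimination without pivoting; note that it raises a `ZeroDivisionError` if a zero pivot appears, which motivates partial pivoting below:

```python
def gauss_naive(a, c):
    n = len(a)
    a = [row[:] for row in a]  # work on copies
    c = c[:]
    # Forward elimination: zero out entries below each pivot a[k][k].
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]
            c[i] -= factor * c[k]
    # Back substitution: solve the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / a[i][i]
    return x

print(gauss_naive([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # [1.0, 3.0]
```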
Gaussian Elimination

Consider the system of equations:

(coefficient matrix and right-hand-side vector shown on the original slide)

Use five significant figures with chopping.

At the end of Forward Elimination:

(resulting upper-triangular system shown on the original slide)
Gaussian Elimination

Back Substitution

Gaussian Elimination

Compare the calculated values with the exact solution

Improvements

  • Increase the number of significant digits
      – Decreases round-off error
      – Does not avoid division by zero
  • Gaussian Elimination with Partial Pivoting
      – Avoids division by zero
      – Reduces round-off error

Partial Pivoting

Gaussian Elimination with partial pivoting applies row switching to normal Gaussian Elimination.

How?

At the beginning of the kth step of forward elimination, find the maximum of

|akk|, |ak+1,k|, …, |ank|

If the maximum of these values is |apk|,

in the pth row,

then switch rows p and k.

Partial Pivoting

What does it Mean?

Gaussian Elimination with Partial Pivoting ensures that each step of Forward Elimination is performed with the pivoting element |akk| having the largest absolute value.
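The row-switching rule can be sketched directly in code. This is an illustrative version (not from the slides; `gauss_partial_pivot` is my name for it) that adds the pivot search and swap to the naive forward elimination:

```python
def gauss_partial_pivot(a, c):
    n = len(a)
    a = [row[:] for row in a]
    c = c[:]
    for k in range(n - 1):
        # Find the row p >= k with the largest |a[p][k]| and swap it
        # into the pivot position.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if p != k:
            a[k], a[p] = a[p], a[k]
            c[k], c[p] = c[p], c[k]
        # Forward elimination with the chosen pivot.
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]
            c[i] -= factor * c[k]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / a[i][i]
    return x

# Without pivoting this system would divide by zero at the first step.
print(gauss_partial_pivot([[0.0, 2.0], [3.0, 1.0]], [4.0, 5.0]))  # [1.0, 2.0]
```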

Partial Pivoting: Example

Consider the system of equations, in matrix form:

(coefficient matrix and right-hand-side vector shown on the original slide)

Solve using Gaussian Elimination with Partial Pivoting, using five significant digits with chopping.

Partial Pivoting: Example

Forward Elimination: Step 1

Examining the values of the first column

|10|, |-3|, and |5| or 10, 3, and 5

The largest absolute value is 10, which means, to follow the rules of Partial Pivoting, we switch row 1 with row 1 (i.e., no swap is needed).

Performing Forward Elimination

Partial Pivoting: Example

Forward Elimination: Step 2

Examining the values of the second column below the diagonal

|-0.001| and |2.5|, or 0.001 and 2.5

The largest absolute value is 2.5, so row 2 is switched with row 3

Performing the row swap

Partial Pivoting: Example

Forward Elimination: Step 2

Performing the Forward Elimination results in:

Partial Pivoting: Example

Back Substitution

Solving the equations through back substitution

Partial Pivoting: Example

Compare the calculated and exact solution

The fact that they are exactly equal is a coincidence, but it does illustrate the advantage of Partial Pivoting.

Gauss-Jordan Elimination
  • Start with the system in matrix form.
  • Divide each row by its leading element.
  • Subtract row 1 from all other rows.
  • In addition to subtracting the row whose diagonal term has been made unity from all the rows below it, also subtract it from the rows above it.
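The steps above can be sketched as follows. This is a minimal illustrative implementation (not from the slides): each pivot row is normalized and then eliminated from all other rows, above and below, leaving the identity on the left and the solution on the right:

```python
def gauss_jordan(a, c):
    n = len(a)
    m = [row[:] + [c[i]] for i, row in enumerate(a)]  # augmented matrix
    for k in range(n):
        pivot = m[k][k]
        m[k] = [v / pivot for v in m[k]]  # make the diagonal term unity
        for i in range(n):
            if i != k:
                # Eliminate the pivot column from every other row,
                # both below and above the pivot row.
                factor = m[i][k]
                m[i] = [m[i][j] - factor * m[k][j] for j in range(n + 1)]
    return [m[i][n] for i in range(n)]

print(gauss_jordan([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # [1.0, 3.0]
```

Because the elimination runs above the diagonal as well, no separate back-substitution step is needed.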
Matrix Factorization
  • Assume A can be written as A = VU, where V is a lower-triangular and U an upper-triangular matrix.
Matrix Factorization

Proof

If solving a set of linear equations [A][X] = [C]

If [A] = [V][U], then [V][U][X] = [C]

Multiply by [V]⁻¹

Which gives [V]⁻¹[V][U][X] = [V]⁻¹[C]

Remember [V]⁻¹[V] = [I], which leads to [I][U][X] = [V]⁻¹[C]

Now, if [I][U] = [U], then [U][X] = [V]⁻¹[C]

Now, let [V]⁻¹[C] = [Z]

Which ends with

[V][Z] = [C]  (1)

and

[U][X] = [Z]  (2)

Matrix Factorization

How can this be used?

Given [A][X] = [C]

Decompose [A] into [V] and [U]

Then solve [V][Z] = [C] for [Z]

And then solve [U][X] = [Z] for [X]
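The two-step solve can be sketched in code. This is an illustrative version (not from the slides) using the deck's V (lower) / U (upper) naming; `decompose` is Doolittle-style, with U produced by forward elimination and V holding the multipliers, as described later in these slides:

```python
def decompose(a):
    n = len(a)
    u = [row[:] for row in a]
    v = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            mult = u[i][k] / u[k][k]
            v[i][k] = mult                # store the multiplier in V
            for j in range(k, n):
                u[i][j] -= mult * u[k][j]
    return v, u

def solve(v, u, c):
    n = len(c)
    # Forward substitution: [V][Z] = [C]
    z = [0.0] * n
    for i in range(n):
        z[i] = c[i] - sum(v[i][j] * z[j] for j in range(i))
    # Back substitution: [U][X] = [Z]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(u[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (z[i] - s) / u[i][i]
    return x

v, u = decompose([[2.0, 1.0], [1.0, 3.0]])
print(solve(v, u, [5.0, 10.0]))  # [1.0, 3.0]
```

For a new right-hand side, only `solve` is re-run; the decomposition is reused, which is the point made on the following slides.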

Matrix Factorization

How is this better or faster than Gauss Elimination?

Let's look at computational time.

n = number of equations

To decompose [A], time is proportional to n³/3

To solve [V][Z] = [C] and [U][X] = [Z],

time is proportional to n² for each substitution

Matrix Factorization

Therefore, total computational time for VU decomposition is proportional to

n³/3 + n² + n², or n³/3 + 2n²

Gauss Elimination computation time is proportional to

n³/3 + n²

How is this better?

Matrix Factorization

What about a situation where the [C] vector changes?

In VU factorization, the VU decomposition of [A] is independent of the [C] vector, so it only needs to be done once.

Let m = the number of times the [C] vector changes

The computational times are proportional to

VU factorization = n³/3 + 2mn²    Gauss Elimination = m(n³/3 + n²)

Consider a 100-equation set with 50 right-hand-side vectors

VU factorization ≈ 1.33 × 10⁶    Gauss Elimination ≈ 1.72 × 10⁷

Matrix Factorization

Another Advantage

Finding the Inverse of a Matrix: solve with each of the n unit vectors as the right-hand side, i.e. m = n

VU Factorization = n³/3 + 2n·n² = 7n³/3    Gauss Elimination = n(n³/3 + n²) = n⁴/3 + n³

For large values of n, n⁴/3 grows much faster than 7n³/3

Matrix Factorization

Method: Decompose [A] to [V] and [U]

[U] is the same as the coefficient matrix at the end of the forward elimination step.

[V] is obtained using the multipliers that were used in the forward elimination process

Matrix Factorization

Finding the [U] matrix

Using the Forward Elimination Procedure of Gauss Elimination

Matrix Factorization

Finding the [U] matrix

Using the Forward Elimination Procedure of Gauss Elimination

Matrix Factorization

Finding the [V] matrix

Using the multipliers used during the Forward Elimination Procedure

From the first step of forward elimination

From the second step of forward elimination

Matrix Factorization

Example: Solving simultaneous linear equations using VU factorization

Solve the following set of linear equations using VU Factorization

Using the procedure for finding the [V] and [U] matrices

Matrix Factorization

Example: Solving simultaneous linear equations using VU factorization

Complete the forward substitution [V][Z] = [C] to solve for [Z]

Matrix Factorization

Example: Solving simultaneous linear equations using VU factorization

Set [U][X] = [Z]

Solve for [X]

The 3 equations become

Matrix Factorization

Example: Solving simultaneous linear equations using VU factorization

From the 3rd equation

Substituting in a3 and using the second equation

Matrix Factorization

Example: Solving simultaneous linear equations using VU factorization

Substituting in a3 and a2 using the first equation

Hence the Solution Vector is:

Gauss Method

An iterative method.

  • Basic Procedure:
  • Algebraically solve each linear equation for xi
  • Assume an initial guess solution array
  • Solve for each xi and repeat
  • Use absolute relative approximate error after each iteration to check if error is within a prespecified tolerance.
Gauss Method

Why?

The Gauss Method allows the user to control round-off error.

Elimination methods such as Gaussian Elimination and VU Factorization are prone to round-off error.

Also: If the physics of the problem is understood, a close initial guess can be made, decreasing the number of iterations needed.

Gauss Method

Algorithm

A set of n equations and n unknowns:

If the diagonal elements are non-zero,

rewrite each equation, solving for the corresponding unknown.

ex:

First equation, solve for x1

Second equation, solve for x2

⋮

Gauss Method

Algorithm

Rewriting each equation

From Equation 1

From equation 2

From equation n-1

From equation n

Gauss Method

Algorithm

General form of each equation:

xi = (ci − Σj≠i aij·xj) / aii

Gauss Method

Gauss Algorithm

General Form for any row ‘i’

How or where can this equation be used?

Gauss Method

Solve for the unknowns

Use rewritten equations to solve for each value of xi.

Important: remember to use the most recent value of each xi, applying each newly calculated value to the calculations remaining in the current iteration.

Assume an initial guess for [X]

Gauss Method

Calculate the Absolute Relative Approximate Error:

|εa|i = |(xi(new) − xi(old)) / xi(new)| × 100%

So when has the answer been found?

The iterations are stopped when the absolute relative approximate error is less than a pre-specified tolerance for all unknowns.
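The iterative procedure above can be sketched as follows. This is a minimal illustrative implementation (not from the slides; the name `gauss_seidel` and the tolerance defaults are mine) that updates each x[i] with the most recent values and stops once the maximum absolute relative approximate error drops below the tolerance:

```python
def gauss_seidel(a, c, x0, tol=1e-6, max_iter=100):
    n = len(a)
    x = x0[:]
    for _ in range(n and max_iter):
        max_err = 0.0
        for i in range(n):
            old = x[i]
            # Use the most recent values of x in the current iteration.
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (c[i] - s) / a[i][i]
            if x[i] != 0.0:
                err = abs((x[i] - old) / x[i]) * 100.0
                max_err = max(max_err, err)
        # Stop when every unknown meets the pre-specified tolerance.
        if max_err < tol:
            break
    return x

# Diagonally dominant system, so the iteration converges.
x = gauss_seidel([[4.0, 1.0], [2.0, 5.0]], [9.0, 13.0], [0.0, 0.0])
print([round(v, 6) for v in x])
```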

Gauss-Seidel Method
  • Improved method
Gauss-Seidel Method: Pitfall

Diagonally dominant: in every row, the absolute value of the coefficient on the diagonal must be at least equal to the sum of the absolute values of the other coefficients in that row, and in at least one row the diagonal coefficient must be strictly greater than that sum.

Which coefficient matrix is diagonally dominant?

Most physical systems do result in simultaneous linear equations that have diagonally dominant coefficient matrices.
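The diagonal-dominance check can be written as a small helper (not from the slides; the function name is illustrative):

```python
def is_diagonally_dominant(a):
    """|a[i][i]| >= sum of the other |a[i][j]| in every row,
    with strict inequality in at least one row."""
    strict = False
    for i, row in enumerate(a):
        off = sum(abs(v) for j, v in enumerate(row) if j != i)
        if abs(row[i]) < off:
            return False
        if abs(row[i]) > off:
            strict = True
    return strict

print(is_diagonally_dominant([[4.0, 1.0], [2.0, 5.0]]))  # True
print(is_diagonally_dominant([[1.0, 3.0], [2.0, 5.0]]))  # False
```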

Gauss-Seidel Method: Example

Given the system of equations

The coefficient matrix is:

With an initial guess of

Will the solution converge using the Gauss-Seidel method?

Gauss-Seidel Method: Example

Checking if the coefficient matrix is diagonally dominant

The inequalities all hold, and at least one row satisfies the strict inequality:

Therefore: The solution should converge using the Gauss-Seidel Method

Gauss-Seidel Method: Example

Rewriting each equation

With an initial guess of

Gauss-Seidel Method: Example

The absolute relative approximate error

The maximum absolute relative error after the first iteration is 100%

Gauss-Seidel Method: Example

After Iteration #1

After Iteration #2

Substituting the x values into the equations

Gauss-Seidel Method: Example

Iteration #2 absolute relative approximate error

The maximum absolute relative error after the second iteration is 240.62%

This is much larger than the maximum absolute relative error obtained in iteration #1. Is this a problem?

Gauss-Seidel Method: Example

Repeating more iterations, the following values are obtained

The solution obtained

is close to the exact solution of

Comparison of different methods
  • The Gauss method converges slowly, but is more stable.
  • The Gauss-Seidel method might not be stable; however, when stable, it converges quickly.
  • The Hotelling and Bodewig method uses an improved iterative scheme for matrix inversion.
  • In relaxation techniques, small corrections are applied to each element at each iteration.
Transformations and Eigenvalues
  • Many problems of the form:
  • Can be written as:
  • Where S is a diagonal matrix.
  • Where the primed vectors represent the original vectors in the new space.