
Chapter 1 Introduction to Linear systems and Matrices



A general linear system of m equations in n unknowns looks like

a11 x1 + a12 x2 + … + a1n xn = b1
a21 x1 + a22 x2 + … + a2n xn = b2
  …
am1 x1 + am2 x2 + … + amn xn = bm

A system with at least one solution is called consistent; a system with no solution at all is called inconsistent.

A matrix is a rectangular array of constants. These constants are called the entries of the matrix.

Example:

If we consider the system

then there are 3 matrices associated with this system.

1. The coefficient matrix

2. The matrix of constants

3. The augmented matrix
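Since the slide's example system is not reproduced here, the three matrices can be illustrated with a made-up system (the numbers below are our own):

```python
import numpy as np

# Hypothetical system (the slide's own example is not shown):
#   x + 2y + 3z = 6
#  2x -  y +  z = 3
A = np.array([[1, 2, 3],
              [2, -1, 1]])       # 1. the coefficient matrix
B = np.array([[6],
              [3]])              # 2. the matrix of constants
aug = np.hstack([A, B])          # 3. the augmented matrix [A|B]
print(aug)
```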

- Elementary Operations on a Linear System
- Add a multiple of one equation to another.
- Interchange two equations.
- Multiply an equation by a non-zero constant.

- Elementary Row Operations on a Matrix
- Add a multiple of one row to another.
- Interchange two rows.
- Multiply a row by a non-zero constant.

An entry of a matrix is called a pivot if it is the 1st nonzero element in that row.

In the following example, the bigger stars are the pivots of the matrix.

- A matrix is in row echelon form (ref) if
- All rows that contain only zeros are grouped at the bottom of the matrix.
- For any row that has nonzero entries, the pivot of that row appears strictly to the right of the pivot in the previous row.

The following matrices are all in Row Echelon Form.

The following matrices are not in ref.

Advantage of having matrices in ref:

If the coefficient matrix of a linear system is in ref, then the system can be solved by backward substitution.
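A minimal back-substitution sketch, assuming a square upper triangular coefficient matrix with nonzero pivots (the helper name and the 2×2 system are our own):

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b where U is upper triangular with nonzero diagonal."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the already-known terms, then divide by the pivot
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = np.array([5.0, 6.0])
print(back_substitute(U, b))   # solves x2 first, then x1
```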

Example:

- More definitions
- (1) The variables corresponding to the pivots of the augmented matrix are called leading variables, or dependent variables.
- (2) The remaining variables are called free variables, or independent variables.

u, w, and y are leading variables,

v and x are free variables.

add 3 times row 1 to row 2

add -2 times row 1 to row 3

add -2 times row 2 to row 3

Fact:

Every matrix can be transformed into a matrix in ref using (a finite number of) elementary row operations. (note: final result not unique)

The method to do this is called Gaussian elimination.
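A bare-bones sketch of forward elimination to row echelon form, using the three elementary row operations (this generic implementation adds partial pivoting for numerical stability; it is not the slide's worked example):

```python
import numpy as np

def to_ref(M):
    """Reduce a matrix to row echelon form with elementary row operations."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        # pick the row with the largest entry in this column as the pivot row
        p = r + np.argmax(np.abs(A[r:, c]))
        if np.isclose(A[p, c], 0):
            continue                             # no pivot in this column
        A[[r, p]] = A[[p, r]]                    # interchange two rows
        for i in range(r + 1, rows):
            A[i] -= (A[i, c] / A[r, c]) * A[r]   # add a multiple of one row to another
        r += 1
    return A

M = np.array([[0, 2, 1],
              [1, 1, 1],
              [2, 4, 3]])
print(to_ref(M))
```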

Example:

Reduced Row Echelon Form

- A matrix is in rref if
- it is in ref
- all the pivots are equal to 1, and
- all entries above any pivot are zero.
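For comparison, SymPy computes the rref directly, along with the pivot columns (the matrix below is our own example):

```python
from sympy import Matrix

M = Matrix([[1, 2, 3],
            [2, 4, 7],
            [1, 2, 4]])
R, pivot_cols = M.rref()
print(R)           # the rref of M
print(pivot_cols)  # indices of the columns containing pivots
```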

Example:

Definition:

Two matrices of the same size are said to be row equivalent if we can transform one of them to the other by a finite number of elementary row operations.

Theorem:

Two matrices in rref are row equivalent if and only if they are equal.

(in other words, the rref of any matrix is unique, and we can speak of the rref of a matrix A.)

Theorem:

If two linear systems have row equivalent augmented matrices, then they have exactly the same set of solutions.

Consequences:

Given a system of linear equations, we can first transform the augmented matrix into its rref using Gaussian elimination, then solve the simplified system of linear equations. By the above theorem, the solution set of the simplified system is the same as that of the original system.

Three different Types of Linear Systems

Let AX = B be a system of linear equations, and [A|B] be its augmented matrix. We can classify the system according to its solution set.

1. The system has a unique solution

This happens exactly when any ref of [A|B] has a pivot in every column except the last one.

2. The system has infinitely many solutions

This happens exactly when any ref of [A|B] has no pivot in the last column and has at least one more column with no pivot.

3. The system has no solution

This happens exactly when any ref of [A|B] has a pivot in the last column.
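The three cases can also be checked numerically by comparing matrix ranks: rank [A|B] > rank A means a pivot in the last column (this rank test is a standard equivalent of the pivot conditions above; the matrices below are our own):

```python
import numpy as np

def classify(A, B):
    """Classify AX = B as 'unique', 'infinite', or 'none'."""
    aug = np.hstack([A, B])
    rA = np.linalg.matrix_rank(A)
    rAug = np.linalg.matrix_rank(aug)
    n = A.shape[1]                      # number of unknowns
    if rAug > rA:
        return "none"                   # pivot in the last column
    return "unique" if rA == n else "infinite"

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(classify(A, np.array([[5.0], [6.0]])))   # invertible A: unique solution
```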

Examples

1. If [A|B] is row equivalent to

then the system has a unique solution.

2. If [A|B] is row equivalent to

then the system has infinitely many solutions.

3. If [A|B] is row equivalent to

then the system has no solution.

1.3 The Algebra of Matrices

Definition:

If m and n are positive integers, then an m×n matrix is a rectangular array of m rows and n columns of the form

where each aij is a number called the (i,j)-th entry or element of the matrix. The numbers m and n are called the dimensions of the matrix.

If A is an m×n matrix where m = n, then the matrix A is called a square matrix.

In this case, the entries a11, a22, …, ann form the diagonal of A, and each one is called a diagonal element of A.

Definition:

If A = [aij] and B = [bij] are matrices, then we say that A = B if they have the same dimensions and the corresponding entries are the same, i.e. aij = bij for all i, j within the range.

If all entries of an m×n matrix are zero, then it is called the m×n zero matrix and is denoted by 0.

Addition of matrices.

Scalar multiplication to a matrix.

Product of Matrices

If A is an m×n matrix and B is an n×p matrix, then the product

C = AB

is defined to be the m×p matrix such that cij is the inner product of the i-th row of A and the j-th column of B.
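In NumPy terms, the entry-by-entry definition can be checked directly (a small sketch with our own matrices):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2x3
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])           # 3x2
C = A @ B                        # the 2x2 product
# each entry is the inner product of a row of A with a column of B
assert C[0, 1] == A[0] @ B[:, 1]
print(C)
```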

An Application to the Multiplication of Matrices

The RGB color specification is the standard model for color video recording. However, for transmission efficiency and backward compatibility with B&W television, it has to be recoded into the YIQ specification.

The Y component of YIQ is not yellow but luminance, and only this component of a color TV signal is shown on a B&W TV set; the chromaticity is encoded in I and Q.

The RGB-to-YIQ mapping is defined as follows:
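The slide's matrix is not reproduced here; the commonly cited NTSC coefficients (values vary slightly by source) express the mapping as a single matrix-vector product:

```python
import numpy as np

# Commonly cited NTSC RGB-to-YIQ coefficients (an approximation, not the slide's matrix)
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

rgb = np.array([1.0, 1.0, 1.0])      # pure white
yiq = RGB_TO_YIQ @ rgb
print(yiq)                            # full luminance, near-zero chromaticity
```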

Properties

1. Matrix multiplication is not commutative.

In general, the fact that AB is defined does not imply that BA is also defined.

Even if AB and BA are both defined, they may not have the same dimensions. For example, if A is a 2×3 matrix and B is a 3×2 matrix, then AB is a 2×2 matrix while BA is a 3×3 matrix.

AB and BA have the same dimensions if and only if they are both n×n square matrices.

And even in this case, AB can be very different from BA.
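A quick check of non-commutativity (our own 2×2 matrices, since the slide's example is not shown):

```python
import numpy as np

A = np.array([[1, 1],
              [0, 1]])
B = np.array([[1, 0],
              [1, 1]])
print(A @ B)   # [[2, 1], [1, 1]]
print(B @ A)   # [[1, 1], [1, 2]]
```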

2. Matrix multiplication is associative,

i.e. A(BC) = (AB)C

3. Matrix multiplication is distributive over addition

i.e. A(B + C) = AB + AC

4. If 0 is the m×n zero matrix and B is an n×p matrix, then

0B = 0 (the m×p zero matrix)

Definition

For any positive integer n, there is an identity matrix In of size n×n.

5. If A is any m×n matrix and B is any n×p matrix, then

AIn = A and InB = B

Matrix Notation for a linear system

Given a system of linear equations

If we let

then the system can be written as

AX = B

Other Descriptions of the Product

Theorem

If A = [A1|A2| … |An] is an m×n matrix where Ai is the i-th column of A, and X is an n×1 matrix, then

In other words, AX is a linear combination of the columns of A.
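A numerical check that AX equals the combination x1 A1 + … + xn An of the columns of A (our own example):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
x = np.array([10, 1])
# AX as a linear combination of the columns of A
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
assert (A @ x == combo).all()
print(combo)   # [12 34 56]
```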

Outer Products

To speed up the multiplication of matrices with the use of parallel processors, we can express the product in the form

AB = A1B1 + A2B2 + … + AnBn

where A is an m×n matrix, B is an n×p matrix, Ai is the i-th column of A, and Bi is the i-th row of B.

In this case, we can use n processors to compute the n products AiBi simultaneously.
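The outer-product decomposition is easy to verify: each term AiBi is the outer product of a column of A with a row of B, and the terms sum to AB (random small matrices below):

```python
import numpy as np

A = np.random.default_rng(0).integers(0, 5, (3, 4))
B = np.random.default_rng(1).integers(0, 5, (4, 2))
# sum of outer products: i-th column of A times i-th row of B
outer_sum = sum(np.outer(A[:, i], B[i, :]) for i in range(A.shape[1]))
assert (outer_sum == A @ B).all()
print(outer_sum)
```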

Note: In computer graphics (for video games etc.), we do need very high speed computation of matrix products.

1.4 Inverses and Elementary Matrices

Definition:

If A is an n×n matrix, a (two-sided) inverse of A is an n×n matrix A-1 with the property that

AA-1 = In = A-1A

where In is the n×n identity matrix.

If a square matrix A does have an inverse, then it is said to be invertible or non-singular.

Theorem:

If an n×n matrix does have an inverse, then that inverse is unique.

Theorem:

If A and B are both invertible n×n matrices, then AB is also invertible and

(AB)-1 = B-1A-1

An important application of the inverse

Suppose that we are given a system of n linear equations in n unknowns,

AX = B

where A is the n×n coefficient matrix. Then this system has a unique solution if and only if A is invertible, and this solution can be expressed in matrix form as

X = A-1B
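A sketch of this application (in practice one calls np.linalg.solve rather than forming the inverse, but the identity X = A-1B is easy to check; the system is our own):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[5.0],
              [10.0]])
X = np.linalg.inv(A) @ B          # X = A^-1 B
assert np.allclose(A @ X, B)      # it really solves AX = B
print(X)
```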

Definition:

Let e be an elementary row operation. Then the n×n elementary matrix E associated with e is the matrix obtained by applying e to In.

Thus

E = e(In)

Theorem:

Let e be an elementary row operation and let E be the associated elementary m×m matrix. Then for every m×n matrix A,

e(A) = EA

i.e. the elementary row operation can be performed on A by multiplying A with the corresponding elementary matrix on the left.
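A quick numerical check of e(A) = EA for one operation, "add 2 times row 0 to row 1" (indices are 0-based in the code; the helper name is our own):

```python
import numpy as np

def add_multiple(M, src, dst, k):
    """Elementary row operation: add k times row src to row dst."""
    M = M.astype(float).copy()
    M[dst] += k * M[src]
    return M

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
E = add_multiple(np.eye(2), src=0, dst=1, k=2)        # E = e(I2)
assert np.allclose(E @ A, add_multiple(A, 0, 1, 2))   # e(A) = EA
print(E)
```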

Theorem:

Each elementary matrix is invertible and its inverse is an elementary matrix of the same type.

Corollary:

An elementary permutation matrix is its own inverse; i.e. P = P-1.

Diagonal matrices

Definition:

A diagonal matrix is a square matrix in which all the non-diagonal elements are zero.

(Note that the diagonal elements in a diagonal matrix can be zero as well.)

Theorem:

Let D = diag(d1, d2, …, dn) be an n×n diagonal matrix.

- If A is an n×p matrix, then DA is the same as multiplying the i-th row of A by di, for 1 ≤ i ≤ n.
- If B is an m×n matrix, then BD is the same as multiplying the i-th column of B by di, for 1 ≤ i ≤ n.
- D is invertible if and only if every diagonal element of D is non-zero, and in this case D-1 = diag(1/d1, …, 1/dn).
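These facts are easy to verify numerically (our own 2×2 example):

```python
import numpy as np

D = np.diag([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
assert np.allclose(D @ A, [[2, 2], [3, 3]])      # rows of A scaled by d_i
assert np.allclose(A @ D, [[2, 3], [2, 3]])      # columns of A scaled by d_i
assert np.allclose(np.linalg.inv(D), np.diag([1/2, 1/3]))
print("diagonal facts verified")
```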

Theorem:

Every m×n matrix A is row equivalent to an m×n matrix U in row echelon form.

- Theorem:
- Let A be an n×n matrix; then the following are equivalent.
- A is invertible.
- AX = B has a unique solution for any n×1 matrix B.
- A is non-singular, meaning that det(A) ≠ 0.
- A is row equivalent to In.
- A is a product of elementary matrices.

An efficient method to find A-1

Let A be an n×n matrix. We will find a sequence e1, …, ek of elementary row operations to bring A into In (if possible). If this is impossible, then A is not invertible.

If E1, …, Ek are the corresponding elementary matrices, then this is the same as saying that

Ek … E1A = In

Now it is obvious that the product of elementary matrices Ek … E1 will be exactly the inverse of A.

In practice, we will first create the augmented matrix [A | In ], and then perform elementary row operations on this matrix until the left hand side becomes In.

If we succeed, we will have [In | A-1].
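The [A | I] procedure takes a few lines with SymPy's rref (a sketch with our own matrix, not the example that follows):

```python
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [1, 1]])
aug = A.row_join(eye(2))       # form the augmented matrix [A | I]
R, _ = aug.rref()              # row-reduce until the left half becomes I
A_inv = R[:, 2:]               # the right half is then A^-1
assert A * A_inv == eye(2)
print(A_inv)
```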

Example:

Find the inverse of if possible.

Solution:

First we let

Adding 2 times the 1st row to the 2nd row, we get

Adding -3/2 times the 1st row to the 3rd row, we get


Adding ½ times the 2nd row to the 3rd row, we get

Adding 6 times the 3rd row to the 2nd row, we get


Adding -1 times the 2nd row to the 1st row, we get

Finally, dividing the 1st row by 2 and multiplying the 3rd row by 2, we get

1.6 Transposes, Symmetry, and Triangular Matrices

Definition:

A matrix A = [aij] is called symmetric if it is square and aij = aji for all i and j.

Examples:

Definition:

If A is an m×n matrix, its transpose AT is the n×m matrix obtained from A by putting each row of A into the corresponding column of AT.

Example:

Theorem:

An n×n matrix A is symmetric if and only if A = AT.

- Theorem:
- (AT)T = A
- (A + B)T = AT + BT
- (AB)T = BTAT
- If A is n×n and invertible, then AT is also invertible and
- (AT)-1 = (A-1)T
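A spot check of (AB)T = BTAT and (AT)-1 = (A-1)T (our own invertible matrices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
assert np.allclose((A @ B).T, B.T @ A.T)                   # the order reverses
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)
print("transpose identities verified")
```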

Definition:

An n×n matrix is upper triangular if all its entries below the diagonal are zero.

Example:

An n×n matrix is lower triangular if all its entries above the diagonal are zero.

- Theorem:
- The product of two lower triangular matrices of the same dimension is a lower triangular matrix.
- The product of two upper triangular matrices of the same dimension is an upper triangular matrix.
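A quick check that the product of two upper triangular matrices stays upper triangular (our own example):

```python
import numpy as np

U1 = np.array([[1, 2],
               [0, 3]])
U2 = np.array([[4, 5],
               [0, 6]])
P = U1 @ U2
assert np.allclose(P, np.triu(P))   # the product is still upper triangular
print(P)   # [[ 4 17] [ 0 18]]
```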