
Introduction to Matrix Algebra II


These lecture notes were part of the instruction material for SE-409 (Fall Semester of 2013).

http://www.uia.no/portaler/student/studierelatert/studiehaandbok/11-12/emner/se-409

Andrew Musau

August 28, 2013


Transcript

  1. 4.1 Lecture 4: Matrix Algebra II. SE-409, Quantitative Methods in Economics and Finance, Fall Semester 2013. Aug. 28, 2013. Andrew Musau, University of Agder.
  2. 4.2 Agenda
     1. Matrix Inverses
     2. Linear dependence and rank
     3. Determinants
     4. Transposition and Cramer's Rule
  3. 4.3 Inverting a matrix. Consider the general system of equations $Ax = b$ where $A$ is a square matrix and $b = 0$. This system has the trivial solution $x = 0$. However, we may ask whether the system has other solutions. This leads us to the following definition:

DEFINITION: Nonsingular matrices. $A$ is nonsingular if the unique solution of $Ax = 0$ is $x = 0$. If $Ax = 0$ has a non-zero solution, we say that $A$ is singular.

In the previous lecture, we set out a procedure for establishing whether a square matrix $A$ is singular or nonsingular. This involved transforming the matrix to its echelon form $E$. We saw that if $E$ was a type 1 echelon matrix (RUT), then $A$ is nonsingular, and if $E$ was a type 4 echelon matrix, then $A$ is singular. Therefore, we have the following fact about nonsingular matrices:
  4. 4.4 Inverting a matrix. FACT: Nonsingular matrices. If $A$ is a nonsingular $n \times n$ matrix and $y$ is an $n$-vector, then there is exactly one $n$-vector $x$ such that $Ax = y$. Related to the idea of a function and its inverse, which we saw in Lecture 1, this fact may be clearer if we think of $A$ as a mapping that transforms $x$ into $y$. If $A$ is nonsingular, there exists a mapping that transforms $y$ back into the vector $x$ from which it originated. This is the inverse of the matrix $A$.
  5. 4.5 Inverting a matrix. DEFINITION: Invertible matrix. A square matrix $A$ is said to be invertible if there is a square matrix $A^{-1}$ with the property that $Ax = y$ iff $x = A^{-1}y$. The matrix $A^{-1}$ is called the inverse of $A$. The definition tells us that an invertible matrix is the same thing as a nonsingular matrix. To show this, we claim that every invertible matrix is nonsingular. Proof: Suppose that $A$ is an invertible matrix and $x$ is a vector such that $Ax = 0$. From the definition, $x = A^{-1}0 = 0$. Hence, $A$ is nonsingular.
  6. 4.6 Finding the inverse of a matrix: General procedure. The Gauss-Jordan elimination procedure we learnt in the previous lecture can be used to find the inverse of a square matrix $A$ if it is nonsingular (i.e., invertible). Procedure: if $A$ is an $n \times n$ matrix,
     1. Augment the $n \times n$ identity matrix $I$ to the right of $A$, forming an $n \times 2n$ block matrix $[A \mid I]$.
     2. Apply elementary row operations to find the reduced echelon form of the $n \times 2n$ matrix.
     3. The matrix $A$ is invertible if and only if the left block can be reduced to the identity matrix $I$; in this case, the right block of the final matrix is $A^{-1}$. If the algorithm is unable to reduce the left block to $I$, then $A$ is not invertible.
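To make the procedure concrete, here is a minimal Python sketch (assuming NumPy is available; neither the code nor the helper name gauss_jordan_inverse appears in the lecture). It augments A with I, row-reduces, and reads the inverse off the right block; partial pivoting is added for numerical stability.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a square matrix by row-reducing the block matrix [A | I]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])                    # step 1: form [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # best available pivot
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]            # row exchange
        M[col] /= M[col, col]                        # scale to a leading 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]       # clear rest of the column
    return M[:, n:]                                  # step 3: right block is A^{-1}

print(gauss_jordan_inverse([[3, -1], [2, 1]]))
# expect approximately [[ 0.2  0.2]
#                       [-0.4  0.6]]
```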
  7. 4.7 Finding the inverse of a matrix: Example. We use the Gauss-Jordan elimination procedure in the simple $2 \times 2$ case to show that the inverse of
$$\begin{pmatrix} 3 & -1 \\ 2 & 1 \end{pmatrix}$$
is
$$\begin{pmatrix} \tfrac{1}{5} & \tfrac{1}{5} \\ -\tfrac{2}{5} & \tfrac{3}{5} \end{pmatrix}$$
Solution.
  8. 4.8 Finding the inverse of a matrix: Example. We use the Gauss-Jordan elimination procedure to show that the following matrices are singular.
$$\begin{pmatrix} 2 & 4 \\ 4 & 8 \end{pmatrix} \qquad \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 4 & 8 & 12 \end{pmatrix}$$
Solution.
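As a quick numerical cross-check (a sketch assuming NumPy; not part of the original notes), both matrices have zero determinant, and attempting to invert them raises an error:

```python
import numpy as np

for A in ([[2, 4], [4, 8]],
          [[1, 2, 3], [2, 4, 6], [4, 8, 12]]):
    A = np.array(A, dtype=float)
    print(np.linalg.det(A))          # 0.0 (up to rounding): singular
    try:
        np.linalg.inv(A)
    except np.linalg.LinAlgError:
        print("not invertible")
```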
  9. 4.9 Finding the inverse of a matrix: n > 2. Consider the following $3 \times 3$ matrix. We find the inverse using the Gauss-Jordan elimination procedure.
$$A = \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix}$$
(1) First set up the $n \times 2n$ augmented matrix
$$\left[\begin{array}{ccc|ccc} 2 & -1 & 0 & 1 & 0 & 0 \\ -1 & 2 & -1 & 0 & 1 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{array}\right]$$
(2) We want to turn the first entry in the second row into a zero. Take $\tfrac{1}{2}$ Row 1 + Row 2:
$$\left[\begin{array}{ccc|ccc} 2 & -1 & 0 & 1 & 0 & 0 \\ 0 & \tfrac{3}{2} & -1 & \tfrac{1}{2} & 1 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{array}\right]$$
(3) Next, we want a zero below the second leading entry (i.e., $\tfrac{3}{2}$ in Row 2). Take $\tfrac{2}{3}$ Row 2 + Row 3.
  10. 4.10 Finding the inverse of a matrix: n > 2.
$$\left[\begin{array}{ccc|ccc} 2 & -1 & 0 & 1 & 0 & 0 \\ 0 & \tfrac{3}{2} & -1 & \tfrac{1}{2} & 1 & 0 \\ 0 & 0 & \tfrac{4}{3} & \tfrac{1}{3} & \tfrac{2}{3} & 1 \end{array}\right]$$
(4) We want a zero above the third leading entry (i.e., $\tfrac{4}{3}$ in Row 3). Take $\tfrac{3}{4}$ Row 3 + Row 2:
$$\left[\begin{array}{ccc|ccc} 2 & -1 & 0 & 1 & 0 & 0 \\ 0 & \tfrac{3}{2} & 0 & \tfrac{3}{4} & \tfrac{3}{2} & \tfrac{3}{4} \\ 0 & 0 & \tfrac{4}{3} & \tfrac{1}{3} & \tfrac{2}{3} & 1 \end{array}\right]$$
(5) Next, we want a zero above the second leading entry. Take $\tfrac{2}{3}$ Row 2 + Row 1:
$$\left[\begin{array}{ccc|ccc} 2 & 0 & 0 & \tfrac{3}{2} & 1 & \tfrac{1}{2} \\ 0 & \tfrac{3}{2} & 0 & \tfrac{3}{4} & \tfrac{3}{2} & \tfrac{3}{4} \\ 0 & 0 & \tfrac{4}{3} & \tfrac{1}{3} & \tfrac{2}{3} & 1 \end{array}\right]$$
  11. 4.11 Finding the inverse of a matrix: Example.
(6) Finally, we want ones on the diagonal of the left block. Take $\tfrac{1}{2} \times$ Row 1, $\tfrac{2}{3} \times$ Row 2, and $\tfrac{3}{4} \times$ Row 3:
$$\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac{3}{4} & \tfrac{1}{2} & \tfrac{1}{4} \\ 0 & 1 & 0 & \tfrac{1}{2} & 1 & \tfrac{1}{2} \\ 0 & 0 & 1 & \tfrac{1}{4} & \tfrac{1}{2} & \tfrac{3}{4} \end{array}\right]$$
We therefore have the identity matrix on the left-hand side of the final augmented matrix and $A^{-1}$ on the right-hand side.
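One can verify the result numerically (a sketch assuming NumPy; not part of the notes) by checking that $AA^{-1} = I$:

```python
import numpy as np

A = np.array([[2, -1, 0],
              [-1, 2, -1],
              [0, -1, 2]])
A_inv = np.array([[3/4, 1/2, 1/4],
                  [1/2, 1.0, 1/2],
                  [1/4, 1/2, 3/4]])
print(np.allclose(A @ A_inv, np.eye(3)))   # True: A times A^{-1} gives I
```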
  12. 4.12 The inversion formula for 2 × 2 matrices. If
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
and $ad \neq bc$, then $A$ is invertible and
$$A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
Example. We use the formula to show that the inverse of $\begin{pmatrix} 3 & -1 \\ 2 & 1 \end{pmatrix}$ is $\begin{pmatrix} \tfrac{1}{5} & \tfrac{1}{5} \\ -\tfrac{2}{5} & \tfrac{3}{5} \end{pmatrix}$.
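The formula translates directly into code; here is a minimal sketch in plain Python (the function name inverse_2x2 is ours, not the lecture's):

```python
def inverse_2x2(a, b, c, d):
    """Inverse of the 2 x 2 matrix [[a, b], [c, d]] via the ad - bc formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0, so the matrix is singular")
    return [[d / det, -b / det],
            [-c / det, a / det]]

print(inverse_2x2(3, -1, 2, 1))   # [[0.2, 0.2], [-0.4, 0.6]]
```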
  13. 4.13 Properties of inverses. Facts about inverses:
     1. $AA^{-1} = A^{-1}A = I$.
     2. If $A$ is invertible, so is $A^{-1}$, and $(A^{-1})^{-1} = A$.
     3. If $A$ and $B$ are invertible, so is $AB$, and $(AB)^{-1} = B^{-1}A^{-1}$.
Points on notation. Note that the adjectives nonsingular, singular, and invertible apply only to square matrices. Also, $AB^{-1} = A(B^{-1}) \neq (AB)^{-1}$ in general.
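Facts 2 and 3 are easy to check numerically (a sketch assuming NumPy; random matrices are invertible with probability one, so the check should succeed almost surely):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((2, 2)), rng.random((2, 2))
inv = np.linalg.inv
print(np.allclose(inv(inv(A)), A))               # fact 2: (A^{-1})^{-1} = A
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))  # fact 3: (AB)^{-1} = B^{-1}A^{-1}
```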
  14. 4.14 Revisiting linear dependence. Recall that we introduced the notion of linear dependence in the last lecture when we considered whether vectors were linearly dependent or independent. We can now build on this. Let $v_1$, $v_2$, and $v_3$ be three $n$-vectors and let $\lambda$, $\gamma$, and $\mu$ be scalars. From the rules of matrix-vector multiplication, we can write
$$\lambda v_1 + \gamma v_2 + \mu v_3 = \begin{pmatrix} v_1 & v_2 & v_3 \end{pmatrix} \begin{pmatrix} \lambda \\ \gamma \\ \mu \end{pmatrix}$$
Now let $A$ denote the matrix $[v_1 \; v_2 \; v_3]$. From the criterion we established in the previous lecture when defining linear dependence, we have that $v_1$, $v_2$, and $v_3$ are linearly dependent iff $Ax = 0$ has some non-zero solution. In particular, this implies that the columns of a square matrix are linearly dependent iff the matrix is singular. We have just outlined the procedure for finding out whether a matrix is singular or invertible. The test of whether the columns of $A$ are linearly dependent is similar.
  15. 4.15 Revisiting linear dependence. Procedure.
     1. First, reduce $A$ by elementary row operations to an echelon matrix $E$.
     2. If $E$ is of type 1 or type 3, the system $Ax = 0$ has the unique solution $x = 0$, and the columns of $A$ are linearly independent. If $E$ is of type 2 or type 4, the columns of $A$ are linearly dependent.
     3. If $A$ has more columns than rows, that is, for an $m \times n$ matrix with $n > m$, then $E$ must be of type 2 or type 4. Thus we have the following general result about linear dependence of vectors:
General result. If we have a set of more than $n$ vectors in $\mathbb{R}^n$, then these vectors must be linearly dependent.
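In code, the dependence test amounts to a rank check on the matrix whose columns are the vectors (a sketch assuming NumPy; the example vectors are ours):

```python
import numpy as np

# The columns of A are v1, v2, v3; here v3 = v1 + v2 by construction,
# so the columns are linearly dependent and rank(A) < 3.
A = np.column_stack(([1, 0, 1], [0, 1, 1], [1, 1, 2]))
print(np.linalg.matrix_rank(A) < A.shape[1])   # True: linearly dependent
```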
  16. 4.16 Rank of a matrix. DEFINITION: The rank of a matrix. The rank of a matrix $A$ is the maximal number of linearly independent columns of $A$.
Finding the rank. Recall that in echelon matrices, the number of leading entries (pivots) is equal to the number of non-zero rows, which is in turn equal to the number of basic columns. This number is precisely the rank of the matrix.
Procedure. Reduce the matrix to its echelon form using Gauss-Jordan elimination. Count the number of leading entries (or non-zero rows).
  17. 4.17 Rank of a matrix. Example. Find the rank of the following matrix.
$$A = \begin{pmatrix} 2 & -4 & 1 & -8 \\ 4 & -8 & 7 & -6 \\ -1 & 2 & 1 & 7 \end{pmatrix}$$
Solution. Reducing the matrix $A$ to its echelon form $E$, we obtain
$$E = \begin{pmatrix} 2 & -4 & 1 & -8 \\ 0 & 0 & 5 & 10 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
Since $E$ has exactly 2 rows not consisting entirely of zeros, its rank is 2.
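The hand computation can be cross-checked numerically (a sketch assuming NumPy, which computes rank via a singular-value decomposition rather than row reduction):

```python
import numpy as np

A = np.array([[2, -4, 1, -8],
              [4, -8, 7, -6],
              [-1, 2, 1, 7]])
print(np.linalg.matrix_rank(A))   # 2
```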
  18. 4.18 The determinant of a matrix. Determinants of matrices are useful for determining (hence the name) whether a matrix has an inverse and also for solving systems of linear equations. As with inverses, determinants apply only to square matrices. The determinant of a matrix $A$ is denoted $|A|$ or $\det A$. Its definition depends on the size of the matrix. In the simplest $1 \times 1$ case, $|A| = a_{11}$. In the $2 \times 2$ case, where
$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix},$$
we have that $|A| = a_{11}a_{22} - a_{12}a_{21}$. In the $3 \times 3$ case, the procedure involves a few more steps. Consider the general $3 \times 3$ matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$
  19. 4.19 The determinant of a matrix. Procedure. We can get a submatrix of $A$ by deleting a row and a column. For example, deleting the second row and the first column, we have
$$A_{21} = \begin{pmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{pmatrix}$$
In general, the submatrix $A_{ij}$ is obtained from $A$ by deleting row $i$ and column $j$. Note that there is one submatrix for each element, and you can get that submatrix by eliminating the element's row and column from the original matrix. Every element also has something called a cofactor, which is based on the element's submatrix. Specifically, the cofactor of $a_{ij}$ is the number $c_{ij}$ given by
$$c_{ij} = (-1)^{i+j} |A_{ij}|,$$
i.e., the determinant of the submatrix $A_{ij}$ multiplied by $-1$ if $i + j$ is odd and by $1$ if $i + j$ is even. Using these definitions, we can finally get the determinant of a $3 \times 3$ matrix, or any other square matrix of higher dimension. There are two ways to do this.
  20. 4.20 The determinant of a matrix.
     1. The most common is to choose a column $j$. Then $|A| = a_{1j}c_{1j} + a_{2j}c_{2j} + \dots + a_{nj}c_{nj}$.
     2. Alternatively, we can choose a row. If we choose row $i$, then the determinant is given by $|A| = a_{i1}c_{i1} + a_{i2}c_{i2} + \dots + a_{in}c_{in}$.
The freedom to choose any row or column allows one to use zeros strategically. Example. Find the determinant of
$$A = \begin{pmatrix} 6 & 8 & -1 \\ 2 & 0 & 0 \\ -9 & 4 & 7 \end{pmatrix}$$
Solution. It would be best to choose the second row because it has two zeros, and the determinant is simply $a_{21}c_{21} = 2(-60) = -120$.
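The cofactor expansion can be written as a short recursive function. This is a minimal sketch (assuming NumPy; it expands along the first row, and the helper name det_cofactor is ours). The recursion is exponential in $n$, so it is only practical for small matrices:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Submatrix A_{1,j+1}: delete row 1 and column j+1 (0-indexed 0 and j).
        sub = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += A[0, j] * (-1) ** j * det_cofactor(sub)  # a_{1j} c_{1j}
    return total

print(det_cofactor([[6, 8, -1],
                    [2, 0, 0],
                    [-9, 4, 7]]))   # -120.0
```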
  21. 4.21 The determinant of a matrix. We can also find determinants of square matrices by using elementary row operations. However, we have to take into consideration the following three points:
DEFINITION: Determinant of a triangular matrix.
(D1) The determinant of a triangular matrix is the product of its diagonal entries.
(D2) If two rows of a matrix are exchanged, the determinant is multiplied by $-1$.
(D3) If a multiple of one row is subtracted from another row, the determinant remains unchanged.
For square matrices with large dimensions, the best approach is to reduce the original matrix to upper triangular form $U$ using Gauss-Jordan elimination; $|A|$ is then the product of the diagonal entries of $U$, with the sign adjusted for any row exchanges (D2).
  22. 4.22 The determinant of a matrix. Example. Find the determinant of
$$A = \begin{pmatrix} 2 & 3 & 4 \\ 2 & 3 & 7 \\ 5 & 8 & 6 \end{pmatrix}.$$
Solution. (1) Take $-1R_1 + R_2$ and $-2.5R_1 + R_3$ to get
$$\begin{pmatrix} 2 & 3 & 4 \\ 0 & 0 & 3 \\ 0 & \tfrac{1}{2} & -4 \end{pmatrix}.$$
(2) Exchange the second and third rows to get
$$\begin{pmatrix} 2 & 3 & 4 \\ 0 & \tfrac{1}{2} & -4 \\ 0 & 0 & 3 \end{pmatrix}.$$
  23. 4.23 The determinant of a matrix. We denote the triangular matrix $U$. Applying D1-D3, we note that we had one row exchange and thus
$$|A| = (-1)^1 |U| = -\left(2 \times \tfrac{1}{2} \times 3\right) = -3$$
This leads us to the following fact:
(D4) A square matrix is singular iff its determinant is zero.
Properties of determinants. If $A$ is an $n \times n$ matrix:
$|\lambda A| = \lambda^n |A|$, where $\lambda$ is a scalar.
$|AB| = |A| \times |B|$.
$|A^{-1}| = \frac{1}{|A|}$.
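The three properties can be checked numerically on random matrices (a sketch assuming NumPy; the matrices are ours and are invertible almost surely):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.random((3, 3)), rng.random((3, 3))
det, lam, n = np.linalg.det, 2.0, 3
print(np.isclose(det(lam * A), lam**n * det(A)))      # |lambda A| = lambda^n |A|
print(np.isclose(det(A @ B), det(A) * det(B)))        # |AB| = |A| |B|
print(np.isclose(det(np.linalg.inv(A)), 1 / det(A)))  # |A^{-1}| = 1 / |A|
```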
  24. 4.24 Transposition. The transpose of a (not necessarily square) matrix is generated by switching the rows and columns of the original matrix. Because of this, the transpose of an $n \times k$ matrix is a $k \times n$ matrix. For a given matrix $A$, the transpose is denoted $A^T$ or $A'$. As an example, the transpose of the matrix
$$A = \begin{pmatrix} 2 & 3 & 4 \\ 0 & 7 & -4 \end{pmatrix}$$
is
$$A^T = \begin{pmatrix} 2 & 0 \\ 3 & 7 \\ 4 & -4 \end{pmatrix}.$$
  25. 4.25 Transposition. Properties of transposes:
$(\lambda A + \mu B)^T = \lambda A^T + \mu B^T$ for any scalars $\lambda$ and $\mu$.
$(AB)^T = B^T A^T$.
$(A^T)^T = A$.
If $A$ is an invertible square matrix, then $A^T (A^{-1})^T = (A^{-1}A)^T = I^T = I$. Hence $A^T$ is also invertible and $(A^T)^{-1} = (A^{-1})^T$.
rank of $A^T$ = rank of $A$.
$|A^T| = |A|$.
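A few of these properties, checked numerically (a sketch assuming NumPy; the matrix B is ours):

```python
import numpy as np

A = np.array([[2, 3, 4],
              [0, 7, -4]])
B = np.array([[1, 0], [2, 1], [0, 3]])   # any 3 x 2 matrix works here
print(A.T)                               # the 3 x 2 transpose of A
print(np.allclose((A @ B).T, B.T @ A.T))                       # (AB)^T = B^T A^T
print(np.linalg.matrix_rank(A.T) == np.linalg.matrix_rank(A))  # equal ranks
```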
  26. 4.26 Cramer's Rule. The process of using determinants to solve the system of equations given by $Ax = b$ is known as Cramer's rule. First, let us consider a famous formula for the inverse of a matrix, often called adjoint-over-determinant. If $A$ is invertible,
$$A^{-1} = \frac{1}{|A|} A^*$$
where $A^*$ is the adjoint (adjugate) of $A$, i.e., the transpose of the cofactor matrix of the $n \times n$ matrix $A$ (refer back to our discussion where we found the determinant of a matrix using its cofactors). Cramer's rule says that if $A$ is invertible, then the solution of the system $Ax = b$ is
$$x_j = \frac{|B_j|}{|A|} \quad \text{for } j = 1, \dots, n$$
where, for each $j$, $B_j$ is the matrix obtained from $A$ by replacing its $j$th column by $b$.
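Cramer's rule translates directly into code; here is a minimal sketch (assuming NumPy; the function name cramer is ours). For large systems Gaussian elimination is far cheaper, so this is for illustration only:

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule (A square and invertible)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("A is singular; Cramer's rule does not apply")
    x = np.empty(A.shape[1])
    for j in range(A.shape[1]):
        B_j = A.copy()
        B_j[:, j] = b                      # B_j: column j of A replaced by b
        x[j] = np.linalg.det(B_j) / det_A
    return x
```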
  27. 4.27 Cramer's Rule: Example. Let us consider a simple example. Our system of equations is given by:
$$4x_1 + 3x_2 = 18$$
$$5x_1 - 3x_2 = 9$$
Adding the two equations gives $9x_1 = 27 \Rightarrow x_1 = 3$, and hence $x_2 = 2$. Now, let us solve the system using Cramer's rule. We have the matrices
$$A = \begin{pmatrix} 4 & 3 \\ 5 & -3 \end{pmatrix} \qquad b = \begin{pmatrix} 18 \\ 9 \end{pmatrix}$$
Generate the matrices
$$B_1 = \begin{pmatrix} 18 & 3 \\ 9 & -3 \end{pmatrix} \qquad B_2 = \begin{pmatrix} 4 & 18 \\ 5 & 9 \end{pmatrix}$$
Now compute determinants to get $|A| = -27$, $|B_1| = -81$, $|B_2| = -54$.
  28. 4.28 Cramer's Rule: Example. Applying Cramer's rule, we get
$$x = \begin{pmatrix} |B_1|/|A| \\ |B_2|/|A| \end{pmatrix} = \begin{pmatrix} -81/-27 \\ -54/-27 \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \end{pmatrix}.$$
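Running the cramer sketch from above on this system reproduces the hand computation:

```python
print(cramer([[4, 3], [5, -3]], [18, 9]))   # [3. 2.]
```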