14. Inverse matrix and Cramer’s rule

Now we will put determinants to use and, along the way, introduce the notion of the inverse matrix.

Inverse matrix

A matrix B is the inverse of a matrix A, if A\cdot B=I, where I is the identity matrix (the matrix with ones on the diagonal and zeros everywhere else). The inverse matrix is denoted as A^{-1}. Since \det (A\cdot B)=\det A\cdot \det B and \det I=1, we see that \det A^{-1}=\frac{1}{\det A}. This implies that only matrices with non-zero determinant can have an inverse. Therefore we call such matrices invertible.
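These two determinant identities are easy to check numerically. The following is a quick sketch (the matrices are arbitrary invertible examples, not from the text), using NumPy:

```python
import numpy as np

# Arbitrary invertible matrices chosen for illustration.
A = np.array([[1.0, 2.0], [3.0, 5.0]])   # det A = -1
B = np.array([[2.0, 1.0], [0.0, 4.0]])   # det B = 8

# det(A*B) = det A * det B
det_AB = np.linalg.det(A @ B)
assert np.isclose(det_AB, np.linalg.det(A) * np.linalg.det(B))

# det(A^{-1}) = 1 / det A
A_inv = np.linalg.inv(A)
assert np.isclose(np.linalg.det(A_inv), 1 / np.linalg.det(A))
```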

How to calculate the inverse of a given matrix? We have mentioned recently that the row operations leading to the reduced “stair-like” form amount to multiplication by a matrix. Imagine that we transform the matrix [A|I], consisting of matrix A side by side with the identity matrix, into the reduced “stair-like” form. Since A is a square matrix with non-zero determinant, we will get the identity matrix on the left side: [I|B]. But notice that if C is the matrix of the row operations, then C\cdot [A|I]=[I|B]. Therefore C\cdot A=I and C\cdot I=B. The first equation implies that C=A^{-1}. The second that B=C=A^{-1}. So after those operations we get the inverse matrix on the right!

E.g. let us calculate the inverse of the following matrix:



    \[\left[\begin{array}{ccc|ccc}1&2&1&1&0&0\\2&5&2&0&1&0\\-1&-2&0&0&0&1\end{array}\right]\overrightarrow{w_2-2w_1,w_3+w_1} \left[\begin{array}{ccc|ccc}1&2&1&1&0&0\\0&1&0&-2&1&0\\0&0&1&1&0&1\end{array}\right]\overrightarrow{w_1-w_3}\]

    \[\left[\begin{array}{ccc|ccc}1&2&0&0&0&-1\\0&1&0&-2&1&0\\0&0&1&1&0&1\end{array}\right]\overrightarrow{w_1-2w_2} \left[\begin{array}{ccc|ccc}1&0&0&4&-2&-1\\0&1&0&-2&1&0\\0&0&1&1&0&1\end{array}\right]\]

And therefore:

    \[A^{-1}=\left[\begin{array}{ccc}4&-2&-1\\-2&1&0\\1&0&1\end{array}\right]\]
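The elimination procedure above can also be sketched in code. The following minimal Gauss–Jordan inversion (an illustration, not part of the original text) row-reduces [A|I] exactly as described, using NumPy only for array arithmetic:

```python
import numpy as np

def invert(a):
    """Invert a square matrix by row-reducing [A | I] to [I | A^{-1}]."""
    a = np.array(a, dtype=float)
    n = a.shape[0]
    aug = np.hstack([a, np.eye(n)])  # build the augmented matrix [A | I]
    for col in range(n):
        # swap in the row with the largest pivot (avoids dividing by zero)
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]            # scale the pivot row so the pivot is 1
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]  # clear the column elsewhere
    return aug[:, n:]  # the right half is now the inverse

A = [[1, 2, 1], [2, 5, 2], [-1, -2, 0]]
print(invert(A))  # rows: [4, -2, -1], [-2, 1, 0], [1, 0, 1]
```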
Determining one element of the inverse matrix

If you do not need the whole inverse but only some of its entries, the following method is useful. It uses the adjugate of the given matrix. The adjugate A^{D} is the matrix whose entry in the j-th row and i-th column is the determinant of A_{i,j} (the matrix A without its i-th row and j-th column; there is no mistake here, a transposition plays a role) multiplied by (-1)^{i+j}. The following equation holds:

    \[A^{-1}=\frac{A^{D}}{\det A}.\]

Therefore, if we would like to calculate the entry in the second row and first column of A^{-1} from the previous example, we cross out the second column and the first row of A, calculate the determinant, and get:

    \[\frac{(-1)^{1+2}\left|\begin{array}{cc}2&2\\-1&0\end{array}\right|}{\det A}=\frac{-(2\cdot 0-2\cdot(-1))}{1}=-2,\]

which agrees with the result obtained by the first method!
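The single-entry formula can be sketched as a short function (an illustration, not from the original text; note how the swapped indices implement the transposition mentioned above):

```python
import numpy as np

A = np.array([[1, 2, 1], [2, 5, 2], [-1, -2, 0]], dtype=float)

def adjugate_entry(a, r, c):
    """Entry in row r, column c of the adjugate (1-based indices, as in the text):
    (-1)^(r+c) times the determinant of a with row c and column r removed.
    The swap of r and c is the transposition mentioned above."""
    minor = np.delete(np.delete(a, c - 1, axis=0), r - 1, axis=1)
    return (-1) ** (r + c) * np.linalg.det(minor)

# entry (2, 1) of A^{-1} is the adjugate entry (2, 1) divided by det A
print(adjugate_entry(A, 2, 1) / np.linalg.det(A))  # ≈ -2.0
```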

Cramer’s rule

Given a system of n linear equations in n variables, we may try to solve it with Cramer’s rule. Let A be the matrix of this system without the column of free coefficients. Let A_{i} be the matrix A with its i-th column replaced by the column of free coefficients. Then:

  • if \det A\neq 0, the system has exactly one solution. The solution is given by the following formula: x_{i}=\frac{\det A_i}{\det A},
  • if \det A=0, and at least one of \det A_{i} is not equal to 0, the system has no solutions,
  • if \det A=0 and for every i, \det A_{i}=0, there can be zero or infinitely many solutions — Cramer’s method does not give any precise answer.

E.g. let us solve the following system of equations:

    \[\begin{cases} x+2y+z=1\\2x+5y+2z=-1\\-x-2y=0\end{cases}\]



Here A is the coefficient matrix and

    \[\det A= \left|\begin{array}{ccc}1&2&1\\2&5&2\\-1&-2&0\end{array}\right|=1.\]

Since \det A=1\neq 0, this system has exactly one solution. To determine it we calculate the remaining determinants:

    \[\det A_1= \left|\begin{array}{ccc}1&2&1\\-1&5&2\\0&-2&0\end{array}\right|=6\]

    \[\det A_2= \left|\begin{array}{ccc}1&1&1\\2&-1&2\\-1&0&0\end{array}\right|=-3\]

    \[\det A_3= \left|\begin{array}{ccc}1&2&1\\2&5&-1\\-1&-2&0\end{array}\right|=1\]

And so x=\frac{6}{1}=6, y=\frac{-3}{1}=-3, z=\frac{1}{1}=1.
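The whole procedure translates directly into code. The following is a minimal sketch of Cramer’s rule (not from the original text; it assumes \det A\neq 0), applied to the system above:

```python
import numpy as np

def cramer(a, b):
    """Solve A x = b by Cramer's rule (assumes det A != 0)."""
    a = np.array(a, dtype=float)
    b = np.array(b, dtype=float)
    d = np.linalg.det(a)
    x = []
    for i in range(a.shape[0]):
        ai = a.copy()
        ai[:, i] = b  # replace the i-th column with the free coefficients
        x.append(np.linalg.det(ai) / d)  # x_i = det(A_i) / det(A)
    return x

A = [[1, 2, 1], [2, 5, 2], [-1, -2, 0]]
b = [1, -1, 0]
print(cramer(A, b))  # ≈ [6.0, -3.0, 1.0]
```

Note that for large systems Gaussian elimination is far cheaper than computing n+1 determinants; Cramer’s rule is mainly of theoretical interest.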