
12. The Inverse of a Matrix

Dated: 17-04-2025

Inverse of a Square Matrix

If \(A\), \(C\), and \(I\) are matrices of order \(n \times n\), where \(I\) is the identity matrix, then \(C\) is called the multiplicative inverse of \(A\) if

\[AC = CA = I\]

Invertible Matrices

\(A\) is called an invertible matrix if its multiplicative inverse exists.

Uniqueness

Suppose \(B\) and \(C\) are both multiplicative inverses of \(A\), so that \(BA = I\) and \(AC = I\). Then

\[
\begin{aligned}
B &= BI \\
  &= B(AC) \\
  &= (BA)C \\
  &= IC \\
  &= C
\end{aligned}
\]

This shows that the inverse of a matrix, when it exists, is unique. It is denoted by \(A^{-1}\), so that

\[AA^{-1} = A^{-1}A= I\]

A non-invertible matrix is sometimes called a singular matrix, while an invertible matrix is called a non-singular matrix.

The Notation \(A^{-1}\)

Note that \(A^{-1}\) denotes the inverse of \(A\), not division by \(A\):

\[A^{-1} \ne \frac{1}{A}\]

Example

\[A = \begin{bmatrix} 2 & 5 \\ -3 & -7 \end{bmatrix} \text{ and } C = \begin{bmatrix} -7 & -5 \\ 3 & 2 \end{bmatrix}\]
\[AC = \begin{bmatrix} 2 & 5 \\ -3 & -7 \end{bmatrix} \begin{bmatrix} -7 & -5 \\ 3 & 2 \end{bmatrix} = \begin{bmatrix} -14+15 & -10+10 \\ 21-21 & 15-14 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\]
and
\[CA = \begin{bmatrix} -7 & -5 \\ 3 & 2 \end{bmatrix} \begin{bmatrix} 2 & 5 \\ -3 & -7 \end{bmatrix} = \begin{bmatrix} -14+15 & -35+35 \\ 6-6 & 15-14 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\]
\[\therefore C = A^{-1}\]
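
A quick numerical check of this example, as a sketch in Python with NumPy (the matrices are the \(A\) and \(C\) above):

```python
import numpy as np

A = np.array([[ 2,  5],
              [-3, -7]])
C = np.array([[-7, -5],
              [ 3,  2]])

print(A @ C)                              # [[1 0] [0 1]]
print(C @ A)                              # [[1 0] [0 1]]
print(np.allclose(np.linalg.inv(A), C))   # True: C is A^-1
```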

Theorem

If \(A\) is a \(2 \times 2\) matrix such that

\[A = \begin{bmatrix} a & b\\ c & d \end{bmatrix} \]
then the adjoint of \(A\) is
\[\text{Adj}(A) = \begin{bmatrix} d & -b\\ -c & a \end{bmatrix} \]

and \(\text{det}(A) = ad - bc\). If \(\text{det}(A) = 0\), then \(A\) is a non-invertible matrix; otherwise, it is an invertible matrix and

\[A^{-1} = \frac {\text{Adj}(A)}{\text{det}(A)}\]
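
The formula translates directly into code; here is a minimal sketch (the helper name `inverse_2x2` is illustrative, not from the text):

```python
import numpy as np

def inverse_2x2(A):
    """Return Adj(A) / det(A) for a 2x2 matrix A, or None if det(A) == 0."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        return None                      # A is singular (non-invertible)
    adj = np.array([[ d, -b],
                    [-c,  a]], dtype=float)
    return adj / det

A = np.array([[ 2,  5],
              [-3, -7]])
print(inverse_2x2(A))        # [[-7. -5.] [ 3.  2.]]
print(np.linalg.inv(A))      # agrees with the formula
```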

Theorem

  • If \(A^{-1}\) exists, then \(A^{-1}\) is itself invertible and \((A^{-1})^{-1} = A\).
  • If \(A\) and \(B\) are invertible matrices of order \(n \times n\), then \((AB)^{-1} = B^{-1}A^{-1}\).
  • If \(A^{-1}\) exists, then \((A^T)^{-1} = (A^{-1})^T\).

More generally, for invertible matrices \(A_1, A_2, \dots, A_n\) of the same order,

\[(A_1 A_2 A_3 \cdots A_n)^{-1} = A_n^{-1} A_{n-1}^{-1} \cdots A_3^{-1} A_2^{-1} A_1^{-1}\]
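
These identities are easy to verify numerically; a minimal sketch (random \(3 \times 3\) matrices of this kind are invertible with probability 1, so no singularity check is made here):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
B = rng.random((3, 3))
inv = np.linalg.inv

print(np.allclose(inv(inv(A)), A))               # (A^-1)^-1 = A
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))  # (AB)^-1 = B^-1 A^-1
print(np.allclose(inv(A.T), inv(A).T))           # (A^T)^-1 = (A^-1)^T
```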

Theorem

If \(A\) is an invertible matrix and \(n \in \mathbb{W}\), then \(A^n\) is also invertible and

\[(A^n)^{-1} = (A^{-1})^n = A^{-n}\]
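
Assuming the completion of the statement above, this too can be checked numerically with the example matrix from earlier:

```python
import numpy as np

A = np.array([[ 2,  5],
              [-3, -7]], dtype=float)
n = 4

lhs = np.linalg.inv(np.linalg.matrix_power(A, n))    # (A^n)^-1
rhs = np.linalg.matrix_power(np.linalg.inv(A), n)    # (A^-1)^n
print(np.allclose(lhs, rhs))                         # True
```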

Elementary Matrices

An elementary matrix is a matrix which results from applying a single elementary row operation to an identity matrix.

Example

\[ \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \to \begin{bmatrix} 1 & 0\\ 0 & -3 \end{bmatrix} \]

Here the second row of \(I_2\) has been multiplied by \(-3\).

Theorem

If \(A\) is a matrix of order \(n \times n\) and \(E\) is the elementary matrix which results from performing a row operation on \(I_n\), then \(EA\) is the matrix produced by performing the same row operation on \(A\).

In particular, if \(A = I\), then \(EA = E\).
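
A concrete check with the elementary matrix from the example above (the matrix \(A\) below is arbitrary, chosen only for illustration):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]], dtype=float)

# E results from multiplying the second row of I_2 by -3
E = np.array([[1,  0],
              [0, -3]], dtype=float)

print(E @ A)
# [[ 1.   2.]
#  [-9. -12.]]   <- the second row of A multiplied by -3
```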

Theorem

An elementary matrix is invertible, and its inverse is also an elementary matrix.

Theorem

\[A^{-1} \text{ exists} \iff A \sim I_n\]

The steps of row operations which transform \(A\) into \(I_n\) also transform \(I_n\) into \(A^{-1}\).

Algorithm to Find \(A^{-1}\)

If \(\begin{bmatrix}A & I\end{bmatrix}\) is the augmented matrix and \(A \sim I\), then \(\begin{bmatrix}A & I\end{bmatrix} \sim \begin{bmatrix}I & A^{-1}\end{bmatrix}\). Otherwise, \(A^{-1}\) does not exist.
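
A minimal sketch of this algorithm in Python (the function name `inverse_by_row_reduction` is illustrative; partial pivoting is added for numerical stability, which the text does not require):

```python
import numpy as np

def inverse_by_row_reduction(A):
    """Row-reduce [A | I] to [I | A^-1]; return None if A is not invertible."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])            # build the augmented matrix [A | I]

    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            return None                      # no pivot in this column: A is not row equivalent to I
        M[[col, pivot]] = M[[pivot, col]]    # swap the pivot row into place
        M[col] /= M[col, col]                # scale the pivot entry to 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]   # clear the rest of the column

    return M[:, n:]                          # the right half is now A^-1

print(inverse_by_row_reduction([[2, 5], [-3, -7]]))
# [[-7. -5.]
#  [ 3.  2.]]
```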

References

Read more about notations and symbols.