Iterative Estimates for Eigenvalues

Dated: 18-06-2025

Eigenvalues and Eigenvectors

\[AX = \lambda X\]

\[AX - \lambda X = 0\]

\[AX - \lambda IX = 0\]

\[(A - \lambda I)X = 0\]

To find the eigenvalues, we solve the characteristic equation

\[|A - \lambda I| = 0\]
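As a quick illustration (the matrix below is our own example, not from the original notes):

\[A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} \implies |A - \lambda I| = \begin{vmatrix} 2 - \lambda & 1 \\ 1 & 2 - \lambda \end{vmatrix} = (2 - \lambda)^2 - 1 = 0 \implies \lambda = 1,\ 3\]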

Power Method

Let \(A\) be a diagonalizable matrix with \(n\) linearly independent eigenvectors \(\vec v_1, \ldots, \vec v_n\) and corresponding eigenvalues \(\lambda_1, \ldots, \lambda_n\) such that

\[|\lambda_1| > |\lambda_2| \ge \ldots \ge |\lambda_n|\]

Since \(\{\vec v_1, \ldots, \vec v_n\}\) is a basis for \(\mathbb R^n\), any vector \(\vec x\) can be written as

\[\vec x = c_1 \vec v_1 + c_2 \vec v_2 + \cdots + c_n \vec v_n\]

Multiply both sides by \(A\):

\[Ax = A(c_1 v_1 + c_2 v_2 + \dots + c_n v_n)\]

\[= A(c_1 v_1) + A(c_2 v_2) + \dots + A(c_n v_n)\]

\[= c_1(Av_1) + c_2(Av_2) + \dots + c_n(Av_n)\]

\[= c_1(\lambda_1 v_1) + c_2(\lambda_2 v_2) + \dots + c_n(\lambda_n v_n)\]

Again, if we multiply by \(A\) on both sides, we get

\[A^2 x = c_1 (\lambda_1^2 v_1) + c_2 (\lambda_2^2 v_2) + \dots + c_n (\lambda_n^2 v_n)\]

Continuing this process gives

\[A^k x = c_1 (\lambda_1^k v_1) + c_2 (\lambda_2^k v_2) + \dots + c_n (\lambda_n^k v_n) \quad k \in \mathbb Z^+\]

Dividing both sides by \(\lambda_1^k\) gives

\[\left(\frac{1}{\lambda_1}\right)^k A^k x = c_1 v_1 + c_2 \left(\frac{\lambda_2}{\lambda_1}\right)^k v_2 + \dots + c_n \left(\frac{\lambda_n}{\lambda_1}\right)^k v_n \quad k \in \mathbb Z^+\]

\[k \to \infty \implies (\lambda_1)^{-k} A^k x \to c_1 \vec v_1 \quad \because |\lambda_1| > |\lambda_2| \ge \ldots \ge |\lambda_n|\]
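Since \(|\lambda_j / \lambda_1| < 1\) for every \(j \ge 2\), each term \((\lambda_j/\lambda_1)^k v_j\) above vanishes as \(k\) grows, and the error decays at the rate of the slowest such term:

\[\left\| \lambda_1^{-k} A^k x - c_1 \vec v_1 \right\| = O\!\left(\left|\frac{\lambda_2}{\lambda_1}\right|^k\right)\]

So the iteration converges quickly when \(\lambda_1\) is well separated from \(\lambda_2\). This assumes \(c_1 \neq 0\), i.e. the starting vector has a component along \(\vec v_1\).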

Procedure

Step 1

Choose the initial vector such that the largest element is unity.

Step 2

The normalized vector \(\vec v^{(0)}\) is pre-multiplied by the matrix \(A\).

Step 3

The resultant vector is again normalized.

Step 4

This iteration is continued, with the new normalized vector repeatedly pre-multiplied by the matrix \(A\), until the required accuracy is obtained.

\[u^{(k)} = [A]\vec v^{(k -1)} = q_k \vec v^{(k)}\]

Here \(q_k\) approaches the desired largest eigenvalue and \(\vec v^{(k)}\) the corresponding eigenvector.

Steps for Finding the Eigenvalue and the Eigenvector

  1. Select an initial vector \(\vec x_0\) whose largest entry is \(1\).
  2. For \(k \in \mathbb Z^+ \cup \{0\}\):
    1. Compute \(A x_k\).
    2. Let \(\mu_k\) be an entry of \(A x_k\) whose absolute value is as large as possible.
    3. Compute \(x_{k + 1} = \left(\frac{1}{\mu_k}\right) A x_k\).
  3. For almost all choices of \(x_0\), the sequence \(\{\mu_k\}\) approaches the dominant eigenvalue and the sequence \(\{x_k\}\) approaches a dominant eigenvector, as illustrated by the sketch below.
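These steps translate directly into code. Below is a minimal NumPy sketch; the function name `power_method`, the tolerance, and the iteration cap are illustrative choices of ours, not part of the original notes.

```python
import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=100):
    """Estimate the dominant eigenvalue/eigenvector of A by power iteration."""
    x = np.asarray(x0, dtype=float)
    mu = 0.0
    for _ in range(max_iter):
        y = A @ x                         # step 2a: compute A x_k
        mu_new = y[np.argmax(np.abs(y))]  # step 2b: entry of largest absolute value
        x = y / mu_new                    # step 2c: x_{k+1} = (1/mu_k) A x_k
        if abs(mu_new - mu) < tol:        # stop once the estimate settles
            break
        mu = mu_new
    return mu_new, x

A = np.array([[1.0, 1.0, 0.0],
              [2.0, 4.0, 2.0],
              [0.0, 1.0, 2.0]])
lam, v = power_method(A, np.array([0.0, 0.0, 1.0]))
print(lam, v)  # approaches the dominant eigenvalue and eigenvector of A
```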

Example

Find the first three iterations of the power method applied to the following matrix

\[ \begin{bmatrix} 1 & 1 & 0\\ 2 & 4 & 2\\ 0 & 1 & 2 \end{bmatrix} \text{ use } x_0 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \]

Solution

\(1^{st}\) Iteration
\[ u_1 = A x_0 = \begin{bmatrix} 1 & 1 & 0\\ 2 & 4 & 2\\ 0 & 1 & 2\\ \end{bmatrix} \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} = \begin{bmatrix} 0 + 0 + 0\\ 0 + 0 + 2\\ 0 + 0 + 2\\ \end{bmatrix} = \begin{bmatrix} 0\\ 2\\ 2 \end{bmatrix} \]

Now we normalize the resultant vector to get

\[ u_1 = \begin{bmatrix} 0 \\ 2 \\ 2 \\ \end{bmatrix} = 2 \begin{bmatrix} 0 \\ 1 \\ 1 \\ \end{bmatrix} = q_1 x_1 \]
\(2^{nd}\) Iteration
\[ u_2 = A x_1 = \begin{bmatrix} 1 & 1 & 0\\ 2 & 4 & 2\\ 0 & 1 & 2\\ \end{bmatrix} \begin{bmatrix} 0\\ 1\\ 1 \end{bmatrix} = \begin{bmatrix} 0 + 1 + 0\\ 0 + 4 + 2\\ 0 + 1 + 2\\ \end{bmatrix} = \begin{bmatrix} 1\\ 6\\ 3 \end{bmatrix} \]

Now we normalize the resultant vector to get

\[ u_2 = \begin{bmatrix} 1 \\ 6 \\ 3 \\ \end{bmatrix} = 6 \begin{bmatrix} \frac 1 6 \\ 1 \\ \frac 1 2 \\ \end{bmatrix} = q_2 x_2 \]
\(3^{rd}\) Iteration
\[ u_3 = A x_2 = \begin{bmatrix} 1 & 1 & 0\\ 2 & 4 & 2\\ 0 & 1 & 2\\ \end{bmatrix} \begin{bmatrix} \frac 1 6\\ 1\\ \frac 1 2 \end{bmatrix} = \begin{bmatrix} \frac 1 6 + 1 + 0\\ \frac 1 3 + 4 + 1\\ 0 + 1 + 1\\ \end{bmatrix} = \begin{bmatrix} \frac 7 6\\ \frac {16} 3\\ 2 \end{bmatrix} \]

Now we normalize the resultant vector to get

\[ u_3 = \begin{bmatrix} \frac 7 6 \\ \frac {16} 3 \\ 2 \\ \end{bmatrix} = \frac {16} 3 \begin{bmatrix} \frac 7 {32} \\ 1 \\ \frac 3 8 \\ \end{bmatrix} = q_3 x_3 \]

Hence the estimate of the largest eigenvalue after \(3\) iterations is \(\frac{16}{3}\).

The corresponding eigenvector is

\[ \begin{bmatrix} \frac 7 {32} \\ 1 \\ \frac 3 8 \\ \end{bmatrix} \]
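These three iterations are easy to reproduce numerically. A small NumPy check (the variable names are ours):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [2.0, 4.0, 2.0],
              [0.0, 1.0, 2.0]])
x = np.array([0.0, 0.0, 1.0])
for k in range(1, 4):
    u = A @ x                    # u_k = A x_{k-1}
    q = u[np.argmax(np.abs(u))]  # normalizing factor q_k
    x = u / q                    # normalized iterate x_k
    print(k, q, x)
# the third iteration prints q = 16/3 ≈ 5.333 and x ≈ [7/32, 1, 3/8]
```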

The Inverse Power Method

  1. Select an initial estimate \(\alpha\) sufficiently close to \(\lambda\).
  2. Select an initial vector \(\vec x_0\) whose largest entry is \(1\).
  3. For \(k \in \mathbb Z^+ \cup \{0\}\):
    1. Solve \((A - \alpha I)y_k = x_k\).
    2. Let \(\mu_k\) be an entry in \(y_k\) whose absolute value is as large as possible.
    3. Compute \(v_k = \alpha + \frac{1}{\mu_k}\).
    4. Compute \(x_{k + 1} = \left(\frac{1}{\mu_k}\right) y_k\).
  4. For almost all choices of \(x_0\), the sequence \(\{v_k\}\) approaches the eigenvalue \(\lambda\) of \(A\), and the sequence \(\{x_k\}\) approaches a corresponding eigenvector; a sketch follows this list.
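As with the power method, these steps admit a minimal NumPy sketch. The shift value, tolerance, and names here are illustrative assumptions; \(\alpha\) must not be exactly an eigenvalue, or the linear solve becomes singular.

```python
import numpy as np

def inverse_power_method(A, x0, alpha, tol=1e-10, max_iter=100):
    """Estimate the eigenvalue of A nearest to the shift alpha."""
    x = np.asarray(x0, dtype=float)
    M = A - alpha * np.eye(A.shape[0])  # shifted matrix (A - alpha I)
    nu = alpha
    for _ in range(max_iter):
        y = np.linalg.solve(M, x)       # step 3a: solve (A - alpha I) y_k = x_k
        mu = y[np.argmax(np.abs(y))]    # step 3b: largest-magnitude entry of y_k
        nu_new = alpha + 1.0 / mu       # step 3c: eigenvalue estimate v_k
        x = y / mu                      # step 3d: x_{k+1} = (1/mu_k) y_k
        if abs(nu_new - nu) < tol:      # stop once the estimate settles
            break
        nu = nu_new
    return nu_new, x

A = np.array([[1.0, 1.0, 0.0],
              [2.0, 4.0, 2.0],
              [0.0, 1.0, 2.0]])
lam, v = inverse_power_method(A, np.array([0.0, 0.0, 1.0]), alpha=5.0)
print(lam, v)  # converges to the eigenvalue of A nearest to 5.0
```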
