Power iteration

In mathematics, the power iteration is an eigenvalue algorithm: given a matrix A, the algorithm produces a number λ (the eigenvalue) and a nonzero vector v (the corresponding eigenvector) such that Av = λv. The algorithm is also known as the Von Mises iteration.[1]

The power iteration is a very simple algorithm. It does not compute a matrix decomposition, and hence it can be used when A is a very large sparse matrix. However, it will find only one eigenvalue (the one with the greatest absolute value) and it may converge only slowly.

The method

The power iteration algorithm starts with a vector b0, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the recurrence relation

 b_{k+1} = \frac{Ab_k}{\|Ab_k\|}.

So, at every iteration, the vector b_k is multiplied by the matrix A and normalized.

If we assume A has an eigenvalue that is strictly greater in magnitude than its other eigenvalues and the starting vector b_0 has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence \left( b_{k} \right) converges to an eigenvector associated with the dominant eigenvalue.

Even under the two assumptions above, the sequence \left( b_{k} \right) does not necessarily converge. It can be shown that

b_k = e^{i \phi_k} v_1 + r_k,

where v_1 is an eigenvector associated with the dominant eigenvalue, and  \| r_{k} \| \rightarrow 0. The presence of the phase term e^{i \phi_{k}} implies that \left( b_{k} \right) converges only when e^{i \phi_{k}} = 1 (for example, when A is real and the dominant eigenvalue is positive). Under the two assumptions listed above, the sequence \left( \mu_{k} \right) defined by

\mu_{k} = \frac{b_{k}^{*}Ab_{k}}{b_{k}^{*}b_{k}}

converges to the dominant eigenvalue.
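
For illustration, the Rayleigh quotient \mu_{k} can be computed directly from the current iterate. The following is a minimal sketch in C, assuming a small dense real matrix stored as a two-dimensional array; the function name rayleigh_quotient is an illustrative choice, not part of the method's standard description.

// Rayleigh quotient mu = (b^T A b) / (b^T b) for a real n-by-n matrix A
// and a nonzero vector b; as b approaches the dominant eigenvector,
// this value approaches the dominant eigenvalue.
double rayleigh_quotient(int n, const double A[n][n], const double b[n]) {
    double num = 0.0;   // accumulates b^T (A b)
    double den = 0.0;   // accumulates b^T b
    for (int i = 0; i < n; i++) {
        double Ab_i = 0.0;
        for (int j = 0; j < n; j++)
            Ab_i += A[i][j] * b[j];   // i-th entry of the product A b
        num += b[i] * Ab_i;
        den += b[i] * b[i];
    }
    return num / den;
}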

The iteration can be implemented with the following simple routine (written here in C):

#include <math.h>

// Repeatedly replaces b with A*b, normalized. After enough iterations,
// b approximates the dominant eigenvector of A; the returned value of
// norm approximates the absolute value of the dominant eigenvalue.
double power_iteration(int n, const double A[n][n], double b[n], int num_iterations) {
    double tmp[n];
    double norm = 0.0;
    for (int iter = 0; iter < num_iterations; iter++) {
        // calculate the matrix-by-vector product tmp = A*b
        for (int i = 0; i < n; i++) {
            tmp[i] = 0.0;
            for (int j = 0; j < n; j++)
                tmp[i] += A[i][j] * b[j];   // dot product of the i-th row of A with the vector b
        }

        // calculate the length of the resultant vector
        double norm_sq = 0.0;
        for (int k = 0; k < n; k++)
            norm_sq += tmp[k] * tmp[k];
        norm = sqrt(norm_sq);

        // normalize b to a unit vector for the next iteration
        for (int k = 0; k < n; k++)
            b[k] = tmp[k] / norm;
    }
    return norm;
}

The value of norm converges to the absolute value of the dominant eigenvalue, and the vector b to an associated eigenvector.

Note: the code above assumes that the entries of A and b are real. To handle complex entries, the matrix-by-vector product A[i][j] * b[j] is unchanged, but the squared norm must be accumulated as conj(tmp[k]) * tmp[k] (that is, |tmp[k]|^2) instead of tmp[k] * tmp[k].
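
As a concrete sketch of the complex case (an illustration, not part of the original text), the same loop can be written with C99 complex arithmetic; the function name power_iteration_complex is an illustrative choice, and only the norm computation changes:

#include <complex.h>
#include <math.h>

// Complex variant of the iteration: b <- A*b / ||A*b||.
// The product A*b is formed without conjugation; only the squared
// norm uses conj(tmp[k]) * tmp[k] = |tmp[k]|^2.
double power_iteration_complex(int n, const double complex A[n][n],
                               double complex b[n], int num_iterations) {
    double complex tmp[n];
    double norm = 0.0;
    for (int iter = 0; iter < num_iterations; iter++) {
        for (int i = 0; i < n; i++) {
            tmp[i] = 0.0;
            for (int j = 0; j < n; j++)
                tmp[i] += A[i][j] * b[j];            // no conjugation in the product
        }
        double norm_sq = 0.0;
        for (int k = 0; k < n; k++)
            norm_sq += creal(conj(tmp[k]) * tmp[k]); // |tmp[k]|^2
        norm = sqrt(norm_sq);
        for (int k = 0; k < n; k++)
            b[k] = tmp[k] / norm;                     // complex / real division
    }
    return norm;
}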

This algorithm is the one used to calculate such things as the Google PageRank.

The method can also be used to calculate the spectral radius (the largest eigenvalue in absolute value) of a matrix by computing the Rayleigh quotient

 \frac{b_k^\top A b_k}{b_k^\top b_k} = \|Ab_k\| \, \frac{b_{k+1}^\top b_k}{b_k^\top b_k},

which converges to the dominant eigenvalue \lambda_1; the spectral radius is then |\lambda_1|.
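
A short usage sketch tying these pieces together is given below. The 2-by-2 matrix, starting vector, and iteration count are arbitrary illustrative choices, and the sketch assumes the power_iteration and rayleigh_quotient routines above are defined in the same file.

#include <stdio.h>

int main(void) {
    // Symmetric example matrix with eigenvalues 3 and 1 and dominant
    // eigenvector (1, 1) / sqrt(2).
    const double A[2][2] = { { 2.0, 1.0 },
                             { 1.0, 2.0 } };
    double b[2] = { 1.0, 0.0 };   // arbitrary starting vector, not orthogonal to (1, 1)

    double norm = power_iteration(2, A, b, 50);
    double mu   = rayleigh_quotient(2, A, b);

    printf("estimated |lambda_1|  = %f\n", norm);              // approximately 3
    printf("Rayleigh quotient     = %f\n", mu);                // approximately 3
    printf("eigenvector estimate  = (%f, %f)\n", b[0], b[1]);  // approximately (0.707, 0.707)
    return 0;
}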

Analysis

Let A be decomposed into its Jordan canonical form: A=VJV^{-1}, where the first column of V is an eigenvector of A corresponding to the dominant eigenvalue \lambda_{1}. Since the dominant eigenvalue of A is unique, the first Jordan block of J is the 1 \times 1 matrix \begin{bmatrix} \lambda_{1} \end{bmatrix} , where \lambda_{1} is the largest eigenvalue of A in magnitude. The starting vector b_{0} can be written as a linear combination of the columns of V: b_{0} = c_{1}v_{1} + c_{2}v_{2} + \cdots + c_{n}v_{n}. By assumption, b_{0} has a nonzero component in the direction of the dominant eigenvector v_{1}, so c_{1} \ne 0.

The computationally useful recurrence relation for b_{k+1} can be rewritten as b_{k+1}=\frac{Ab_{k}}{\|Ab_{k}\|}=\frac{A^{k+1}b_{0}}{\|A^{k+1}b_{0}\|}, where the expression \frac{A^{k+1}b_{0}}{\|A^{k+1}b_{0}\|} is more amenable to the following analysis:
\displaystyle
\begin{array}{lcl}
b_{k} &=& \frac{A^{k}b_{0}}{\| A^{k} b_{0} \|} \\
      &=& \frac{\left( VJV^{-1} \right)^{k} b_{0}}{\|\left( VJV^{-1} \right)^{k}b_{0}\|} \\
      &=& \frac{ VJ^{k}V^{-1} b_{0}}{\| V J^{k} V^{-1} b_{0}\|} \\
      &=& \frac{ VJ^{k}V^{-1} \left( c_{1}v_{1} + c_{2}v_{2} + \cdots + c_{n}v_{n} \right)}
               {\| V J^{k} V^{-1} \left( c_{1}v_{1} + c_{2}v_{2} + \cdots + c_{n}v_{n} \right)\|} \\
      &=& \frac{ VJ^{k}\left( c_{1}e_{1} + c_{2}e_{2} + \cdots + c_{n}e_{n} \right)}
                {\| V J^{k} \left( c_{1}e_{1} + c_{2}e_{2} + \cdots + c_{n}e_{n} \right) \|} \\
      &=& \left( \frac{\lambda_{1}}{|\lambda_{1}|} \right)^{k} \frac{c_{1}}{|c_{1}|}
          \frac{ v_{1} + \frac{1}{c_{1}} V \left( \frac{1}{\lambda_1} J \right)^{k} 
                      \left( c_{2}e_{2} +  \cdots + c_{n}e_{n} \right)}
               {\| v_{1} + \frac{1}{c_{1}} V \left( \frac{1}{\lambda_1} J \right)^{k} 
                      \left( c_{2}e_{2} +  \cdots + c_{n}e_{n} \right) \| }
           
\end{array}
The expression above simplifies as k \rightarrow \infty

\left( \frac{1}{\lambda_{1}} J \right)^{k} = 
\begin{bmatrix}
[1] & & & & \\
& \left( \frac{1}{\lambda_{1}} J_{2} \right)^{k}& & & \\
& & \ddots & \\
& & & \left( \frac{1}{\lambda_{1}} J_{m} \right)^{k} \\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
1 & & & & \\
& 0 & & & \\
& & \ddots & \\
& & & 0 \\
\end{bmatrix}
as  k \rightarrow \infty .
The limit follows from the fact that, for i > 1, the eigenvalue of  \frac{1}{\lambda_{1}} J_{i} is less than 1 in magnitude, so 
\left( \frac{1}{\lambda_{1}} J_{i} \right)^{k} \rightarrow 0
as  k \rightarrow \infty .
It follows that:

\frac{1}{c_{1}} V \left( \frac{1}{\lambda_1} J \right)^{k} 
\left( c_{2}e_{2} +  \cdots + c_{n}e_{n} \right)
\rightarrow 0
as 
k \rightarrow \infty.
Using this fact, b_{k} can be written in a form that emphasizes its relationship with v_{1} when k is large:

\begin{array}{lcl}
b_{k} &=& \left( \frac{\lambda_{1}}{|\lambda_{1}|} \right)^{k} \frac{c_{1}}{|c_{1}|}
          \frac{ v_{1} + \frac{1}{c_{1}} V \left( \frac{1}{\lambda_1} J \right)^{k} 
                      \left( c_{2}e_{2} +  \cdots + c_{n}e_{n} \right)}
               {\| v_{1} + \frac{1}{c_{1}} V \left( \frac{1}{\lambda_1} J \right)^{k} 
                      \left( c_{2}e_{2} +  \cdots + c_{n}e_{n} \right) \| } \\
      &=& e^{i \phi_{k}} \frac{c_{1}}{|c_{1}|} v_{1} + r_{k}
\end{array}
where  e^{i \phi_{k}} = \left( \lambda_{1} / |\lambda_{1}| \right)^{k} and  \| r_{k} \| \rightarrow 0 as k \rightarrow \infty.
The sequence  \left( b_{k} \right) is bounded, so it contains a convergent subsequence. Note that the eigenvector corresponding to the dominant eigenvalue is only unique up to a scalar, so although the sequence \left(b_{k}\right) may not converge, b_{k} is nearly an eigenvector of A for large k.
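For instance, if A is real and its dominant eigenvalue is negative, say \lambda_{1} = -2, then e^{i \phi_{k}} = (-1)^{k}, so b_{k} alternates between approximately +v_{1} and -v_{1}: the sequence itself has no limit, yet each b_{k} is nearly a unit eigenvector, and \mu_{k} still converges to \lambda_{1}.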

Alternatively, if A is diagonalizable, then the following proof yields the same result.
Let λ1, λ2, …, λm be the m eigenvalues (counted with multiplicity) of A and let v1, v2, …, vm be the corresponding eigenvectors. Suppose that \lambda_1 is the dominant eigenvalue, so that |\lambda_1| > |\lambda_j| for j>1.

The initial vector b_0 can be written:

b_0 = c_{1}v_{1} + c_{2}v_{2} + \cdots + c_{m}v_{m}.

If b_0 is chosen randomly (with uniform probability), then c1 ≠ 0 with probability 1. Now,

\begin{array}{lcl}A^{k}b_0 & = & c_{1}A^{k}v_{1} + c_{2}A^{k}v_{2} + \cdots + c_{m}A^{k}v_{m} \\
& = & c_{1}\lambda_{1}^{k}v_{1} + c_{2}\lambda_{2}^{k}v_{2} + \cdots + c_{m}\lambda_{m}^{k}v_{m} \\
& = & c_{1}\lambda_{1}^{k} \left( v_{1} + \frac{c_{2}}{c_{1}}\left(\frac{\lambda_{2}}{\lambda_{1}}\right)^{k}v_{2} + \cdots + \frac{c_{m}}{c_{1}}\left(\frac{\lambda_{m}}{\lambda_{1}}\right)^{k}v_{m}\right). \end{array}

The expression within parentheses converges to v_1 because |\lambda_j/\lambda_1| < 1 for j>1. On the other hand, we have

 b_k = \frac{A^kb_0}{\|A^kb_0\|}.

Therefore, b_k converges to (a multiple of) the eigenvector v_1. The convergence is geometric, with ratio

 \left| \frac{\lambda_2}{\lambda_1} \right|,

where \lambda_2 denotes the eigenvalue of A with the second-largest absolute value. Thus, the method converges slowly if there is an eigenvalue close in magnitude to the dominant eigenvalue.
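
As a concrete illustration (not from the original text), take the diagonal matrix

A = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \qquad b_{0} = \tfrac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix}.

Then A^{k} b_{0} is proportional to \begin{bmatrix} 1 \\ 2^{-k} \end{bmatrix}, so the component of b_{k} along the non-dominant eigenvector shrinks by the factor |\lambda_{2}/\lambda_{1}| = 1/2 at every step: each iteration gains roughly one correct binary digit of the eigenvector.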

Applications

Although the power iteration method approximates only one eigenvalue of a matrix, it remains useful for certain computational problems. For instance, Google uses it to calculate the PageRank of documents in their search engine,[2] and Twitter uses it to show users recommendations of who to follow.[3] For matrices that are well-conditioned and as sparse as the Web matrix, the power iteration method can be more efficient than other methods of finding the dominant eigenvector.

Some of the more advanced eigenvalue algorithms can be understood as variations of the power iteration. For instance, the inverse iteration method applies power iteration to the matrix A^{-1}. Other algorithms look at the whole subspace generated by the vectors b_k. This subspace is known as the Krylov subspace. It can be computed by Arnoldi iteration or Lanczos iteration.
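
To make the connection with inverse iteration concrete, here is a minimal sketch (an illustration, not from the original text): instead of multiplying by A, each step solves the linear system A x = b and normalizes the solution. The naive Gaussian-elimination solver is for illustration only; a practical implementation would factor A once and reuse the factorization, and the function names solve and inverse_iteration are illustrative choices.

#include <math.h>

// Solve A x = rhs by Gaussian elimination with partial pivoting.
// A and rhs are overwritten; the solution is returned in x.
static void solve(int n, double A[n][n], double rhs[n], double x[n]) {
    for (int col = 0; col < n; col++) {
        // pivot: bring the largest remaining entry in this column to the diagonal
        int piv = col;
        for (int r = col + 1; r < n; r++)
            if (fabs(A[r][col]) > fabs(A[piv][col])) piv = r;
        for (int c = 0; c < n; c++) { double t = A[col][c]; A[col][c] = A[piv][c]; A[piv][c] = t; }
        double t = rhs[col]; rhs[col] = rhs[piv]; rhs[piv] = t;
        // eliminate the entries below the diagonal
        for (int r = col + 1; r < n; r++) {
            double f = A[r][col] / A[col][col];
            for (int c = col; c < n; c++) A[r][c] -= f * A[col][c];
            rhs[r] -= f * rhs[col];
        }
    }
    // back substitution
    for (int r = n - 1; r >= 0; r--) {
        x[r] = rhs[r];
        for (int c = r + 1; c < n; c++) x[r] -= A[r][c] * x[c];
        x[r] /= A[r][r];
    }
}

// Inverse iteration: power iteration applied to A^{-1}, i.e. each step
// solves A x = b and normalizes. Assuming A is invertible and has a unique
// eigenvalue of smallest magnitude, b converges to an associated eigenvector.
void inverse_iteration(int n, const double A[n][n], double b[n], int num_iterations) {
    for (int iter = 0; iter < num_iterations; iter++) {
        double M[n][n], rhs[n], x[n];
        for (int i = 0; i < n; i++) {           // work on copies, since solve()
            rhs[i] = b[i];                      // destroys its inputs
            for (int j = 0; j < n; j++) M[i][j] = A[i][j];
        }
        solve(n, M, rhs, x);
        double norm = 0.0;
        for (int k = 0; k < n; k++) norm += x[k] * x[k];
        norm = sqrt(norm);
        for (int k = 0; k < n; k++) b[k] = x[k] / norm;
    }
}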

References

  1. Richard von Mises and H. Pollaczek-Geiringer, Praktische Verfahren der Gleichungsauflösung, ZAMM - Zeitschrift für Angewandte Mathematik und Mechanik 9, 152-164 (1929).
  2.
  3. Pankaj Gupta, Ashish Goel, Jimmy Lin, Aneesh Sharma, Dong Wang, and Reza Bosagh Zadeh, WTF: The Who to Follow Service at Twitter, Proceedings of the 22nd International Conference on World Wide Web (2013).
