Computing Eigenvalues & Eigenvectors
Dominance & Real Eigenvalues
It is often the case in applications that we don’t need all the eigenvalues for a given matrix.
We can check if a real matrix has a positive, real, largest eigenvalue using the Perron-Frobenius theorem.
- These conditions also ensure the existence of a positive real eigenvector for the associated eigenvalue.
This doesn't, however, guarantee that the largest eigenvalue is dominant or unique.
The Perron-Frobenius Conditions
If $A$ has only non-negative real elements and is irreducible:
- There is a unique positive real eigenvalue, $\lambda_{pf}$, associated with $A$.
Uniqueness doesn't imply dominance.
- This eigenvalue has an eigenvector, $\vec x_{pf}$, that contains only positive real components.
- Every other eigenvalue (including complex values) is such that $\vert\lambda\vert\leq\lambda_{pf}$.
The adjacency matrix of a strongly-connected directed graph is irreducible.
A directed graph is strongly-connected if there is a directed path from any node to any other node in the graph.
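These conditions can be checked numerically. The sketch below (assuming NumPy; the small strongly-connected digraph is an arbitrary example) confirms each claim for its adjacency matrix.

```python
import numpy as np

# Adjacency matrix of the digraph with edges 0->1, 0->2, 1->2, 2->0:
# every node can reach every other node, so the graph is strongly
# connected and the matrix is irreducible.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)

eigenvalues, eigenvectors = np.linalg.eig(A)

# lambda_pf is the eigenvalue of largest modulus; Perron-Frobenius
# guarantees it is real and positive here.
k = np.argmax(np.abs(eigenvalues))
lam_pf = eigenvalues[k].real
x_pf = eigenvectors[:, k].real

print(lam_pf)                     # the unique positive real eigenvalue
print(x_pf / x_pf[0])             # rescaled eigenvector: all components positive
print(np.all(np.abs(eigenvalues) <= lam_pf + 1e-12))  # every |lambda| <= lambda_pf
```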
The Power Method
If $A$ has a dominant eigenvalue, the power method can be used to find an eigenvector associated with it.
The method uses the fact that if $\vec x$ is an eigenvector of $A$ with eigenvalue $\lambda$, then for any non-zero constant $\alpha \in \Bbb R$, $\alpha\vec x$ is also an eigenvector with eigenvalue $\lambda$.
1. Choose an initial guess.
   - Commonly used: $\vec x_0=\langle 1,1,1,\ldots,1,1\rangle$.
2. Set $i=0$.
3. Compute $\vec x_{i+1}=A\cdot \vec x_i$.
4. Set $i=i+1$.
5. Continue from step 3 until $i\geq \max$.
   - $\max$ is user-defined.
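A minimal sketch of these steps (assuming Python with NumPy; the function name is illustrative):

```python
import numpy as np

def power_method(A, max_iter=50):
    x = np.ones(A.shape[0])    # step 1: x_0 = <1, 1, ..., 1>
    for _ in range(max_iter):  # steps 2, 4, 5: the loop counter plays the role of i
        x = A @ x              # step 3: x_{i+1} = A . x_i
    return x                   # approximates an eigenvector of the dominant eigenvalue
```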
Problems
Although the convergence to a dominant eigenvector is quick, the numbers involved in computing successive $\vec x_i$ grow rapidly, so increasing computing time is needed.
This problem can be overcome using scaling. This adds the following step after step 4:
\[\vec x_i := \frac{\vec x_i}{\max\{\vert x_k\vert:x_k\in\vec x_i\}}\]
That is, divide each component of the current vector by the largest absolute value of any component in the vector itself.
This doesn't affect the result (the vector is replaced with a scalar multiple of itself) and ensures the numbers stay small.
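In code, the scaling is one extra line inside the loop (again a sketch with NumPy; dividing by the largest absolute component matches the formula above):

```python
import numpy as np

def power_method_scaled(A, max_iter=50):
    x = np.ones(A.shape[0])
    for _ in range(max_iter):
        x = A @ x
        x = x / np.max(np.abs(x))  # scaling: divide by the largest |component|
    return x
```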
Rayleigh Quotient
This method finds eigenvalues.
Suppose we have found (by the power method) an approximate eigenvector $\vec x$.
The Rayleigh quotient is:
\[\frac{(A\cdot\vec x)\cdot\vec x}{\vec x\cdot\vec x}\]
Writing the second factor as the transpose $\vec x^T$ allows the numerator to be calculated as a matrix product, by flipping the vector on its side.
This uses the dot product and reports the (approximate) eigenvalue for $\vec x$. The dot product is calculated as follows:
\[\vec x\cdot \vec y=\sum^n_{k=1}x_k\cdot y_k\]
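Putting the pieces together (a sketch; `power_method_scaled` is the hypothetical helper from above):

```python
import numpy as np

def rayleigh_quotient(A, x):
    return ((A @ x) @ x) / (x @ x)  # ((A . x) . x) / (x . x)

# Example: estimate the dominant eigenvalue of a small matrix.
A = np.array([[3., 1.],
              [0., 2.]])            # eigenvalues are 3 and 2
x = power_method_scaled(A)          # approximate eigenvector for the dominant eigenvalue
print(rayleigh_quotient(A, x))      # approximately 3.0
```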
Smallest Eigenvalue
We can find the smallest (in absolute value) eigenvalue of $A$ if it is non-singular.
The smallest eigenvalue of $A$ is:
\[\frac 1 \lambda\]
where $\lambda$ is the largest eigenvalue of $A^{-1}$. (The eigenvalues of $A^{-1}$ are the reciprocals of the eigenvalues of $A$, so the dominant eigenvalue of $A^{-1}$ corresponds to the smallest of $A$.)
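A sketch combining the earlier pieces (`power_method_scaled` and `rayleigh_quotient` are the hypothetical helpers from above; NumPy assumed):

```python
import numpy as np

def smallest_eigenvalue(A, max_iter=50):
    A_inv = np.linalg.inv(A)               # requires A to be non-singular
    x = power_method_scaled(A_inv, max_iter)
    lam = rayleigh_quotient(A_inv, x)      # largest eigenvalue of A^{-1}
    return 1.0 / lam                       # reciprocal: smallest-magnitude eigenvalue of A
```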
Spectral Graph Theory
The relationship between the spectra of graphs (the eigenvalues of their adjacency matrices) and properties of the graphs themselves has long been studied in algorithmics and computer science.
Usage Examples
- The number of colours needed to properly colour the nodes of a graph so that no two linked nodes have the same colour (see the sketch after this list) is at most:
\[\lambda_1(G)+1\]
Round down for the upper bound. Here $\lambda_1(G)\geq\lambda_2(G)\geq\cdots\geq\lambda_n(G)$ denote the eigenvalues of the adjacency matrix of $G$.
and at least:
\[1-\frac{\lambda_1(G)}{\lambda_n(G)}\]
Round up for the lower bound.
- The number of links in any graph is exactly:
\[\frac1 2\sum^n_{k=1}\left(\lambda_k(G)\right)^2\]
Round to the nearest whole number.
This relies on the fact that the sum of the $k^{th}$ powers of the eigenvalues of an adjacency matrix reports the number of closed walks of length $k$ in the graph; for $k=2$ each link is counted twice (once from each endpoint), hence the factor of $\frac12$.
For $k\geq4$ the count includes degenerate cases, since a walk is able to go back and forth along already-visited edges.
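Both examples can be checked on a concrete graph. The sketch below (NumPy assumed; the 4-cycle is an arbitrary choice) computes the colouring bounds and the link count.

```python
import numpy as np

# Adjacency matrix of the 4-cycle 0-1-2-3-0 (symmetric, so eigvalsh applies).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

lam = np.sort(np.linalg.eigvalsh(A))[::-1]  # eigenvalues, largest first

upper = np.floor(lam[0] + 1)           # colours needed is at most this...
lower = np.ceil(1 - lam[0] / lam[-1])  # ...and at least this
print(lower, upper)                    # 2.0 3.0 -- the 4-cycle is 2-colourable

links = round(np.sum(lam ** 2) / 2)    # number of links
print(links)                           # 4
```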