Eigenvalues & Eigenvectors

7 min

Visualize eigenvalues and eigenvectors - key concepts for PCA, spectral methods, and matrix analysis.


When a matrix transforms a vector, most vectors change direction. But eigenvectors are special - they only get scaled, not rotated. This property makes them fundamental to many ML algorithms.

[Interactive visualization: adjust the entries of matrix A and a test vector to see the two eigenvectors and their eigenvalues λ₁ and λ₂. The eigenvectors mark the directions that remain unchanged (only scaled) under the transformation; the eigenvalues indicate how much scaling occurs in each direction.]

The Eigen Equation

For a square matrix A, if:

Av = \lambda v

Then:

  • v is an eigenvector
  • λ (lambda) is its eigenvalue
  • The matrix scales v by factor λ
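
A diagonal matrix makes this concrete: it scales each axis independently, so the standard basis vectors are eigenvectors. A minimal check:

import numpy as np

A = np.array([[2, 0],
              [0, 3]])   # diagonal: scales x by 2, y by 3
v = np.array([1, 0])     # unit vector along the x-axis

print(A @ v)  # [2 0] = 2 * v, so v is an eigenvector with eigenvalue 2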

Geometric Intuition

Think of a matrix as a transformation:

  • Most vectors get rotated AND scaled
  • Eigenvectors only get scaled (stretched/shrunk)
  • Eigenvalues tell you how much scaling
import numpy as np

A = np.array([[3, 1],
              [1, 3]])

# Find eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)
print(f"Eigenvalues: {eigenvalues}")      # [4. 2.]
print(f"Eigenvectors:\n{eigenvectors}")

# Verify: Av = λv
v1 = eigenvectors[:, 0]
lambda1 = eigenvalues[0]
print(np.allclose(A @ v1, lambda1 * v1))  # True

Applications in ML

1. Principal Component Analysis (PCA)

PCA finds directions of maximum variance:

import numpy as np
from sklearn.decomposition import PCA

X = np.random.randn(100, 5)  # example data

# Covariance matrix eigenvectors = principal components
pca = PCA(n_components=2)
X_transformed = pca.fit_transform(X)

print(pca.components_)          # Principal directions (eigenvectors)
print(pca.explained_variance_)  # Eigenvalues
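
To make the "components are eigenvectors" claim concrete, here is a minimal check on synthetic data (the random X is purely illustrative), comparing PCA's output against a direct eigendecomposition of the covariance matrix:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # synthetic data for illustration

pca = PCA(n_components=3).fit(X)

# Eigenvalues of the covariance matrix match explained_variance_
eigenvalues = np.linalg.eigvalsh(np.cov(X.T))[::-1]  # sort descending
print(np.allclose(eigenvalues, pca.explained_variance_))  # True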

2. Spectral Clustering

Uses the eigenvectors of the graph Laplacian:

import numpy as np
from sklearn.cluster import KMeans

# A: symmetric adjacency matrix, k: number of clusters
# Graph Laplacian: L = D - A  (degree matrix minus adjacency matrix)
D = np.diag(A.sum(axis=1))
L = D - A

# Eigenvectors reveal cluster structure (eigh sorts eigenvalues ascending)
eigenvalues, eigenvectors = np.linalg.eigh(L)

# Use the smallest k eigenvectors as features for k clusters
labels = KMeans(n_clusters=k).fit_predict(eigenvectors[:, :k])

3. PageRank Algorithm

Page importance scores form the dominant eigenvector of the link matrix:

import numpy as np

# M: column-stochastic transition matrix of the link graph
# The dominant eigenvector (eigenvalue 1) gives the PageRank scores
eigenvalues, eigenvectors = np.linalg.eig(M)

# np.linalg.eig does not sort, so locate the dominant eigenvalue first
idx = np.argmax(eigenvalues.real)
pagerank = np.abs(eigenvectors[:, idx])
pagerank = pagerank / pagerank.sum()

Properties

Symmetric Matrices

  • All eigenvalues are real
  • Eigenvectors are orthogonal
  • Can be diagonalized: A = Q\Lambda Q^T
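
These properties are easy to verify numerically with np.linalg.eigh, the routine intended for symmetric matrices; a quick sketch using the matrix from earlier:

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])  # symmetric

eigenvalues, Q = np.linalg.eigh(A)

print(eigenvalues)                                      # real: [2. 4.]
print(np.allclose(Q.T @ Q, np.eye(2)))                  # orthogonal eigenvectors: True
print(np.allclose(Q @ np.diag(eigenvalues) @ Q.T, A))   # A = QΛQᵀ: True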

Positive Definite Matrices

  • All eigenvalues are positive
  • Used in optimization (convex functions)
  • Covariance matrices are positive semi-definite
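
As a quick illustration (with purely synthetic data), the covariance matrix of a dataset has non-negative eigenvalues:

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))

cov = np.cov(X.T)                     # covariance matrices are symmetric PSD
eigenvalues = np.linalg.eigvalsh(cov)
print((eigenvalues >= -1e-10).all())  # True (non-negative up to round-off)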

Computing Eigenvalues

Characteristic Polynomial

\det(A - \lambda I) = 0

For 2×2 matrix:

# For a 2×2 matrix [[a, b],
#                   [c, d]]:
#
#   λ² - (a+d)λ + (ad - bc) = 0
#
# Solve with the quadratic formula, e.g. for A = [[3, 1], [1, 3]] from above:
a, b, c, d = 3, 1, 1, 3
disc = ((a + d)**2 - 4 * (a*d - b*c)) ** 0.5
print((a + d + disc) / 2, (a + d - disc) / 2)  # 4.0 2.0

Power Iteration

Find dominant eigenvector:

import numpy as np

def power_iteration(A, num_iterations=100):
    n = A.shape[0]
    v = np.random.rand(n)            # random starting vector
    for _ in range(num_iterations):
        v = A @ v                    # apply the matrix
        v = v / np.linalg.norm(v)    # renormalize to unit length
    eigenvalue = v.T @ A @ v         # Rayleigh quotient
    return eigenvalue, v
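
A quick sanity check, assuming power_iteration as defined above: for the symmetric matrix from earlier, the dominant eigenpair is λ = 4 with direction [1, 1]/√2:

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

eigenvalue, v = power_iteration(A)
print(eigenvalue)  # ≈ 4.0
print(v)           # ≈ [0.707, 0.707] (up to sign)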

Eigendecomposition

For diagonalizable matrices:

A = Q\Lambda Q^{-1}

Where:

  • Q: Matrix of eigenvectors
  • Λ: Diagonal matrix of eigenvalues
# Decompose
eigenvalues, Q = np.linalg.eig(A)
Lambda = np.diag(eigenvalues)

# Reconstruct
A_reconstructed = Q @ Lambda @ np.linalg.inv(Q)

Matrix Powers & Exponentials

Eigendecomposition simplifies computations:

# Matrix power: A^n = Q @ Λ^n @ Q^(-1)
def matrix_power(A, n):
    eigenvalues, Q = np.linalg.eig(A)
    Lambda_n = np.diag(eigenvalues ** n)
    return Q @ Lambda_n @ np.linalg.inv(Q)

# Matrix exponential: e^A = Q @ e^Λ @ Q^(-1)
def matrix_exp(A):
    eigenvalues, Q = np.linalg.eig(A)
    exp_Lambda = np.diag(np.exp(eigenvalues))
    return Q @ exp_Lambda @ np.linalg.inv(Q)
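
A short sanity check, assuming the matrix_power helper above: the eigendecomposition route agrees with repeated multiplication and with NumPy's built-in np.linalg.matrix_power:

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

print(np.allclose(matrix_power(A, 3), A @ A @ A))                      # True
print(np.allclose(matrix_power(A, 3), np.linalg.matrix_power(A, 3)))   # True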

Connection to SVD

For symmetric matrices:

  • Eigendecomposition and SVD coincide (exactly so for positive semi-definite matrices)
  • Eigenvectors = singular vectors (up to sign)
  • |Eigenvalues| = singular values
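
A minimal comparison on the symmetric positive definite matrix from earlier, where the two factorizations coincide up to ordering and sign:

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])  # symmetric positive definite

eigenvalues, Q = np.linalg.eigh(A)  # ascending: [2. 4.]
U, S, Vt = np.linalg.svd(A)         # descending: [4. 2.]

print(np.allclose(S, eigenvalues[::-1]))           # True: same values
print(np.allclose(np.abs(U), np.abs(Q[:, ::-1])))  # True: same directions up to sign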

Common Pitfalls

  1. Not all matrices have an eigendecomposition (you need n linearly independent eigenvectors)
  2. Non-symmetric real matrices can have complex eigenvalues
  3. Numerical instability for nearly singular or ill-conditioned matrices
  4. A repeated eigenvalue has a whole subspace of eigenvectors, so individual eigenvectors are not unique
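
Pitfall 1 is easy to demonstrate: a shear matrix has a repeated eigenvalue but only one independent eigenvector, so it cannot be diagonalized. A minimal illustration:

import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # shear matrix: defective (not diagonalizable)

eigenvalues, Q = np.linalg.eig(A)
print(eigenvalues)               # [1. 1.] (repeated eigenvalue)
print(np.linalg.matrix_rank(Q))  # 1 (the eigenvectors are linearly dependent)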
