Hello, it's me again! Today we continue with **Linear Algebra**, getting into **Eigenvalues and eigenvectors**. You may want to check out some of my older posts, like the one about Linear Functions, to understand them better. So, without further ado, let's get started!

## Introduction:

Eigenvalues and eigenvectors are useful when we have a square function matrix or a homomorphism. A basis of a vector space that is built up of eigenvectors gives us a function matrix that is diagonal. That way we can simplify the problem of solving differential equations using linear algebra. We will get into this solving method later on, after getting into mathematical analysis and differential equations that are yet to come.

## Eigenvalues and eigenvectors:

Suppose V is a vector space of finite dimension n and f: V->V is a homomorphism that is represented by:

**f(v) = λv**, for every v in the vector space V and for some real λ.

When A is the nxn function matrix that represents f for a basis of V, then:

**f(v) = Av = λv**

**Example:**

Suppose the function f: R^2 -> R^2 with f(x, y) = 3(x, y) = (3x, 3y) for every (x, y) in R^2.

Then the function matrix of f when using the standard basis of R^2 is the diagonal matrix:

| 3 0 |
| 0 3 |

A number λ in R is called an **eigenvalue** of the nxn matrix A when there is at least one non-zero vector v in R^n so that:

**Av = λv**

This non-zero vector v is called an **eigenvector** of the matrix A that corresponds to the eigenvalue λ.

**Example:**

For the matrix A:

| 1 -1 |
| 2  4 |

we can find out that:

A*[2 -2] = 2*[2 -2]

That way 2 is an eigenvalue of matrix A and the vector [2 -2] is an eigenvector that corresponds to the eigenvalue 2. But this eigenvector is not the only one corresponding to 2, and 2 is also not the only eigenvalue. In fact, for each eigenvalue we have infinitely many eigenvectors.
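To make this check concrete, here is a small Python sketch (plain lists, no libraries) that multiplies A by the vector [2 -2] and confirms the result is 2 times the vector:

```python
# Verify that v = [2, -2] is an eigenvector of A = [[1, -1], [2, 4]]
# for the eigenvalue 2, by checking that A*v equals 2*v component-wise.

def matvec(A, v):
    """Multiply an n x n matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, -1],
     [2, 4]]
v = [2, -2]
lam = 2

Av = matvec(A, v)
print(Av)                           # [4, -4]
print(Av == [lam * x for x in v])   # True
```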

### When is a number an eigenvalue and when is a vector an eigenvector?

A number **λ** in R **is an eigenvalue** of the nxn matrix A exactly when **A - λ*In** (In is the nxn identity matrix) **is not invertible**. A non-zero vector **v** in R^n **is an eigenvector** of A for λ exactly **when it is an element of the nullspace of** the matrix **A - λ*In**.

To check the invertibility we test whether **det(A - λ*In) = 0**, which means that the homogeneous system (A - λ*In)X = 0 has infinitely many solutions.
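As a quick sanity check of this criterion, the following Python sketch evaluates det(A - λ*In) for the 2x2 example matrix at a few values of λ:

```python
# lam is an eigenvalue of A exactly when det(A - lam*I) == 0 (2x2 case).

def det2(M):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def shifted(A, lam):
    """Return A - lam*I for a 2x2 matrix A."""
    return [[A[0][0] - lam, A[0][1]],
            [A[1][0], A[1][1] - lam]]

A = [[1, -1],
     [2, 4]]

print(det2(shifted(A, 2)))  # 0 -> 2 is an eigenvalue
print(det2(shifted(A, 3)))  # 0 -> 3 is an eigenvalue
print(det2(shifted(A, 5)))  # 6 -> 5 is not an eigenvalue
```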

### Eigenspace:

When **λ is an eigenvalue** of an nxn matrix A, the **nullspace N(A - λ*In)** is called the **eigenspace** of λ.

We symbolize this space as **V(λ)** and we call the **dimension** of this subspace the **geometric multiplicity** of the eigenvalue. This multiplicity is at most the **algebraic multiplicity** of λ as a root of the characteristic polynomial, which we will talk about in a sec.

So, when **v1, v2 are eigenvectors** that correspond to the same eigenvalue λ of an nxn matrix A, then **any non-zero linear combination** of those 2 vectors, **k1*v1 + k2*v2** (k1, k2 in R), is **also an eigenvector that corresponds to the eigenvalue λ**.
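We can verify this closure property numerically. The combination below is just an arbitrary choice of two multiples of the eigenvector [1 -1] of the example matrix and two weights:

```python
# Any non-zero combination k1*v1 + k2*v2 of eigenvectors for the same
# eigenvalue lam is again an eigenvector for lam.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, -1], [2, 4]]
lam = 2
v1 = [1, -1]        # eigenvector for lam = 2
v2 = [3, -3]        # another eigenvector for the same eigenvalue
k1, k2 = 2.0, 5.0   # arbitrary weights

w = [k1 * a + k2 * b for a, b in zip(v1, v2)]    # k1*v1 + k2*v2
print(w)                                          # [17.0, -17.0]
print(matvec(A, w) == [lam * x for x in w])       # True
```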

### Characteristic polynomial:

Knowing that det(A - λ*In) = 0 means that λ is an eigenvalue, we can find all the eigenvalues by treating this determinant as a polynomial in λ. This **polynomial P(λ)** that we **get from the determinant det(A - λ*In)** is called the **characteristic polynomial** of the nxn matrix A.

So, **P(λ) = det(A - λ*In)**

When A is the function matrix of f: V->V for some basis of V, where V is a finite-dimensional vector space with dimension n and f is a homomorphism, then this polynomial is called the** characteristic polynomial of the homomorphism f**.

### Example:

The characteristic polynomial of the matrix A:

| 1 -1 |
| 2  4 |

is P(λ) = det(A - λ*I2) =

| 1-λ  -1 |
|  2  4-λ |

= (1-λ)*(4-λ) - (-1)*2 = λ^2 - 5λ + 6.

The solutions λ = 2 and λ = 3 are the eigenvalues of this matrix.

Afterwards by solving the 2 homogeneous systems (A - λI)v=0 we can find the eigenvectors.

(A - 2I)*v = 0 => which gives us y = -x with x arbitrary, and so v2 = x*[1 -1], x in R

(A - 3I)*v = 0 => which gives us y = -2x with x arbitrary, and so v3 = x*[1 -2], x in R

So, the eigenspaces of λ = 2 and λ = 3 are:

V(2) = {v = [x y] in R^2 : [x y] = x*[1 -1], x in R}

V(3) = {v = [x y] in R^2 : [x y] = x*[1 -2], x in R}

We see that dimV(2) = dimV(3) = 1 because a basis of V(2) is [1 -1] and a basis of V(3) is [1 -2].
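The whole worked example can be replayed in a few lines of Python, using the fact that for a 2x2 matrix P(λ) = λ^2 - tr(A)*λ + det(A):

```python
import math

# Reproduce the worked example: eigenvalues of A = [[1, -1], [2, 4]]
# from the characteristic polynomial lam^2 - tr(A)*lam + det(A).

A = [[1, -1],
     [2, 4]]
tr = A[0][0] + A[1][1]                        # trace = 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant = 6

disc = tr * tr - 4 * det                      # discriminant of P(lam)
roots = sorted([(tr - math.sqrt(disc)) / 2, (tr + math.sqrt(disc)) / 2])
print(roots)  # [2.0, 3.0]

# Check the claimed basis vectors of V(2) and V(3):
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

print(matvec(A, [1, -1]))  # [2, -2] = 2 * [1, -1]
print(matvec(A, [1, -2]))  # [3, -6] = 3 * [1, -2]
```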

In this example solving the polynomial P(λ) and the homogeneous systems was pretty simple, but in general you may have to solve a polynomial of higher order and solve the linear systems using Gaussian elimination (Cramer's rule doesn't help here, because the matrix A - λ*In is singular by construction).

### Properties:

- **det(A) = λ1*λ2*...*λn = b0**, where λi are the eigenvalues of the nxn matrix A and the characteristic polynomial is P(λ) = (-1)^n*λ^n + b(n-1)*λ^(n-1) + ... + b1*λ + b0.
- An nxn matrix A is **invertible** only when **all its eigenvalues are non-zero**. Equivalently, an nxn matrix A is **invertible** only when the **constant term b0** of its characteristic polynomial **P(λ) is non-zero**.
- When **A, B are similar matrices**, they have the **same characteristic polynomial**.
- When **A, B are function matrices of the same homomorphism** f: V->V of an n-dimensional vector space, then **PA(λ) = PB(λ)**.
- When **A, B are similar matrices** with **B = P^-1*A*P** for some **invertible matrix P**, then **A and B have the same eigenvalues**. So, when v is an eigenvector of B for some eigenvalue, then Pv is an eigenvector of A that corresponds to the same eigenvalue.
- When **A is an nxn matrix that is similar to a diagonal matrix D = diag(d1, d2, ..., dn)**, say D = P^-1*A*P, then the **di are the eigenvalues of A** and the **eigenvectors** that correspond to them are **Pei**, where ei are the vectors of the **standard basis** of R^n. So, when a matrix is diagonal, the elements of the diagonal are its eigenvalues.
- When **v is an eigenvector of an invertible matrix A** that **corresponds to the eigenvalue λ**, then **v is also an eigenvector of the inverse matrix A^-1 for the eigenvalue 1/λ**.
- When **v1, v2, ..., vs are eigenvectors** of an nxn matrix A **that correspond to distinct eigenvalues λ1, λ2, ..., λs**, then those vectors are **linearly independent**.
- When an **nxn matrix A has n distinct eigenvalues**, then there is a **basis of R^n that is built up of eigenvectors of A**.
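A couple of these properties can be checked on the example matrix A, whose eigenvalues are 2 and 3: det(A) should be 2*3 = 6, and A^-1 should have the eigenvalues 1/2 and 1/3 with the same eigenvectors. The sketch uses exact fractions to avoid rounding:

```python
from fractions import Fraction

# Check det(A) = product of eigenvalues, and that A^-1 has
# eigenvalues 1/2 and 1/3 with the same eigenvectors as A.

A = [[Fraction(1), Fraction(-1)],
     [Fraction(2), Fraction(4)]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(det)  # 6  (= 2 * 3)

# Inverse of a 2x2 matrix [[a, b], [c, d]]: (1/det) * [[d, -b], [-c, a]]
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]

def matvec(M, v):
    return [sum(a * x for a, x in zip(row, v)) for row in M]

# Ainv scales [1, -1] by 1/2 and [1, -2] by 1/3:
print(matvec(Ainv, [1, -1]) == [Fraction(1, 2) * x for x in [1, -1]])  # True
print(matvec(Ainv, [1, -2]) == [Fraction(1, 3) * x for x in [1, -2]])  # True
```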

### Polynomial Matrix:

Suppose p(x) = am*x^m + a(m-1)*x^(m-1) + ... + a1*x + a0 is a **polynomial of order m** and A is an nxn matrix. We **define a polynomial matrix** as the **nxn matrix that we get by replacing x with the matrix A**.

So, **p(A) = am*A^m + a(m-1)*A^(m-1) + ... + a1*A + a0*In**

When v is an eigenvector of A that corresponds to the eigenvalue λ, then v is an eigenvector of the polynomial matrix p(A) that corresponds to the eigenvalue p(λ).
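Here is a quick check of this fact, with an arbitrarily chosen polynomial p(x) = x^2 + 1 and the eigenpair (2, [1 -1]) of the example matrix:

```python
# If v is an eigenvector of A for lam, then p(A)v = p(lam)v.
# Sketch with p(x) = x^2 + 1, so p(A) = A^2 + I and p(2) = 5.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(M, v):
    return [sum(a * x for a, x in zip(row, v)) for row in M]

A = [[1, -1], [2, 4]]
I = [[1, 0], [0, 1]]

A2 = matmul(A, A)
pA = [[A2[i][j] + I[i][j] for j in range(2)] for i in range(2)]  # A^2 + I

v, lam = [1, -1], 2
p_lam = lam ** 2 + 1                             # p(2) = 5
print(matvec(pA, v) == [p_lam * x for x in v])   # True
```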

### Diagonalizable matrix:

An nxn matrix **A is diagonalizable** **when** it is **similar to a diagonal matrix D**. This means that there is an **invertible matrix P** so that:

**D = P^-1*A*P**

A **matrix is diagonalizable** **when** any of the following holds:

- there is a **basis** of R^n that is **built up of eigenvectors** of A
- it has **n distinct eigenvalues**
- **dimV(λi) = ri** for every eigenvalue λi, where **ri are the algebraic multiplicities** of the eigenvalues.
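For the example matrix we already have everything needed to diagonalize: the sketch below builds P from the eigenvectors [1 -1] and [1 -2] and checks that P^-1*A*P = diag(2, 3), using exact fractions:

```python
from fractions import Fraction

# Diagonalize A = [[1, -1], [2, 4]]: with P = [v1 v2] built from the
# eigenvectors [1, -1] and [1, -2], P^-1 * A * P should be diag(2, 3).

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[Fraction(1), Fraction(-1)], [Fraction(2), Fraction(4)]]
P = [[Fraction(1), Fraction(1)], [Fraction(-1), Fraction(-2)]]  # columns v1, v2

detP = P[0][0] * P[1][1] - P[0][1] * P[1][0]   # -1, so P is invertible
Pinv = [[P[1][1] / detP, -P[0][1] / detP],
        [-P[1][0] / detP, P[0][0] / detP]]

D = matmul(Pinv, matmul(A, P))
print(D == [[2, 0], [0, 3]])  # True: D = diag(2, 3)
```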

## Algorithm:

To calculate the eigenvalues and eigenvectors of an nxn matrix A we do the following:

- Calculate the characteristic polynomial from P(λ) = det(A - λ*In)
- Find the roots of P(λ) = 0, which are the eigenvalues of A
- For each eigenvalue λi, find the eigenvectors by solving the homogeneous system (A - λi*In)X = 0. We then find out which ones are linearly independent in the eigenspace V(λi)
- If {v1, v2, ..., vs} is the set of linearly independent eigenvectors and s = n, then A is diagonalizable and we can continue; else (s < n) A is not diagonalizable and we stop here
- We can then construct the invertible matrix P = [v1 v2 ... vn] that contains the linearly independent eigenvectors as columns, and also the diagonal matrix D = diag(λ1, λ2, ..., λn) that contains the eigenvalues, so that D = P^-1*A*P

Step 5 is needed only when we want to diagonalize the matrix A.
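The whole algorithm can be sketched with numpy (assuming it is available), whose `np.linalg.eig` bundles steps 1-3 into one call:

```python
import numpy as np

# Steps 1-3: np.linalg.eig returns the eigenvalues and a matrix P
# whose columns are corresponding eigenvectors.
A = np.array([[1.0, -1.0],
              [2.0, 4.0]])
eigvals, P = np.linalg.eig(A)
print(np.allclose(np.sort(eigvals), [2, 3]))  # True

# Step 4: A is diagonalizable iff the eigenvector matrix P is invertible,
# i.e. has full rank.
assert np.linalg.matrix_rank(P) == A.shape[0]

# Step 5: D = P^-1 * A * P is diagonal with the eigenvalues on the diagonal.
D = np.linalg.inv(P) @ A @ P
print(np.allclose(D, np.diag(eigvals)))  # True
```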

And this is actually it for today and I hope you enjoyed it!

I will get into examples when we get into how we solve differential equations using all that we talked about today. Next time in Linear Algebra we will get into some function and function matrix examples that I haven't covered so much, and then we are actually finished with what I wanted to cover until now in Linear Algebra.

Until next time...Bye!