A matrix extends a vector from $n\times 1$ to $n\times m$. Note that matrix addition is only defined when both matrices are the same size $n\times m$, i.e. they have the same number of rows and columns. For multiplication, we can only form $A_{n\times p}=B_{n\times k}\cdot C_{k\times p}$, i.e. the inner dimensions must match.
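A quick NumPy sketch of these dimension rules (the shapes below are arbitrary test cases):

```python
import numpy as np

B = np.random.rand(2, 3)   # 2x3
C = np.random.rand(3, 4)   # 3x4

A = B @ C                  # inner dimensions match (3 = 3), so A is 2x4
print(A.shape)             # (2, 4)

# Addition needs identical shapes: B + C raises a ValueError,
# since (2, 3) and (3, 4) differ in both rows and columns.
```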
Distributive property: $A\cdot(B+C)=A\cdot B+A\cdot C$ and $(A+B)\cdot C=A\cdot C+B\cdot C$
Associative property: $(A\cdot B)\cdot C=A\cdot(B\cdot C)$
Identity property: $A\cdot I=A$ and $I\cdot A=A$
Inverse property: $A\cdot A^{-1}=I$ and $A^{-1}\cdot A=I$
The inverse of a product of matrices: $(AB)^{-1}=B^{-1}A^{-1}$; note that only a square matrix can have an inverse, as we will see below.
The transpose of a product of matrices: $(AB)^T=B^TA^T$
$(A^{-1})'=(A')^{-1}$
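These identities are easy to spot-check numerically; a minimal sketch, assuming the random matrices drawn here are invertible (true with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
B = rng.random((3, 3))

print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))   # (AB)^{-1} = B^{-1} A^{-1}
print(np.allclose((A @ B).T, B.T @ A.T))                  # (AB)^T = B^T A^T
print(np.allclose(np.linalg.inv(A).T,
                  np.linalg.inv(A.T)))                    # (A^{-1})' = (A')^{-1}
```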
We define the rank of a matrix $A$ as the maximum number of linearly independent rows/columns of $A$.
$\operatorname{rank}(A_{n\times p})\le\min(n,p)$
$\operatorname{rank}(A)=\operatorname{rank}(A')=\operatorname{rank}(AA')=\operatorname{rank}(A'A)$
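A small sketch of the rank identities, using a matrix whose rank is 2 by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 2)) @ rng.random((2, 6))   # 4x6 with rank 2 by construction

print(np.linalg.matrix_rank(A) <= min(A.shape))   # rank(A) <= min(n, p) -> True
for M in (A, A.T, A @ A.T, A.T @ A):
    print(np.linalg.matrix_rank(M))               # 2 in all four cases
```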
Let $|A-\lambda I|=0$; then $\lambda$ is an eigenvalue. If $Ax=\lambda x$, then $x$ is an eigenvector, or specifically, a right eigenvector. We can also define a left eigenvector $y$ by $y'A=\lambda y'$.
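A short illustration on a non-symmetric $2\times 2$ matrix; left eigenvectors of $A$ are obtained here as right eigenvectors of $A^T$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])       # non-symmetric, eigenvalues 2 and 3

lam, X = np.linalg.eig(A)        # columns of X are right eigenvectors
print(np.allclose(A @ X[:, 0], lam[0] * X[:, 0]))   # A x = lambda x -> True

mu, Y = np.linalg.eig(A.T)       # right eigenvectors of A^T are left eigenvectors of A
print(np.allclose(Y[:, 0] @ A, mu[0] * Y[:, 0]))    # y' A = lambda y' -> True
```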
We call a matrix square if it has the same number of rows and columns.
We denote the determinant of a square matrix as $\det(A)$ or $|A|$.
$|AB|=|A||B|$
$|A|=|A'|$
$|A^{-1}|=1/|A|$
$|cA_{k\times k}|=c^k|A_{k\times k}|,\ c\in\mathbb{R}$
$\det(A)=0\iff A$ is singular
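The determinant identities above, checked on random $3\times 3$ matrices (a sketch; singular draws have probability 0):

```python
import numpy as np

rng = np.random.default_rng(2)
k, c = 3, 2.5
A = rng.random((k, k))
B = rng.random((k, k))
det = np.linalg.det

print(np.allclose(det(A @ B), det(A) * det(B)))        # |AB| = |A||B|
print(np.allclose(det(A), det(A.T)))                   # |A| = |A'|
print(np.allclose(det(np.linalg.inv(A)), 1 / det(A)))  # |A^{-1}| = 1/|A|
print(np.allclose(det(c * A), c**k * det(A)))          # |cA| = c^k |A|
```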
We denote the trace of a square matrix as $\operatorname{tr}(A)$, the sum of its diagonal entries.
$\operatorname{tr}(A+B)=\operatorname{tr}(A)+\operatorname{tr}(B)$
$\operatorname{tr}(AB)=\operatorname{tr}(BA)$
$\operatorname{tr}(P^{-1}AP)=\operatorname{tr}(A)$, provided $P$ satisfies the inverse and multiplication conditions (i.e. $P$ is invertible and conformable).
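A sketch of the trace rules, including the similarity invariance $\operatorname{tr}(P^{-1}AP)=\operatorname{tr}(A)$:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((3, 3))
B = rng.random((3, 3))
P = rng.random((3, 3))   # invertible with probability 1

print(np.allclose(np.trace(A + B), np.trace(A) + np.trace(B)))       # tr(A+B) = tr(A)+tr(B)
print(np.allclose(np.trace(A @ B), np.trace(B @ A)))                 # tr(AB) = tr(BA)
print(np.allclose(np.trace(np.linalg.inv(P) @ A @ P), np.trace(A)))  # tr(P^{-1}AP) = tr(A)
```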
The spectral radius of a square matrix is the largest absolute eigenvalue, denoted $\rho(A_{n\times n})=\max_{i=1,\dots,n}|\lambda_i|$.
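Computing it is one line; the example matrix here has complex eigenvalues $\pm 2i$, so the absolute value matters:

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [-2.0, 0.0]])                    # eigenvalues are ±2i
rho = np.max(np.abs(np.linalg.eigvals(A)))     # spectral radius
print(rho)                                     # 2.0
```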
Only a square matrix can have an inverse, and it exists only when $A^{-1}\cdot A=I$ is satisfied; equivalently, $A$ is invertible iff $\operatorname{rank}(A)$ equals the number of $A$'s rows/columns.
Full rank means $\det(A)\neq 0$.
Perron-Frobenius theorem: if $A$ is a real square matrix with strictly positive entries, then it has a unique largest positive eigenvalue $\lambda=\rho(A)$, and the corresponding eigenvector $v$ can be chosen with strictly positive entries; a weaker version holds for nonnegative (irreducible) matrices.
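A sketch on a strictly positive $2\times 2$ matrix, whose dominant eigenvalue works out to 1 with eigenvector proportional to $(1,1)$:

```python
import numpy as np

A = np.array([[0.5, 0.5],
              [0.2, 0.8]])        # all entries strictly positive

lam, X = np.linalg.eig(A)
i = np.argmax(lam.real)
print(lam[i].real)                # dominant eigenvalue: 1.0
v = X[:, i].real
print(v / v.sum())                # rescaled eigenvector: [0.5, 0.5], strictly positive
```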
We define a matrix $Q\in\mathbb{R}^{k\times k}$ to be an orthogonal (orthonormal) matrix $\iff QQ'=Q'Q=I$, in which case $Q'=Q^{-1}$.
The determinant of an orthogonal matrix is $+1$ or $-1$.
Equivalently, each column $v$ of $Q$ satisfies $v'v=1$, and $v'w=0$ for distinct columns $v\neq w$. Conversely, if $v'v=1$ for every column and $v'w=0$ whenever $v\neq w$, then $Q$ is an orthogonal matrix.
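A sketch that builds an orthogonal $Q$ from a QR factorization and checks the properties above:

```python
import numpy as np

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.random((3, 3)))   # the Q factor from QR is orthogonal

print(np.allclose(Q @ Q.T, np.eye(3)))    # QQ' = I
print(np.allclose(Q.T @ Q, np.eye(3)))    # Q'Q = I
print(np.linalg.det(Q))                   # +1 or -1 (up to floating-point error)
print(np.allclose(np.linalg.norm(Q, axis=0), 1.0))   # each column v has v'v = 1
```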
For a symmetric matrix $A$, if $x'Ax\ge 0$ for all $x$, we call $A$ a nonnegative definite matrix, written $A\succeq 0$. Equivalently, all its eigenvalues are $\ge 0$.
For a symmetric matrix $A$, if $x'Ax>0,\ \forall x\neq 0$, we call $A$ a positive definite matrix, written $A\succ 0$. Equivalently, all its eigenvalues are strictly positive.
The inverse (which always exists) of a symmetric positive definite matrix is also symmetric positive definite.
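A sketch constructing a positive definite matrix as $X'X$ plus a small ridge term (an assumption just to guarantee strict definiteness), then checking the eigenvalue criterion and the inverse:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.random((5, 3))
A = X.T @ X + 0.1 * np.eye(3)     # X'X is nonnegative definite; the ridge makes it PD

print(np.all(np.linalg.eigvalsh(A) > 0))      # all eigenvalues strictly positive
Ainv = np.linalg.inv(A)
print(np.allclose(Ainv, Ainv.T))              # inverse is symmetric
print(np.all(np.linalg.eigvalsh(Ainv) > 0))   # ...and positive definite
```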
Given a $p\times p$ positive definite matrix $B$ and a scalar $b>0$, for every positive definite matrix $A$, $\frac{1}{|A|^b}\exp\left(-\operatorname{tr}(A^{-1}B)/2\right)\le\frac{1}{|B|^b}(2b)^{pb}e^{-bp}$. Equality holds when $A=\frac{1}{2b}B$.
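A numeric spot-check of this bound (a sketch; $B$ here is an arbitrary positive definite test matrix):

```python
import numpy as np

rng = np.random.default_rng(8)
p, b = 3, 2.0
X = rng.random((6, p))
B = X.T @ X + 0.1 * np.eye(p)     # arbitrary positive definite B

def lhs(A):
    return np.exp(-np.trace(np.linalg.inv(A) @ B) / 2) / np.linalg.det(A)**b

bound = (2 * b)**(p * b) * np.exp(-b * p) / np.linalg.det(B)**b
print(np.isclose(lhs(B / (2 * b)), bound))    # equality at A = B/(2b)
A = B / (2 * b) + 0.05 * np.eye(p)            # some other PD matrix
print(lhs(A) <= bound)                        # stays below the bound -> True
```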
Say $A$ is $p\times p$ positive definite with eigenvalues $\lambda_1\ge\lambda_2\ge\dots\ge\lambda_p>0$ and associated normalized eigenvectors $e_1,\dots,e_p$. Then $\max_{x\neq 0}\frac{x'Ax}{x'x}=\lambda_1$, attained at $x=e_1$; $\min_{x\neq 0}\frac{x'Ax}{x'x}=\lambda_p$, attained at $x=e_p$; and $\max_{x\perp e_1,\dots,e_k}\frac{x'Ax}{x'x}=\lambda_{k+1}$, attained at $x=e_{k+1}$.
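A sketch of these extremal characterizations using `numpy.linalg.eigh`, which returns eigenvalues in ascending order (we flip to match the ordering above):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.random((6, 4))
A = X.T @ X + 0.1 * np.eye(4)      # a positive definite test matrix

lam, E = np.linalg.eigh(A)         # ascending eigenvalues, orthonormal eigenvectors
lam, E = lam[::-1], E[:, ::-1]     # reorder so lambda_1 >= ... >= lambda_p

def rayleigh(x):
    return (x @ A @ x) / (x @ x)

print(np.isclose(rayleigh(E[:, 0]), lam[0]))     # max x'Ax/x'x = lambda_1 at x = e_1
print(np.isclose(rayleigh(E[:, -1]), lam[-1]))   # min = lambda_p at x = e_p
print(np.isclose(rayleigh(E[:, 1]), lam[1]))     # max over x ⟂ e_1 is lambda_2 at e_2
```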
Depending on the matrix, we have different decompositions. Every matrix has a Singular Value Decomposition (SVD). Every square matrix has a Jordan decomposition. Every diagonalizable matrix has an eigenvalue decomposition; when the matrix is normal (e.g., real symmetric), we call it the spectral decomposition.
$A_{n\times m}=U_{n\times r}D_{r\times r}(V^T)_{r\times m}$, where $U_{n\times r}$ and $V_{m\times r}$ are semi-orthonormal matrices and $D_{r\times r}$ is a diagonal matrix with nonnegative entries, $D_{ij}=0$ if $i\neq j$, and singular values $\lambda_i$ ordered on the diagonal such that $D_{11}=\lambda_1\ge D_{22}=\lambda_2\ge\dots\ge D_{rr}=\lambda_r>0$. We call this the Singular Value Decomposition (SVD). $U$ and $V$ are semi-orthonormal matrices with the properties:
$U_j^TU_j=V_j^TV_j=1$ for $j=1,\dots,r$ (each column has unit norm)
$U_j^TU_k=V_j^TV_k=0$ for $j\neq k$
$U^TU=V^TV=I_{r\times r}$ always; $UU^T=I_{n\times n}$ when $r=n$, and $VV^T=I_{m\times m}$ when $r=m$. If $r=m=n$, then $UU^T=VV^T=I_{r\times r}$.
If $A$ is non-square with full rank (i.e. $\operatorname{rank}=\min(n,m)$), then $\hat{A}=A^TA=VDU^TUDV^T=VDDV^T=VD^2V^T$ is symmetric and nonnegative definite; it is positive definite when $A$ has full column rank ($r=m$).
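A sketch of the thin SVD and the $A^TA=VD^2V^T$ identity, using `np.linalg.svd` with `full_matrices=False`:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.random((5, 3))                            # full column rank with probability 1

U, d, Vt = np.linalg.svd(A, full_matrices=False)  # thin SVD: U is 5x3, Vt is 3x3
print(np.allclose(A, U @ np.diag(d) @ Vt))        # A = U D V^T
print(np.allclose(U.T @ U, np.eye(3)))            # U'U = I (semi-orthonormal)
print(np.allclose(Vt @ Vt.T, np.eye(3)))          # V'V = I
print(np.allclose(A.T @ A, Vt.T @ np.diag(d**2) @ Vt))   # A'A = V D^2 V'
```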
For a symmetric matrix $A_{k\times k}$: $A=P\Lambda P'$ and, when $A$ is invertible, $A^{-1}=P\Lambda^{-1}P'$, where $\Lambda=\operatorname{diag}[\lambda_1,\dots,\lambda_k]$ and $P=[e_1,\dots,e_k]$. We call this the spectral decomposition. For convenience we use the normalized eigenvectors $e_1,\dots,e_k$ as the columns of $P$, so that $P'P=I$ and $e_i'e_i=1$.
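Finally, a sketch of the spectral decomposition on a small symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # symmetric (and positive definite)

lam, P = np.linalg.eigh(A)          # normalized eigenvectors in the columns of P
Lam = np.diag(lam)

print(np.allclose(A, P @ Lam @ P.T))                              # A = P Λ P'
print(np.allclose(np.linalg.inv(A), P @ np.diag(1 / lam) @ P.T))  # A^{-1} = P Λ^{-1} P'
print(np.allclose(P.T @ P, np.eye(2)))                            # P'P = I
```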