Solutions and notes for Linear Algebra Done Right by Sheldon Axler. Each section is based on a chapter and contains important results and exercise solutions.

Vector Spaces

  • start:
  • definition and properties
  • Subspaces
    • The algebraic relation with calculus is really interesting. Example: the set of continuous real-valued functions on [0, 1] is a subspace of R^[0,1]. Similarly, the set of differentiable real-valued functions on R is a subspace of R^R.
    • What are the operations possible on subspaces? Can I add them?
    • How to find out all possible subspaces of a vector space?
    • I've seen math's recurring focus on subsets of a given structure. How are these used in real-world applications? Are there examples where a subspace of a vector space is needed to solve a problem, or can real-world problems be modeled with these subsets?
    • Linear dependence lemma
  • complete: 31-05-25

Finite-Dimensional Vector Spaces

  • started: 31-05-25
  • Bases: the vectors in a basis of a vector space are linearly independent and span the space. Is the representation of each element as a linear combination of the basis vectors unique?
  • dimension
  • extend a linearly independent list to a basis; reduce a spanning list to a basis
  • analogy between sets and vector spaces:
  • Is it possible to find the maximum number of LI vectors that exist in an n-dim VS?
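  • Not from the book — a quick numpy sketch (mine) for checking linear independence and extending a list to a basis, using rank:

```python
import numpy as np

# Columns are the vectors: the list is linearly independent
# iff the matrix rank equals the number of vectors.
A = np.array([[1., 0.], [0., 1.], [1., 1.]])  # two LI vectors in R^3
assert np.linalg.matrix_rank(A) == A.shape[1]

# Extend to a basis of R^3: append standard basis vectors,
# keeping only those that increase the rank.
basis = [A[:, 0], A[:, 1]]
for e in np.eye(3):
    M = np.column_stack(basis + [e])
    if np.linalg.matrix_rank(M) > len(basis):
        basis.append(e)
print(len(basis))  # 3 = dim R^3
```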

Linear Maps

  • start: 01-07-25
  • linear map lemma
  • To prove that a vector space is finite-dimensional, take a finite list and prove that any arbitrary element can be represented as a linear combination of it (i.e. the list spans).
  • ==fundamental theorem of linear maps: dim V = dim null T + dim range T==
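  • A numerical sanity check of the fundamental theorem (my own sketch; the 4x6 matrix is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))            # a map T: R^6 -> R^4
rank = np.linalg.matrix_rank(A)            # dim range T
s = np.linalg.svd(A, compute_uv=False)
nullity = A.shape[1] - np.sum(s > 1e-10)   # dim null T, via SVD
print(rank, nullity)                       # rank + nullity = 6 = dim V
assert rank + nullity == A.shape[1]
```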
  • Ex3B:
    • 21: at first I couldn't figure out how to prove the dimension claim, but it turned out to be really straightforward: just show that every element of the subspace lies in the intersection of U and range T.
    • 22: create a linear map from null S to V, and use that to prove that dim null ST ≤ dim null T (obvious) + dim null S
    • 23: same as above: create a map from range S to W, and prove the dimension is at most each term on the right-hand side. The final answer is the stronger bound; one can show the weaker bound, i.e. the max of the two, is not attainable in general.
    • 24: seems like a rubbish question.
    • 25,26: use same strategy as 23, try to create a map, and prove both sides.
    • 27: interesting problem. I had forgotten the criterion for a direct sum.
    • 29: how do I approach this problem if I have to prove the existence of a polynomial? Claude says to take a polynomial q, act on an arbitrary polynomial p, get a relation between the coefficients of q and p, and solve that relation for the coefficients.
    • 32: figuring out which maps I need to construct to set up the contradiction (or a direct proof) is difficult for me.
  • 3C: Matrices, Invertibility, isomorphism:
  • A linear map T: U → V is represented by a matrix of size m x n, where v1, …, vm is a basis of V and u1, …, un is a basis of U.
  • 3D: Invertibility
  • Prove a LM is invertible iff injective and surjective.
  • For maps between spaces of the same finite dimension, injectivity, surjectivity, and invertibility are equivalent; such spaces are isomorphic.
  • Linear maps act like Matrix multiplication: M(Tv) = M(T)M(v)
  • How do you define ?
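  • A tiny numpy illustration (mine) of M(Tv) = M(T)M(v): taking coordinates of Tv is the same as multiplying the matrix of T with the coordinate column of v.

```python
import numpy as np

M_T = np.array([[1., 2.], [3., 4.], [5., 6.]])  # M(T) for some T: R^2 -> R^3
M_v = np.array([[7.], [8.]])                    # M(v): coordinates as a column
M_Tv = M_T @ M_v                                # M(Tv) = M(T) M(v)
print(M_Tv.ravel())
```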
  • 3E: Products and Quotient space
  • Product of vector space: V x W
  • Quotient space: V/U = {v + U : v ∈ V}, where U is a subspace of V.
  • Quotient map: π : V → V/U, π(v) = v + U.
  • Ex 3E:
    • 2. proving it using dim of product space = sum of individual spaces is wrong, because that result is proven under the assumption that the individual spaces are finite-dimensional.
    • 3. very interesting question, because the vector spaces we're dealing with are maps. So we have to create a map that takes an element of the first VS to an element of the second. To prove isomorphism, prove the map is indeed a LM, is injective, and is surjective. Proving that the map is surjective is a little tricky.
    • 4. similar to 3, selecting the map is the main problem.
    • 5. the main thing is that you already know what the other subspace is. You just have to prove that the elements belonging to both also lie in that subspace.
    • 6. I forgot that to prove a list is a basis, I have to prove linear independence too, and only proved span.
    • 7. I found a flaw in my understanding of quotient spaces: V/U is actually a vector space whose elements are cosets of U. So the elements of V/U are not individual vectors but whole sets (cosets v + U).
  • 3F: Dual Space and dual maps
  • Is this where I get information overload lol? the description is getting too complicated.
  • linear functional: way to measure a vector space
  • dual space: set of all measures of a vector space
  • dual map: for a map T: V → W, the dual map T′ takes a measure on W back to a measure on V
  • annihilator
    • prove:
    • how does annihilator relate to the subspace and the parent space? i.e. if annihilator is nil,
    • if annihilator exists, then double annihilator should exist as well.
  • relation between dimension of dual map and map and domain and codomain
  • QUES: can you prove in two distinct ways why the column rank of a matrix is equal to its row rank?
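  • Not a proof, but a quick numerical illustration (mine) of the row rank = column rank fact:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(5, 7)).astype(float)
# column rank of A = row rank of A (= column rank of A^T)
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)
```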
  • Ex 3F:
    • 7: same as ex 3E.5,6.
    • 16: T = 0 ⟹ T′ = 0 looks straightforward; the other way round is my problem. It can be proven by contradiction, selecting a functional that takes w to 1.
    • 17: what does the inverse of T′ mean? If T is invertible with inverse S, then S(w) exists for each w in W. Then just show that the composition of the dual maps returns the same map. The converse can be proven using bijectivity of dual maps.
    • 24: TODO
    • 26: interesting framing of the annihilator.
  • Things I struggled at in this chapter:
    • Proving a map is surjective. Injectivity can be proven in a straightforward way, i.e. Tu = Tv ⟹ u = v, or null T = {0}. But surjectivity needs to be proven for an arbitrary vector in the codomain of T.
    • The concept of the matrix of a linear map is still not completely clear to me. M(T) lets us represent linear maps as matrices, which is pivotal for computation.
    • what are the most prominent examples and use case i can think of quotient maps and dual maps?
    • QUES: what proof techniques do I know of right now? Contradiction, induction, counterexample, equality by mutual inclusion, iff, and "at least one" arguments.

Polynomials

  • PROVE: polynomial division: p = sq + r with deg r < deg s.
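  • A numpy illustration (mine) of the division p = sq + r with deg r < deg s:

```python
import numpy as np

p = np.array([1., 0., -2., 5.])   # p(z) = z^3 - 2z + 5
s = np.array([1., 1.])            # s(z) = z + 1
q, r = np.polydiv(p, s)           # p = s*q + r, with deg r < deg s
assert np.allclose(np.polyadd(np.polymul(s, q), r), p)
print(q, r)                       # q(z) = z^2 - z - 1, r(z) = 6
```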

Trigonometry primer

  • trigonometric interpretation of complex numbers: a + bi = r(cos θ + i sin θ)
  • geometric interpretation/polar representation: z is interpreted as a vector of magnitude r at angle θ measured at the origin.
  • Euler form: e^(iθ) = cos θ + i sin θ.
    • TODO: prove this; it has implications in the Fourier transform.
  • multiplication of complex numbers in polar form: r1 e^(iθ1) · r2 e^(iθ2) = r1 r2 e^(i(θ1 + θ2))
    • To prove this: use the angle-addition formulas for sin and cos.
  • De Moivre's theorem: (cos θ + i sin θ)^n = cos nθ + i sin nθ. Can be trivially proved using induction.
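  • A quick numeric check (mine) of De Moivre and the polar product rule:

```python
import cmath

theta, n = 0.7, 5
lhs = (cmath.cos(theta) + 1j * cmath.sin(theta)) ** n
rhs = cmath.cos(n * theta) + 1j * cmath.sin(n * theta)
assert abs(lhs - rhs) < 1e-12          # De Moivre

# polar multiplication: moduli multiply, angles add
z1, z2 = cmath.rect(2.0, 0.3), cmath.rect(0.5, 1.1)
assert abs(z1 * z2 - cmath.rect(1.0, 1.4)) < 1e-12
```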
  • TODO: come back to the fundamental theorem of algebra proof. The current proof uses analysis (the extreme value theorem, precisely) to show that |p| attains a minimum at a certain point for a complex-coefficient polynomial p.
  • A factorization of a degree-m polynomial over C into linear factors exists and is unique; the polynomial has at most m roots.
    • What are the subresults that need to be proven in order to prove above?
    • If a is a root of p, then p can be written as p(z) = (z − a)q(z), with q a polynomial of degree m − 1.
    • Prove that any deg m polynomial has at most m roots.
    • Prove polynomial in P(C) has a root in C.
    • Prove polynomial can be written as p(z) = c(z-a1)(z-a2)…(z-am)
    • prove a1..am is unique.
  • Ex 4:
    • 9: prove using induction.
    • 11: define
    • 14: I would have taken slightly more time if the steps (a), (b) were not explicitly provided.
      • Show T is a LM.
      • Injective: T(r1, s1) = T(r2, s2) ⟹ r1 = r2, s1 = s2.
      • Surjective: TODO
      • rp + sq = 1 is proven trivially from (a) and (b)

Eigenvalues and Eigenvectors

  • START: 04-10
  • 5A
  • operator
  • Invariant subspace
  • QUES: what examples other than {0}, V, can you provide for invariant subspaces? differentiation, null T, range T, all scalar multiples (1-dim subspace)
  • eigenvalue: each 1-dim invariant subspace under T is formed using a scalar λ with Tv = λv (the eigenvalue).
  • eigenvector: a nonzero vector v with Tv = λv for some eigenvalue λ.
  • PROVE: eigenvectors corresponding to distinct eigenvalues are LI.
  • QUES: what’s the upper bound for number of eigenvalues for a T?
  • QUES: what effect can a polynomial have on an operator, i.e. p(T)? And why is this defined only for operators, and not other LMs?
  • QUES: can you represent an eigenvalue in matrix form? i.e. ?
  • QUES: if eigenvectors are objects that remain unchanged in form after a map, then can this apply to objects beyond vector spaces? functions (differentiation, integral)?
    • The answer is yes, and that is exactly what we call an eigenfunction.
  • QUES: Geometrically, what would the eigenvector of a rotation be?
  • QUES: what are the trivial subspaces of p(T) invariant under T?
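  • On the rotation QUES above: over R, a rotation of R^2 (by anything other than 0 or π) has no eigenvectors, but over C its eigenvalues are e^(±iθ). A numpy check (my sketch):

```python
import numpy as np

theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
vals = np.linalg.eigvals(R)   # complex pair: e^{i theta}, e^{-i theta}
print(vals)
assert np.allclose(np.sort(vals.imag), np.sort([-np.sin(theta), np.sin(theta)]))
```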
  • Ex 5A:
    • 1, 2, 3: trivial problems on proving why the said subspace is invariant under T.
    • 6: eigenvalue = 1, eigenvector = (x, x)
    • 9: , with k as eigenvalue.
    • 10: as
    • 13:
    • 15: define eigenvalue of T’. use to prove T’T direction.
    • 16:
    • 17:
    • 18:
    • 19:
    • 20:
    • 21: use the identity T·T⁻¹ = I to prove it.
    • 23: if T has an eigenvalue and S has the same one, then the problem is solved. My question: is it possible for both identities to hold simultaneously — won't that lead to different eigenvalues? TODO
    • 26, 27: can be proven using
    • 28: T has at most dim V eigenvalues from 5.12. Rank-nullity implies dim null T + dim range T >= # eigenvalues. TODO
    • 32: Take a polynomial that satisfies the invariant. use fundamental theorem of algebra to factorise it into individual factors. Use the fact that T has no eigenvalues to discard first two solutions.
    • 33: a) the T^m injective ⟹ T injective direction can be proven using induction: take T^2 as the base case, prove by contradiction that T^2 injective implies T injective, then scale to T^m. b) T^m surjective ⟹ T surjective can also be proven similarly by contradiction: assume T is not surjective, so there exists v in V such that v is not in range T.
    • 35: couldn't have done this without the hint: prove that the given scalars are eigenvalues and the corresponding functions are eigenvectors of D. Using 5.12 you prove eigenvectors corresponding to distinct eigenvalues are LI.
    • 37
    • 40: write the definitions of both sides.
    • 43:
  • 5B
  • existence and uniqueness of Minimal polynomial
  • the roots of the minimal polynomial are exactly the eigenvalues of the operator
  • existence of eigenvalues on odd-dimensional real vector spaces
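  • A small check (mine; T = diag(2, 2, 3) is just an illustrative choice) that the roots of the minimal polynomial are the eigenvalues — here the minimal polynomial (z−2)(z−3) has lower degree than the characteristic polynomial:

```python
import numpy as np

T = np.diag([2., 2., 3.])
I = np.eye(3)
assert np.allclose((T - 2 * I) @ (T - 3 * I), 0)   # (z-2)(z-3) annihilates T
assert not np.allclose(T - 2 * I, 0)               # z - 2 alone does not
print(np.linalg.eigvals(T))                        # roots: 2 and 3
```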
  • EX 5B
    • 6: again find the minimal polynomial using matrix of T.
    • 8: critical step: what’s the matrix of T? then find the minimal polynomial from the matrix.
    • 13:
    • 14: my first hunch was to find the matrix of T using the minimal polynomial, take its transpose, and then find the minimal polynomial of T⁻¹. But that turned ugly really quickly. The trick was to just invert the polynomial's variable, i.e. substitute 1/z and then make the result monic.
    • 29: my reasoning so far is, let p(x) be the minimal polynomial.
      • if 2 eigenvalues exist, then v1, v2 are independent eigenvectors associated with them, and span(v1, v2) is the invariant subspace
      • if 1 eigenvalue exists (the nonreal case): take a complex vector v = u + iw with T(v) = (a + ib)(u + iw); comparing parts gives Tu = au − bw and Tw = aw + bu, hence invariance. span(u, w) is the 2-dim invariant subspace.
      • if no eigenvalues exist: p(T) = (T² + bT + cI)q(T) = 0, so range q(T) ⊆ null(T² + bT + cI). Consider any w in range q(T); then T²w = −bTw − cw, so span(w, Tw) is the two-dimensional invariant subspace.
  • 5C
  • Upper triangular matrices
  • representation of M(T) in upper triangular form.
  • the diagonal entries of an upper-triangular matrix of T with respect to some basis are exactly the eigenvalues of T
  • ==a UTM exists iff the minimal polynomial is (z − λ1)⋯(z − λm) for some λ1, …, λm ∈ F==
  • Ex 5C
    • 7.
      1. look at basis change.
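  • Numerically, the upper-triangular form over C is the Schur decomposition; a sketch (mine, using scipy):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
T, Q = schur(A, output='complex')       # A = Q T Q*, with T upper triangular
assert np.allclose(Q @ T @ Q.conj().T, A)
assert np.allclose(np.tril(T, -1), 0)   # strictly lower part is zero
# the diagonal of T carries the eigenvalues
assert np.allclose(np.sort(np.diag(T).real), np.sort(np.linalg.eigvals(A).real))
```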
  • 5D
  • Eigenspace
  • Diagonalisable matrices. conditions equivalent to diagonalisability
  • conditions necessary for diagonalisability
  • Ex 5D
    • 9, 10. basis change again.
    • 11. what's the dim of L(V)? Consider standard matrices for a basis of L(V). Can you prove them diagonalisable? How do you prove diagonalisability of E_ij?
      • realisation: a diagonalisable matrix doesn't require all diagonal entries to be non-zero. To prove diagonalisability, find a spanning set of matrices that are themselves diagonalisable, and reduce it to linearly independent operators.
    • Exercises for this section have been fairly straightforward and mostly the same as 5C.
  • 5E
    • Diagonalisable operators have diagonal matrices with respect to the same basis iff they commute.
    • Commuting operators have upper-triangular matrices with respect to the same basis.
    • Ex 5E

Inner Product Spaces

  • Start: 10-11
  • 6A
  • Inner product, Inner product spaces, norm
  • Orthogonal vector, orthogonal representation
  • pythagoras
  • cauchy-schwarz
  • triangle
  • parallelogram
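  • Numeric spot-checks (mine) of Cauchy-Schwarz, the triangle inequality, and the parallelogram equality:

```python
import numpy as np

rng = np.random.default_rng(3)
u, v = rng.standard_normal(5), rng.standard_normal(5)
nu, nv = np.linalg.norm(u), np.linalg.norm(v)
assert abs(u @ v) <= nu * nv + 1e-12               # Cauchy-Schwarz
assert np.linalg.norm(u + v) <= nu + nv + 1e-12    # triangle inequality
lhs = np.linalg.norm(u + v)**2 + np.linalg.norm(u - v)**2
assert np.isclose(lhs, 2 * (nu**2 + nv**2))        # parallelogram equality
```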
  • Ex 6A
    • 19: Hint: take a look at the Gershgorin disk theorem, and try to apply Cauchy-Schwarz. Identify the relation between the result and both of the above.
    • 28:
    • 32:
    • 35: Problem is to prove p = q exists. Take any p = q + (1 - ||x||^2)r, and prove this exists, is injective and surjective, and is harmonic.
  • 6B
  • orthonormal list, bases
  • bessel’s inequality
  • Gram-Schmidt procedure
  • if T has a UTM with respect to some basis, a UTM also exists with respect to some orthonormal basis (Schur's theorem)
  • Riesz representation
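  • A minimal Gram-Schmidt implementation (my sketch; assumes the input list is linearly independent):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors."""
    basis = []
    for v in vectors:
        # subtract projections onto the vectors already produced, then normalize
        w = v - sum(np.dot(v, e) * e for e in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

es = gram_schmidt([np.array([1., 1., 0.]), np.array([1., 0., 1.])])
gram = np.array([[a @ b for b in es] for a in es])
assert np.allclose(gram, np.eye(2))   # the output list is orthonormal
```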
  • Ex 6B
    • 6:
    • 10: Can you derive the Gram-Schmidt procedure from the assumptions provided in the question?
    • 14: if not immediately clear, take the example of R^2. Can the Gram-Schmidt procedure start with a different sign?
    • 15: suppose the orthogonal pairs are the same for both inner products. Take e1, …, en to be an orthonormal basis with respect to the first inner product. Consider e_i + e_j and e_i − e_j: they are orthogonal in the first inner product, hence also in the second. Then, decomposing u and v in this basis and taking both inner products, you get the final answer.
    • 16: None of the answers from LLMs were clear to me. TODO.
    • 17: use Schur's theorem. Take the matrix and orthonormal basis, and apply the norm to Tv_k.
    • 19: the given condition is equivalent to a dual basis. Construct u using the given vectors as a basis and apply the condition to it, to prove that the list is indeed linearly independent. Then, by dimensionality, it is a basis.
    • 20: 2 results are needed to prove this. Every set of commuting operators has upper-triangular matrices with respect to the same basis; then, from a basis giving an upper-triangular matrix, Gram-Schmidt produces an orthonormal basis that keeps the matrix upper triangular.
    • 21: TODO. Author has posted a solution here.
    • 22:
  • 6C
  • orthogonal complement:
    • properties of complement.
  • Projection: P_U v = u, where v = u + w with u ∈ U and w ∈ U⊥. Properties of projection.
  • ==Riesz representation theorem using complements. Proving why the map v ↦ ⟨·, v⟩ is surjective.==
  • Minimization problem.
  • Pseudoinverse: T† = (T|_(null T)⊥)⁻¹ ∘ P_(range T).
    • algebraic properties of PI
  • Function approximation using PI
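  • numpy's pinv computes exactly this pseudoinverse; a least-squares sketch (mine) matching the minimization/approximation story:

```python
import numpy as np

A = np.array([[1., 0.], [1., 1.], [1., 2.]])   # overdetermined: 3 eqs, 2 unknowns
b = np.array([1., 2., 2.])
x = np.linalg.pinv(A) @ b                      # T†b, the least-squares solution
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
print(x)                                       # minimizes ||Ax - b||
```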
  • EX 6C
    • 8. Use the fact that v in the Riesz representation theorem is a unique vector. Thus, take a basis of U and extend it to V. Then w comes out to be the projection of v onto U.
    • 9. Choose arbitrary elements in the subspaces and prove the relation. For the converse, use contradiction: choose an element and show that equality is possible only when the subspaces are invariant.
    • 10. a) First prove that v ↦ φ_v is indeed a linear map. To prove injectivity, we have to prove that T: V → V′ has null T = {0}: T(v) = 0 means ⟨u, v⟩ = 0 for all u in V; let u = v, then ⟨v, v⟩ = 0, hence v = 0. Thus T is injective. b) dim V = dim null T + dim range T; since dim null T = 0 and dim range T = dim V′ = dim V, T is an isomorphism.
    • 11. As per Riesz representation, for each φ in V′ there exists v with φ(u) = ⟨u, v⟩ for all u. For φ to be a dual basis vector, we need ⟨e_k, v⟩ = 1 and ⟨e_j, v⟩ = 0 for all other j. Thus the dual basis is identified by Riesz with the orthonormal basis itself.
    • 12. a) U⊥ includes the g such that ⟨f, g⟩ = 0 for all f in U; for a continuous g this forces g = 0. b) the set of all continuous real-valued functions is not finite-dimensional, so the result used in a) doesn't hold.
    • 21: Form basis for null T.
    • 22: use the expression for the pseudoinverse. Then the problem becomes the trivial one of forming the connection between the LHS and the RHS.
    • 23: identify the relevant subspace. Form the expression for T†. Multiply both sides by T, and prove the claimed equality.

Operators on Inner Product Spaces

  • 7A
  • start: 29-11
  • adjoint of an operator
  • properties of adjoint
  • dimension of adjoint operator
  • matrix of adjoint of an operator
  • self-adjoint operators
  • QUES: How do you write <Tv, w> as a combination of <Tu, u> when F = C?
  • normal operators
  • QUES: suppose T is normal. Find self-adjoint A, B such that T = A + iB.
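  • On the QUES above: the standard choice is A = (T + T*)/2 and B = (T − T*)/(2i). A numpy check (mine) that both are self-adjoint and T = A + iB (normality additionally makes A and B commute):

```python
import numpy as np

rng = np.random.default_rng(4)
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = (T + T.conj().T) / 2
B = (T - T.conj().T) / 2j
assert np.allclose(A, A.conj().T) and np.allclose(B, B.conj().T)
assert np.allclose(T, A + 1j * B)
```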
  • Ex 7A
    • 5. Use the property that M(T*) = (M(T))*. Write Te_k = ⟨Te_k, f_1⟩f_1 + … + ⟨Te_k, f_n⟩f_n, so ||Te_k||² = |⟨Te_k, f_1⟩|² + … + |⟨Te_k, f_n⟩|². Write the same for T*f_k. Summing ||T*f_k||² across all f_k, we get the same total as the LHS, because ⟨Te_k, f_j⟩ = ⟨e_k, T*f_j⟩.
    • 6. Find that A² = I, so the eigenvalues of A are −1, 1 and the minimal polynomial is (z − 1)(z + 1).
    • 7. Calculate how the matrix of T looks (upper triangular). The number of possible configurations for dimension n is n(n+1)/2.
    • 8. Do this after 27, because in 27 we prove that null T^k = null T when T is normal. D⁹ = 0 gives null D⁹ = P_8(R), so normality would force null D = P_8(R), i.e. D = 0. Thus there can't exist an inner product for which D is nonzero and normal.
    • 9. Define T*v = ⟨v, x⟩u. For a), write T = T*; u comes out to be a scalar multiple of x. For b), write TT* and T*T; prove one direction by assuming u, x linearly dependent, so T comes out normal. For the other direction, start from TT* = T*T and arrive at ⟨v, x⟩⟨u, u⟩x = ⟨v, u⟩⟨x, x⟩u; take cases on u, x and arrive at the conclusion.
    • 10. Proving null T^k ⊆ null T is the main part here. Prove it using induction. Base case k = 2: take v in null T², so ⟨T²v, v⟩ = 0 = ⟨Tv, T*v⟩; thus v is in null T or null T*, and null T* = null T, hence v in null T. Assume the kth case is true and prove k+1 as for T².
    • 11. Finding a counterexample for this was not straightforward. Need to find a matrix such that the row sums of squared entries and the column sums of squared entries are equal, but T·T* ≠ T*·T.
  • 7B
  • decomposition of P(R)
  • Necessary condition for T to have UTM
  • T is self-adjoint ⟹ p(z) = (z − λ1)⋯(z − λm)
  • Real/Complex spectral theorem
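  • The spectral theorem numerically (my sketch): a real symmetric (self-adjoint) matrix diagonalizes in an orthonormal eigenbasis, via np.linalg.eigh:

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4))
S = (M + M.T) / 2                         # self-adjoint (real symmetric)
w, Q = np.linalg.eigh(S)                  # real eigenvalues, orthonormal columns
assert np.allclose(Q.T @ Q, np.eye(4))    # orthonormal eigenbasis
assert np.allclose(Q @ np.diag(w) @ Q.T, S)
```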
  • Ex 7B:
    • 3. T is normal ⟹ T is diagonalisable. Eigenvalue 0 implies v is in null T; eigenvalue 1 implies Tv = v. Thus T = P_U.
    • 4. If T⁸ = T⁹ and T is diagonalisable (due to normality), every eigenvalue is either 0 or 1 ⟹ T² = T ⟹ T is self-adjoint.
    • 5. T⁹ − T⁸ = 0 = T⁸(T − I), so range(T − I) ⊆ null(T⁸).
    • 6. Proving the non-trivial direction, T normal ⟹ a p with p(T) = T* exists: use the complex spectral theorem to establish that T has a diagonal matrix for some orthonormal basis, then take the individual eigenvalues on the matrix to find a polynomial p(z) such that p(λ_k) = conj(λ_k). We can use any polynomial interpolation method to find the required polynomial. Now establish that p(T) is indeed equal to T*: take the orthonormal eigenvectors as a basis, suppose v ∈ V, and prove both are equal.
    • 10, 11. Use the spectral theorem to establish that T has a diagonal matrix for some orthonormal basis of V. S² = T implies S is diagonalisable too, with eigenvalues equal to square roots of the eigenvalues of T for F = C (complex spectral theorem); every complex number has a square root, computable using De Moivre's theorem. For F = R, every real number has a cube root.
    • 14, 15. Every self-adjoint operator is normal. Eigenvectors corresponding to distinct eigenvalues of a normal operator are orthogonal. Thus, for distinct eigenvalues λ1, …, λm, the corresponding eigenvectors form a basis, and thus T is diagonalisable.
    • 16, 17. Two results needed. complex/real spectral theorem, and any set of diagonalisable operators commute under same orthogonal basis.
  • 7C
  • Positive operators.
  • Properties of positive operators.
  • unique positive square root of a positive operator
  • Ex 7C:
    • 19. Fix an operator on F² and equate its square to the identity operator. Now a² + b² = 1; this equation has infinitely many solutions when a, b are complex.
  • 7D
  • Isometry, Properties of isometries
  • Unitary, properties of unitaries
  • difference between matrix of isometry and unitary operators.
  • unitary operator for complex vector spaces.
  • unitary matrix
  • QR factorisation
  • positive definite matrix
  • Cholesky factorisation
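  • numpy sketches (mine) of both factorisations; Cholesky needs a positive definite input, e.g. A*A (plus I for safety):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))

Q, R = np.linalg.qr(A)             # QR: Q orthonormal columns, R upper triangular
assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(4))

P = A.T @ A + np.eye(4)            # positive definite
L = np.linalg.cholesky(P)          # Cholesky: P = L L^T, L lower triangular
assert np.allclose(L @ L.T, P)
```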
  • Questions:
    • How are eigenvalues defined?
    • What are some of the common examples of unitary, positive operators?
    • Where is QR and cholesky factorisation used?
    • Are there any alternative proofs for these factorisations?
    • What’s the analogy for self-adjoint/positive operators with C, and unitary operator with unit circle?
  • Ex 7D
    • 6. First prove that for any normal operator T with distinct eigenvalues, T = U*DU, where U is unitary and D is a diagonal operator (with the eigenvalues on the diagonal). Now write T1 = U1*DU1 and T2 = U2*DU2; substitute D = U2T2U2* into the first.
    • 7. For T to equal U*DU for any self-adjoint operator, all multiplicities of the eigenvalues have to be the same. So, in the example, just take different multiplicities for T1 and T2.
    • 8. ⟨S⁻¹v, S⁻¹v⟩ = ⟨Sv, Sv⟩ ⟹ ⟨(S⁻¹)*S⁻¹v, v⟩ − ⟨S*Sv, v⟩ = 0 ⟹ (S*S)⁻¹ = S*S ⟹ S*S = I.
    • 9. Forward direction using Cauchy-Schwarz. For the reverse direction, extend v to an orthonormal basis.
  • 7E
  • Dimensionality of T*T.
  • Singular values
  • Singular value decomposition for
  • Matrix version of SVD.
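  • Matrix SVD in numpy (my sketch); the singular values are the square roots of the eigenvalues of T*T (equivalently TT*):

```python
import numpy as np

rng = np.random.default_rng(7)
T = rng.standard_normal((3, 5))
U, s, Vh = np.linalg.svd(T, full_matrices=False)
assert np.allclose(U @ np.diag(s) @ Vh, T)
w = np.linalg.eigvalsh(T @ T.T)[::-1]          # eigenvalues of TT*, descending
assert np.allclose(np.sqrt(np.clip(w, 0, None)), s)
```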
  • Ex 7E
    • 2. Let Tv = Σ_k s_k⟨v, e_k⟩f_k be the SVD. Consider ⟨Tv, w⟩ = ⟨Σ_k s_k⟨v, e_k⟩f_k, w⟩ = ⟨v, Σ_k s_k⟨w, f_k⟩e_k⟩ = ⟨v, T*w⟩.
    • 3. Expand the SVD for ||Tv||. For the lower bound, substitute s_n for every s_k; for the upper bound, substitute s_1.
    • 4. Expand the SVD for Tv. Find the values of Te_k. Multiply by the inverse of T.
    • 12. a)
    • 13. a) Let T1 = S1T2S2. Then T1*T1 = S2*(T2*T2)S2. Since S2 is a unitary operator, it is invertible with its inverse equal to its adjoint; take any eigenvalue of T2*T2 with eigenvector x and set y = S2x. Doing this in the converse direction, we find T1, T2 have the same singular values. b) Let T1, T2 have the same singular values; use the matrix version of SVD to arrive at the result.
    • 14. Write ||Tv|| in terms of the s_k. Create the lower-bound inequality by taking all s_k equal to s_n, and arrive at the result.
    • 15. Do the above inequality for s_1 and arrive at the upper bound.
  • 7F
  • Norm of linear map: ||T||
  • Alternative form of ||T||
  • Linear map approximation with smaller dimensional map
  • Polar decomposition
  • Definitions: box, ellipsoid, parallelepiped
  • Volume via singular boxes
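  • Polar decomposition via SVD (my sketch): with T = UΣV*, take S = UV* (unitary) and √(T*T) = VΣV*:

```python
import numpy as np

rng = np.random.default_rng(8)
T = rng.standard_normal((4, 4))
U, s, Vh = np.linalg.svd(T)
S = U @ Vh                                # the unitary factor
P = Vh.T @ np.diag(s) @ Vh                # sqrt(T*T), a positive operator
assert np.allclose(S @ P, T)              # T = S sqrt(T*T)
assert np.allclose(S.T @ S, np.eye(4))    # S is unitary (orthogonal here)
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)   # P is positive
```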
  • Ex 7F
    1. Take the definition of ||T-S||. There exists v such that ||T-S|| = ||(T-S)v|| = ||Tv-Sv|| >= ||Tv|| - ||Sv|| >= ||T|| - ||S||.
    2. ||Tv|| = ||T|| ||v|| implies v is along the most-stretched direction. Thus, taking ||T|| = s1, T*Tv = s1^2 v = ||T||^2 v.
    3. a) The meaning of the question is: if T is arbitrarily close to I (an invertible operator), then T must be invertible as well. To prove T is invertible, it's enough to prove that T is injective. Take any v in null T, so Tv = 0. ||(I-T)v|| <= ||I-T|| ||v|| => ||v|| <= ||I-T|| ||v||. Since ||I-T|| < 1, this forces ||v|| = 0. b) Use the above relation, and the fact that PQ invertible implies P, Q are both invertible.
    4. When T is invertible, taking S = T proves that k > ||T-S|| > 0. When T is not invertible, take S with singular values (s1, ..., sk, c, c, ...), where dim range T = k, i.e. the singular values of T are (s1, ..., sk). We know that ||T - T_k|| = s_{k+1}, so ||S-T|| = c < k.
    • Show why this proof is invalid: ||(T-S)v|| <= ||T-S|| ||v||. Using the reverse triangle inequality, ||Tv|| - ||Sv|| <= ||T-S|| ||v|| < k ||v|| => ||Sv|| > ||T|| ||v|| - k||v||. Since ||Tv|| <= c ||v|| => ||S|| ||v|| > (c-k) ||v||, so ||S|| = s_min > c-k.
    5. Take two cases: a) when
    6. Use the identity |<v,u>|^2 = ||v||^2 ||u||^2 cos^2(t).
    7. a) Keep in mind to prove this for an arbitrary basis, not a single basis. For the upper bound on ||T||, take the max of ||Te_k|| and apply the norm's definition. For the lower bound, use Cauchy-Schwarz on ||Tv||, the max of which is ||T||.
    8. Use induction and 19. to prove this. Take the base case k = 2, and prove for an arbitrary k+1.
    9. For intuition: geometrically, the operator norm and the inner product on L(V, W) mean different things. Take any example with different singular values, say s1 = 1, s2 = ... = sn = 0, so ||T|| = 1,
    10. Use the polar decomposition, and set Su = (||u||/||x||) x; then S*x = (||x||/||u||) u. Prove S is unitary; then the result is as desired. Alternatively, you can also calculate T*, then T*T, and the square root of that.
    11. a) Trivial to show by subtraction and calculating the operator norm by definition. b) Calculate ||(T-E)v|| = ||((T-S) + (S-E))v|| >= ||(T-S)v|| - ||(S-E)v||, and bound below by ||T-S|| via the reverse triangle inequality on ||S-E||.
    12. Use the spectral theorem to show that sqrt(T*T) is diagonal with respect to some orthonormal basis; and since S is unitary, S is normal too. Now prove that both commute, because commuting normal operators have a common orthonormal basis.

Operators on Complex VS

  • START: 15-12
  • 8A
  • Generalised eigenvector
  • there exists a basis of generalised eigenvectors
  • Nilpotent operator
  • Ex 8A
    1. For v, Tv, …, T^(m−1)v to be LI: take any a1, …, am such that the linear combination is zero, apply T repeatedly to the combination, and prove each individual a_i is zero.
    2. If null T and range T form a direct sum, then null T ∩ range T = {0}. To prove null T² ⊆ null T, take any v such that T²v = 0 but Tv = w ≠ 0; then Tw = 0, so w is in both range T and null T, reaching a contradiction. Hence null T² = null T. The other direction can be proven similarly, using the fact that null T doesn't grow.
    3. a) T diagonalisable implies (T−λI)ⁿ is diagonal, with each diagonal value raised to the power n; (T−λI)ⁿv = 0 thus forces each relevant diagonal value λₘ−λ to be 0. Hence every generalised eigenvector is an eigenvector of T. b) F = C implies the generalised eigenvectors form a basis of V; by a) this is a basis of eigenvectors of T, which implies T is diagonalisable.
    4. Suppose T^m = 0. Prove that m ≤ 1 + dim range T. Use the fact that dim null T^k is nondecreasing, increasing by at least 1 at each power until it stabilises.
  • 8B
  • Generalised eigenspaces
  • Generalised eigenspace decomposition
  • multiplicity
  • characteristic polynomial
  • Cayley-Hamilton theorem
  • Multiplicity of an eigenvalue equals the number of times it appears on the diagonal
  • Block diagonal matrix
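  • A numeric Cayley-Hamilton check (mine): the characteristic polynomial annihilates the operator; np.poly gives its coefficients:

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((4, 4))
coeffs = np.poly(A)                  # characteristic polynomial, leading coeff 1
q = np.zeros_like(A)                 # evaluate the polynomial at A (Horner)
for c in coeffs:
    q = q @ A + c * np.eye(4)
assert np.allclose(q, 0, atol=1e-8)  # q(A) = 0
```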
  • Ex 8B
    1. Use the matrix representation as a UTM. Since S is invertible, no eigenvalue is 0, and the eigenvalues of S⁻¹ are the inverses of those of S; hence the eigenvalues remain the same for both T and S⁻¹TS.
    2. Let m be the smallest positive exponent of (z−λ) in the minimal polynomial. Then (T−λI)ᵐv = 0 for all the generalised eigenvectors of λ. Thus k in the generalised eigenspace can be taken to be m, (b) ⟹ (c), and (c) ⟹ (d) holds using the nesting null (T−λI)^k ⊆ null (T−λI)^(k+1).
    3. Since Vₖ is invariant under T, to show that p1⋯pm = q we only need to show that (p1⋯pm)v = 0 for all v in V. Since V_1 ⊕ ⋯ ⊕ V_m is a direct sum, it suffices to show (p1⋯pm)|V_k = 0; the p_k commute, so we can bring p_k to the front as the first polynomial acting on v.
    4. a) (T−λₖI)^k(u+iv) = 0. Since
  • 8C
  • Prove if F=C and T ∈ ℒ(V) is invertible, then T has kth root for every positive integer k.
  • Jordan basis.
  • Every nilpotent operator has a Jordan basis.
  • Every operator when F = C has a Jordan basis.
  • 8D
  • Trace. tr(TS) = tr(ST)
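  • Quick check (mine) that tr(TS) = tr(ST) even when TS ≠ ST:

```python
import numpy as np

rng = np.random.default_rng(10)
T, S = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
assert not np.allclose(T @ S, S @ T)                  # they don't commute...
assert np.isclose(np.trace(T @ S), np.trace(S @ T))   # ...but traces agree
```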

Multilinear Algebra and Determinants

  • START: 22-12
  • 9A
  • Bilinear form (bilinear functional)
  • examples of bilinear form
  • Bilinear form and operator: on a real inner product space, each bilinear form is β(u, v) = ⟨u, Tv⟩ for a unique operator T.
  • Change of basis
  • Symmetric bilinear form
  • Symmetric and diagonal matrix of bilinear form
  • Prove that when F = R, there exists a diagonal matrix with respect to some orthonormal basis of V for every symmetric bilinear form.
  • alternating bilinear form
  • quadratic form.
  • Ex 9A
    1. Counterexample: consider the symmetric bilinear form β((x1,x2),(y1,y2)) = x1y1 − x2y2. u = (1, 1) and w = (1, −1) both satisfy β(u, u) = β(w, w) = 0, but β(u+w, u+w) ≠ 0, so u + w is not in the set.
    2. Calculate the dimension of the symmetric n x n matrices, and then (dim V)² − (dim sym).
  • 9B
  • Multilinear form; the set of m-linear forms on V is denoted V^(m).
  • alternating multilinear form
  • permutation, sign of permutation
  • formula for dim-V linear alternating forms
  • ==the space of alternating (dim V)-linear forms has dimension 1==
  • 9C
  • determinant of an operator, determinant of matrix
  • determinant as alternating multilinear form
  • formula for determinant of matrix
  • determinant of operator = determinant of operator’s matrix
  • characteristic polynomial when F=C/R.
  • Hadamard’s inequality
  • Vandermonde matrix
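  • A Vandermonde determinant check (my sketch): the determinant equals the product of pairwise differences:

```python
import numpy as np
from itertools import combinations
from math import prod

b = [1.0, 2.0, 4.0, 7.0]
V = np.vander(b, increasing=True)      # rows (1, b_i, b_i^2, b_i^3)
expected = prod(bj - bi for bi, bj in combinations(b, 2))
assert np.isclose(np.linalg.det(V), expected)
```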
  • Ex 9C
    1. Start with the fact that the eigenvalues of T⁻¹ are the inverses of the eigenvalues of T. Take p(z), substitute 1/z, then multiply both sides by zⁿ: you get a polynomial that is zero at exactly the eigenvalues of T⁻¹.
    2. If V is a real vector space, then the characteristic polynomial of T must be of the form q(z) = (z−λ)(z²+bz+c)⋯. If we perform this iterative factorisation and remove the irreducible quadratics (whose roots are complex and come in pairs), then, the degree being odd, we are left with a real root, which is an eigenvalue of T.
    3. Consider the characteristic polynomial z ↦ det(zI − T/S), and represent it as zⁿ − tr(T/S)z^(n−1) − … − (−1)ⁿ det(T/S).
    4. Can we use the reasoning that, in the QR decomposition of A, this is only possible when i ≠ j?
    5. Show that the matrix elementary operations (row addition, scalar row multiplication, row interchange) are given by elementary matrices. Now prove that every matrix can be written as a product of elementary matrices and that the given map satisfies the defining conditions on these; hence the map must be the determinant.
  • 9D
  • Bilinear functional; dimension of the space of bilinear functionals.
  • tensor product: V ⊗ W; element of a tensor product: v ⊗ w.
  • basis of tensor product
  • Bilinear map
  • Question: connect and compare linear maps vs linear functionals and their bilinear variants.
  • Converting bilinear maps to linear maps
  • Tensor product on Inner Product spaces
  • m-linear maps
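  • Tensor products concretely (my sketch): v⊗w as the outer product, with the e_i⊗f_j forming a basis, so dim(V⊗W) = (dim V)(dim W):

```python
import numpy as np

v, w = np.array([1., 2.]), np.array([3., 4., 5.])
vw = np.outer(v, w)                  # v ⊗ w as a 2x3 array
basis = [np.outer(e, f) for e in np.eye(2) for f in np.eye(3)]
assert len(basis) == 2 * 3           # dim(V ⊗ W) = dim V * dim W
coords = [np.sum(vw * B) for B in basis]
assert np.allclose(sum(c * B for c, B in zip(coords, basis)), vw)
```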
  • Ex 9D
    1. Take (e1, e2, e3) to be the standard basis of R³. Take the tuples (v_i, w_i) = (e1−e2, e1+e3), (e1+e2, e3), (e1, −e1−2e3).
    2. Represent it in matrix form and find that, for each element of the matrix, the corresponding value must be zero; thus each element of w is zero.
    3. Elements of the form v⊗w are rank-1 tensors. Show that some rank-2 tensors don't lie in the set: for example, v1⊗w1 + v2⊗w2 lies in V⊗W because the individual summands do, but no element of the form v⊗w equals it.
    4. Can be done similarly to the above, by showing that a sum of rank-2 tensors won't equal a rank-3 tensor.
    • 9, 10. Prove that a unique bilinear map exists with T′(v, w) = Sv⊗Tw. Now use the universal property of tensor products to prove that the corresponding linear map exists.

Examples and Computational Methods

  • T in R^2 such that dim null T^2 > dim null T.
  • operator for real vector space with no eigenvalues. Think about an irreducible characteristic polynomial and derive the operator from that.
  • T \in R^3 such that T is normal but not self-adjoint.
  • What are the main results that work for infinite dimensional VS as well?
  • Given T, find eigenvalues, eigenvectors. minimal polynomial. characteristic polynomial. generalised eigenvectors.
  • nilpotent operator.
  • Given T, find a Jordan basis.
  • T in C^3 such that minimal polynomial and characteristic polynomial differ by 2 degrees.
  • Examples of Bilinear form.