Interesting Things I Learned About Linear Algebra
Like a lot of people, I took linear algebra courses without really understanding why they mattered. Revisiting it later with better teachers and better examples made it feel less like symbolic bookkeeping and more like a language for transforming problems.
Motivation
Before diving into the actual material, it helps to find the core drive behind learning linear algebra. Many of us took linear algebra courses in college without ever understanding how it was useful or why we needed to learn it in the first place.
Most likely, what remains has been reduced to numerical operations between two or three vectors: determinant tricks, dot products, and symbolic manipulation. That is pure sadness.
After watching Terence Tao's master class, I had a small epiphany: mathematics should guide how you transform your way of thinking and how you construct a narrative for a problem.
- Magic squares can be reframed as tic-tac-toe.
- Counterfeit coin problems connect to compression and sensing.
- Sampling, optimization, and geometry all become more intuitive when the representation changes.
Resources
- Gilbert Strang remains the default recommendation.
- The Essence of Linear Algebra series is still magical for geometric intuition.
- Zico Kolter’s linear algebra review is one of the most concise written references, covering the fundamentals.
- Matrix calculus notes were especially useful for reconnecting linear algebra to ML optimization: Jacobian, gradient, Hessian, and quadratic forms.
- LU, QR, SVD is still a good visual pointer.
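One identity from those matrix calculus notes is easy to sanity-check in code: for the quadratic form $f(x) = x^T A x$, the gradient is $(A + A^T)x$, which reduces to $2Ax$ when $A$ is symmetric. A minimal numerical check (the matrix and vector here are arbitrary illustrations):

```python
import numpy as np

# Matrix-calculus identity: for f(x) = x^T A x, the gradient is (A + A^T) x.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

grad_analytic = (A + A.T) @ x

# Central finite-difference check of the same gradient, coordinate by coordinate.
eps = 1e-6
grad_numeric = np.array([
    ((x + eps * e) @ A @ (x + eps * e) - (x - eps * e) @ A @ (x - eps * e)) / (2 * eps)
    for e in np.eye(4)
])

assert np.allclose(grad_analytic, grad_numeric, atol=1e-5)
```

Central differences are exact (up to rounding) for quadratics, so the two gradients agree tightly.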
Interesting Facts
- Projection matrices satisfy $$P^2=P$$ and, in the orthogonal case, $$P^T=P$$. Projection matters because $Ax=b$ may not have a solution, so you project onto the column space.
- QR decomposition is where orthonormality becomes operational rather than abstract: $$A = QR,\; Q^{-1}=Q^T,\; Rx = Q^Tb.$$
- Determinant facts become easier to remember once you view them as structure-preserving transformations.
- Eigenvectors matter because multiplication by $A$ preserves their direction while scaling their magnitude. As long as some eigenvalue has magnitude greater than one, powers of $A$ can blow up; if every eigenvalue has magnitude less than one, powers shrink toward zero.
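The projection fact above is easy to verify directly: build $P = A(A^TA)^{-1}A^T$ for a tall matrix and check both properties, plus the connection to least squares. The matrix and right-hand side here are made-up examples:

```python
import numpy as np

# Hypothetical tall system: Ax = b generally has no exact solution.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])

# Orthogonal projection onto the column space of A: P = A (A^T A)^{-1} A^T
P = A @ np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(P @ P, P)   # idempotent: P^2 = P
assert np.allclose(P, P.T)     # symmetric in the orthogonal case

# Projecting b lands on the closest point in col(A): exactly A @ x_ls.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(P @ b, A @ x_ls)
```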
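The QR pipeline above ($A = QR$, then solve $Rx = Q^Tb$) can be sketched with NumPy's reduced QR; for a tall $A$, $Q$ has orthonormal columns so $Q^TQ = I$ even though $Q$ is not square. The numbers are arbitrary:

```python
import numpy as np

# Hypothetical least-squares problem solved via QR: A = QR, then Rx = Q^T b.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

Q, R = np.linalg.qr(A)                  # reduced QR: Q is 3x2, R is 2x2
assert np.allclose(Q.T @ Q, np.eye(2))  # orthonormal columns

# Solve the small triangular system R x = Q^T b.
x = np.linalg.solve(R, Q.T @ b)

# Agrees with the reference least-squares solver.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref)
```

This is why orthonormality is "operational": $Q^T$ replaces an expensive inverse, leaving only a small triangular solve.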
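The eigenvalue claim about powers can be seen with two toy diagonal matrices (chosen here purely for illustration): one with an eigenvalue above 1 in magnitude, one with all eigenvalues below 1.

```python
import numpy as np

# Powers of A are governed by eigenvalue magnitudes.
grow = np.array([[1.1, 0.0],
                 [0.0, 0.5]])    # one eigenvalue > 1 in magnitude
shrink = np.array([[0.9, 0.0],
                   [0.0, 0.5]])  # all eigenvalues < 1 in magnitude

v = np.array([1.0, 1.0])
g = np.linalg.matrix_power(grow, 50) @ v
s = np.linalg.matrix_power(shrink, 50) @ v

assert np.linalg.norm(g) > np.linalg.norm(v)  # blows up along the 1.1 eigenvector
assert np.linalg.norm(s) < 1e-2               # shrinks toward zero
```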
Fourier
- FFT in action is one of the clearest demonstrations that linear algebra and signal processing are deeply connected.
- Derivation of FFT, plus STFT / DFT / FFT lecture, are still good references.
- Using FFT to process audio with librosa (`stft` and spectrogram transforms) makes the abstractions concrete. The original note also pointed to this librosa notebook and a step-function Fourier example.
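The clearest way to see that the FFT *is* linear algebra: the DFT is multiplication by the matrix $F_{jk} = \omega^{jk}$ with $\omega = e^{-2\pi i/n}$, and the FFT just computes that matrix-vector product in $O(n\log n)$. A minimal sketch, using a step function as the input signal (echoing the step-function example above):

```python
import numpy as np

# The DFT as a matrix: F[j, k] = w^(j*k) with w = exp(-2*pi*i/n).
n = 8
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * j * k / n)

# A step function: ones then zeros.
x = np.array([1.0] * 4 + [0.0] * 4)

# The dense matrix product and the FFT give the same transform.
assert np.allclose(F @ x, np.fft.fft(x))
```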
Translating Math to Code
- Tensor puzzles are a good way to pressure-test whether linear algebra intuition survives contact with PyTorch.
- Even interview-style matrix problems can often be reframed cleanly with linear algebra, such as image overlap.
- Hardware acceleration is also a linear algebra story: breaking down matrix multiplication cost and understanding how to speed it up.
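The cost-breakdown point can be made concrete with a naive triple-loop multiply that counts its own multiply-adds, then compares against the optimized library routine. This is an illustrative sketch, not a benchmark:

```python
import numpy as np

def naive_matmul(A, B):
    """Triple-loop matrix multiply that also counts floating-point operations."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2
    C = np.zeros((n, p))
    flops = 0
    for i in range(n):
        for jj in range(p):
            acc = 0.0
            for kk in range(m):
                acc += A[i, kk] * B[kk, jj]
                flops += 2  # one multiply + one add
            C[i, jj] = acc
    return C, flops

A = np.random.rand(16, 16)
B = np.random.rand(16, 16)
C, flops = naive_matmul(A, B)

assert np.allclose(C, A @ B)   # same answer as the optimized BLAS path
assert flops == 2 * 16 ** 3    # the O(n^3) cost that tiling/hardware attack
```

Everything hardware acceleration does (tiling for cache, vectorization, tensor cores) is about performing those same $2n^3$ operations with far better memory movement and parallelism.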