Part 2: Linear Algebra Fundamentals

Part of the Mathematics for Programming 101 Series

Why Linear Algebra Clicked for Me

I was implementing image preprocessing for a computer vision model. The code was a mess: nested loops iterating over pixels, converting RGB to grayscale, normalizing values, rotating images.

Hundreds of lines. Slow execution. Hard to maintain.

Then I learned about matrix operations:

# Before: Nested loops over every pixel
# (r, g, b are 2D arrays of the red, green, and blue channel values)
for i in range(height):
    for j in range(width):
        gray[i][j] = 0.299*r[i][j] + 0.587*g[i][j] + 0.114*b[i][j]

# After: One line with matrix multiplication
# (image is a height x width x 3 NumPy array)
import numpy as np
gray = image @ np.array([0.299, 0.587, 0.114])

10x faster. One line instead of hundreds. Easier to understand.

That's when I realized: linear algebra isn't abstract theory; it's the language of data transformation.

What is Linear Algebra?

Linear algebra is the mathematics of vectors and matrices. In programming terms:

  • Vectors = Arrays of numbers (like lists)

  • Matrices = 2D arrays (like nested lists)

  • Operations = Transforming data efficiently

Every time you:

  • Process images or video

  • Train machine learning models

  • Transform 3D graphics

  • Manipulate datasets

  • Solve systems of equations

You're using linear algebra.

Vectors: The Foundation

Understanding Vectors

A vector is an ordered collection of numbers. Think of it as:

  • A point in space

  • A direction with magnitude

  • A list of features
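
For example, in NumPy the same kind of array can play all three roles. A minimal sketch (the values and names below, like house, are made up for illustration):

import numpy as np

# A point in 3D space: (x, y, z)
point = np.array([2.0, 1.0, 3.0])

# A direction with magnitude: velocity along each axis
velocity = np.array([5.0, 0.0, -1.0])
speed = np.linalg.norm(velocity)   # magnitude of the vector

# A list of features: a house as [square_meters, bedrooms, age]
house = np.array([120.0, 3.0, 15.0])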

Vector Operations
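
The core operations are element-wise addition, scaling, the dot product, and the norm. A minimal NumPy sketch with made-up vectors a and b:

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

a + b              # element-wise addition -> [5. 7. 9.]
2 * a              # scaling -> [2. 4. 6.]
a @ b              # dot product: 1*4 + 2*5 + 3*6 = 32.0
np.linalg.norm(a)  # length (magnitude): sqrt(1 + 4 + 9) ≈ 3.74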

Real Use Case: Cosine Similarity

Finding similar documents or products:
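
Cosine similarity compares the direction of two vectors while ignoring their length: values near 1 mean the vectors point nearly the same way. A minimal sketch, assuming each item is already represented as a feature vector (doc_a and doc_b are made up):

import numpy as np

def cosine_similarity(u, v):
    # cos(theta) = (u . v) / (|u| * |v|)
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

doc_a = np.array([3.0, 0.0, 1.0, 2.0])   # e.g. word counts for document A
doc_b = np.array([2.0, 1.0, 0.0, 3.0])   # e.g. word counts for document B

print(cosine_similarity(doc_a, doc_b))   # values near 1.0 mean "very similar"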

Used in:

  • Recommendation systems

  • Document search

  • Image similarity

  • Face recognition

Matrices: Data Transformations

Understanding Matrices

A matrix is a 2D array of numbers. Think of it as:

  • A transformation function

  • A dataset (rows = samples, columns = features)

  • A collection of vectors
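
For example, one 2D array can be read as a small dataset or as a stack of row vectors. A minimal NumPy sketch (the feature values are made up):

import numpy as np

# Rows = samples, columns = features
# e.g. 3 houses described by [square_meters, bedrooms, age]
data = np.array([
    [120.0, 3.0, 15.0],
    [ 80.0, 2.0,  5.0],
    [200.0, 5.0, 30.0],
])

data.shape    # (3, 3): 3 samples, 3 features
data[0]       # first sample as a vector
data[:, 0]    # the square_meters column across all samples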

Matrix Operations
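
The workhorses are matrix-vector and matrix-matrix multiplication, plus transpose and inverse. A minimal NumPy sketch with made-up 2x2 matrices:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
v = np.array([1.0, 1.0])

A @ v              # matrix-vector product -> [3. 7.]
A @ B              # matrix-matrix product (compose two transformations)
A.T                # transpose: swap rows and columns
np.linalg.inv(A)   # inverse: undoes the transformation (when it exists)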

Matrix Multiplication Intuition

Matrix multiplication transforms vectors:
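
A concrete way to see this: a 2x2 rotation matrix turns every 2D vector by a fixed angle, and multiplying by the matrix applies that transformation. A minimal sketch:

import numpy as np

theta = np.pi / 2                        # rotate by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])                 # points along the x-axis
R @ v                                    # -> approximately [0. 1.]: now points along the y-axis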

Real-World Application: Image Processing

Grayscale Conversion
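
A minimal sketch, assuming image is already loaded as a height x width x 3 RGB array (loading it, e.g. with Pillow or OpenCV, is left out; the random array below is just a stand-in):

import numpy as np

# Standard luminance weights for the R, G, B channels
weights = np.array([0.299, 0.587, 0.114])

image = np.random.rand(480, 640, 3)   # stand-in for a real image
gray = image @ weights                # (480, 640): one brightness value per pixel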

Image Rotation
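
Rotating an image boils down to applying a 2x2 rotation matrix to every pixel coordinate. Below is a minimal sketch of the coordinate mapping only; production code would use a library routine such as scipy.ndimage.rotate, which also handles interpolation and resampling:

import numpy as np

theta = np.deg2rad(30)                    # rotate coordinates by 30 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Pixel coordinates as columns: one (x, y) pair per pixel
coords = np.array([[10.0, 20.0, 35.0],
                   [ 0.0,  5.0, 12.0]])   # shape (2, n_pixels)
rotated = R @ coords                      # every coordinate rotated in one multiplication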

Machine Learning Applications

Linear Regression
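
Linear regression is itself a matrix problem: find weights w so that X @ w is as close to y as possible. A minimal sketch using NumPy's least-squares solver on made-up data:

import numpy as np

# X: rows = samples, columns = features; y: target values
X = np.array([[1.0,  50.0],
              [1.0,  80.0],
              [1.0, 120.0]])     # first column of ones models the intercept
y = np.array([150.0, 230.0, 330.0])

# Solve min ||X @ w - y||^2
w, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
predictions = X @ w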

Neural Network Forward Pass
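
A forward pass is repeated matrix multiplication with a non-linearity in between. A minimal sketch of one hidden layer; the layer sizes and random weights are made up for illustration:

import numpy as np

def relu(x):
    return np.maximum(0, x)

x = np.random.rand(4, 3)                      # batch of 4 samples, 3 features each

W1, b1 = np.random.rand(3, 5), np.zeros(5)    # 3 inputs -> 5 hidden units
W2, b2 = np.random.rand(5, 2), np.zeros(2)    # 5 hidden -> 2 outputs

hidden = relu(x @ W1 + b1)    # (4, 5)
output = hidden @ W2 + b2     # (4, 2): one 2-value prediction per sample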

Eigenvalues and Eigenvectors

The Concept

An eigenvector of a matrix is a vector that only gets scaled (not rotated) when the matrix is applied to it.
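
In symbols: A @ v = lambda * v, where the eigenvalue lambda is the scaling factor. A minimal check in NumPy with a made-up matrix:

import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of eigenvectors are the v's

v = eigenvectors[:, 0]
lam = eigenvalues[0]
np.allclose(A @ v, lam * v)   # True: A only scales v, its direction does not change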

Real Use Case: Principal Component Analysis (PCA)

Dimensionality reduction for machine learning:
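
Here is a minimal from-scratch sketch using the eigenvectors of the covariance matrix; in practice you would likely reach for sklearn.decomposition.PCA, and the data below is a random stand-in:

import numpy as np

X = np.random.rand(100, 10)               # 100 samples, 10 features

# 1. Centre the data
X_centered = X - X.mean(axis=0)

# 2. Covariance matrix and its eigendecomposition
cov = np.cov(X_centered, rowvar=False)    # (10, 10)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# 3. Keep the 2 directions with the largest eigenvalues (the principal components)
top2 = eigenvectors[:, np.argsort(eigenvalues)[::-1][:2]]

# 4. Project: 10 features -> 2 features
X_reduced = X_centered @ top2             # (100, 2)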

Why this matters:

  • Reduce features for faster ML training

  • Visualize high-dimensional data

  • Remove noise and redundancy

  • Compress data efficiently

Practical Tips

1. Think in Transformations

Don't memorize matrix multiplication rules. Think: "This matrix transforms my data from space A to space B."
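
For example, the shape of a matrix tells you the "from" and "to" spaces; a 2x3 matrix maps 3D data to 2D data (a made-up projection):

import numpy as np

P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])     # drop the z coordinate: 3D -> 2D

point_3d = np.array([4.0, 5.0, 6.0])
point_2d = P @ point_3d             # -> [4. 5.]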

2. Use Broadcasting

NumPy broadcasting avoids explicit loops:
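
A minimal sketch: standardising every column of a dataset without writing a loop (the data is a random stand-in):

import numpy as np

data = np.random.rand(1000, 3)    # 1000 samples, 3 features

# Broadcasting: the (3,) mean and std vectors are applied to all 1000 rows at once
normalized = (data - data.mean(axis=0)) / data.std(axis=0)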

3. Vectorize Operations

Replace loops with matrix operations:
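
For example, computing the distance from many points to one target, loop version versus vectorized version (made-up data):

import numpy as np

points = np.random.rand(10000, 2)
target = np.array([0.5, 0.5])

# Loop version
distances_loop = np.array([np.linalg.norm(p - target) for p in points])

# Vectorized version: broadcasting plus a norm along axis 1
distances_vec = np.linalg.norm(points - target, axis=1)

np.allclose(distances_loop, distances_vec)   # True, and the vectorized version is far faster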

When You Need Linear Algebra

You'll use these concepts when:

  1. Machine Learning: Nearly every ML algorithm is built on matrix operations

  2. Computer Graphics: Transformations, projections, rendering

  3. Data Processing: Feature engineering, normalization, PCA

  4. Computer Vision: Image filters, transformations, CNNs

  5. Natural Language Processing: Word embeddings, transformers

  6. Scientific Computing: Simulations, numerical methods

  7. Game Development: Physics, transformations, camera systems

Key Takeaways

  • Vectors represent data points, directions, or features

  • Matrices represent transformations or datasets

  • Matrix multiplication transforms data efficiently

  • Eigendecomposition reveals the fundamental directions of transformation

  • Vectorization replaces slow loops with fast matrix operations

  • NumPy makes linear algebra practical and fast in Python

What's Next

In the next article, we'll explore calculus and optimization: the mathematics behind training machine learning models, understanding gradients, and building neural networks from scratch.

You'll learn:

  • How derivatives power gradient descent

  • Building backpropagation from scratch

  • Optimization algorithms explained

  • Real debugging with calculus

Continue to Part 3: Calculus and Optimization →

