Dynamic Programming

📖 Introduction

Dynamic programming (DP) was the concept that took me the longest to truly understand. I remember staring at the Fibonacci memoization solution thinking "okay, I see why this works" but not really understanding when to apply it elsewhere.

The breakthrough came when I realized DP is just smart recursion—breaking problems into overlapping subproblems and reusing solutions. Once I started recognizing the patterns, problems that seemed impossible became systematic.


🎯 What Is Dynamic Programming?

DP solves problems by:

  1. Breaking them into overlapping subproblems

  2. Storing results to avoid recomputation

  3. Building solutions from smaller optimal substructures

When to Use DP


Signs a problem is DP:

  • Asks for optimal (min/max), count of ways, or possibility (can/cannot)

  • Future decisions depend on earlier decisions

  • Same subproblem computed multiple times


🔝 Top-Down (Memoization)

Start with the original problem, recurse down, cache results.

Visualizing Memoization


Memoization eliminates duplicate calls—fib(3) computed once instead of twice.
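
To make that concrete, here's a minimal Python sketch of memoized Fibonacci (the memo-dictionary style is just one way to write it):

```python
def fib(n, memo=None):
    """Top-down Fibonacci: recurse from n, caching each result."""
    if memo is None:
        memo = {}
    if n < 2:                 # base cases: fib(0) = 0, fib(1) = 1
        return n
    if n not in memo:         # each subproblem is computed only once
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

Without the cache this is the classic exponential-time recursion; with it, each value of n is computed once, giving linear time.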


⬆️ Bottom-Up (Tabulation)

Start with base cases, build up to the solution iteratively.
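
The same Fibonacci computed bottom-up might look like this rough Python sketch (table-based, no recursion):

```python
def fib(n):
    """Bottom-up Fibonacci: fill a table from the base cases upward."""
    if n < 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]   # each entry depends only on two smaller ones
    return dp[n]
```

Since each entry depends only on the previous two, the table can be collapsed to two variables, which is the space optimization step in the framework below.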

Top-Down vs Bottom-Up

| Aspect         | Top-Down              | Bottom-Up               |
|----------------|-----------------------|-------------------------|
| Implementation | Recursive + cache     | Iterative + table       |
| Order          | Natural problem order | Must determine order    |
| Subproblems    | Only needed ones      | All subproblems         |
| Stack          | Uses call stack       | No stack overflow       |
| Readability    | Often more intuitive  | Can be harder to derive |


📊 DP Framework

Step-by-Step Approach

  1. Define State: What variables describe a subproblem?

  2. Define Transition: How to compute state from smaller states?

  3. Define Base Cases: What are the smallest subproblems?

  4. Define Answer: How to get final answer from states?

  5. Optimize Space: Can we reduce memory usage?
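
As a quick illustration, here's how those five steps might map onto Climbing Stairs (one possible formulation; the code is a sketch, not the only way to write it):

```python
def climb_stairs(n):
    # 1. State: dp[i] = number of distinct ways to reach step i
    # 2. Transition: dp[i] = dp[i-1] + dp[i-2] (last move was a 1-step or a 2-step)
    # 3. Base cases: dp[0] = 1, dp[1] = 1
    # 4. Answer: dp[n]
    # 5. Space optimization: keep only the last two states
    prev2, prev1 = 1, 1
    for _ in range(2, n + 1):
        prev2, prev1 = prev1, prev1 + prev2
    return prev1
```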


🎯 Classic DP Patterns

Pattern 1: Linear DP

Climbing Stairs

House Robber

Maximum Subarray (Kadane's Algorithm)
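
To show the shape of linear DP, here's a rough Python sketch of House Robber, where each position's decision depends only on the previous one or two states (variable names are mine):

```python
def rob(nums):
    """House Robber: maximize loot without robbing two adjacent houses."""
    take, skip = 0, 0            # best totals if the current house is taken / skipped
    for value in nums:
        take, skip = skip + value, max(skip, take)
    return max(take, skip)
```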

Pattern 2: 2D Grid DP

Unique Paths

Minimum Path Sum
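
For grid DP, one possible sketch of Unique Paths using a rolling one-row table (a common space optimization over the full 2D grid):

```python
def unique_paths(m, n):
    """Count right/down paths from the top-left to the bottom-right of an m x n grid."""
    dp = [1] * n                   # first row: exactly one way to reach each cell
    for _ in range(1, m):
        for j in range(1, n):
            dp[j] += dp[j - 1]     # ways from above (old dp[j]) + from the left (dp[j-1])
    return dp[-1]
```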

Pattern 3: Subsequence DP

Longest Increasing Subsequence (LIS)

Longest Common Subsequence (LCS)
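
A sketch of Longest Common Subsequence, the classic 2D subsequence DP, where dp[i][j] is the LCS length of the first i characters of a and the first j characters of b:

```python
def lcs(a, b):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1                 # extend the match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])      # drop one character
    return dp[m][n]
```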

Pattern 4: Knapsack Problems

0/1 Knapsack

Unbounded Knapsack (Coin Change)
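
Coin Change is the standard unbounded-knapsack example; a minimal sketch:

```python
def coin_change(coins, amount):
    """Fewest coins summing to amount, or -1 if impossible."""
    INF = float("inf")
    dp = [0] + [INF] * amount          # dp[x] = fewest coins that sum to x
    for coin in coins:                 # unbounded: each coin may be reused
        for x in range(coin, amount + 1):
            dp[x] = min(dp[x], dp[x - coin] + 1)
    return dp[amount] if dp[amount] != INF else -1
```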

Pattern 5: String DP

Edit Distance

Palindrome Substrings
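
Edit Distance is the canonical string DP; here's one possible sketch of the 2D table version:

```python
def edit_distance(a, b):
    """Minimum insert/delete/substitute operations to turn a into b."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                   # delete all i characters of a's prefix
    for j in range(n + 1):
        dp[0][j] = j                   # insert all j characters of b's prefix
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],        # delete
                                   dp[i][j - 1],        # insert
                                   dp[i - 1][j - 1])    # substitute
    return dp[m][n]
```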

Pattern 6: Interval DP

Burst Balloons
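
Interval DP is easier to see in code than in words; here's a rough sketch of Burst Balloons, where dp[i][j] covers the open interval (i, j) and k is the last balloon burst inside it:

```python
def max_coins(nums):
    vals = [1] + nums + [1]                  # virtual 1-valued balloons at both ends
    n = len(vals)
    dp = [[0] * n for _ in range(n)]         # dp[i][j] = best score for open interval (i, j)
    for length in range(2, n):               # grow intervals from small to large
        for i in range(n - length):
            j = i + length
            for k in range(i + 1, j):        # k = last balloon burst inside (i, j)
                dp[i][j] = max(dp[i][j],
                               dp[i][k] + vals[i] * vals[k] * vals[j] + dp[k][j])
    return dp[0][n - 1]
```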


🧠 DP Problem Recognition

Key Phrases

| Phrase             | Likely DP Type   |
|--------------------|------------------|
| "Minimum/maximum"  | Optimization DP  |
| "Count ways"       | Counting DP      |
| "Is it possible"   | Boolean DP       |
| "Longest/shortest" | Sequence DP      |
| "Partition into"   | Knapsack variant |


📝 Practice Exercises

Exercise 1: Decode Ways

Solution
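
One possible approach (a sketch, not the only formulation): this is linear DP much like Climbing Stairs, where each position extends the count via a single digit or a two-digit pair.

```python
def num_decodings(s):
    """Count decodings of a digit string where '1'-'26' map to 'A'-'Z'."""
    if not s or s[0] == "0":
        return 0
    prev2, prev1 = 1, 1                       # ways for the prefixes two and one characters back
    for i in range(1, len(s)):
        cur = 0
        if s[i] != "0":                       # take s[i] as a single digit
            cur += prev1
        if "10" <= s[i - 1:i + 1] <= "26":    # take s[i-1:i+1] as a two-digit letter
            cur += prev2
        prev2, prev1 = prev1, cur
    return prev1
```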

Exercise 2: Word Break

Solution
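
One possible sketch: boolean DP over prefixes, where dp[i] records whether s[:i] can be segmented.

```python
def word_break(s, word_dict):
    """Can s be segmented into a sequence of words from word_dict?"""
    words = set(word_dict)
    dp = [False] * (len(s) + 1)
    dp[0] = True                              # the empty prefix is trivially segmentable
    for i in range(1, len(s) + 1):
        for j in range(i):
            if dp[j] and s[j:i] in words:     # split after j, last word is s[j:i]
                dp[i] = True
                break
    return dp[-1]
```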

Exercise 3: Longest Palindromic Substring

Solution
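
One possible sketch using a 2D boolean table (expand-around-center is another common approach):

```python
def longest_palindrome(s):
    """Longest palindromic substring via is_pal[i][j] = s[i..j] is a palindrome."""
    n = len(s)
    if n == 0:
        return ""
    is_pal = [[False] * n for _ in range(n)]
    start, best = 0, 1
    for i in range(n - 1, -1, -1):            # i descends so is_pal[i+1][j-1] is ready
        is_pal[i][i] = True
        for j in range(i + 1, n):
            if s[i] == s[j] and (j - i < 2 or is_pal[i + 1][j - 1]):
                is_pal[i][j] = True
                if j - i + 1 > best:
                    start, best = i, j - i + 1
    return s[start:start + best]
```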

🔑 Key Takeaways

  1. Identify overlapping subproblems and optimal substructure

  2. Top-down is often more intuitive; bottom-up avoids stack overflow

  3. State definition is crucial—ask "what do I need to know?"

  4. Space optimization is usually possible by keeping only needed states

  5. Practice patterns: linear, grid, subsequence, knapsack, string, interval


🚀 What's Next?

In the next article, we'll explore greedy algorithms and backtracking:

  • Greedy choice property

  • Interval scheduling

  • Backtracking template

  • Pruning strategies

Next: Article 13 - Greedy & Backtracking


📚 References

  • CLRS, "Introduction to Algorithms", Chapter 15: Dynamic Programming

  • LeetCode DP study plan
