Dynamic Programming (DP)
A big idea, hard, yet simple
- A powerful algorithmic design technique
- A large class of seemingly exponential problems has a polynomial-time solution, often only via DP
- Particularly for optimization problems (min / max) (e.g., shortest paths)
* DP ≈ “controlled brute force”
* DP ≈ "recursion + re-use"
Fibonacci numbers
F1 = F2 = 1; Fn = Fn−1 + Fn−2
Goal: compute Fn
Naive recursive algorithm
fib(n):
    if n <= 2: f = 1
    else: f = fib(n-1) + fib(n-2)
    return f
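To see why the naive recursion is exponential, here is a small runnable sketch (the call counter and names are my own, not part of the notes) that counts how many calls it makes:

```python
calls = 0  # hypothetical counter to measure the recursion's cost

def fib_naive(n):
    """Naive recursion from above: recomputes the same subproblems repeatedly."""
    global calls
    calls += 1
    if n <= 2:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)

fib_naive(20)
print(calls)  # 13529 calls just for n = 20: the call tree grows like phi^n
```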
Memoized DP Algorithm
memo = {}
fib(n):
    if n in memo: return memo[n]
    if n <= 2: f = 1
    else: f = fib(n-1) + fib(n-2)
    memo[n] = f
    return f
* DP ≈ recursion + memoization + guessing
- memoize (remember) & re-use solutions to subproblems that help solve problem
- in Fibonacci, subproblems are F1, F2, . . . , Fn
* ⇒ time = # of subproblems · time/subproblem
- Fibonacci: # of subproblems is n, and time/subproblem is Θ(1), so total time is Θ(n) (ignore recursion: memoized calls return in Θ(1))
Bottom-up DP algorithm
fib = {}
for k in range(1, n + 1):
    if k <= 2: f = 1
    else: f = fib[k-1] + fib[k-2]
    fib[k] = f
return fib[n]
- exactly the same computation as memoized DP (recursion “unrolled”)
- in general: topological sort of subproblem dependency DAG
- practically faster: no recursion
- analysis more obvious
- can save space: just remember last 2 fibs ⇒ Θ(1)
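The Θ(1)-space variant might look like this (a sketch; the function name is my own):

```python
def fib_const_space(n):
    # Bottom-up, but keep only the last two values instead of the whole table.
    a, b = 1, 1              # F1, F2
    for _ in range(n - 2):   # roll the window forward to Fn
        a, b = b, a + b
    return a if n == 1 else b
```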
Shortest Paths
- Recursive formulation: δ(s, v) = min{δ(s, u) + w(u, v) | (u, v) ∈ E}
- Memoized DP algorithm: takes infinite time if the graph has cycles
- works for directed acyclic graphs in O(V + E)
* Subproblem dependency should be acyclic
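For the acyclic case, a minimal memoized sketch (the graph representation, names, and incoming-edge dict are my assumptions, not the notes' code):

```python
import math

def dag_sp(incoming, s, v, memo=None):
    """delta(s, v) = min over edges (u, v) of delta(s, u) + w(u, v).
    `incoming[v]` is a list of (u, w) pairs; the graph must be acyclic."""
    if memo is None:
        memo = {}
    if v == s:
        return 0
    if v in memo:
        return memo[v]
    best = math.inf  # stays infinite if v is unreachable from s
    for u, w in incoming.get(v, []):
        best = min(best, dag_sp(incoming, s, u, memo) + w)
    memo[v] = best
    return best
```

Each subproblem δ(s, u) is solved once and then reused, so total work is O(V + E) as stated above.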
- more subproblems remove the cyclic dependence: δk(s, v) = weight of shortest s → v path using ≤ k edges
- recurrence: δk(s, v) = min{δk−1(s, u) + w(u, v) | (u, v) ∈ E}
- base cases: δ0(s, v) = ∞ for v ≠ s; δk(s, s) = 0 for any k (if no negative cycles)
- Goal: δ(s, v) = δ|V|−1(s, v) (if no negative cycles)
- time: # subproblems · time/subproblem; there are Θ(V²) subproblems δk(s, v), each costing Θ(indegree(v))
- ⇒ time = Θ(V · Σv indegree(v)) = Θ(VE): this is Bellman–Ford!
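The k-edge recurrence translates directly into code; a minimal sketch of Bellman–Ford as a DP (edge-list representation and names are my own):

```python
import math

def bellman_ford(vertices, edges, s):
    """delta_k(s, v) = shortest s -> v distance using <= k edges.
    `edges` is a list of (u, v, w) triples; assumes no negative cycles."""
    delta = {v: math.inf for v in vertices}  # delta_0: everything but s is infinite
    delta[s] = 0
    for _ in range(len(vertices) - 1):       # k = 1, ..., |V| - 1
        nxt = dict(delta)                    # build delta_k from delta_{k-1}
        for u, v, w in edges:
            if delta[u] + w < nxt[v]:
                nxt[v] = delta[u] + w
        delta = nxt
    return delta
```

Each of the |V| − 1 rounds scans every edge once, giving the Θ(VE) bound.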