Dynamic Programming
Memoisation, tabulation, LCS, LIS, Knapsack, Matrix Chain, shortest paths — complete GATE CS coverage.
Last updated: April 2026 | GATE CS 2024–2026 syllabus
Key Takeaways
- DP applies when: optimal substructure + overlapping subproblems.
- Top-down (memoisation): recursive with cache. Bottom-up (tabulation): iterative table fill.
- LCS of length-m and length-n strings: O(mn) time, O(mn) or O(min(m,n)) space.
- LIS: O(n²) DP, or O(n log n) with patience sorting.
- 0-1 Knapsack: O(nW) pseudo-polynomial — NOT polynomial (W can be exponential in input bits).
- Matrix Chain: O(n³) — finds optimal parenthesisation, not the actual multiplication.
- Floyd-Warshall: All-pairs shortest paths in O(V³), works with negative edges (not negative cycles).
1. DP Conditions
Optimal substructure test: assume a subproblem's solution is non-optimal → show this contradicts optimality of the overall solution (cut-and-paste argument).
Overlapping subproblems: Recursive solution revisits same subproblems repeatedly.
Contrast: D&C has independent subproblems (merge sort subproblems are distinct).
Greedy vs DP: Greedy has greedy choice property (local optimal = global optimal). DP is needed when greedy fails (e.g., 0-1 knapsack).
2. Top-down vs Bottom-up
| Property | Top-down (Memoisation) | Bottom-up (Tabulation) |
|---|---|---|
| Approach | Recursive + cache | Iterative, fill table |
| Subproblem order | Computed on demand | Fixed order (smaller first) |
| Space | O(n) + stack | O(n) or less |
| Overhead | Function call overhead | No recursion overhead |
| Ease of coding | Easier (mirrors recursion) | Requires order analysis |
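The two styles can be contrasted on Fibonacci; a minimal Python sketch (function names are illustrative):

```python
from functools import lru_cache

# Top-down (memoisation): recursion + cache
@lru_cache(maxsize=None)
def fib_td(n):
    return n if n < 2 else fib_td(n - 1) + fib_td(n - 2)

# Bottom-up (tabulation): iterative, here reduced to O(1) space
# since fib(n) depends only on the previous two values
def fib_bu(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both run in O(n); the top-down version additionally pays recursion-stack and function-call overhead.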
3. Longest Common Subsequence (LCS)
LCS[i][j] = LCS[i−1][j−1] + 1 if X[i] = Y[j]
LCS[i][j] = max(LCS[i−1][j], LCS[i][j−1]) if X[i] ≠ Y[j]
Base case: LCS[0][j] = LCS[i][0] = 0
Time: O(mn) Space: O(mn) or O(min(m,n)) with rolling array
LCS length = LCS[m][n]. Backtrack to find actual subsequence.
LCS vs Longest Common Substring:
LCS allows gaps (subsequence). Longest Common Substring requires a contiguous match:
dp[i][j] = dp[i−1][j−1]+1 if X[i]=Y[j] else 0. Answer = max over all i,j.
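Both recurrences above translate directly into table fills; a minimal Python sketch (function names are illustrative):

```python
def lcs_length(x, y):
    m, n = len(x), len(y)
    # dp[i][j] = LCS length of prefixes x[:i] and y[:j]; row/col 0 = base case
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lc_substring(x, y):
    m, n = len(x), len(y)
    # dp[i][j] = length of common suffix of x[:i] and y[:j]; resets to 0 on mismatch
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    best = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                best = max(best, dp[i][j])
    return best
```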
4. Longest Increasing Subsequence (LIS)
dp[i] = max(dp[j]+1) for all j<i with a[j]<a[i]; base dp[i]=1
LIS = max(dp[i])
O(n log n) — Patience Sorting:
Maintain array tails[] where tails[i] = smallest tail of all IS of length i+1.
For each element x: binary search for the leftmost tails[k] ≥ x; replace tails[k] with x, or append x if no such k exists.
Length of tails[] at end = LIS length.
Relation to LCS: for distinct elements, LIS of A = LCS(A, sorted(A)).
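The tails[] method maps directly onto Python's bisect module; a minimal sketch (function name is illustrative):

```python
from bisect import bisect_left

def lis_length(a):
    # tails[k] = smallest tail of any increasing subsequence of length k+1
    tails = []
    for x in a:
        i = bisect_left(tails, x)  # leftmost index with tails[i] >= x
        if i == len(tails):
            tails.append(x)        # x extends the longest subsequence found so far
        else:
            tails[i] = x           # x gives a smaller tail for length i+1
    return len(tails)
```

Using bisect_left (≥) gives a strictly increasing LIS; bisect_right would allow equal elements (non-decreasing).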
5. 0-1 Knapsack
Recurrence:
dp[i][w] = dp[i−1][w] if wt[i] > w
dp[i][w] = max(dp[i−1][w], v[i] + dp[i−1][w − wt[i]]) otherwise
Base: dp[0][w]=0 for all w.
Time: O(nW) Space: O(nW) or O(W) with 1D array
Note: O(nW) is pseudo-polynomial — W needs only log W bits to encode, so W can be exponential in the input size.
Unbounded knapsack: item can be used multiple times.
dp[w] = max(dp[w], dp[w − wt[i]] + v[i]) for each item i, for each w.
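The 1D-array versions of both variants differ only in the direction of the capacity loop; a minimal Python sketch (function names are illustrative):

```python
def knapsack_01(wt, val, W):
    # 1D table; iterate capacity DOWNWARDS so each item is used at most once
    dp = [0] * (W + 1)
    for i in range(len(wt)):
        for w in range(W, wt[i] - 1, -1):
            dp[w] = max(dp[w], val[i] + dp[w - wt[i]])
    return dp[W]

def knapsack_unbounded(wt, val, W):
    # iterate capacity UPWARDS so an item may be reused in the same row
    dp = [0] * (W + 1)
    for i in range(len(wt)):
        for w in range(wt[i], W + 1):
            dp[w] = max(dp[w], val[i] + dp[w - wt[i]])
    return dp[W]
```

The loop direction is the classic exam trap: downward keeps dp[w − wt[i]] from the previous item's row (0-1); upward reads the current row, allowing repetition (unbounded).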
6. Matrix Chain Multiplication
Recurrence:
m[i][j] = 0 if i=j
m[i][j] = min over i ≤ k &lt; j of (m[i][k] + m[k+1][j] + p[i−1]·p[k]·p[j]) if i&lt;j
Time: O(n³) Space: O(n²)
Fill table by chain length l=2,3,…,n; for each i, j=i+l−1.
Answer: m[1][n] = minimum scalar multiplications.
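The fill order by chain length can be sketched in Python (1-indexed table, matching the recurrence above; function name is illustrative):

```python
def matrix_chain(p):
    # p: dimension array; matrix A_i is p[i-1] x p[i], for i = 1..n
    n = len(p) - 1
    # m[i][j] = min scalar multiplications to compute A_i .. A_j
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length l = 2, 3, ..., n
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]
```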
7. DP Shortest Paths
Bellman-Ford: Single-source shortest paths. Relax all edges V−1 times. If any edge can still be relaxed on a V-th pass, the graph has a negative cycle reachable from the source.
Time: O(VE) Space: O(V)
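The V−1-pass relaxation (Bellman-Ford) can be sketched in Python (edge-list representation with vertices 0..n−1 is an assumption):

```python
def bellman_ford(n, edges, src):
    # edges: list of (u, v, w) tuples; vertices labelled 0..n-1
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):                    # relax all edges V-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # V-th pass: any further relaxation implies a negative cycle
    has_neg_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_neg_cycle
```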
Floyd-Warshall: All-pairs shortest paths.
dist[i][j][k] = shortest path from i to j using only vertices 1…k as intermediates.
dist[i][j][k] = min(dist[i][j][k−1], dist[i][k][k−1] + dist[k][j][k−1])
Time: O(V³) Space: O(V²) (can drop k dimension)
Detects negative cycle: if dist[i][i] < 0 after algorithm.
Transitive closure (Warshall): Replace min/+ with OR/AND.
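With the k dimension dropped, Floyd-Warshall is three nested loops over an adjacency matrix; a minimal Python sketch (in-place update, function name illustrative):

```python
def floyd_warshall(dist):
    # dist: n x n matrix; dist[i][j] = edge weight or inf, dist[i][i] = 0.
    # Updated in place: after iteration k, dist[i][j] uses intermediates 1..k.
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # negative cycle iff some diagonal entry went below zero
    return dist, any(dist[i][i] < 0 for i in range(n))
```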
8. Other DP Problems
| Problem | DP State | Time |
|---|---|---|
| Coin change (min coins) | dp[w] = min coins for amount w | O(n·W) |
| Coin change (count ways) | dp[w] += dp[w−coin] | O(n·W) |
| Edit distance | dp[i][j] = edits to convert s1[1..i] to s2[1..j] | O(mn) |
| Rod cutting | dp[n] = max revenue from rod of length n | O(n²) |
| Subset sum | dp[i][s] = can achieve sum s with first i items | O(n·S) |
| Egg drop (k eggs, n floors) | dp[k][n] = min trials | O(kn²) or O(kn log n) |
| Optimal BST | cost[i][j] = cost of optimal BST for keys i..j | O(n³) |
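The two coin-change variants from the table make a good pair to contrast, since they share a table but differ in loop nesting; a minimal Python sketch (function names are illustrative):

```python
def min_coins(coins, amount):
    # dp[w] = minimum coins needed for amount w (inf if unreachable)
    INF = float('inf')
    dp = [0] + [INF] * amount
    for w in range(1, amount + 1):
        for c in coins:
            if c <= w and dp[w - c] + 1 < dp[w]:
                dp[w] = dp[w - c] + 1
    return dp[amount] if dp[amount] != INF else -1

def count_ways(coins, amount):
    # dp[w] = number of combinations summing to w;
    # outer loop over coins counts combinations, not permutations
    dp = [1] + [0] * amount
    for c in coins:
        for w in range(c, amount + 1):
            dp[w] += dp[w - c]
    return dp[amount]
```

Swapping the loop order in count_ways (amounts outer, coins inner) would count ordered sequences instead of combinations — another common exam trap.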
9. GATE Examples
LCS of X = ABCBDAB and Y = BDCABA (the classic CLRS instance): LCS = “BCBA” or “BDAB” → length 4. Answer: 4.
Matrix chain with A: 10×100, B: 100×5, C: 5×50 (dimensions recoverable from the cost terms):
Option 1: (A×B)×C = 10×100×5 + 10×5×50 = 5000 + 2500 = 7500
Option 2: A×(B×C) = 100×5×50 + 10×100×50 = 25000 + 50000 = 75000
Answer: 7500, i.e. the left-to-right order (A×B)×C.
Floyd-Warshall on a graph containing a negative cycle: dist[i][i] becomes negative for some vertex i on the cycle. The algorithm still terminates normally — it completes all V iterations — but shortest-path values for vertices on, or reachable through, the cycle are meaningless (effectively −∞).
10. Common Mistakes
- 0-1 Knapsack called polynomial: O(nW) is pseudo-polynomial because W can be exponentially large relative to the number of input bits (log W bits to encode W).
- Fractional knapsack solved with DP: Fractional knapsack has the greedy choice property; DP is unnecessary and slower. O(n log n) greedy is optimal.
- LIS = LCS(A, sort(A)) confusion: This gives LIS only if all elements are distinct. For duplicates, use strictly increasing version of LCS.
- Floyd-Warshall with negative cycle not detected: After running, always check if dist[i][i]<0 to detect negative cycles; the algorithm does not self-report them.
- Confusing matrix count with split/parenthesisation counts: For a chain of n=4 matrices, the number of full parenthesisations is Catalan(n−1) = Catalan(3) = 5, while at any level there are only n−1 = 3 split points k to try.
11. FAQ
- What are the two conditions for applying dynamic programming?
- Optimal substructure: the optimal solution contains optimal solutions to subproblems (provable by cut-and-paste). Overlapping subproblems: the same subproblems are computed repeatedly, so caching them saves work. Without overlap, D&C (like merge sort) is more appropriate.
- What is the time complexity of LCS?
- O(mn) time and O(mn) space for sequences of length m and n. Space can be reduced to O(min(m,n)) by only keeping two rows of the DP table at a time, since dp[i][j] depends only on dp[i-1][j-1], dp[i-1][j], and dp[i][j-1].
- What is the difference between 0-1 Knapsack and Fractional Knapsack?
- 0-1 Knapsack: each item is taken whole or not at all. Requires DP in O(nW). Fractional Knapsack: items can be split. Greedy (sort by value/weight ratio, take greedily) gives optimal in O(n log n). Greedy fails for 0-1 because taking a high ratio item may block better combinations.
- What is the recurrence for Matrix Chain Multiplication?
- m[i][j] = min over k from i to j−1 of (m[i][k] + m[k+1][j] + p[i−1]×p[k]×p[j]). Base: m[i][i]=0. Compute in order of increasing chain length. Time O(n³), Space O(n²).