Asymptotic Analysis & Recurrences
Big-O, Theta, Omega, Master theorem, and recurrence-solving techniques for GATE CS.
Last updated: April 2026 | GATE CS 2024–2026 syllabus
Key Takeaways
- Big-O = upper bound; Big-Ω = lower bound; Θ = tight bound (both simultaneously).
- o (little-o) and ω (little-omega) are strict bounds — the function does NOT touch the boundary.
- Master theorem covers T(n) = aT(n/b) + f(n); compare f(n) with n^(log_b a).
- Substitution method: guess the form, prove by induction.
- Recursion tree: draw the levels, compute each level's cost, and sum the costs over all levels.
- Akra-Bazzi generalises Master theorem to unequal subproblem sizes.
- Amortised analysis: the aggregate, accounting, and potential methods are three routes to the same amortised bounds.
1. Asymptotic Notations
| Notation | Meaning | Formal Definition |
|---|---|---|
| O(g) | Upper bound | ∃ c,n0: f(n) ≤ c·g(n) for all n≥n0 |
| Ω(g) | Lower bound | ∃ c,n0: f(n) ≥ c·g(n) for all n≥n0 |
| Θ(g) | Tight bound | f(n) = O(g) AND f(n) = Ω(g) |
| o(g) | Strict upper | lim f(n)/g(n) = 0 as n→∞ |
| ω(g) | Strict lower | lim f(n)/g(n) = ∞ as n→∞ |
O(1) < O(log n) < O(√n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2^n) < O(n!)
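This ordering can be sanity-checked numerically (a minimal sketch; the sample point n = 20 is an arbitrary choice, and a single-point comparison is only suggestive of the asymptotic ordering):

```python
import math

# Evaluate each growth function at a moderate n; the results should
# come out strictly increasing, matching the hierarchy above.
n = 20
values = [
    1,                      # O(1)
    math.log2(n),           # O(log n)
    math.sqrt(n),           # O(sqrt n)
    n,                      # O(n)
    n * math.log2(n),       # O(n log n)
    n ** 2,                 # O(n^2)
    n ** 3,                 # O(n^3)
    2 ** n,                 # O(2^n)
    math.factorial(n),      # O(n!)
]
assert values == sorted(values)  # increasing at n = 20
```
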
2. Properties & Relations
Reflexivity: f=O(f), f=Θ(f), f=Ω(f)
Symmetry: f=Θ(g) ⇔ g=Θ(f)
Transpose symmetry: f=O(g) ⇔ g=Ω(f)
Sum rule: O(f)+O(g) = O(max(f,g))
Product rule: O(f)×O(g) = O(f×g)
Log rules: log(n!) = Θ(n log n); log(n^k) = Θ(log n) for constant k
Limit method: if lim f/g = c (0<c<∞) then f=Θ(g)
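The limit method can be checked numerically (a sketch; `f`, `g`, and `h` are hypothetical example functions, and finite sample points only approximate the limit):

```python
import math

f = lambda n: 3 * n * n + n   # hypothetical example function
g = lambda n: n * n
h = lambda n: n * math.log2(n)

# lim f/g = 3, a finite nonzero constant  =>  f = Θ(g)
assert abs(f(10**6) / g(10**6) - 3) < 1e-3
# lim h/g = 0  =>  h = o(g), strictly slower growth
assert h(10**6) / g(10**6) < 1e-3
```
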
3. Common Complexities
| Algorithm | Time | Space |
|---|---|---|
| Binary search | O(log n) | O(1) |
| Merge sort | O(n log n) | O(n) |
| Heap sort | O(n log n) | O(1) |
| Quick sort (avg) | O(n log n) | O(log n) |
| Matrix multiply (naive) | O(n³) | O(n²) |
| Strassen | O(n^2.81) | O(n²) |
| Floyd-Warshall | O(V³) | O(V²) |
| BFS / DFS | O(V+E) | O(V) |
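The O(log n) entry for binary search can be observed directly by counting loop iterations (a sketch; `binary_search` is an illustrative implementation, not a library routine):

```python
import math

def binary_search(a, target):
    """Return (index or -1, number of loop iterations used)."""
    lo, hi, steps = 0, len(a) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid, steps
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

a = list(range(1 << 20))           # 2^20 sorted elements
idx, steps = binary_search(a, 0)   # leftmost element: near worst case
assert idx == 0
assert steps <= int(math.log2(len(a))) + 2   # at most ~log2(n) iterations
```
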
4. Master Theorem
For T(n) = aT(n/b) + f(n), a≥1, b>1, let p = log_b a:
Case 1: f(n) = O(n^(p−ε)) for some ε>0 ⇒ T(n) = Θ(n^p)
Case 2: f(n) = Θ(n^p log^k n), k≥0 ⇒ T(n) = Θ(n^p log^(k+1) n)
Case 3: f(n) = Ω(n^(p+ε)) for some ε>0 AND a·f(n/b) ≤ c·f(n) for some c<1 ⇒ T(n) = Θ(f(n))
Common examples:
T(n) = 2T(n/2)+n → p=1, f = Θ(n^1) → Case 2: Θ(n log n)
T(n) = 2T(n/2)+1 → p=1, f = O(n^(1−ε)) → Case 1: Θ(n)
T(n) = 2T(n/2)+n² → p=1, f = Ω(n^(1+ε)) → Case 3: Θ(n²)
T(n) = 8T(n/2)+n² → p=3, f = O(n^(3−ε)) → Case 1: Θ(n³)
T(n) = T(n/2)+1 → p=0, f = Θ(1) → Case 2: Θ(log n)
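These cases can be mechanised for the common situation f(n) = Θ(n^d) (a sketch; `master` is a hypothetical helper that only handles polynomial driving terms, i.e. k = 0 in Case 2, where the Case 3 regularity condition holds automatically):

```python
import math

def master(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by the Master theorem.

    Assumes f(n) = n^d (polynomial); for such f with d > p the
    regularity condition a*f(n/b) <= c*f(n) always holds."""
    p = math.log(a, b)                      # critical exponent p = log_b a
    if d < p - 1e-9:
        return f"Theta(n^{p:g})"            # Case 1
    if abs(d - p) <= 1e-9:
        return f"Theta(n^{d:g} log n)"      # Case 2 with k = 0
    return f"Theta(n^{d:g})"                # Case 3

# The worked examples above:
assert master(2, 2, 1) == "Theta(n^1 log n)"   # 2T(n/2)+n
assert master(2, 2, 2) == "Theta(n^2)"          # 2T(n/2)+n^2
assert master(8, 2, 2) == "Theta(n^3)"          # 8T(n/2)+n^2
assert master(1, 2, 0) == "Theta(n^0 log n)"    # T(n/2)+1, i.e. Θ(log n)
```
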
5. Substitution Method
Guess a bound, then prove it by strong induction. The key is making a tight enough guess.
Assume T(k) ≤ ck log k for all k < n.
T(n) = 2T(n/2)+n ≤ 2(c·n/2·log(n/2))+n = cn(log n−1)+n = cn log n−cn+n
≤ cn log n for c ≥ 1. Hence T(n) = O(n log n). ✓
Pitfall: Guessing T(n) = O(n) for the same recurrence fails: the induction step gives T(n) ≤ 2(cn/2) + n = (c+1)n, which is never ≤ cn for a fixed c. When a correct guess leaves a stray lower-order term in the induction, strengthen it by subtracting one, e.g. try T(n) ≤ cn log n − dn so the extra term is absorbed.
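The proved bound can also be checked numerically at powers of two (a sketch; the base case T(1) = 1 and exact halving are simplifying assumptions):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(1) = 1, T(n) = 2*T(n/2) + n, evaluated at powers of two
    return 1 if n == 1 else 2 * T(n // 2) + n

# The inductive bound T(n) <= c * n * log2(n) holds here with c = 2
for k in range(1, 21):
    n = 1 << k
    assert T(n) <= 2 * n * math.log2(n)
```
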
6. Recursion Tree Method
T(n) = 2T(n/2) + n
Level 0: n; Level 1: n/2 + n/2 = n; …; Level k: n (every level costs n)
Depth = log₂ n ⇒ Total = n per level × log n levels = Θ(n log n)
T(n) = T(n/3) + T(2n/3) + n
Longest path: always take the 2n/3 branch, so depth = log_{3/2} n
Each level ≤ n, so T(n) = O(n log n). Also Ω(n log n). Hence Θ(n log n).
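The Θ(n log n) estimate for the unbalanced tree can be checked by evaluating the recurrence directly (a sketch; the base case T(n) = 1 for n < 3 and integer division are simplifying assumptions):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(n/3) + T(2n/3) + n, with a constant base case
    if n < 3:
        return 1
    return T(n // 3) + T(2 * n // 3) + n

# Stays within small constant factors of n log2 n, as the tree argument predicts
for n in (10**3, 10**4, 10**5):
    assert n * math.log2(n) / 2 < T(n) < 4 * n * math.log2(n)
```
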
7. Special Recurrences
| Recurrence | Solution | Example |
|---|---|---|
| T(n) = T(n−1) + 1 | Θ(n) | Linear scan |
| T(n) = T(n−1) + n | Θ(n²) | Insertion sort |
| T(n) = 2T(n−1) + 1 | Θ(2n) | Tower of Hanoi |
| T(n) = T(n−1) + T(n−2) | Θ(φ^n) | Naive Fibonacci |
| T(n) = T(√n) + 1 | Θ(log log n) | Repeated square-rooting |
| T(n) = aT(n−1) + f(n) | Solve by unrolling | — |
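Two rows of this table can be verified by brute force (a sketch; `hanoi_moves` and `fib_calls` are illustrative helpers, not standard functions):

```python
def hanoi_moves(n):
    # T(n) = 2*T(n-1) + 1 with T(0) = 0, which solves to 2^n - 1
    return 0 if n == 0 else 2 * hanoi_moves(n - 1) + 1

def fib_calls(n):
    # Call count of naive recursive Fibonacci: C(n) = C(n-1) + C(n-2) + 1
    return 1 if n < 2 else fib_calls(n - 1) + fib_calls(n - 2) + 1

assert hanoi_moves(10) == 2**10 - 1     # Θ(2^n) moves
assert fib_calls(20) > 1.5**20          # exponential growth, base φ ≈ 1.618
```
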
8. Amortised Analysis
Dynamic array: n inserts cost ≤ 3n total ⇒ O(1) amortised each.
Accounting method: Assign credits to operations; ensure credit ≥ 0 always.
Each insert charged 3: 1 for insert, 2 stored for future copy.
Potential method: Φ = size − capacity/2. Amortised cost = actual + ΔΦ.
Key results:
- Stack with multi-pop: O(1) amortised
- Binary counter increment: O(1) amortised
- Splay tree: O(log n) amortised
- Union-Find with path compression: O(α(n)) amortised
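The dynamic-array bound from the aggregate method can be reproduced by simulation (a sketch; starting capacity 1 with doubling is a common but not universal growth policy):

```python
def total_copies(n):
    """Element copies performed by a doubling dynamic array over n appends."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:      # full: allocate double, copy every element
            copies += size
            capacity *= 2
        size += 1
    return copies

n = 10**6
# n writes + fewer than 2n copies  =>  total work < 3n  =>  O(1) amortised
assert total_copies(n) < 2 * n
```
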
9. GATE Examples
Example: Solve T(n) = 3T(n/2) + n. Here a=3, b=2, p = log₂ 3 ≈ 1.585. f(n) = n = O(n^(1.585−ε)), so Case 1 ⇒ T(n) = Θ(n^(log₂ 3)).
By Stirling: n! ≈ (n/e)^n √(2πn). Clearly (n/e)^n ≤ n^n, so n! = O(n^n). Answer: (A).
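The related fact log(n!) = Θ(n log n) can be checked numerically via the log-gamma function, since ln Γ(n+1) = ln n!:

```python
import math

n = 10**6
log_fact = math.lgamma(n + 1)          # ln(n!), computed without overflow
ratio = log_fact / (n * math.log(n))   # approaches 1 because ln n! ~ n ln n - n
assert 0.9 < ratio < 1.0               # consistent with log(n!) = Θ(n log n)
```
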
10. Common Mistakes
- Using O when Θ is meant: “Merge sort is O(n log n)” is technically true but imprecise — it is Θ(n log n). GATE may ask which is tightest.
- Master theorem Case 3 without regularity check: a·f(n/b) ≤ c·f(n) must hold for some constant c < 1. Skipping this check can give a wrong answer.
- log_b a calculation error: For T(n)=8T(n/2)+n², p = log₂ 8 = 3, not 8/2 = 4. Always apply the log correctly.
- Confusing amortised with average: Amortised is worst-case over a sequence of ops; average is expected over random inputs. They are different concepts.
- Ignoring floor/ceiling in substitution: T(⌊n/2⌋) and T(n/2) differ at base cases; when proving induction, handle small n separately.
11. FAQ
- What is the difference between Big-O and Theta notation?
- Big-O gives an upper bound: f(n)=O(g(n)) means f grows no faster than g. Theta gives a tight bound: f(n)=Θ(g(n)) means f grows at exactly the same rate as g (both an upper and lower bound). In GATE, Θ is the strongest and most informative statement.
- When can the Master theorem not be applied?
- When f(n) is not polynomially larger or smaller than n^(log_b a) (e.g., f(n) = n/log n sits between cases), when subproblems have unequal sizes (use Akra-Bazzi instead), or when the recurrence has additional terms or a subtract-type structure.
- How do you solve T(n) = T(n−1) + n?
- Unroll: T(n) = T(n-1)+n = T(n-2)+(n-1)+n = … = T(1)+2+3+…+n = Θ(n²). This is the recurrence for insertion sort.
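The unrolled sum can be confirmed directly (a sketch assuming the base case T(1) = 1):

```python
def T(n):
    # Unrolled: T(n) = T(1) + 2 + 3 + ... + n
    total = 1          # T(1) = 1
    for k in range(2, n + 1):
        total += k
    return total

assert T(100) == 100 * 101 // 2   # n(n+1)/2, i.e. Θ(n^2)
```
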
- What is the amortised time complexity of dynamic array insertion?
- O(1) amortised. Although a doubling resize occasionally costs O(n), over n total insertions the copying work forms a geometric series (n/2 + n/4 + …) that sums to O(n), so by aggregate analysis each insertion averages O(1).