GATE 2026 CSE — AI & ML Preparation Guide
Syllabus, Topic Weightage, Study Plan & Exam Tips
Last Updated: March 2026
📌 Quick Summary
- Exam: GATE 2026 — Computer Science & Information Technology (CS)
- AI/ML weightage: Approximately 5–10 marks out of 100 (varies by year)
- Key topics: Search algorithms, propositional logic, ML algorithms, Bayesian inference, neural networks, CSP
- Difficulty: Moderate — questions test conceptual clarity and numerical ability
- Strategy: Master classical AI first (search, logic, Bayesian), then ML algorithms — these appear more consistently than deep learning
- Related exam: GATE DA (Data Science & AI) — heavily focused on ML, statistics, and probability
1. Official GATE CS AI Syllabus
The following topics are part of the official GATE CS syllabus under the “Artificial Intelligence” section:
Search and Optimisation
- Uninformed search: BFS, DFS, Iterative Deepening DFS, Bidirectional Search
- Informed search: Best-First Search, Greedy Search, A* Search, Heuristics and admissibility
- Local search: Hill Climbing, Simulated Annealing, Genetic Algorithms
- Constraint Satisfaction Problems (CSP): backtracking, constraint propagation, arc consistency
Knowledge Representation and Reasoning
- Propositional Logic: syntax, semantics, inference rules (modus ponens, resolution)
- First-Order Logic (FOL): quantifiers, unification, forward and backward chaining
- Knowledge graphs, ontologies (conceptual)
Probabilistic Reasoning
- Basic probability: conditional probability, Bayes theorem, chain rule
- Bayesian networks: structure, conditional probability tables, inference
- Hidden Markov Models (HMMs): basic structure and Viterbi algorithm
Machine Learning (added to the syllabus in recent years)
- Supervised learning: regression (linear, logistic), decision trees, SVM, Naive Bayes, KNN
- Unsupervised learning: K-means clustering, PCA (basic concept)
- Neural networks: perceptron, multi-layer networks, activation functions, backpropagation
- Evaluation: overfitting, cross-validation, precision, recall, F1
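The perceptron update rule listed above (w := w + α(y−ŷ)x) is worth seeing in action. Below is a minimal training-loop sketch in Python; the AND-gate dataset, learning rate, and epoch count are illustrative choices of mine, not from any GATE question.

```python
def perceptron_train(data, lr=1.0, epochs=10):
    """Perceptron learning: w := w + lr * (y - y_hat) * x, plus a bias term.
    `data` is a list of (features, label) pairs with labels in {0, 1}."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            # Threshold activation: fire if w·x + b >= 0
            y_hat = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = y - y_hat                      # 0 when prediction is correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND gate: linearly separable, so the perceptron converges
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(AND)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the loop reaches weights that classify all four points correctly.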
2. Topic-wise Weightage Analysis
Based on previous year GATE CS papers (2019–2025):
| Topic | Avg. Marks (out of 100) | Frequency | Priority |
|---|---|---|---|
| Search Algorithms (BFS, DFS, A*) | 2–3 | Very High | ⭐ P1 |
| Propositional & First-Order Logic | 1–2 | High | ⭐ P1 |
| Bayesian Reasoning | 1–2 | High | ⭐ P1 |
| ML Algorithms (supervised) | 1–2 | Medium-High | ⭐ P1 |
| Neural Networks / Backpropagation | 1 | Medium | P2 |
| CSP (Constraint Satisfaction) | 1 | Medium | P2 |
| Clustering (K-Means) | 0–1 | Low-Medium | P2 |
| HMMs | 0–1 | Low | P3 |
| Deep Learning (CNN, RNN) | 0–1 | Low (increasing) | P3 |
Key insight: Classical AI topics (search, logic, Bayes) have appeared consistently across all years. ML topics have become more frequent since 2022. Deep learning is a low-frequency but growing area — prioritise only after mastering classical topics.
3. Classical AI Topics — Detailed Breakdown
Search Algorithms
The most consistently tested area in GATE AI. Questions typically ask you to trace through an algorithm, compute path cost, determine optimality, or identify the correct heuristic property.
| Algorithm | Complete? | Optimal? | Time Complexity | Space Complexity |
|---|---|---|---|---|
| BFS | Yes | Yes (unit cost) | O(b^d) | O(b^d) |
| DFS | No | No | O(b^m) | O(bm) |
| IDDFS | Yes | Yes (unit cost) | O(b^d) | O(bd) |
| Uniform Cost | Yes | Yes | O(b^(C*/ε)) | O(b^(C*/ε)) |
| Greedy Best-First | No | No | O(b^m) | O(b^m) |
| A* | Yes | Yes (admissible h) | O(b^d) | O(b^d) |
b = branching factor, d = depth of optimal solution, m = max depth, C* = optimal cost, ε = min step cost
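Tracing expansion order by hand is exactly what the exam asks for, and the only difference between BFS and DFS is the frontier data structure (FIFO queue vs LIFO stack). A minimal sketch on a toy graph of my own (not from any GATE paper):

```python
from collections import deque

# Hypothetical adjacency-list graph for tracing practice
GRAPH = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
         "D": [], "E": ["F"], "F": []}

def bfs_order(graph, start):
    """Order in which BFS expands nodes (FIFO frontier)."""
    visited, order, frontier = {start}, [], deque([start])
    while frontier:
        node = frontier.popleft()
        order.append(node)
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(nbr)
    return order

def dfs_order(graph, start):
    """Order in which DFS expands nodes (LIFO frontier)."""
    visited, order, frontier = set(), [], [start]
    while frontier:
        node = frontier.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push children in reverse so the leftmost child is expanded first
        for nbr in reversed(graph[node]):
            if nbr not in visited:
                frontier.append(nbr)
    return order
```

On this graph, BFS expands A, B, C, D, E, F (level by level) while DFS expands A, B, D, E, F, C (depth first down the leftmost branch) — verifying these by hand is good tracing practice.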
Key fact for A*: A* (tree search) is optimal if the heuristic h is admissible (never overestimates the true cost); admissibility is sufficient, not necessary. A heuristic is consistent (monotone) if h(n) ≤ c(n, a, n′) + h(n′) for every successor n′ — consistency implies admissibility, and for A* graph search it is consistency that guarantees optimality.
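To make f(n) = g(n) + h(n) concrete, here is a minimal A* sketch in Python. The graph, step costs, and heuristic values are invented for illustration (and the heuristic is admissible by construction — check that h never exceeds the true remaining cost).

```python
import heapq

# Hypothetical weighted graph: node -> [(neighbour, step_cost), ...]
GRAPH = {"A": [("B", 2), ("C", 4)], "B": [("D", 5)], "C": [("D", 2)], "D": []}
# Admissible heuristic: true cost to D is 6 from A, 5 from B, 2 from C
H = {"A": 5, "B": 3, "C": 2, "D": 0}

def a_star(graph, h, start, goal):
    """A* search: expand the frontier node with lowest f(n) = g(n) + h(n).
    Returns (cost_to_goal, order_of_node_expansion)."""
    frontier = [(h[start], 0, start)]    # (f, g, node)
    best_g = {}                          # cheapest g at which a node was expanded
    order = []
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node in best_g and best_g[node] <= g:
            continue                     # stale entry: expanded cheaper already
        best_g[node] = g
        order.append(node)
        if node == goal:
            return g, order
        for nbr, cost in graph[node]:
            heapq.heappush(frontier, (g + cost + h[nbr], g + cost, nbr))
    return None, order
```

On this instance A* expands A, B, C, D and returns cost 6: B is expanded first (f = 2 + 3 = 5) but the cheaper route through C (f = 6) overtakes the B→D route (f = 7) — exactly the kind of trace exam questions ask for.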
Bayesian Networks
Questions involve: reading the network structure, computing joint probabilities, conditional independence, and marginalisation. Practise computing P(A | B, C) from a given network by applying the chain rule and summing over hidden variables.
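Inference by enumeration — chain rule for the joint, then summing out hidden variables — can be written in a few lines. The three-node network below (C → R, C → S) and its probability values are hypothetical, chosen only to illustrate the mechanics.

```python
from itertools import product

# Hypothetical Boolean network: C is the parent of R and S
P_C = {True: 0.5, False: 0.5}
P_R_given_C = {True: 0.8, False: 0.2}   # P(R=True | C)
P_S_given_C = {True: 0.1, False: 0.5}   # P(S=True | C)

def joint(c, r, s):
    """Chain rule for this structure: P(C, R, S) = P(C) P(R|C) P(S|C)."""
    pr = P_R_given_C[c] if r else 1 - P_R_given_C[c]
    ps = P_S_given_C[c] if s else 1 - P_S_given_C[c]
    return P_C[c] * pr * ps

def posterior_c_given_r(r=True):
    """P(C=True | R=r): sum the joint over the hidden variable S, normalise."""
    num = sum(joint(True, r, s) for s in (True, False))
    den = sum(joint(c, r, s) for c, s in product((True, False), repeat=2))
    return num / den
```

Here P(C=True | R=True) = (0.5 × 0.8) / (0.5 × 0.8 + 0.5 × 0.2) = 0.8 — the sum over S contributes a factor of 1 because S is independent of R given C, a useful sanity check.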
4. Machine Learning Topics — Detailed Breakdown
Key Formulas to Memorise
| Concept | Formula |
|---|---|
| Linear Regression | ŷ = β₀ + β₁x; OLS slope: β₁ = Σ(xᵢ−x̄)(yᵢ−ȳ)/Σ(xᵢ−x̄)² |
| Logistic Regression | P(y=1|x) = 1/(1+e⁻⁽ᵝ⁰⁺ᵝ¹ˣ⁾) |
| Naive Bayes | P(c|x) ∝ P(c) × Π P(xᵢ|c) |
| Perceptron update | w := w + α(y−ŷ)x |
| Gini Impurity | G = 1 − Σpᵢ² |
| Entropy | H = −Σ pᵢ log₂(pᵢ) |
| Precision | TP/(TP+FP) |
| Recall | TP/(TP+FN) |
| F1 Score | 2×(P×R)/(P+R) |
| Sigmoid | σ(z) = 1/(1+e⁻ᶻ) |
| K-Means objective | Minimise WCSS = Σₖ Σ_{xᵢ∈Cₖ} ||xᵢ−μₖ||² |
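Several of these formulas are one-liners, and implementing them once is a fast way to memorise them. A self-contained sketch (function names are my own):

```python
import math

def gini(p):
    """Gini impurity G = 1 - sum(p_i^2) over class proportions p."""
    return 1 - sum(pi ** 2 for pi in p)

def entropy(p):
    """Entropy H = -sum(p_i * log2 p_i); 0*log(0) is treated as 0."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1(p, r):
    return 2 * p * r / (p + r)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))
```

Spot checks worth remembering for the exam: a 50/50 split gives Gini = 0.5 and entropy = 1 bit, sigmoid(0) = 0.5, and F1 equals precision and recall when the two are equal.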
Conceptual Questions to Prepare
- When is a decision tree most prone to overfitting? (When grown to maximum depth with no pruning, so leaves memorise individual training examples)
- What is the bias-variance tradeoff? How does regularisation affect it?
- When would you prefer SVM over Logistic Regression?
- What is the “kernel trick” and why is it needed?
- How does K-Means converge? Is it guaranteed to find the global optimum?
- Why is cross-validation preferred over a single train-test split?
- What is the VC dimension and how does it relate to model complexity?
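The K-Means convergence question above has a standard answer: Lloyd's algorithm always converges (WCSS decreases monotonically and there are finitely many partitions), but only to a local optimum that depends on initialisation. A 1-D sketch with hypothetical data, to make that concrete:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm on 1-D data (illustrative sketch).
    Converges to a local optimum of WCSS; the result depends on the
    randomly chosen initial centroids, so the global optimum is NOT guaranteed."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for x in points:
            i = min(range(k), key=lambda j: (x - centroids[j]) ** 2)
            clusters[i].append(x)
        # Update step: each centroid moves to the mean of its cluster
        new = [sum(c) / len(c) if c else centroids[j]
               for j, c in enumerate(clusters)]
        if new == centroids:      # assignments stable -> converged
            break
        centroids = new
    return sorted(centroids)
```

On two well-separated blobs (e.g. values near 1 and values near 10) the algorithm reliably finds the two cluster means; on harder data, running with several random seeds and keeping the lowest-WCSS result is the standard remedy.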
5. Recommended Books & Resources
| Resource | Type | Best For |
|---|---|---|
| Artificial Intelligence: A Modern Approach — Russell & Norvig (4th ed.) | Textbook | Classical AI — the definitive reference. Chapters 3–4 (search), 7–9 (logic), 12–13 (probability), 18–19 (ML). |
| Pattern Recognition and Machine Learning — Bishop | Textbook | Advanced ML theory — Bayesian methods, SVMs, neural networks. Dense but authoritative. |
| GATE CS Previous Year Papers — Last 10 years | Practice | Essential — solve at least 5 years of PYQs under timed conditions before the exam. |
| Made Easy / ACE Academy GATE Notes | Study Notes | Concise summaries of all GATE topics — good for last-month revision. |
| EngineeringHulk AI/ML Hub | Free Online | All ML algorithms with worked examples, formulas, and exam-ready summaries. |
6. 12-Week Study Plan — AI & ML Section
This plan assumes 1–1.5 hours per day dedicated to AI/ML, alongside preparation for other GATE sections:
| Week | Topics | Goal |
|---|---|---|
| Week 1 | Search: BFS, DFS, IDDFS, Uniform Cost | Trace all algorithms by hand on small graphs. Know completeness & optimality. |
| Week 2 | Informed Search: A*, Greedy Best-First, Heuristics | Prove admissibility. Run A* on graph examples. Understand consistency. |
| Week 3 | Propositional Logic: syntax, truth tables, inference | Derive using modus ponens, resolution, and proof by contradiction. |
| Week 4 | First-Order Logic: quantifiers, unification, forward/backward chaining | Solve unification problems. Trace forward chaining on example KB. |
| Week 5 | Probability: Bayes theorem, conditional probability, Bayesian networks | Compute posterior probabilities from Bayesian networks. Practice joint & marginal computations. |
| Week 6 | ML Fundamentals: supervised vs unsupervised, bias-variance, evaluation metrics | Know all metric formulas. Understand bias-variance tradeoff deeply. |
| Week 7 | Regression: linear & logistic regression, gradient descent | Derive OLS solution. Compute sigmoid manually. Understand log loss. |
| Week 8 | Classification: Decision Trees (Gini, Entropy), Naive Bayes, KNN | Build decision trees by hand. Compute NB predictions numerically. |
| Week 9 | SVM, K-Means, Neural Networks basics | Understand margin, kernel trick, K-Means convergence, perceptron update rule. |
| Week 10 | PYQ Practice — 2023, 2024, 2025 GATE CS AI section | Solve under exam conditions. Identify weak areas. |
| Week 11 | Revision of weak areas + PYQ 2019–2022 | Focus on frequently tested topics. Re-do questions you got wrong. |
| Week 12 | Full mock tests + formula revision sheet | 3 full mock tests. Time management practice. Final formula consolidation. |
7. Previous Year Question Patterns
Understanding the type of questions asked helps you prepare more efficiently:
Type 1 — Algorithm Tracing (most common)
Example: “Apply A* search to the following graph with heuristic values h(A)=5, h(B)=3… What is the order of node expansion?”
Preparation: Practise tracing BFS, DFS, A*, and Bayesian inference by hand on small examples from textbooks.
Type 2 — Conceptual True/False or MCQ
Example: “Which of the following statements about A* search is CORRECT? (a) A* is complete even with infinite branching factor (b) A* with an admissible heuristic is always optimal (c) A* always expands fewer nodes than BFS…”
Preparation: Know the properties, conditions, and edge cases of every algorithm.
Type 3 — Numerical Computation
Example: “Given the following training data, compute the Gini impurity after splitting on feature X…” or “Compute P(Disease=Yes | Test=Positive) given prior P(Disease)=0.01…”
Preparation: Practise numerical examples for Bayes theorem, Gini impurity, entropy, logistic regression, and confusion matrix metrics.
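The disease-testing computation above follows a fixed template: P(D | T+) = P(T+ | D) P(D) / P(T+), with P(T+) expanded by the law of total probability. The sketch below uses the prior 0.01 from the example; the sensitivity and specificity values are my own illustrative assumptions, since the example elides them.

```python
def posterior_disease(prior, sensitivity, specificity):
    """Bayes theorem: P(D | T+) = P(T+|D) P(D) / P(T+), where
    P(T+) = P(T+|D) P(D) + P(T+|not D) P(not D)."""
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_pos

# Prior from the example; sensitivity/specificity are assumed values
p = posterior_disease(prior=0.01, sensitivity=0.90, specificity=0.95)
```

With these assumed numbers the posterior is about 0.154 — a classic GATE-style result: even a fairly accurate test yields a low posterior when the prior is tiny, because false positives from the healthy 99% dominate.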
Type 4 — Match the Following
Matching algorithms to properties, activation functions to their formulas, or model types to their characteristics.
8. GATE DA — For AI/ML Specialists
GATE DA (Data Science and Artificial Intelligence) was introduced as a dedicated paper, first conducted in GATE 2024, for students pursuing AI/ML careers. Key differences from GATE CS:
| Feature | GATE CS | GATE DA |
|---|---|---|
| AI/ML weightage | ~5–10% | ~40–50% |
| Programming | Algorithms & DS focus | Python, pandas, NumPy focus |
| Mathematics | Discrete maths, linear algebra basics | Heavy statistics, probability, linear algebra |
| Deep learning | Minimal | Included (CNNs, RNNs) |
| PSUs/Jobs | Wide PSU eligibility | More limited but growing |
| IIT M.Tech | IIT CS programs | IIT AI/DS programs |
If your career goal is specifically AI/ML engineering or data science, GATE DA may be the better choice. You can appear for both GATE CS and GATE DA in the same year.
9. Exam Day Tips
- Do not skip the AI/ML section: Many students skip AI to focus on DSA and OS. With 5–10 marks available, a well-prepared student gains a significant advantage over those who skip it.
- Attempt search algorithm questions first: These are the most predictable and mechanical — you can trace the algorithm step by step and get the answer with certainty.
- For numerical questions, show your working on rough paper: Bayesian inference and decision tree calculations have multiple steps — organise your computation to avoid errors.
- Negative marking applies to MCQs (−1/3 for 1-mark, −2/3 for 2-mark); MSQ and NAT questions carry no negative marking. Skip MCQs you are genuinely unsure about. An educated guess after eliminating one or two options is reasonable; pure guessing has negative expected value compared with leaving the question blank.
- Know your formulas cold: Precision, Recall, F1, Gini impurity, sigmoid, Bayes theorem — these appear regularly and must be recalled instantly without looking anything up.
10. Frequently Asked Questions
Is AI/ML worth preparing for GATE CS?
Absolutely. With 5–10 marks at stake and many students neglecting this section, strong AI/ML preparation translates directly to a better rank. The topics are also conceptually interesting, well-documented in standard textbooks, and useful beyond GATE for interviews and placements at tech companies.
Which is harder — GATE CS or GATE DA?
This depends on your background. GATE DA has deeper ML and statistics content, which can be harder for students with a traditional CS background but easier for students with a mathematics or data science focus. GATE CS has broader coverage across all CS areas. If your strengths lie in algorithms and systems programming, GATE CS is likely more suitable. If you are stronger in probability, statistics, and ML, consider GATE DA.
What GATE score is needed for IIT AI/ML M.Tech programs?
Programmes in AI, Data Science, and ML (offered at IIT Bombay, Delhi, Madras, and Hyderabad, and at IISc Bangalore) typically require a GATE score above 700–750 (out of 1000) or a rank within the top 500–1000, depending on the institute and programme. Check each institute’s official cutoffs from the previous year — these fluctuate based on the number of applicants and seats available.