Prompt Engineering
Techniques for Getting the Best from AI Models
Last Updated: March 2026
📌 Key Takeaways
- Definition: Prompt engineering is designing inputs to large language models to maximise output quality — without any model training.
- Core principle: Be specific, give context, show examples, specify format.
- Zero-shot: Direct instruction — works for clear, well-defined tasks.
- Few-shot: Provide 2–5 input/output examples — dramatically improves consistency of format and style.
- Chain-of-thought: “Think step by step” — significantly improves reasoning accuracy.
- Iteration is essential: Effective prompts are refined over multiple attempts — first drafts rarely produce optimal results.
1. Why Prompt Engineering Matters
Large language models are incredibly capable — but they are also sensitive to exactly how you phrase your request. The same underlying question can produce a vague, generic answer or a precise, expert-level response depending solely on how the prompt is written.
Compare these two prompts asking for the same thing:
Weak prompt: “Explain Bernoulli’s theorem.”
Strong prompt: “You are a fluid mechanics professor teaching undergraduate engineering students. Explain Bernoulli’s theorem using a real-world analogy involving water flowing through pipes. Include the formula with all variables defined, state the key assumptions, and give one worked numerical example. Format your response with clear headings.”
The second prompt specifies the audience, requests an analogy, formula, assumptions, worked example, and formatting — producing a structured, pedagogically appropriate response rather than a generic paragraph.
2. Zero-Shot Prompting
Zero-shot prompting gives the model a task with no examples — just a direct instruction. It works well for clear, well-defined tasks that are well-represented in the model’s training data.
Examples:
| Task | Prompt |
|---|---|
| Classification | “Classify the following text as Positive, Negative, or Neutral: ‘The product arrived damaged and customer service was unhelpful.’” |
| Translation | “Translate the following English text to Hindi: ‘The thermodynamic cycle is reversible.’” |
| Summarisation | “Summarise the following paragraph in one sentence: [paragraph]” |
| Code generation | “Write a Python function that takes a list of numbers and returns the mean and standard deviation.” |
Zero-shot works best when the task is unambiguous and the desired output format is obvious. For complex or unusual tasks, few-shot prompting significantly improves results.
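For instance, the code-generation prompt in the table above might plausibly yield a function along these lines (one illustrative answer, not the model's guaranteed output):

```python
import statistics

def mean_and_std(numbers):
    """Return the mean and sample standard deviation of a list of numbers."""
    if len(numbers) < 2:
        raise ValueError("need at least two numbers for a standard deviation")
    return statistics.mean(numbers), statistics.stdev(numbers)
```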
3. Few-Shot Prompting
Few-shot prompting provides 2–5 examples of input-output pairs that demonstrate the exact format and style you want. The model learns the pattern from the examples and applies it to new inputs.
Example — Extracting Components from Engineering Specifications:
Extract the component, material, and dimension from each specification.
Specification: "M10 stainless steel bolt, 50mm length"
Component: Bolt
Material: Stainless Steel
Dimension: M10, 50mm
Specification: "25mm diameter mild steel shaft, 500mm length"
Component: Shaft
Material: Mild Steel
Dimension: 25mm diameter, 500mm length
Specification: "Aluminium 6061 I-beam, 200mm × 100mm cross-section"
Component:
The model will complete the pattern: “I-beam / Aluminium 6061 / 200mm × 100mm”. Few-shot is especially powerful for: consistent output formatting, domain-specific terminology, unusual tasks not in the model’s default behaviour, and extraction of structured data from unstructured text.
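A prompt block like the one above can also be assembled programmatically. Here is a minimal sketch; the function name and data layout are this article's own invention, not a library API:

```python
def build_few_shot_prompt(instruction, examples, new_input):
    """Assemble a few-shot prompt: instruction, worked examples, then the new case.

    `examples` is a list of (specification, fields) pairs, where `fields` maps
    a label (e.g. "Component") to its value, mirroring the format shown above.
    """
    parts = [instruction]
    for spec, fields in examples:
        parts.append(f'Specification: "{spec}"')
        parts.extend(f"{label}: {value}" for label, value in fields.items())
    parts.append(f'Specification: "{new_input}"')
    parts.append("Component:")  # left blank so the model completes the pattern
    return "\n".join(parts)
```

Keeping the examples in a data structure makes it easy to swap them per domain without rewriting the prompt by hand.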
4. Chain-of-Thought (CoT) Prompting
Chain-of-thought prompting instructs the model to reason through problems step by step before giving the final answer. This dramatically improves accuracy on mathematical, logical, and multi-step reasoning tasks.
Without CoT:
Prompt: “A car travels at 60 km/h for 2 hours then at 90 km/h for 3 hours. What is the average speed?”
Model might incorrectly answer: “75 km/h” (simple average of 60 and 90 — wrong)
With CoT:
Prompt: “A car travels at 60 km/h for 2 hours then at 90 km/h for 3 hours. What is the average speed? Think step by step.”
Model correctly reasons: “Step 1: Distance at 60 km/h = 60 × 2 = 120 km. Step 2: Distance at 90 km/h = 90 × 3 = 270 km. Step 3: Total distance = 390 km. Step 4: Total time = 5 hours. Step 5: Average speed = 390/5 = 78 km/h.”
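Since model arithmetic should always be checked independently, the reasoning above is easy to verify with a few lines of ordinary Python:

```python
# Independently checking the chain-of-thought arithmetic above.
legs = [(60, 2), (90, 3)]  # (speed in km/h, duration in hours)
total_distance = sum(speed * hours for speed, hours in legs)  # 120 + 270 = 390 km
total_time = sum(hours for _, hours in legs)                  # 2 + 3 = 5 hours
average_speed = total_distance / total_time                   # 390 / 5 = 78 km/h
```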
CoT Variations:
- “Think step by step” — simplest, works for most reasoning tasks
- “Show your work” — good for mathematical problems
- “Let’s reason through this carefully” — useful for complex analysis
- Zero-shot CoT: Just add “Think step by step” to any prompt
- Few-shot CoT: Provide examples that include step-by-step reasoning in the demonstrations
5. Role Prompting
Assigning a role or persona to the model focuses its responses on domain-specific knowledge, vocabulary, and perspective.
Examples for Engineering Students:
| Role | Prompt Prefix | Use Case |
|---|---|---|
| Professor | “You are a mechanical engineering professor with 20 years of experience teaching thermodynamics…” | Concept explanations, exam preparation |
| Code reviewer | “You are a senior software engineer reviewing code for production quality…” | Code quality feedback |
| GATE tutor | “You are a GATE CS expert tutor who specialises in making complex algorithms intuitive…” | Exam preparation, problem solving |
| Interviewer | “You are a technical interviewer at a top engineering company. Ask me 5 challenging questions about…” | Interview preparation |
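In code, role prompting is simply string composition. A minimal sketch, with the prefix table and helper being illustrative rather than any standard API:

```python
# Illustrative role prefixes drawn from the table above.
ROLE_PREFIXES = {
    "professor": ("You are a mechanical engineering professor with 20 years "
                  "of experience teaching thermodynamics."),
    "code reviewer": ("You are a senior software engineer reviewing code "
                      "for production quality."),
}

def with_role(role, task):
    """Prepend the chosen role prefix to a task description."""
    return f"{ROLE_PREFIXES[role]}\n\n{task}"
```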
6. Output Format Control
Always specify the desired output format explicitly. Models default to flowing prose — but structured formats are often more useful.
Common Format Requests:
- Bullet points: “List the 5 most important properties of… in bullet points.”
- Numbered steps: “Give me step-by-step instructions for… as a numbered list.”
- Table: “Compare X and Y in a markdown table with columns: Feature | X | Y | Winner.”
- JSON: “Return the answer as a JSON object with keys: ‘formula’, ‘variables’, ‘units’, ‘example’.”
- Word limit: “Explain this concept in exactly 50 words.”
- Specific sections: “Structure your response with these sections: Definition | Formula | Example | Common Mistakes.”
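When you request JSON, it is worth validating the reply before using it downstream. A sketch using only the standard library, with key names following the JSON example above:

```python
import json

REQUIRED_KEYS = {"formula", "variables", "units", "example"}

def parse_model_json(raw_response):
    """Parse a reply that was requested as JSON, failing loudly if it strays."""
    try:
        data = json.loads(raw_response)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model did not return valid JSON: {exc}") from exc
    if not isinstance(data, dict):
        raise ValueError("model returned JSON, but not an object")
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"response is missing keys: {sorted(missing)}")
    return data
```

Adding “Respond with JSON only, no surrounding prose” to the prompt makes such parsing far more reliable.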
7. Providing Context
The more relevant context you provide, the more targeted the response. Include:
- Your background: “I am a third-year B.Tech mechanical student preparing for GATE 2026.”
- What you already know: “I understand Newton’s laws but am struggling with moment of inertia.”
- The purpose: “I need this for a presentation to first-year students.”
- Constraints: “Keep the explanation under 200 words and avoid calculus.”
- Prior attempts: “I tried this approach [describe] but got this result [describe]. What am I doing wrong?”
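The checklist above folds naturally into a small helper. A sketch, where the parameter names are this article's own rather than any library convention:

```python
def add_context(task, background=None, known=None, purpose=None, constraints=None):
    """Prefix a task with whichever context items from the checklist are provided."""
    lines = []
    if background:
        lines.append(f"My background: {background}")
    if known:
        lines.append(f"What I already know: {known}")
    if purpose:
        lines.append(f"Purpose: {purpose}")
    if constraints:
        lines.append(f"Constraints: {constraints}")
    lines.append(task)
    return "\n".join(lines)
```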
8. Iterative Refinement
Effective prompting is iterative — rarely does the first prompt produce the ideal output. The process:
- Write initial prompt — be as specific as you can from the start
- Evaluate the output — what is good? What is missing? What is wrong?
- Identify the gap — is the format wrong? Too general? Missing specific information?
- Refine the prompt — address the specific gap: add constraints, examples, or clarifications
- Repeat until satisfied
You can also refine within a conversation: “That was good but too technical — simplify it for a first-year student” or “Add a worked numerical example using real values.”
9. Prompt Templates for Engineering Students
For understanding a new concept:
You are a [subject] professor. Explain [concept] to a [year]-year B.Tech student.
Include:
1. A simple definition in plain English
2. A real-world analogy
3. The key formula with all variables defined
4. One worked numerical example
5. Three common mistakes students make
Keep the total response under 500 words.
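Templates like the one above slot naturally into Python's `str.format`, so the bracketed placeholders can be filled in programmatically; a minimal sketch:

```python
CONCEPT_TEMPLATE = """\
You are a {subject} professor. Explain {concept} to a {year}-year B.Tech student.
Include:
1. A simple definition in plain English
2. A real-world analogy
3. The key formula with all variables defined
4. One worked numerical example
5. Three common mistakes students make
Keep the total response under 500 words."""

def fill_concept_template(subject, concept, year):
    """Fill the bracketed slots of the reusable template above."""
    return CONCEPT_TEMPLATE.format(subject=subject, concept=concept, year=year)
```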
For GATE preparation:
I am preparing for GATE 2026 [branch]. Generate 5 multiple-choice questions on [topic]
at GATE difficulty level. For each question:
- State the question clearly
- Provide 4 options (A, B, C, D)
- Give the correct answer
- Explain why it is correct and why the other options are wrong
For debugging code:
I have written this Python code for [task] but it produces [error/wrong output].
Expected output: [describe]
Actual output: [describe]
[paste code here]
Please:
1. Identify the bug(s)
2. Explain why the bug occurs
3. Provide the corrected code
4. Suggest any improvements to make the code more efficient or readable
10. Common Mistakes in Prompt Engineering
- Being too vague: “Tell me about machine learning” — gives a generic Wikipedia-level answer. Specify what aspect, at what depth, for what audience, in what format.
- Asking too many questions at once: A single prompt with 10 questions often gives shallow answers to all. Break complex requests into focused sub-prompts or ask for one thing at a time.
- Not specifying the audience: The same concept explained to a primary school student vs a PhD researcher requires completely different language and depth. Always specify who the explanation is for.
- Accepting the first output without refinement: First responses are rarely optimal. Iterate — ask for changes, more detail, different format, or a different approach.
- Over-trusting numerical outputs: LLMs can confidently produce incorrect calculations. For anything involving arithmetic or specific numbers, verify the computation independently.
11. Frequently Asked Questions
Is prompt engineering a real skill worth developing?
Yes — effective prompt engineering can dramatically increase the value you extract from LLMs for learning, research, coding, and professional work. It is also increasingly valued in industry — many companies hire “prompt engineers” to build and optimise LLM-powered applications. At a basic level, every engineer who uses AI tools benefits from knowing how to communicate effectively with them.
Does prompt engineering work the same way for all LLMs?
The core principles (specificity, examples, chain-of-thought) apply across all LLMs, but different models respond differently to specific phrasings. Claude, GPT-4, and Gemini have different strengths and respond to instruction nuances differently. For professional use, test your prompts on the specific model you are deploying. What works perfectly on GPT-4 may need adjustment for Claude or Llama.