302: Advanced Prompting Techniques
Chapter Overview
Beyond basic instructions, several advanced prompt engineering techniques can be used to elicit more complex reasoning, improve accuracy, and unlock the full potential of Foundation Models. These methods are primarily forms of In-Context Learning (ICL).
The Prompting Spectrum: From Zero-Shot to Few-Shot
The "shot" count describes how many worked examples you provide to the model within the prompt itself.
```mermaid
graph LR
    A[Zero-Shot] --> B[One-Shot] --> C[Few-Shot] --> D[Many-Shot]
    subgraph "No Examples"
        A
    end
    subgraph "Single Example"
        B
    end
    subgraph "2-10 Examples"
        C
    end
    subgraph "10+ Examples"
        D
    end
    style A fill:#ffcdd2,stroke:#B71C1C,stroke-width:2px
    style B fill:#fff3e0,stroke:#f57c00,stroke-width:2px
    style C fill:#e8f5e9,stroke:#1B5E20,stroke-width:2px
    style D fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
```
🎯 Zero-Shot Prompting
Zero-Shot Prompting is the most basic form. You provide an instruction but no examples. You rely entirely on the model's pre-existing knowledge to perform the task.
Example:
Classify the sentiment of the following review as Positive or Negative.
Review: "The plot dragged in places, but the acting was superb."
Sentiment:
When to Use:
- Simple, well-defined tasks
- When the model already knows the task well
- Quick prototyping and testing
- When you don't have examples available
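In code, a zero-shot prompt is just the instruction concatenated with the input. A minimal sketch (the function name and prompt layout are illustrative, not a fixed API):

```python
# Zero-shot: one instruction plus the input, no examples at all.
def build_zero_shot(instruction: str, user_input: str) -> str:
    """Compose a zero-shot prompt: task description followed by the input."""
    return f"{instruction}\n\nInput: {user_input}\nOutput:"

prompt = build_zero_shot(
    "Classify the sentiment of the review as Positive or Negative.",
    "The plot dragged in places, but the acting was superb.",
)
```

The trailing `Output:` cue encourages the model to complete the answer directly rather than restate the task.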
🎲 Few-Shot Prompting
Few-Shot Prompting provides the model with a few examples (the "shots") of the task being performed correctly. This gives the model a clear pattern to follow.
Example:
Translate English to French:
English: sea otter
French: loutre de mer
English: platypus
French: ornithorynque
English: butterfly
French: papillon
English: cheese
French:
Benefits:
- Pattern Recognition: Model learns the desired input-output format
- Style Consistency: Examples show the preferred style and tone
- Error Reduction: Fewer misunderstandings about the task
- Format Control: Ensures consistent output structure
```mermaid
graph TD
    A[Few-Shot Examples] --> B[Pattern Recognition]
    A --> C[Style Learning]
    A --> D[Format Consistency]
    B --> E[Better Performance]
    C --> E
    D --> E
    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
    style E fill:#e8f5e9,stroke:#1B5E20,stroke-width:2px
```
When to use Few-Shot
Few-shot prompting is incredibly effective for guiding the model's style, format, and tone. If you need a very specific output structure, providing a few examples is often the best way to achieve it.
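Few-shot prompts like the translation example above can be assembled from a list of (input, output) pairs. A minimal sketch, assuming the English/French labels of that example (function name is illustrative):

```python
# Assemble a few-shot prompt from example pairs, leaving the last
# completion blank so the model continues the established pattern.
def build_few_shot(task: str, examples: list[tuple[str, str]], query: str) -> str:
    lines = [task]
    for src, tgt in examples:
        lines.append(f"English: {src}")
        lines.append(f"French: {tgt}")
    lines.append(f"English: {query}")
    lines.append("French:")  # left open for the model to fill in
    return "\n".join(lines)

prompt = build_few_shot(
    "Translate English to French:",
    [("sea otter", "loutre de mer"), ("platypus", "ornithorynque")],
    "cheese",
)
```

Keeping the label format identical across examples matters: the model imitates the exact pattern it sees.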
🧠 Chain-of-Thought (CoT) Prompting
Chain-of-Thought (CoT) Prompting is a technique designed to improve a model's performance on multi-step reasoning problems (like math word problems or logic puzzles).
The key insight is to show the model how to think, not just the final answer. In the few-shot examples, you include the step-by-step reasoning process.
```mermaid
graph TD
    subgraph "❌ Standard Few-Shot"
        A["Q: Roger has 5 tennis balls.<br/>He buys 2 more cans of 3 balls each.<br/>How many balls does he have now?"] --> B["A: 11."]
    end
    subgraph "✅ Chain-of-Thought (CoT)"
        C["Q: Roger has 5 tennis balls.<br/>He buys 2 more cans of 3 balls each.<br/>How many balls does he have now?"] --> D["A: Roger started with 5 balls.<br/>2 cans of 3 balls each is 2 × 3 = 6 balls.<br/>So he has 5 + 6 = 11 balls total.<br/>The answer is 11."]
    end
    style B fill:#ffcdd2,stroke:#B71C1C,stroke-width:2px
    style D fill:#c8e6c9,stroke:#1B5E20,stroke-width:2px
```
CoT Example in Practice:
Q: A store has 15 apples. They sell 7 apples in the morning and 4 apples in the afternoon. How many apples are left?
A: Let me work through this step by step.
- The store starts with 15 apples
- They sell 7 apples in the morning: 15 - 7 = 8 apples remaining
- They sell 4 more apples in the afternoon: 8 - 4 = 4 apples remaining
- Therefore, there are 4 apples left.
Q: A parking lot has 3 levels. Each level has 8 rows, and each row has 12 parking spaces. If the parking lot is 75% full, how many cars are currently parked?
A: Let me calculate this step by step.
- Total parking spaces: 3 levels × 8 rows × 12 spaces = 288 spaces
- The lot is 75% full, so: 288 × 0.75 = 216 cars
- Therefore, there are 216 cars currently parked.
Q: [Your new problem here]
A:
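The worked examples above can be assembled programmatically: each question is paired with its full reasoning, and the new question's answer is left open. A minimal sketch (names are illustrative):

```python
# A CoT prompt pairs each question with its step-by-step reasoning, then
# leaves the final "A:" blank so the model continues in the same style.
COT_EXAMPLES = [
    (
        "A store has 15 apples. They sell 7 apples in the morning and "
        "4 apples in the afternoon. How many apples are left?",
        "The store starts with 15 apples. 15 - 7 = 8 after the morning; "
        "8 - 4 = 4 after the afternoon. The answer is 4.",
    ),
]

def build_cot_prompt(examples: list[tuple[str, str]], new_question: str) -> str:
    parts = [f"Q: {q}\nA: {reasoning}" for q, reasoning in examples]
    parts.append(f"Q: {new_question}\nA:")
    return "\n\n".join(parts)
```

Note that the example answers contain the reasoning, not just the result; that is what distinguishes CoT from standard few-shot.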
🚀 Zero-Shot Chain-of-Thought
You can also trigger step-by-step reasoning without providing examples by simply adding "Let's think step by step" to your prompt.
Example:
Q: If a train travels at 60 mph and needs to cover 180 miles, how long will the journey take?
Let's think step by step:
This often produces reasoning like:
To find the time, I need to use the formula: Time = Distance ÷ Speed
- Distance = 180 miles
- Speed = 60 mph
- Time = 180 ÷ 60 = 3 hours
Therefore, the journey will take 3 hours.
🎭 Role-Playing and Persona Prompting
Advanced role-playing goes beyond simple job titles to create rich, detailed personas that guide the model's behavior.
Example: The Socratic Tutor
You are Socrates, the ancient Greek philosopher known for the Socratic method. Instead of giving direct answers, you guide students to discover answers themselves through thoughtful questioning.
Student: "What is democracy?"
Socrates: "Ah, an excellent question! But before I attempt to define it, let me ask you - have you ever participated in making a decision as part of a group? Perhaps in your family or with friends?"
Student: "Yes, we voted on where to go for dinner last week."
Socrates: "Interesting! And how did that process work? Did everyone get an equal say in the decision?"
Persona Components:
- Background: Relevant experience and expertise
- Communication Style: How they speak and interact
- Values: What they prioritize and believe in
- Methods: Their approach to problem-solving
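With chat-style APIs, a persona is usually installed as the system message. A minimal sketch using the widely adopted role/content message convention (field names can differ between providers; the persona text here condenses the Socratic tutor above):

```python
# Put the persona in the system message; the user turn follows separately.
SOCRATIC_TUTOR = (
    "You are Socrates, the ancient Greek philosopher. Never give direct "
    "answers; guide the student to discover answers through questioning."
)

def make_messages(persona: str, user_turn: str) -> list[dict]:
    """Build a chat request body: system persona first, then the user turn."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_turn},
    ]

messages = make_messages(SOCRATIC_TUTOR, "What is democracy?")
```

Keeping the persona in the system slot (rather than repeating it in every user turn) makes it persist across a multi-turn conversation.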
🎪 Advanced Prompting Patterns
1. Template-Based Prompting
Analyze the following [DOCUMENT_TYPE] and provide insights about [SPECIFIC_ASPECT]:
Document: [DOCUMENT_CONTENT]
Please structure your response as:
1. Summary (2-3 sentences)
2. Key Insights (3-5 bullet points)
3. Recommendations (2-3 actionable items)
4. Confidence Level (High/Medium/Low with reasoning)
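The bracketed slots above can be filled programmatically, which keeps prompts consistent across documents. A sketch using Python's standard `string.Template` (slot names mirror the template above):

```python
from string import Template

# Template with named slots for the document type, aspect, and content.
ANALYSIS_TEMPLATE = Template(
    "Analyze the following $doc_type and provide insights about $aspect:\n\n"
    "Document: $content\n\n"
    "Please structure your response as:\n"
    "1. Summary (2-3 sentences)\n"
    "2. Key Insights (3-5 bullet points)\n"
    "3. Recommendations (2-3 actionable items)\n"
    "4. Confidence Level (High/Medium/Low with reasoning)"
)

prompt = ANALYSIS_TEMPLATE.substitute(
    doc_type="quarterly report",
    aspect="revenue trends",
    content="Q3 revenue rose 12% year over year...",
)
```

`substitute` raises `KeyError` if a slot is left unfilled, which catches template mistakes early; use `safe_substitute` if partial filling is intended.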
2. Multi-Turn Reasoning
I'm going to give you a complex problem. Please:
1. First, break it down into smaller sub-problems
2. Then, solve each sub-problem step by step
3. Finally, combine the solutions to answer the original question
Problem: [COMPLEX_PROBLEM]
3. Perspective-Taking
Please analyze this situation from three different perspectives:
1. **Stakeholder A:** [Description]
2. **Stakeholder B:** [Description]
3. **Stakeholder C:** [Description]
For each perspective, consider:
- Their primary concerns
- Their desired outcomes
- Potential objections or challenges
- Suggested solutions
Situation: [SITUATION_DESCRIPTION]
🔄 Prompt Engineering Workflow
```mermaid
flowchart TD
    A[Define Task] --> B[Choose Prompting Strategy]
    B --> C[Create Initial Prompt]
    C --> D[Test with Examples]
    D --> E{Performance Good?}
    E -->|No| F[Analyze Failures]
    E -->|Yes| G[Test Edge Cases]
    F --> H[Refine Strategy]
    H --> I[Update Prompt]
    I --> D
    G --> J{Robust?}
    J -->|No| F
    J -->|Yes| K[Deploy & Monitor]
    subgraph "Strategy Options"
        L[Zero-Shot]
        M[Few-Shot]
        N[Chain-of-Thought]
        O[Role-Playing]
        P[Template-Based]
    end
    B --> L
    B --> M
    B --> N
    B --> O
    B --> P
    style A fill:#e3f2fd,stroke:#1976d2
    style K fill:#e8f5e9,stroke:#1B5E20
    style E fill:#fff3e0,stroke:#f57c00
    style J fill:#fff3e0,stroke:#f57c00
```
🎯 Practice Exercises
Exercise 1: Chain-of-Thought Math
Create a CoT prompt for solving compound interest problems. Include 2-3 examples with step-by-step reasoning.
Exercise 2: Few-Shot Classification
Design a few-shot prompt to classify customer emails into categories: Complaint, Question, Compliment, or Refund Request.
Exercise 3: Role-Playing Scenario
Create a detailed persona prompt for a "Senior Software Architect" reviewing code and providing feedback to junior developers.
🔬 Experimental Techniques
Self-Consistency
Run the same CoT prompt multiple times and take the majority answer to improve reliability.
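The voting step is simple to implement. A minimal sketch, assuming the answers have already been sampled from several temperature-sampled completions (the last-token extraction heuristic is an illustrative simplification):

```python
from collections import Counter

# Self-consistency: extract the final answer from each sampled completion
# and return the majority vote.
def majority_answer(completions: list[str]) -> str:
    """Take the last whitespace-separated token of each completion as its
    answer (stripping a trailing period) and return the most common one."""
    answers = [c.strip().split()[-1].rstrip(".") for c in completions]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

samples = [
    "Roger started with 5. 2 cans of 3 is 6. The answer is 11",
    "5 + 2 * 3 = 11. The answer is 11",
    "The answer is 12",
]
```

Two of the three sampled reasoning paths agree on 11, so the majority vote discards the single faulty path.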
Tree of Thoughts
Generate multiple reasoning paths and evaluate them before selecting the best one.
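At its core this is a beam search over partial reasoning paths. A heavily simplified sketch in which `propose` and `score` are deterministic stand-ins for model calls (in a real system both would query the LLM):

```python
# Tree of Thoughts, reduced to its skeleton: at each depth, expand every
# surviving path with candidate "thoughts", score all candidates, and keep
# only the best `beam` paths.
def propose(path: list[str]) -> list[str]:
    """Stand-in thought generator: three options per step."""
    step = len(path)
    return [f"step{step}-option{i}" for i in range(3)]

def score(path: list[str]) -> int:
    """Stand-in evaluator: toy heuristic preferring higher option indices."""
    return sum(int(thought[-1]) for thought in path)

def tree_of_thoughts(depth: int = 2, beam: int = 2) -> list[str]:
    frontier: list[list[str]] = [[]]
    for _ in range(depth):
        candidates = [path + [t] for path in frontier for t in propose(path)]
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam]  # prune to the best `beam` paths
    return frontier[0]  # best complete reasoning path
```

The pruning is what separates this from exhaustively enumerating every reasoning path, which grows exponentially with depth.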
Program-Aided Language Models
Combine natural language reasoning with code execution for mathematical problems.
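The idea: the model emits code instead of doing arithmetic in text, and the host program executes that code to obtain the answer. A minimal sketch in which the model output is hard-coded for illustration (the execution step is what PAL adds):

```python
# Simulated model completion for the tennis-ball problem above: the model
# writes Python that computes the answer instead of stating it.
MODEL_OUTPUT = """\
# Roger has 5 balls and buys 2 cans of 3 balls each.
balls = 5
balls += 2 * 3
answer = balls
"""

def run_pal(code: str) -> object:
    """Execute model-generated code in a fresh namespace and read `answer`.
    Caution: exec on untrusted model output is unsafe outside a sandbox."""
    namespace: dict = {}
    exec(code, namespace)
    return namespace["answer"]
```

Offloading arithmetic to the interpreter removes a common failure mode: the model can set up the calculation correctly yet still miscompute the result in text.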
Advanced Prompting Best Practices
- Start simple and add complexity only when needed
- Use diverse examples in few-shot prompts to avoid overfitting
- Test reasoning paths by asking the model to explain its thinking
- Combine techniques - CoT can work with role-playing and few-shot
- Monitor for consistency across similar inputs
Common Pitfalls
- Over-engineering prompts that become too complex to maintain
- Biased examples that lead to skewed model behavior
- Inconsistent formatting across examples
- Too many examples that exceed context limits