When you want an AI to do something specific (classify text in a particular way, write in a distinct style, or follow a strict output format), telling it what to do in words often isn't enough. The most reliable way to communicate a precise pattern is to show the model what you want through worked examples. That's the core idea behind few-shot prompting.
Zero-Shot vs. Few-Shot: What's the Difference?
In zero-shot prompting, you describe a task and ask the model to complete it with no prior examples. This works well for common, straightforward tasks where the model has seen plenty of similar cases during training.
In few-shot prompting, you include between 2 and 10 complete examples of the input-output pattern you want the model to follow, then pose your actual question. The model uses these examples as a template and mirrors the structure, format, and style.
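The structure described above can be sketched as a small helper that assembles an instruction, a list of worked examples, and the real query into a single prompt. This is a generic template, not any particular library's API; the `Input:`/`Output:` field names are placeholder conventions:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction, worked examples, and the real query
    into one few-shot prompt string.

    examples -- list of (input, output) pairs the model should mirror.
    """
    parts = [instruction, ""]
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}")
        parts.append(f"Output: {example_output}")
        parts.append("")  # blank line between examples
    # The real query uses the same structure, with the output left open
    # for the model to complete.
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)
```

Because the query repeats the exact `Input:`/`Output:` structure of the examples, the model's most likely continuation is an output in the same format.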
💡 Rule of thumb: Use few-shot when zero-shot produces inconsistent outputs, when format precision matters, or when your task involves a pattern the model may not have seen frequently in training data.
A Simple Few-Shot Example: Sentiment Classification
Suppose you're building a tool to classify customer reviews as Positive, Negative, or Neutral in a specific one-word format. Zero-shot often produces verbose explanations instead. Here's how few-shot fixes this:
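A prompt along these lines does the job. The three labeled reviews below are invented placeholders standing in for real customer data; the label set (Positive/Negative/Neutral) comes from the task itself:

```python
# Few-shot sentiment prompt: three worked examples, then the real query.
sentiment_prompt = """Classify each customer review as Positive, Negative, or Neutral.
Answer with the single word only.

Review: "Absolutely love it -- works exactly as advertised."
Sentiment: Positive

Review: "Stopped working after a week and support never replied."
Sentiment: Negative

Review: "Arrived on time. Does what the description says."
Sentiment: Neutral

Review: "The battery life is shorter than I expected."
Sentiment:"""
```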
With these three examples, the model will reliably return a single-word label rather than a paragraph-length explanation. The examples have taught it the exact format you need.
How Many Examples Should You Include?
Research and practical experience point to a sweet spot:
- 1 example (one-shot): Better than zero-shot but still inconsistent for complex tasks.
- 3–5 examples: The optimal range for most classification, extraction, and formatting tasks.
- 6–10 examples: Useful for tasks with high output variance or where the pattern is very specific.
- 10+ examples: Diminishing returns; at this point, fine-tuning the model may be more efficient.
Choosing Good Examples
The quality of your examples matters far more than the quantity. Follow these principles:
- Cover edge cases. Include examples that represent the boundary conditions of your task, not just the easy middle cases.
- Maintain consistent format. Every example must use the exact same input-output structure.
- Vary the content. Don't use examples that are all very similar; diversity helps the model generalize.
- Make examples accurate. Incorrect examples will teach the model the wrong behaviour. Quality control is critical.
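The "consistent format" principle lends itself to a quick automated check. The sketch below is a rough heuristic of my own, not a standard tool: it approximates each example's structure by the sequence of `Key:` field names at the start of its lines and flags any example that deviates from the first one:

```python
import re

def check_example_consistency(examples):
    """Return the examples whose field structure differs from the first.

    examples -- list of example strings, each using `Key: value` lines.
    Structure is approximated by the ordered tuple of field names found
    at the start of each line (a rough heuristic, not a full validator).
    """
    def fields(example):
        return tuple(re.findall(r"^(\w[\w ]*):", example, flags=re.MULTILINE))

    reference = fields(examples[0])
    return [ex for ex in examples if fields(ex) != reference]
```

Running this over your example set before prompting catches the structural drift that would otherwise make the model average over competing formats.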
Few-Shot for Writing Style
Few-shot prompting is also powerful for establishing a specific writing voice or tone. If you want the AI to write in the style of your brand, include examples of that style before making your request:
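A style-transfer prompt follows the same pattern: voice samples first, then the rewrite request. The brand-voice snippets here are invented placeholders; substitute real excerpts of your own copy:

```python
# Few-shot style prompt: two voice samples, then the draft to rewrite.
style_prompt = """Rewrite the draft in our brand voice, matching the examples.

Example 1: "Skip the setup headaches. Plug it in, and you're live in minutes."
Example 2: "No jargon, no fine print -- just tools that get out of your way."

Draft: "Our product has many configuration options that users can utilize."
Rewrite:"""
```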
Common Mistakes to Avoid
- Inconsistent example format. If your examples use different structures, the model will average them rather than following any one pattern precisely.
- Too few examples for complex tasks. One example is rarely enough for anything beyond simple classification.
- Examples that contradict each other. If different examples imply different rules, the model has no consistent pattern to infer.
- Using examples as a replacement for clear instructions. Always combine few-shot examples with a clear task description; examples show the format, instructions explain the goal.
Few-shot prompting is one of the highest-leverage techniques available. Once you start applying it to tasks where consistency matters, you'll find it's the single change that most reliably improves output quality.