Definition
In-context learning is the phenomenon where a large language model adapts its behavior to a new task using only examples shown in the prompt, without any weight updates. It is the mechanism behind few-shot prompting: the model recognizes the input-output pattern in the examples and applies it to the actual input. First described systematically in the GPT-3 paper (Brown et al., 2020); now a foundational concept in prompt engineering.
Example
Showing 3 examples of converting 'cat' → 'kitten', 'dog' → 'puppy', 'horse' → 'foal' lets the model infer the pattern and produce 'cow' → 'calf' for a new input.
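The example above can be sketched as a prompt-construction helper. This is a minimal sketch, not tied to any particular model API; `build_few_shot_prompt` and the `input -> output` line format are illustrative choices, and the assembled string would be sent to a language model as the prompt.

```python
def build_few_shot_prompt(examples, query):
    """Format (input, output) example pairs plus a new query
    into a single few-shot prompt string. The model is expected
    to continue the pattern and complete the final line."""
    lines = [f"{inp} -> {out}" for inp, out in examples]
    lines.append(f"{query} ->")  # leave the answer slot open
    return "\n".join(lines)

examples = [("cat", "kitten"), ("dog", "puppy"), ("horse", "foal")]
prompt = build_few_shot_prompt(examples, "cow")
print(prompt)
# cat -> kitten
# dog -> puppy
# horse -> foal
# cow ->
```

Given this prompt, a capable model typically infers the adult-to-young-animal mapping and completes the last line with 'calf'.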
When to use
Custom tasks where you can show examples but don't want to fine-tune.
Also known as
ICL