Definition
Fine-tuning is the process of further training a base language model on task-specific examples so that it internalizes a particular style, format, or behavior. Compared with RAG and prompt engineering, fine-tuning is more expensive and slower to update, so it is best reserved for cases where those techniques cannot achieve the required consistency at scale. By 2026, OpenAI, Anthropic, and Google all offer managed fine-tuning APIs.
Example
A legal team fine-tunes a model on 10,000 anonymized contracts to get reliably formatted contract clauses without re-prompting each time.
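The first step in a workflow like this is packaging examples into the JSONL chat format that managed fine-tuning APIs accept. A minimal sketch, using the `messages` structure documented for OpenAI's fine-tuning API; the clause text and filename here are invented for illustration:

```python
import json

# Hypothetical training pairs: (user request, desired clause output).
# Real data would come from the team's anonymized contract corpus.
clauses = [
    ("Draft a confidentiality clause.",
     "1. CONFIDENTIALITY. Each party shall hold the other's information in strict confidence."),
    ("Draft a termination clause.",
     "2. TERMINATION. Either party may terminate this Agreement on thirty days' written notice."),
]

# Write one JSON object per line, each a full chat transcript.
with open("train.jsonl", "w") as f:
    for prompt, completion in clauses:
        example = {
            "messages": [
                {"role": "system", "content": "You draft contract clauses in the firm's house style."},
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": completion},
            ]
        }
        f.write(json.dumps(example) + "\n")
```

The resulting file is then uploaded and referenced when creating the fine-tuning job; each line teaches the model one prompt-to-clause mapping, which is how the consistent formatting gets baked in.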
When to use
When prompt engineering and RAG cannot meet the required consistency, latency, or cost targets.