Techniques

Fine-Tuning

Training a base model on task-specific examples to bake in style or behavior.

Definition

Fine-tuning is the process of training a base language model on additional task-specific examples so that the model internalizes a specific style, format, or behavior. Compared with retrieval-augmented generation (RAG) and prompt engineering, fine-tuning is more expensive and slower to update, so it is best reserved for cases where those lighter-weight techniques cannot achieve the desired consistency at scale. By 2026, OpenAI, Anthropic, and Google all offer managed fine-tuning APIs.

Example

A legal team fine-tunes a model on 10,000 anonymized contracts to get reliably formatted contract clauses without re-prompting each time.
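In practice, a fine-tuning workflow starts by converting examples like these into a JSONL training file, where each line is one chat-style record pairing a prompt with the desired output. The sketch below shows one common way to build such a file; the clause texts, system message, and filename are illustrative assumptions, not a specific provider's requirements.

```python
import json

# Hypothetical prompt/completion pairs (illustrative, not real contract text).
examples = [
    {
        "prompt": "Draft a confidentiality clause for a vendor agreement.",
        "completion": "CONFIDENTIALITY. Each party shall hold the other's information in strict confidence...",
    },
    {
        "prompt": "Draft a termination clause with 30 days' notice.",
        "completion": "TERMINATION. Either party may terminate this Agreement on thirty (30) days' written notice...",
    },
]

def to_chat_record(example):
    """Convert one pair into the chat-style record that managed
    fine-tuning APIs commonly accept: one JSON object per line."""
    return {
        "messages": [
            {"role": "system", "content": "You draft contract clauses in house style."},
            {"role": "user", "content": example["prompt"]},
            {"role": "assistant", "content": example["completion"]},
        ]
    }

# Write the JSONL training file: one serialized record per line.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(to_chat_record(ex)) + "\n")
```

The resulting `train.jsonl` is what you would upload to a provider's fine-tuning endpoint; the exact upload call and accepted schema vary by vendor, so check the provider's documentation.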

When to use

When prompt engineering and RAG cannot meet your requirements for consistency, latency, or cost.

