AI Prompt Engineering Glossary
30 definitions covering prompt frameworks, techniques, concepts, parameters, and tools. Concise, AI-extractable, internally linked.
Techniques
Chain-of-Thought (CoT)
Prompting technique that elicits step-by-step reasoning from the model before it commits to a final answer.
Few-Shot Prompting
Including 2-5 input-output examples in the prompt to teach a pattern.
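A few-shot prompt can be assembled as plain string concatenation. A minimal sketch, assuming a sentiment-classification task; the examples and labels are illustrative, not tied to any provider's API:

```python
# Illustrative few-shot examples (hypothetical data, not from a real dataset).
EXAMPLES = [
    ("The movie was wonderful", "positive"),
    ("I want my money back", "negative"),
    ("It was fine, I guess", "neutral"),
]

def build_few_shot_prompt(new_input: str) -> str:
    """Prepend labeled input-output pairs so the model can infer the pattern."""
    lines = ["Classify the sentiment of each review."]
    for text, label in EXAMPLES:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # End mid-pattern so the model completes the missing label.
    lines.append(f"Review: {new_input}\nSentiment:")
    return "\n\n".join(lines)

print(build_few_shot_prompt("Best purchase I've made all year"))
```

Ending the prompt at `Sentiment:` is the key move: the model continues the established pattern rather than answering free-form.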
Fine-Tuning
Training a base model on task-specific examples to bake in style or behavior.
JSON Prompt
Prompts using structured JSON to specify input, constraints, and output schema.
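A JSON prompt bundles the pieces into one machine-readable payload. A sketch with illustrative field names (`task`, `constraints`, `output_schema` are conventions, not a standard):

```python
import json

# Hypothetical JSON prompt: task, constraints, and expected output shape
# in one structured object the model is asked to honor.
prompt = {
    "task": "Summarize the article below in one sentence.",
    "constraints": {"max_words": 25, "tone": "neutral"},
    "output_schema": {"summary": "string"},
    "input": "LLMs are trained on large text corpora to predict the next token.",
}

prompt_text = json.dumps(prompt, indent=2)
print(prompt_text)
```

The same structure doubles as documentation: anyone reading the prompt can see exactly which constraints apply.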
Persona Prompting
Assigning the model a detailed character or expert role to constrain its voice.
RAG (Retrieval-Augmented Generation)
Pattern that retrieves relevant documents and includes them in the prompt before generation.
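The retrieve-then-prompt flow can be sketched end to end. Here a toy keyword-overlap scorer stands in for a real vector store, and the three-document corpus is invented for illustration:

```python
# Toy corpus standing in for an indexed document store.
CORPUS = [
    "The context window is the maximum number of tokens a model can read.",
    "Embeddings map text to vectors for semantic search.",
    "Few-shot prompting includes worked examples in the prompt.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_rag_prompt(query: str) -> str:
    """Inject retrieved documents into the prompt before the question."""
    docs = "\n".join(retrieve(query))
    return f"Context:\n{docs}\n\nAnswer using only the context above.\nQuestion: {query}"

print(build_rag_prompt("What is a context window?"))
```

A production system would replace `retrieve` with embedding similarity search, but the prompt-assembly step is the same.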
Self-Consistency Prompting
Running Chain-of-Thought N times and taking the majority answer to reduce reasoning errors.
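The sample-and-vote loop is a few lines of code. In this sketch, `sample_cot_answer` is a stand-in for a real model call at temperature > 0; the 70% accuracy figure is an arbitrary assumption for illustration:

```python
import random
from collections import Counter

def sample_cot_answer(question: str, rng: random.Random) -> str:
    """Stub for one Chain-of-Thought sample; a real call would hit an LLM."""
    # Pretend the model answers correctly 70% of the time (assumed rate).
    return "42" if rng.random() < 0.7 else "41"

def self_consistent_answer(question: str, n: int = 15, seed: int = 0) -> str:
    """Sample n reasoning paths and majority-vote their final answers."""
    rng = random.Random(seed)
    answers = [sample_cot_answer(question, rng) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("What is 6 * 7?"))  # majority vote recovers "42"
```

Even a modestly reliable sampler becomes much more reliable after voting, which is the point of the technique.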
Structured Output
API mode that guarantees the model returns valid JSON matching a schema.
Tool Use (Function Calling)
Mechanism letting an LLM call external functions or APIs to get information or perform actions.
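The application side of the loop is a dispatch table: parse the model's structured call, run the matching function, return the result. A sketch where `get_weather` and the call format are illustrative, not any provider's actual schema:

```python
import json

def get_weather(city: str) -> str:
    """Stand-in for a real weather API call."""
    return f"Sunny in {city}"

# Registry mapping tool names the model may emit to real functions.
TOOLS = {"get_weather": get_weather}

def handle_model_output(output: str) -> str:
    """Parse a structured tool call from the model and execute it."""
    call = json.loads(output)
    return TOOLS[call["tool"]](**call["args"])

print(handle_model_output('{"tool": "get_weather", "args": {"city": "Oslo"}}'))
```

In a full loop, the returned string would be appended to the conversation so the model can incorporate it into its next reply.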
Tree-of-Thought (ToT)
Reasoning method where the model explores multiple branching reasoning paths, evaluates them, and continues along the most promising one.
Zero-Shot Prompting
Asking a model to perform a task with no examples, only instructions.
Concepts
AI Agents
Systems where an LLM plans, calls tools, observes results, and iterates toward a goal.
Context Window
Maximum tokens a model can read in one prompt+output cycle.
Embeddings
Numerical vector representations of text used for semantic search and similarity.
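Similarity between embeddings is typically measured with cosine similarity. A toy illustration with hand-made three-dimensional vectors; real embeddings come from a model and have hundreds or thousands of dimensions:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product over the product of magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented vectors: "cat" and "kitten" point in similar directions,
# "invoice" does not.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.2, 0.05]
invoice = [0.0, 0.1, 0.95]

print(cosine(cat, kitten) > cosine(cat, invoice))  # semantically related texts score higher
```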
Google AI Overviews
Google Search feature that summarizes top-ranking pages into an AI-generated answer.
Guardrails
Defensive layers around an LLM that filter input and output to prevent harm or drift.
Hallucination
When an LLM produces fluent but factually incorrect output.
In-Context Learning
Model behavior of adapting to a task using only examples shown in the prompt.
Prompt Engineering
The practice of designing AI inputs to produce reliable, structured outputs.
Prompt Injection
Attack technique where malicious user input overrides an AI app's instructions.
Prompt Template
Reusable prompt structure with placeholder variables to be filled per use.
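Python's standard library covers this directly with `string.Template`. A minimal sketch; the variable names and template text are illustrative:

```python
from string import Template

# Reusable template with $-placeholders filled per use.
SUMMARY_TEMPLATE = Template(
    "You are a $role. Summarize the text below in $max_sentences sentences.\n\n$text"
)

prompt = SUMMARY_TEMPLATE.substitute(
    role="technical editor",
    max_sentences=2,
    text="LLMs predict the next token given the preceding context.",
)
print(prompt)
```

`substitute` raises `KeyError` if a placeholder is left unfilled, which catches template bugs before the prompt reaches a model.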
System Prompt
Stable instructions that define the model's role, tone, and refusal rules.
Tokens
Units of text (sub-word fragments) that LLMs process; 1 token ≈ 0.75 English words.
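The 0.75 words-per-token rule of thumb gives a quick budget estimate; exact counts require the specific model's tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Rough English token estimate from the ~0.75 words-per-token heuristic."""
    words = len(text.split())
    return round(words / 0.75)

print(estimate_tokens("The quick brown fox jumps over the lazy dog"))  # 9 words -> ~12 tokens
```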
User Prompt
The specific task or question the user sends; it is interpreted within the rules set by the system prompt.