# LLM Concepts
Deep dives into large language models — from transformer architecture to fine-tuning, RAG, and agents.
Understand how modern large language models are built and how to use them effectively. Each topic page includes diagrams, code examples, and connections to related concepts.
## Topics
| Topic | Description |
|---|---|
| Transformer Architecture | Self-attention, multi-head attention, positional encoding, and the encoder-decoder stack |
| Tokenization & Embeddings | BPE, WordPiece, vector spaces, and semantic similarity |
| Fine-tuning & RLHF | SFT, reward modeling, PPO, and parameter-efficient methods like LoRA/QLoRA |
| RAG & Retrieval | Vector DBs, chunking strategies, hybrid search, and reranking |
| Prompt Engineering | Few-shot prompting, chain-of-thought, structured output, and best practices |
| Agents & Tool Use | ReAct pattern, function calling, Model Context Protocol, and multi-agent systems |
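As a small taste of the "Tokenization & Embeddings" topic above, the sketch below shows how semantic similarity is measured between embedding vectors using cosine similarity. The vectors here are tiny made-up 4-dimensional examples for illustration; real models produce embeddings with hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot(a, b) / (|a| * |b|), ranging from -1 to 1."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" — illustrative values, not output from a real model
king = [0.9, 0.8, 0.1, 0.2]
queen = [0.85, 0.75, 0.2, 0.25]
apple = [0.1, 0.2, 0.9, 0.8]

print(cosine_similarity(king, queen))  # close to 1: semantically similar
print(cosine_similarity(king, apple))  # noticeably lower: unrelated
```

The same measure underpins retrieval in RAG systems: a query embedding is compared against stored document embeddings, and the nearest vectors by cosine similarity are returned as context.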