LLM Concepts

Deep dives into large language models — from transformer architecture to fine-tuning, RAG, and agents.

Understand how modern large language models are built and how to use them effectively. Each topic page includes diagrams, code examples, and connections to related concepts.

Topics

| Topic | Description |
| --- | --- |
| Transformer Architecture | Self-attention, multi-head attention, positional encoding, and the encoder-decoder stack |
| Tokenization & Embeddings | BPE, WordPiece, vector spaces, and semantic similarity |
| Fine-tuning & RLHF | SFT, reward modeling, PPO, and parameter-efficient methods like LoRA/QLoRA |
| RAG & Retrieval | Vector DBs, chunking strategies, hybrid search, and reranking |
| Prompt Engineering | Few-shot prompting, chain-of-thought, structured output, and best practices |
| Agents & Tool Use | ReAct pattern, function calling, Model Context Protocol, and multi-agent systems |