LLM Fundamentals
Understand the core concepts behind Large Language Models
Learn how text is broken down into tokens that AI can understand
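As a taste of that module, here is a toy greedy longest-match subword tokenizer. It is a sketch only: the vocabulary below is made up for illustration, and production LLMs use trained BPE or unigram vocabularies rather than this hand-written matching loop.

```python
def tokenize(text, vocab):
    """Greedily match the longest vocabulary entry at each position.

    Falls back to single characters when nothing in the vocab matches,
    mimicking how byte-level tokenizers never fail on unseen text.
    """
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character becomes its own token
            i += 1
    return tokens

# Hypothetical vocabulary; real vocabularies hold tens of thousands of pieces.
vocab = {"un", "believ", "able"}
print(tokenize("unbelievable", vocab))  # ['un', 'believ', 'able']
```

One word can become several tokens, which is why token counts, not word counts, drive context limits and API pricing.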
Explore how words become vectors in a high-dimensional space
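Once words are vectors, "similar meaning" becomes a geometric question. A minimal sketch with cosine similarity (the toy 3-dimensional vectors are invented for illustration; real embeddings have hundreds or thousands of dimensions and are learned, not hand-picked):

```python
import math

def cosine(u, v):
    """Cosine of the angle between two vectors: 1 = same direction, 0 = unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings: "cat" and "dog" point in similar directions, "car" does not.
cat = [0.9, 0.8, 0.1]
dog = [0.8, 0.9, 0.2]
car = [0.1, 0.2, 0.9]
print(cosine(cat, dog) > cosine(cat, car))  # True
```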
Understand how models decide which parts of the input matter most
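The "which parts matter most" mechanism is attention. A minimal sketch of scaled dot-product attention weights for a single query, in plain Python (real models do this with batched matrix multiplies over learned query/key projections):

```python
import math

def attention_weights(query, keys):
    """Softmax of scaled dot products: one weight per key, summing to 1."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# The key most aligned with the query gets the largest weight.
print(attention_weights([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))
```

The output vector the model actually uses is then the weighted average of the values, so tokens with higher weights contribute more to the result.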
See how all components work together in a transformer architecture
Understand the prefill vs. generation phases, the KV cache, and why the first token is slow
Learn about greedy decoding, beam search, temperature, top-k, and top-p sampling
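The cost asymmetry can be sketched with a toy work counter. Here "work" counts query-key comparisons as a rough proxy for compute (an assumption for illustration, not a real profiler): prefill attends over the whole prompt at once, which is why the first token takes longest, and the KV cache then lets each later token attend over stored keys instead of reprocessing everything.

```python
def decode_work(prompt_len, new_tokens, use_cache):
    """Count query-key comparisons during decoding, with or without a KV cache."""
    work = prompt_len * prompt_len        # prefill: all prompt tokens attend to each other
    for t in range(new_tokens):
        if use_cache:
            work += prompt_len + t + 1    # one new query vs. all cached keys
        else:
            work += (prompt_len + t + 1) ** 2  # no cache: recompute attention from scratch
    return work

print(decode_work(100, 10, use_cache=True) < decode_work(100, 10, use_cache=False))  # True
```

The gap widens with sequence length, which is why every serious inference stack keeps a KV cache.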
Learn how to control LLM behavior with generation parameters
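The decoding strategies above can be sketched in one small sampler. This is a simplified illustration, not any library's implementation: it applies temperature scaling, then optional top-k and top-p (nucleus) filtering, then samples from what remains; top_k=1 degenerates to greedy decoding.

```python
import math
import random

def sample(logits, temperature=1.0, top_k=None, top_p=None, rng=random):
    """Pick a token index from logits using temperature, top-k, and top-p."""
    scaled = [l / temperature for l in logits]      # temperature reshapes the distribution
    m = max(scaled)
    probs = [math.exp(s - m) for s in scaled]       # softmax (stable form)
    total = sum(probs)
    probs = [p / total for p in probs]

    ranked = sorted(enumerate(probs), key=lambda x: -x[1])
    if top_k is not None:
        ranked = ranked[:top_k]                     # keep only the k most likely tokens
    if top_p is not None:
        kept, cum = [], 0.0
        for i, p in ranked:                         # keep the smallest set covering top_p mass
            kept.append((i, p))
            cum += p
            if cum >= top_p:
                break
        ranked = kept

    total = sum(p for _, p in ranked)               # renormalise over the kept tokens
    r = rng.random() * total
    for i, p in ranked:
        r -= p
        if r <= 0:
            return i
    return ranked[-1][0]

print(sample([1.0, 3.0, 2.0], top_k=1))  # 1 (greedy: the highest logit wins)
```

Lower temperature and tighter top-k/top-p make output more deterministic; higher values make it more diverse. Beam search is different in kind: it keeps several candidate sequences in parallel rather than sampling one token at a time, so it is not shown here.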
Understand the key components that make up effective prompts
Learn the core principles of writing effective prompts for LLMs
Master proven techniques for writing high-quality prompts
Reduce model size with FP16, INT8, or INT4 quantization while preserving quality
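The core idea of integer quantization fits in a few lines. A minimal sketch of symmetric per-tensor INT8 quantization: store one float scale plus small integers instead of full floats (real schemes add per-channel scales, zero points, and calibration, which are omitted here).

```python
def quantize_int8(weights):
    """Map floats to the int8 range [-127, 127] with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0   # assumes not all weights are zero
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; error is at most half a quantization step."""
    return [v * scale for v in q]

q, scale = quantize_int8([0.5, -1.0, 0.25])
print(q)                       # small integers, e.g. [64, -127, 32]
print(dequantize(q, scale))    # close to the original weights
```

Each weight now needs 1 byte instead of 4 (FP32) or 2 (FP16), at the cost of a small, bounded rounding error per weight.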
Learn when to use prompting, LoRA, or full fine-tuning for your use case