This course offers a theoretical introduction to Large Language Models, focusing on how they are built, trained, and used in real-world applications.
You’ll learn the basics of language modeling, the role of the transformer architecture, and key techniques such as pre-training, fine-tuning, and transfer learning.
Led by Andreea Turcu, this course also includes hands-on work with LLM DataStudio and covers how to evaluate and improve model performance.
What you'll learn
- Foundations of Language Models: Understand what language models are, how they work, and their role in natural language processing.
- Neural Networks, Deep Learning, and Transformer Architecture: Learn the core concepts behind modern LLMs, including how transformers power these models.
- Pre-training, Fine-tuning, and Performance Evaluation: Gain practical skills in training, adapting, and benchmarking large language models.
- Real-World Applications and Use Cases: Explore how LLMs are used across industries, from text generation to advanced NLP solutions.