Instructors: Lucy Park and Sung Kim

In Pretraining LLMs, you’ll explore pretraining, the first step of training a large language model. You’ll learn the essential steps to pretrain an LLM, understand the associated costs, and discover how starting from smaller, existing open-source models can be more cost-effective.

Pretraining teaches an LLM to predict the next token across vast text datasets, producing a base model; that base model then requires fine-tuning for optimal performance and safety. In this course, you’ll learn to pretrain a model from scratch, and also to take an already-pretrained model and continue pretraining it on your own data.
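
To make the objective concrete, here is a minimal sketch of next-token prediction with the Hugging Face transformers library. The small open gpt2 model is used purely as a stand-in; the course’s own models and tooling may differ.

    # Minimal sketch of the pretraining objective: next-token prediction.
    # "gpt2" is a small open stand-in model, not the course's model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    text = "Pretraining teaches a model to predict the next token."
    inputs = tokenizer(text, return_tensors="pt")

    # With labels set to the input ids, the model shifts them internally
    # and returns the causal language-modeling (cross-entropy) loss.
    outputs = model(**inputs, labels=inputs["input_ids"])
    print(f"next-token prediction loss: {outputs.loss.item():.3f}")

Pretraining is, at its core, minimizing this loss over a very large corpus.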

In detail:

1. Explore scenarios where pretraining is the optimal choice for model performance. Compare text generation across different versions of the same model to understand the differences between base, fine-tuned, and specialized pretrained models.

2. Learn how to create a high-quality training dataset using web text and existing datasets, which is crucial for effective model pretraining.

3. Prepare your cleaned dataset for training and learn how to package it for use with the Hugging Face library (see the first sketch after this list).

4. Explore ways to configure and initialize a model for training and see how these choices impact the speed of pretraining.

5. Learn how to configure and execute a training run, enabling you to train your own model (a sketch follows this list).

6. Learn how to assess your trained model’s performance and explore common evaluation strategies for LLMs, including the benchmark tasks used to compare different models (a sketch follows this list).
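
For steps 2 and 3, the sketch below shows how a cleaned text corpus might be packaged with the Hugging Face datasets library. The file name, length filter, and tokenizer are illustrative assumptions, not the course’s exact pipeline.

    # Package cleaned text for training with the `datasets` library.
    # "cleaned_corpus.txt" and the length filter are illustrative assumptions.
    from datasets import load_dataset
    from transformers import AutoTokenizer

    # Load raw text files (one document per line) into a Dataset.
    raw = load_dataset("text", data_files={"train": "cleaned_corpus.txt"})

    # Drop very short lines -- a stand-in for real quality filtering.
    filtered = raw["train"].filter(lambda ex: len(ex["text"].split()) > 5)

    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = filtered.map(tokenize, batched=True, remove_columns=["text"])
    tokenized.save_to_disk("packaged_train_data")  # ready for a training run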
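
For steps 4 and 5, here is a minimal sketch of initializing a small model from a configuration (random weights) and launching a short training run with the transformers Trainer. The tiny layer counts, batch size, and step count are illustrative assumptions chosen so the sketch stays cheap to run.

    # Initialize a small GPT-2-style model from scratch and train briefly.
    from datasets import load_from_disk
    from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                              GPT2Config, GPT2LMHeadModel, Trainer,
                              TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token

    # From-scratch initialization: only the architecture is set here.
    config = GPT2Config(n_layer=4, n_head=4, n_embd=256)
    model = GPT2LMHeadModel(config)
    # For continued pretraining, load existing weights instead:
    # model = GPT2LMHeadModel.from_pretrained("gpt2")

    train_data = load_from_disk("packaged_train_data")  # from the sketch above
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

    args = TrainingArguments(output_dir="pretrain_run", max_steps=100,
                             per_device_train_batch_size=8, logging_steps=10)
    Trainer(model=model, args=args, train_dataset=train_data,
            data_collator=collator).train()

Choices like depth, width, and batch size are exactly the configuration decisions that drive pretraining speed and cost.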
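
And for step 6, one common evaluation is perplexity on held-out text, the exponential of the same next-token loss; the model and held-out sentence here are illustrative assumptions.

    # Evaluate a model by perplexity on held-out text (lower is better).
    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    text = "Held-out text the model did not see during training."
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    print(f"perplexity: {math.exp(loss.item()):.1f}")

Benchmark suites used to compare models build on evaluations like this one.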

After taking this course, you’ll be equipped with the skills to pretrain a model—from data preparation and model configuration to performance evaluation.


What's inside

Syllabus

Pretraining LLMs

Good to know

Know what's good, what to watch for, and possible dealbreakers:
  • Provides an introduction to pretraining, a fundamental step in training large language models such as GPT-3 and BERT
  • Instructs learners in creating a high-quality training dataset, which is essential for effective model pretraining
  • Guides learners in preparing a cleaned dataset for training and packaging it for use with the Hugging Face library, a popular tool for training and deploying machine learning models
  • Provides a comprehensive understanding of model configuration and initialization, allowing learners to optimize the training process
  • Enables learners to assess the performance of their trained model and explore common evaluation strategies for LLMs, including important benchmark tasks used to compare different models' performance


Activities

Be better prepared before your course. Deepen your understanding during and after it. Supplement your coursework and achieve mastery of the topics covered in Pretraining LLMs with these activities:
Python Refresher
Brush up on your Python skills before starting the course to ensure a strong foundation and minimize any potential obstacles in your learning journey.
  • Review basic syntax and data structures
  • Practice coding exercises
  • Complete online tutorials or refresher courses
  • Build a small project to apply your skills
Subject Matter Experts
Seek out subject matter experts beyond your instructor to gain additional insights and perspectives on the course material, broadening your understanding and expanding your professional network.
  • Identify potential mentors in the field
  • Reach out and introduce yourself
  • Schedule meetings or discussions
  • Seek guidance and advice
Peer-to-Peer Training
Engage in peer-to-peer training by mentoring other students in the class, reinforcing your understanding of the course material and developing your leadership and communication skills.
  • Identify a student who could benefit from your support
  • Schedule regular sessions to review course material
  • Provide constructive feedback and guidance
  • Create practice exercises and assignments
GPT-3 Response Generator
Build a GPT-style response generator with Hugging Face to practice your text-generation skills (a sketch follows the steps below).
  • Install the Hugging Face transformers library
  • Load a pretrained text-generation model from Hugging Face
  • Write a function to generate text using the model
  • Create a simple web app that uses your function to generate responses
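
A minimal sketch of such a response generator follows. GPT-3 itself is not downloadable through Hugging Face, so the small open gpt2 model stands in; the prompt is an illustrative assumption.

    # Text-generation "response generator" with the transformers pipeline.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    def respond(prompt: str, max_new_tokens: int = 50) -> str:
        """Generate a continuation of the prompt with the loaded model."""
        result = generator(prompt, max_new_tokens=max_new_tokens,
                           do_sample=True, num_return_sequences=1)
        return result[0]["generated_text"]

    print(respond("Pretraining an LLM begins with"))

For the final step, a small web framework such as Gradio or Flask could wrap respond() behind a simple page.
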
Fine-tuning for Specific Tasks
Follow guided tutorials to fine-tune pretrained models for specific tasks, applying the concepts from the course to concrete problems (a sketch follows the steps below).
  • Identify a specific task and dataset
  • Select an appropriate pre-trained model
  • Fine-tune the model on your task
  • Evaluate the performance of the fine-tuned model
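
A minimal sketch of these steps, using sentiment classification on the IMDB dataset as an illustrative task, model, and data-slice choice:

    # Fine-tune a pretrained model on a specific task (here: IMDB sentiment).
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              DataCollatorWithPadding, Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)

    dataset = load_dataset("imdb", split="train[:1000]")  # small slice for speed
    dataset = dataset.map(lambda b: tokenizer(b["text"], truncation=True,
                                              max_length=256), batched=True)

    args = TrainingArguments(output_dir="finetune_run", num_train_epochs=1,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=dataset,
            data_collator=DataCollatorWithPadding(tokenizer)).train()
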
Pretrained Model Performance Evaluation
Run practice drills that evaluate different pretrained models on benchmark tasks, reinforcing your understanding of model evaluation and giving you hands-on experience (a sketch follows the steps below).
  • Create a dataset for model evaluation
  • Select relevant evaluation metrics
  • Implement model evaluation code
  • Evaluate and compare the performance of multiple models
  • Analyze the results and identify areas for improvement
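
A minimal sketch of comparing two pretrained models on the same held-out texts, using the Hugging Face evaluate library’s perplexity measure; the model names and sentences are illustrative assumptions.

    # Compare pretrained models by perplexity on shared texts (lower is better).
    import evaluate

    perplexity = evaluate.load("perplexity", module_type="metric")
    texts = ["The model is evaluated on text it has never seen.",
             "Benchmark tasks compare models on a common footing."]

    for model_id in ["gpt2", "distilgpt2"]:
        results = perplexity.compute(model_id=model_id, predictions=texts)
        print(model_id, f"mean perplexity: {results['mean_perplexity']:.1f}")
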
State-of-the-Art Pretraining Techniques
Compile resources and articles on state-of-the-art pretraining techniques to stay current with the latest advances in large language models.
  • Conduct a literature search for relevant papers and articles
  • Summarize the key findings and insights
  • Organize and categorize the resources
  • Create a presentation or document to share your findings
Contribute to Hugging Face
Engage with the open-source community by contributing to Hugging Face, a leading platform for large language models, to gain practical experience and deepen your understanding of the ecosystem.
  • Find an issue or feature to work on
  • Fork the repository
  • Implement your changes
  • Submit a pull request

Career center

Learners who complete Pretraining LLMs will develop knowledge and skills that may be useful across a range of machine learning careers.

Reading list

We haven't picked any books for this reading list yet.


Similar courses

Here are nine courses similar to Pretraining LLMs:
  • Monitor and Evaluate Model Performance During Training
  • Prompt Engineering for Improved Performance
  • Continuous Model Training with Evolving Data Streams
  • Optimize Model Training with Hyperparameter Tuning
  • Reinforcement Learning from Human Feedback
  • Model Building and Evaluation for Data Scientists
  • Optimizing Neural Networks for Efficient Data Processing
  • Efficient Data Feeding and Labeling for Model Training
  • Optimize Enterprise-scale Data Models - DP-500