Large Language Models (LLMs) have revolutionized the AI industry, providing unprecedented capability and expanding the possibilities of artificial intelligence. However, pre-trained LLMs may not always meet the specific requirements of an organization, so there is often a need to fine-tune them to tailor these models to your unique tasks and requirements.
This comprehensive course is designed to equip you with LLM fine-tuning skills. It starts with a thorough introduction to the fundamentals of fine-tuning, highlighting its critical role in adapting LLMs to your specific data. We then dive into hands-on sessions covering the entire workflow for fine-tuning an OpenAI GPT model. Through practical sessions, you'll learn step by step to prepare and format datasets, execute OpenAI fine-tuning processes, and evaluate the model outcomes.
By the end of this course, you will be proficient in fine-tuning OpenAI's GPT models to meet specific organizational needs, and ready to start your career journey as an LLM fine-tuning engineer.
What, in a nutshell, is included in the course?
[Theory]
We'll start with the core basics and fundamentals of LLMs and LLM fine-tuning.
Discuss why fine-tuning is needed, how it works, the fine-tuning workflow, and the steps involved in it.
Different types of LLM fine-tuning techniques, including standard, sequential, layer-wise, feature-extraction, RLHF, PEFT (LoRA, QLoRA), and instruction fine-tuning.
Best practices for LLM fine-tuning.
[Practicals]
Get a detailed walkthrough of the OpenAI Dashboard and Playground for a holistic understanding of the wide range of tools OpenAI offers for generative AI.
Follow the OpenAI fine-tuning workflow in practical sessions covering exploratory data analysis (EDA), data preprocessing, data formatting, creating a fine-tuning job, and evaluation.
Understand the specialized JSONL format OpenAI accepts for training and test data, and learn about the three important roles: system, user, and assistant.
Calculate the token count and fine-tuning cost in advance using the Tiktoken library.
Gain hands-on experience fine-tuning OpenAI's GPT model on a custom dataset using Python, through step-by-step practical sessions.
Assess the accuracy and performance of the fine-tuned model against the base pre-trained model to evaluate the impact of fine-tuning.
In this video you will get a basic idea of what Large Language Models (LLMs) are and the nitty-gritty surrounding them.
Why LLMs are so important, and what their various applications are.
Pre-trained LLMs are models that have already been trained on a large dataset, mostly for general tasks. They have already learned patterns and features that can be reused for tasks of a similar nature.
In technical terms, LLM fine-tuning is the process of adjusting the parameters of a pre-trained model to make it better suited to a specific task in your own use case. Fine-tuning means making tweaks to an already pre-trained LLM that performs general-purpose tasks so that it can perform more specialized tasks.
In this video, you will learn about the steps involved in the LLM fine-tuning process.
What are the different types of LLM fine-tuning?
Standard LLM fine-tuning
Sequential LLM fine-tuning
Layer-wise LLM fine-tuning
Feature-extraction LLM fine-tuning
RLHF fine-tuning
PEFT fine-tuning
LoRA
QLoRA
Instruction fine-tuning
LLM fine-tuning fundamentals quiz
In this video you will get a walkthrough of the dashboard OpenAI offers developers for generative AI. You will find the fine-tuning option in this dashboard.
In this video, I do the basic setup required for LLM fine-tuning.
In this video I show the roadmap of the steps involved in fine-tuning an OpenAI GPT model.
In this video, I perform basic EDA on the input dataset collected for OpenAI LLM fine-tuning. The EDA includes removing duplicate and null values.
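As a minimal sketch of that cleaning step using pandas (the column names and values below are placeholders, not the course's actual dataset):

```python
import pandas as pd

# Hypothetical dataset; in the course this would be loaded from a CSV,
# e.g. df = pd.read_csv("dataset.csv")
df = pd.DataFrame({
    "question": ["What is AI?", "What is AI?", None, "Define LLM"],
    "answer":   ["...", "...", "...", None],
})

df = df.drop_duplicates()        # remove exact duplicate rows
df = df.dropna()                 # remove rows containing null values
df = df.reset_index(drop=True)   # renumber rows after dropping
print(len(df))                   # rows remaining after cleaning
```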
In this video I create a word cloud from the collected data for OpenAI LLM fine-tuning.
The cost of fine-tuning OpenAI GPT models is directly proportional to the input token count, so it is necessary to count the tokens in your input dataset before submitting a fine-tuning job to OpenAI.
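A rough sketch of the cost arithmetic. In the course the token count comes from the Tiktoken library (roughly len(encoding.encode(text)) summed over all examples); the per-token price below is a placeholder, so check OpenAI's current pricing page for real numbers:

```python
# Placeholder rate, NOT an official OpenAI price -- check the pricing page.
PRICE_PER_1K_TOKENS = 0.008

def estimate_cost(total_tokens: int, n_epochs: int = 3) -> float:
    """Estimated fine-tuning cost: training tokens are billed once per epoch."""
    return (total_tokens / 1000) * n_epochs * PRICE_PER_1K_TOKENS

# e.g. a 100k-token dataset trained for 3 epochs at the placeholder rate
print(round(estimate_cost(100_000, n_epochs=3), 2))
```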
In this video I split the input file into train and test splits.
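A simple sketch of such a split using only the standard library; the example data and the 80/20 ratio are assumptions for illustration:

```python
import random

# Hypothetical examples; in practice these would be rows from the input file.
examples = [f"example {i}" for i in range(100)]

random.seed(42)          # fixed seed so the split is reproducible
random.shuffle(examples)

split = int(0.8 * len(examples))   # assumed 80/20 train/test ratio
train, test = examples[:split], examples[split:]
print(len(train), len(test))
```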
JSONL, or JSON Lines, is the same as the JSON format except that newline characters are used to delimit records. OpenAI accepts only JSONL-formatted data for fine-tuning.
The OpenAI JSONL format contains messages with three roles: system, user, and assistant.
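One training example in that chat format looks like the following; the prompt and response text here are invented for illustration. A JSONL file is simply one such JSON object per line:

```python
import json

# A single fine-tuning example: a "messages" list with the three roles.
example = {
    "messages": [
        {"role": "system", "content": "You are a helpful support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Go to Settings > Account > Reset password."},
    ]
}

# Serializing to one line produces one JSONL record.
line = json.dumps(example)
print(line)
```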
Convert the train and test CSV files into the OpenAI-accepted format for fine-tuning.
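A minimal sketch of that conversion, assuming the CSVs have "prompt" and "completion" columns (the column names, the in-memory CSV, and the system message are all placeholders for illustration):

```python
import csv
import io
import json

# Stand-in for reading a CSV file from disk, e.g. open("train.csv").
csv_text = "prompt,completion\nWhat is JSONL?,JSON objects delimited by newlines.\n"

system_msg = "You are a helpful assistant."  # assumed system prompt

jsonl_lines = []
for row in csv.DictReader(io.StringIO(csv_text)):
    record = {"messages": [
        {"role": "system", "content": system_msg},
        {"role": "user", "content": row["prompt"]},
        {"role": "assistant", "content": row["completion"]},
    ]}
    jsonl_lines.append(json.dumps(record))

# Each record on its own line yields the JSONL file OpenAI accepts.
jsonl_text = "\n".join(jsonl_lines)
print(jsonl_text)
```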
In this video, we upload the processed files (in OpenAI JSONL format) to the OpenAI servers. The purpose of the upload is set to "fine-tune".
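A hedged sketch of that upload using the openai v1 Python client (requires `pip install openai`); the file path is a placeholder, and the network call only runs when an API key is configured:

```python
import os

def upload_training_file(path: str) -> str:
    """Upload a JSONL file to OpenAI for fine-tuning and return its file id."""
    from openai import OpenAI          # requires `pip install openai`
    client = OpenAI()                  # reads OPENAI_API_KEY from the environment
    with open(path, "rb") as f:
        uploaded = client.files.create(file=f, purpose="fine-tune")
    return uploaded.id                 # file id used when creating the job

# Only attempt the real upload when credentials are present.
if os.environ.get("OPENAI_API_KEY"):
    print(upload_training_file("train.jsonl"))  # placeholder file name
```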
What are the parameters of OpenAI's client.fine_tuning.jobs.create() method?
Create and submit a fine-tuning job to OpenAI to fine-tune the GPT LLM.
In this video, I finally create and submit the job to fine-tune OpenAI's GPT-3.5 Turbo model.
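A hedged sketch of the job submission with the openai v1 client. The key parameters of client.fine_tuning.jobs.create() are model, training_file, and optionally validation_file, hyperparameters, and suffix; the file id, epoch count, and suffix below are placeholders:

```python
import os

def create_finetune_job(training_file_id: str) -> str:
    """Submit a fine-tuning job for the uploaded training file and return the job id."""
    from openai import OpenAI          # requires `pip install openai`
    client = OpenAI()                  # reads OPENAI_API_KEY from the environment
    job = client.fine_tuning.jobs.create(
        model="gpt-3.5-turbo",             # base model to fine-tune
        training_file=training_file_id,    # id returned by the file upload
        hyperparameters={"n_epochs": 3},   # assumed epoch count
        suffix="my-custom-model",          # placeholder model name tag
    )
    return job.id

# Only submit a real job when credentials are present.
if os.environ.get("OPENAI_API_KEY"):
    print(create_finetune_job("file-abc123"))  # placeholder file id
```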
It is important to evaluate the fine-tuned LLM's accuracy against the base model and then, based on the evaluation results, check whether the fine-tuned model actually performed better than the base pre-trained LLM.
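A toy sketch of that comparison on an exact-match metric; the reference answers and both models' predictions are invented for illustration (real evaluations would use held-out test data and model responses):

```python
def accuracy(predictions, references):
    """Fraction of predictions that exactly match the reference answers."""
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

references  = ["yes", "no", "yes", "no"]
base_preds  = ["yes", "yes", "no", "no"]   # hypothetical base-model output
tuned_preds = ["yes", "no", "yes", "no"]   # hypothetical fine-tuned output

print(accuracy(base_preds, references))    # base model score
print(accuracy(tuned_preds, references))   # fine-tuned model score
```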
OpenAI fine-tuning quiz