Are you interested in learning generative AI, but feel intimidated by the complexities of AI and ML?
If your answer is YES, then this course is for you.
I structured this course based on my own journey learning generative AI technology. Having faced the challenges firsthand, I've designed it to make the learning process easier and more accessible. This course is tailored specifically for those without an AI or ML background, helping you quickly get up to speed with generative AI.
Designed specifically for IT professionals, developers, and architects with no prior AI/ML background, this course will empower you to build intelligent, innovative applications using Large Language Models (LLMs). You’ll gain practical, hands-on experience in applying cutting-edge generative AI technologies without the steep learning curve of mastering complex algorithms or mathematical theories.
Here is an overview of the course structure & coverage:
Generative AI Foundations: Dive into the core concepts of Large Language Models (LLMs), and learn how to work with powerful models like Google Gemini, Anthropic Claude, OpenAI GPT, and multiple open-source/Hugging Face LLMs.
Building Generative AI Applications: Discover practical techniques for creating generative AI applications, including prompting techniques, inference control, in-context learning, RAG patterns (naive and advanced), agentic RAG, vector databases & much more.
Latest Tools and Frameworks: Gain practical experience with cutting-edge tools like LangChain, Streamlit, Hugging Face, and popular vector databases like Pinecone and ChromaDB.
Try out multiple LLMs: The course doesn't depend on a single LLM for hands-on exercises; instead, learners are encouraged to use multiple models so that they learn the nuances of each model's behavior.
Learning Reinforcement: After each set of conceptual lessons, students are given exercises, projects, and quizzes to solidify their understanding and reinforce the material covered in previous lessons.
Harnessing the Power of Hugging Face: Master the Hugging Face platform, including its tools, libraries, and community resources, to effectively utilize pre-trained models and build custom applications.
Advanced Techniques: Delve into advanced topics like embeddings, search algorithms, model architecture, and fine-tuning to enhance your AI capabilities.
Real-World Projects: Apply your knowledge through hands-on projects, such as building a movie recommendation engine and a creative writing workbench.
Course Features
18+ Hours of Video Content
Hands-On Projects and Coding Exercises
Real-World Examples
Quizzes for Learning Reinforcement
GitHub Repository with Solutions
Web-Based Course Guide
By the end of this course, you'll be well-equipped to leverage Generative AI for a wide range of applications, from natural language processing to content generation and beyond.
Who Is This Course For?
This course is perfect for:
IT professionals, application developers, and architects looking to integrate generative AI into their applications.
Students or professionals preparing for interviews for roles related to generative AI.
Those with no prior experience in AI/ML who want to stay competitive in today’s rapidly evolving tech landscape.
Anyone interested in learning how to build intelligent systems that solve real-world business problems using AI.
Why Choose This Course?
Raj structured this course based on his own experience learning generative AI technology. He applied his firsthand knowledge of the challenges involved to create a structured course that makes it simple for anyone without an AI/ML background to get up to speed with generative AI fast.
No AI/ML Background Needed: This course is designed for non-experts and beginners in AI/ML.
Hands-On Learning: Engage in practical, real-world projects and coding exercises that bring AI concepts to life.
Expert Guidance: Learn from Rajeev Sakhuja, a seasoned IT consultant with over 20 years of industry experience.
Comprehensive Curriculum: Over 18 hours of video lessons, quizzes, and exercises, plus a web-based course guide to support you throughout your learning journey.
Latest Tools and Frameworks: Gain practical experience with cutting-edge tools like LangChain, Streamlit, Hugging Face, and popular vector databases like Pinecone and ChromaDB.
Who Is This Course Not For?
This course is not intended for:
Folks looking for a deep dive into the internals of generative AI models.
Those looking to gain an understanding of the mathematics behind the models.
IT professionals interested in data science roles.
Meet your instructor!
Course outline, tips, etc.
Provides an overview of what's covered in this section.
In this video you will follow the instructions to set up the tools and course repository on your machine.
Hands-on experience is a key part of this course. In this lesson you will learn about the various ways in which the course will enhance your learning experience.
In this lesson I will go over the options to access the models.
Discusses the objective and lessons covered in this section.
Explore the evolution of Artificial Intelligence over the past two decades. This lesson provides an overview of AI technologies like Machine Learning (ML), Neural Networks, and Generative AI, laying the foundation for deeper understanding.
Delve into the basic building blocks of Generative AI—neurons and neural networks. Understand how deep learning networks work and why they are pivotal to AI models.
Interact with a neural network to solve mathematical problems, demystifying the underlying mechanisms. This hands-on exercise helps reinforce your understanding of how these networks operate.
Gain insight into how a Generative AI model functions from an external perspective. This lesson simplifies complex AI models by exploring their behavior without diving into technical intricacies.
Test your knowledge of Generative AI and its core concepts through this quiz, reinforcing your understanding of the material covered so far.
Learn how to build Generative AI applications. Understand the process of accessing models, and explore the differences between open-source and closed-source models.
Experience setting up access to a Google Gemini hosted model. This hands-on exercise teaches you how to integrate these models into your code for real-world applications.
Discover the capabilities of Hugging Face, a leading platform for AI models. Learn about its inference endpoints, gated models, and libraries essential for building AI applications.
Walk through the Hugging Face portal to familiarize yourself with its features. This exercise will help you navigate its tools and understand how to leverage its resources effectively.
Create an account on Hugging Face, request access to gated models, and generate access tokens. This exercise will enable you to interact with models using your tokens.
Check your understanding of Generative AI and Hugging Face with this quiz, designed to review key concepts and practical skills you’ve acquired.
Learn the fundamentals of Natural Language Processing (NLP) and its subsets, Natural Language Understanding (NLU) and Natural Language Generation (NLG). This lesson introduces the key concepts that power AI language models.
Explore how Large Language Models (LLMs) handle NLP tasks. Understand the basics of transformer architecture and the differences between encoder-only and decoder-only models.
Use the Hugging Face portal to find and apply models for specific NLP tasks. This exercise helps solidify your understanding of LLMs in practical applications.
Test your grasp of NLP concepts, including NLP, NLU, NLG, and how LLMs execute these tasks, with this knowledge-check quiz.
Provides an overview of the topics covered in this section.
In this lesson I will introduce you to Ollama, a platform for hosting models locally.
In this lesson you will learn how to host models with HTTP endpoints using Ollama. In addition, I will demonstrate pre-built chat apps for Ollama.
Learn how creators or providers assign names to AI models, and what these names reveal about their architecture, capabilities, and intended use cases.
Explore the key differences between instruct models, embedding models, and chat models, and see how platforms like Hugging Face use them to build AI applications.
Test your understanding of base, instruct, embedding, and chat models by completing this quiz and reinforcing key concepts from the lessons.
Discover how language models predict the next word in a sequence and tackle the fill-mask task, a common NLP challenge that evaluates a model’s vocabulary knowledge.
Dive into decoding parameters and understand how they shape a model’s output, with a walkthrough of commonly used controls in transformer-based models.
Understand how randomness is controlled in model outputs using hyperparameters like temperature, top-p, and top-k, to fine-tune creative or deterministic outputs.
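These three controls can be sketched in plain Python. The logits below are made up for illustration; real libraries implement the same math over the model's full vocabulary:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to probabilities; lower temperature sharpens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_filter(probs, k):
    """Keep only the k most likely tokens, then renormalize."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in ranked)
    return {i: probs[i] / total for i in ranked}

def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability reaches p (nucleus sampling)."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = {}, 0.0
    for i in ranked:
        kept[i] = probs[i]
        cumulative += probs[i]
        if cumulative >= p:
            break
    total = sum(kept.values())
    return {i: q / total for i, q in kept.items()}

# Toy next-token logits for a 4-word vocabulary
logits = [2.0, 1.0, 0.5, 0.1]
print(softmax_with_temperature(logits, temperature=0.5))  # sharper: more deterministic
print(softmax_with_temperature(logits, temperature=2.0))  # flatter: more random
print(top_k_filter(softmax_with_temperature(logits), k=2))
```

Lowering the temperature concentrates probability on the top token, while top-k and top-p trim the tail of the distribution before sampling.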
Get hands-on with the Cohere API, register for a key, and explore randomness control by adjusting key parameters to impact model output.
Learn how to use frequency penalty and decoding penalty to manage the diversity of responses generated by a model.
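A simplified sketch of the frequency-penalty idea. Real APIs differ in the exact scaling, so treat this as illustrative only:

```python
def apply_frequency_penalty(logits, generated_token_ids, penalty):
    """Subtract penalty * count from the logit of every token already generated,
    making repeated tokens progressively less likely to be sampled again."""
    counts = {}
    for t in generated_token_ids:
        counts[t] = counts.get(t, 0) + 1
    return [logit - penalty * counts.get(i, 0) for i, logit in enumerate(logits)]

logits = [3.0, 2.0, 1.0]   # toy vocabulary of 3 tokens
history = [0, 0, 1]        # token 0 was generated twice, token 1 once
print(apply_frequency_penalty(logits, history, penalty=0.5))  # [2.0, 1.5, 1.0]
```

Token 0's logit drops the most because it has been repeated the most, which is exactly how the penalty discourages repetitive output.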
Explore how max output tokens and stop sequences help control the length of the model’s generated content for more focused results.
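A rough illustration of both controls, approximating tokens as words. Real tokenizers count subword tokens, and real APIs apply these limits during generation rather than after the fact:

```python
def truncate_generation(text, max_tokens=None, stop_sequences=()):
    """Cut generated text at the first stop sequence, then enforce a max token
    budget ('tokens' are approximated as whitespace-separated words here)."""
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            text = text[:idx]
    if max_tokens is not None:
        words = text.split()
        text = " ".join(words[:max_tokens])
    return text

reply = "Sure! Here is the answer. END Anything after this is ignored."
print(truncate_generation(reply, max_tokens=5, stop_sequences=["END"]))
```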
Apply what you’ve learned by tuning decoding parameters in real-world tasks to see how they affect the model’s behavior and outputs.
Check your understanding of decoding hyperparameters like temperature, max tokens, and others by taking this quiz.
Learn how In-Context Learning allows models to mimic human learning by using examples, including techniques like zero-shot and few-shot learning.
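The difference between zero-shot and few-shot prompting comes down to how the prompt is assembled. A plain-Python sketch (the review texts and labels are invented for illustration; no model is called):

```python
def zero_shot_prompt(text):
    """Zero-shot: describe the task, provide no examples."""
    return (
        "Classify the sentiment of this review as Positive or Negative.\n"
        f"Review: {text}\nSentiment:"
    )

def few_shot_prompt(examples, text):
    """Few-shot: prepend labeled examples so the model infers the task from context."""
    shots = "\n".join(f"Review: {r}\nSentiment: {label}" for r, label in examples)
    return f"{shots}\nReview: {text}\nSentiment:"

examples = [
    ("Loved every minute of it", "Positive"),
    ("A complete waste of time", "Negative"),
]
print(few_shot_prompt(examples, "The plot dragged but the acting was great"))
```

Both prompts end at "Sentiment:" so the model's continuation is the answer; the few-shot version simply gives the model worked examples to imitate.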
Assess your knowledge of In-Context Learning, and concepts like zero-shot, few-shot, and fine-tuning through this comprehensive quiz.
Provides an overview of topics covered in the section.
Get an overview of the Hugging Face Transformers library, followed by a step-by-step guide on how to install it and use it in Python for building AI applications.
Understand how task pipelines work in Hugging Face, explore key pipeline classes, and see practical demonstrations of their use for tasks like text classification and translation.
Test your knowledge of the Hugging Face Transformers library, including how to use task pipelines effectively in various applications.
Learn how to interact with the Hugging Face Hub to access model endpoints, manage model repositories, and integrate them into your projects for inference tasks.
Check your understanding of the Hugging Face Hub, its endpoints, and the inference classes used to streamline model interaction.
Explore both abstractive and extractive summarization methods, then apply Hugging Face models to implement a summarization task and experiment with real data.
Learn how to use the Hugging Face CLI to manage tasks, including model caching and cache cleanup, while streamlining workflows with locally stored models.
Provides an overview of the lessons covered in this section.
Learn the foundational concept of tensors, which represent the multi-dimensional arrays produced by neural networks. Understand how pipeline classes transform tensors into meaningful task outputs.
Explore model configuration classes to compare and understand the underlying architecture of Hugging Face models, including parameters like hidden layers and vector dimensions.
Dive into the critical role of tokenizers in converting text into input for models. This lesson explains what tokenizers are and demonstrates how to use Hugging Face tokenizer classes effectively.
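Real Hugging Face tokenizers use subword algorithms such as BPE or WordPiece; the toy word-level tokenizer below (`ToyTokenizer` is an invented name, not a Hugging Face class) only illustrates the encode/decode contract:

```python
class ToyTokenizer:
    """A toy word-level tokenizer. Real tokenizers split text into subword
    units, but the vocabulary lookup and encode/decode round trip are similar."""

    def __init__(self):
        self.vocab = {"[UNK]": 0}       # token string -> id
        self.inverse = {0: "[UNK]"}     # id -> token string

    def train(self, corpus):
        """Build a vocabulary by assigning ids to words in order of appearance."""
        for word in corpus.lower().split():
            if word not in self.vocab:
                idx = len(self.vocab)
                self.vocab[word] = idx
                self.inverse[idx] = word

    def encode(self, text):
        """Map each word to its id; unknown words map to the [UNK] id (0)."""
        return [self.vocab.get(w, 0) for w in text.lower().split()]

    def decode(self, ids):
        """Map ids back to words."""
        return " ".join(self.inverse.get(i, "[UNK]") for i in ids)

tok = ToyTokenizer()
tok.train("the cat sat on the mat")
ids = tok.encode("the cat sat")
print(ids, "->", tok.decode(ids))  # [1, 2, 3] -> the cat sat
```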
Learn what logits represent in machine learning, and explore their use in Hugging Face task-specific classes. This lesson includes a code walkthrough showing logits in action.
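To see what logits are in miniature: the raw scores below are invented, but the softmax-then-argmax step mirrors what a text-classification head does with a model's output:

```python
import math

def logits_to_prediction(logits, labels):
    """Logits are the raw, unnormalized scores a model's final layer produces.
    Softmax turns them into probabilities; argmax picks the predicted label."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=lambda i: probs[i])
    return labels[best], probs[best]

label, confidence = logits_to_prediction([-1.2, 3.4], ["NEGATIVE", "POSITIVE"])
print(label, round(confidence, 3))
```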
Discover the flexibility of auto model classes, which automatically load appropriate models for various tasks. See how they simplify working with different Hugging Face models in practice.
Test your knowledge of Hugging Face tokenizers, model configurations, and auto model classes with this quiz, designed to reinforce key concepts covered in the lessons.
Learn about different types of Question/Answering tasks, then design and implement your own question-answering system using Hugging Face models, combining theory with hands-on practice.
Provides overview of topics covered in this section.
Learn about LangChain template classes for creating complex and reusable templates.
Explore In-Context Learning (ICL) from the perspective of LLM challenges. Understand prompt engineering practices, transfer learning, and fine-tuning.
Find domain-adapted models on Hugging Face for specific industries or tasks.
Learn about prompt structure and general best practices.
Continue discussing prompt engineering best practices.
Test your understanding of prompt engineering and practice fixing prompts.
Understand how LLMs learn from few-shot prompts and the data requirements for ICL, fine-tuning, and pre-training. Learn best practices for few-shot and zero-shot prompts.
Test your knowledge of few-shot and zero-shot prompting and practice fixing prompts for Named Entity Recognition (NER).
Learn about the Chain of Thought (CoT) technique and how it enhances LLM responses.
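A minimal illustration of the idea. The question is made up, the "Let's think step by step" trigger is a commonly cited CoT phrase, and no model call is made here:

```python
question = "A cafe sells 3 muffins for $5. How much do 12 muffins cost?"

# Standard prompt: the model jumps straight to an answer.
standard_prompt = f"Q: {question}\nA:"

# Chain of Thought prompt: nudge the model to reason step by step
# before producing the final answer.
cot_prompt = f"Q: {question}\nA: Let's think step by step."

print(cot_prompt)
```

In practice the CoT variant tends to elicit intermediate reasoning (12 / 3 = 4 groups, 4 × $5 = $20) before the answer, which often improves accuracy on multi-step problems.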
Test your understanding of the CoT technique.
Learn about the self-consistency technique and how it enhances LLM responses.
Learn how the Tree of Thoughts technique can be used for solving reasoning and logical problems. Compare it to other techniques.
Test your knowledge of various prompting techniques and apply them to solve a task.
Use your knowledge of prompting techniques to build a creative workbench for a marketing team.
Lesson provides an overview of the topics covered in this section.
Explore LangChain FewShotPromptTemplate and example selector classes.
Understand that there’s no universal prompt for all LLMs and learn how to address this challenge.
Learn how to invoke LLMs, stream responses, implement batch jobs, and use Fake LLMs for development.
Practice invoking, streaming, and batching with LLMs, and experiment with Fake LLMs.
Understand how the LLM client utility is implemented.
Test your knowledge of prompt templates, LLMs, and Fake LLMs.
Learn about LangChain chains and components, LangChain Execution Language (LCEL), and a demo of LCEL usage.
Build a compound sequential chain using LCEL and the pipe operator.
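The pipe-operator idea can be illustrated without installing LangChain. The `Step` class below is a toy stand-in for LCEL's composable runnables, not the real API:

```python
class Step:
    """A minimal stand-in for an LCEL runnable: each step wraps a function,
    and `|` chains steps so that output flows left to right."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # `a | b` yields a new Step that runs a, then feeds its result to b.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

build_prompt = Step(lambda topic: f"Write a one-line slogan about {topic}.")
fake_llm     = Step(lambda prompt: f"[LLM response to: {prompt}]")  # pretend model call
parse_output = Step(lambda text: text.strip())

chain = build_prompt | fake_llm | parse_output
print(chain.invoke("coffee"))
```

In real LangChain code, a prompt template, a chat model, and an output parser are composed with the same `|` syntax and run with `.invoke()`.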
Learn about essential Runnable classes for building gen AI task chains.
Continue learning about essential Runnable classes for building gen AI task chains.
Familiarize yourself with common LCEL patterns using the LCEL cheatsheet and how-tos documentation.
Rewrite the creative writing workbench project using LCEL and Runnable classes.
Test your knowledge of LCEL, Runnables, and chains.
Compare structured, unstructured, and semi-structured data. Understand the need for structured LLM responses and best practices for achieving them.
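The underlying idea can be sketched without LangChain: instruct the model to reply in JSON, then validate the reply against a schema. Here a plain dataclass plays the role of a parser's schema, and the model reply is canned rather than generated:

```python
import json
from dataclasses import dataclass

@dataclass
class MovieReview:
    title: str
    rating: int
    summary: str

# The prompt asks the (hypothetical) model for JSON matching the schema.
PROMPT = (
    "Review the movie and respond ONLY with JSON matching this schema: "
    '{"title": str, "rating": 1-10, "summary": str}'
)

def parse_llm_reply(reply: str) -> MovieReview:
    """Validate a model reply against the expected structure, raising on bad data."""
    data = json.loads(reply)
    review = MovieReview(**data)
    if not 1 <= review.rating <= 10:
        raise ValueError("rating out of range")
    return review

# A canned reply standing in for a real model call
reply = '{"title": "Inception", "rating": 9, "summary": "A layered heist thriller."}'
review = parse_llm_reply(reply)
print(review.title, review.rating)
```

LangChain's output parsers automate both halves of this pattern: generating the format instructions for the prompt and validating the model's reply.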
Learn about LangChain output parsers and how to use different types.
Write code to use the LangChain EnumOutputParser.
Write code to use the LangChain PydanticOutputParser.
Understand the application requirements and your tasks for the creative writing workbench project.
Step-by-step solution for the creative writing workbench project (part 1).