We may earn an affiliate commission when you visit our partners.
Arnold Oberleiter

ChatGPT is useful, but have you noticed that many topics are censored, you are pushed in certain political directions, some harmless questions go unanswered, and your data might not be secure with OpenAI? This is where open-source LLMs like Llama 3, Mistral, Grok, Falcon, Phi-3, and Command R+ can help.

Are you ready to master the nuances of open-source LLMs and harness their full potential for various applications, from data analysis to creating chatbots and AI agents? Then this course is for you.

Introduction to Open-Source LLMs

This course provides a comprehensive introduction to the world of open-source LLMs. You'll learn about the differences between open-source and closed-source models and discover why open-source LLMs are an attractive alternative. Models such as ChatGPT, Llama, and Mistral are covered in detail. Additionally, you'll learn which LLMs are available and how to choose the best models for your needs. The course places special emphasis on the disadvantages of closed-source LLMs and the pros and cons of open-source LLMs like Llama3 and Mistral.

Practical Application of Open-Source LLMs

The course guides you through the simplest way to run open-source LLMs locally and what you need for this setup. You will learn about the prerequisites, the installation of LM Studio, and alternative methods for operating LLMs. Furthermore, you will learn how to use open-source models in LM Studio, understand the difference between censored and uncensored LLMs, and explore various use cases. The course also covers finetuning an open-source model with Huggingface or Google Colab and using vision models for image recognition.
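To give a feel for what "running locally" looks like in practice: LM Studio's local server exposes an OpenAI-compatible API, by default at http://localhost:1234/v1. The sketch below (standard library only, endpoint and model assumed to be already set up in LM Studio) shows how a script would talk to it:

```python
import json
import urllib.request

# LM Studio's local server exposes an OpenAI-compatible API; the default
# endpoint is http://localhost:1234/v1 (the port is configurable).
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt, system="You are a helpful assistant.", temperature=0.7):
    """Build the JSON body for a /chat/completions request."""
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
    }

def ask_local_llm(prompt):
    """POST the request to the local server and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    return data["choices"][0]["message"]["content"]

# With a model loaded and the server started in LM Studio:
# print(ask_local_llm("Explain quantization in one sentence."))
```

Because the API is OpenAI-compatible, the same request shape works against Ollama's OpenAI endpoint or a cloud provider by changing only `BASE_URL`.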

Prompt Engineering and Cloud Deployment

An important part of the course is prompt engineering for open-source LLMs. You will learn how to use HuggingChat as an interface, utilize system prompts in prompt engineering, and apply both basic and advanced prompt engineering techniques. The course also provides insights into creating your own assistants in HuggingChat and using open-source LLMs with fast LPU chips instead of GPUs.
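As a taste of the prompt-engineering material, here is a minimal sketch of how a system prompt and few-shot examples are combined into one chat request. The OpenAI-style role format is what LM Studio's and Ollama's chat endpoints expect; the sentiment-classification task is purely illustrative:

```python
def build_few_shot_prompt(system, examples, question):
    """Compose a chat message list: a system prompt, then few-shot
    example pairs (user question, ideal assistant answer), then the
    actual question the model should answer."""
    messages = [{"role": "system", "content": system}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": question})
    return messages

# A two-shot sentiment classifier prompt (task is just an illustration):
msgs = build_few_shot_prompt(
    system="Classify sentiment as 'positive' or 'negative'. Answer with one word.",
    examples=[("I love this course!", "positive"),
              ("The setup was frustrating.", "negative")],
    question="The local server worked on the first try.",
)
```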

Function Calling, RAG, and Vector Databases

Learn what function calling is in LLMs and how to implement vector databases, embedding models, and retrieval-augmented generation (RAG). The course shows you how to install Anything LLM, set up a local server, and create a RAG chatbot with Anything LLM and LM Studio. You will also learn to perform function calling with Llama 3 and Anything LLM, summarize data, store it, and visualize it with Python.
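To make the RAG idea concrete before diving into the tools, here is a toy sketch of the retrieval step: chunks are embedded as vectors, and the chunks most similar to the query are returned and pasted into the LLM prompt. A real pipeline (e.g., Anything LLM) uses a trained embedding model and a vector database; the bag-of-words "embedding" below is purely illustrative:

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words count vector. A real RAG pipeline
    would use a trained embedding model instead."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query -- the 'R' in RAG."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

docs = [
    "LM Studio runs open-source LLMs locally.",
    "Vector databases store embeddings for fast similarity search.",
    "Function calling lets an LLM trigger external tools.",
]
top = retrieve("How do I search embeddings?", docs, k=1)
```

The retrieved text would then be prepended to the user's question so the model can answer from your own data rather than from its training set alone.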

Optimization and AI Agents

For optimizing your RAG apps, you will receive tips on data preparation and efficient use of tools like LlamaIndex and LlamaParse. Additionally, you will be introduced to the world of AI agents. You will learn what AI agents are, what tools are available, and how to install and use Flowise locally with Node.js. The course also offers practical insights into creating an AI agent that generates Python code and documentation, as well as using function calling and internet access.
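The function-calling loop that agent tools build on can be sketched in a few lines: the model emits a structured tool call, and the host program parses it and runs the matching Python function. The JSON format and the `get_word_count` tool below are hypothetical illustrations, not any specific framework's API:

```python
import json

def get_word_count(text):
    """A hypothetical 'tool' the agent is allowed to call."""
    return len(text.split())

# Registry mapping tool names the model may emit to real functions.
TOOLS = {"get_word_count": get_word_count}

def dispatch(model_output):
    """Parse a structured tool call emitted by the LLM and execute it.
    This is the core loop behind function calling, stripped of any
    framework specifics."""
    call = json.loads(model_output)
    fn = TOOLS[call["tool"]]          # KeyError here means an unknown tool
    return fn(**call["arguments"])

# What a function-calling model might emit (format is illustrative):
model_output = '{"tool": "get_word_count", "arguments": {"text": "open source llms rock"}}'
result = dispatch(model_output)  # 4
```

In a full agent, the result would be sent back to the model as a new message so it can continue reasoning with the tool's answer.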

Additional Applications and Tips

Finally, the course introduces text-to-speech (TTS) with Google Colab and finetuning open-source LLMs with Google Colab. You will learn how to rent GPUs from providers like Runpod or Massed Compute if your local PC isn’t sufficient. Additionally, you will explore innovative tools like Microsoft Autogen and CrewAI and how to use LangChain for developing AI agents.

Harness the transformative power of open-source LLM technology to develop innovative solutions and expand your understanding of their diverse applications. Sign up today and start your journey to becoming an expert in the world of large language models.

Enroll now

What's inside

Learning objectives

  • Why open-source LLMs? Differences, advantages, and disadvantages of open-source and closed-source LLMs
  • What are LLMs like ChatGPT, Llama, Mistral, Phi-3, Qwen2-72B-Instruct, Grok, Gemma, etc.
  • Which LLMs are available and what should I use? Finding "the best LLMs"
  • Requirements for using open-source LLMs locally
  • Installation and usage of LM Studio, Anything LLM, Ollama, and alternative methods for operating LLMs
  • Censored vs. uncensored LLMs
  • Finetuning an open-source model with Huggingface or Google Colab
  • Vision (image recognition) with open-source LLMs: Llama 3, LLaVA & Phi-3 Vision
  • Hardware details: GPU offload, CPU, RAM, and VRAM
  • All about HuggingChat: an interface for using open-source LLMs
  • System prompts in prompt engineering + function calling
  • Prompt engineering basics: semantic association, structured & role prompts
  • Groq: using open-source LLMs with a fast LPU chip instead of a GPU
  • Vector databases, embedding models & retrieval-augmented generation (RAG)
  • Creating a local RAG chatbot with Anything LLM & LM Studio
  • Linking Ollama & Llama 3, and using function calling with Llama 3 & Anything LLM
  • Function calling for summarizing data, storing it, and creating charts with Python
  • Using other features of Anything LLM and external APIs
  • Tips for better RAG apps: Firecrawl for website data, more efficient RAG with LlamaIndex & LlamaParse for PDFs and CSVs
  • Definition and available tools for AI agents; installation and usage of Flowise locally with Node.js (easier than LangChain and LangGraph)
  • Creating an AI agent that generates Python code and documentation, and using AI agents with function calling, internet access, and three experts
  • Hosting and usage: which AI agent you should build, external hosting, and text-to-speech (TTS) with Google Colab
  • Finetuning open-source LLMs with Google Colab (Alpaca + Llama-3 8B, Unsloth)
  • Renting GPUs with Runpod or Massed Compute
  • Security aspects: jailbreaks and security risks from attacks on LLMs via jailbreaks, prompt injections, and data poisoning
  • Data privacy and security of your data, as well as policies for commercial use and selling generated content

Syllabus

Introduction and Overview
Welcome
Course Overview
My Goal and Some Tips
Explanation of the Links
Important Links
Why Open-Source LLMs? Differences, Advantages, and Disadvantages
What is this Section about?
What are LLMs like ChatGPT, Llama, Mistral, etc.
Which LLMs are available and what should I use: Finding "The Best LLMs"
Disadvantages of Closed-Source LLMs like ChatGPT, Gemini, and Claude
Advantages and Disadvantages of Open-Source LLMs like Llama3, Mistral & more
Recap: Don't Forget This!
The Easiest Way to Run Open-Source LLMs Locally & What You Need
Requirements for Using Open-Source LLMs Locally: GPU, CPU & Quantization
Installing LM Studio and Alternative Methods for Running LLMs
Using Open-Source Models in LM Studio: Llama 3, Mistral, Phi-3 & more
Censored vs. Uncensored LLMs: Llama3 with Dolphin Finetuning
The Use Cases of Classic LLMs like Phi-3, Llama, and More
Vision (Image Recognition) with Open-Source LLMs: Llama3, Llava & Phi3 Vision
Some Examples of Image Recognition (Vision)
More Details on Hardware: GPU Offload, CPU, RAM, and VRAM
Summary of What You Learned & an Outlook to Local Servers & Prompt Engineering
Prompt Engineering for Open-Source LLMs and Their Use in the Cloud
HuggingChat: An Interface for Using Open-Source LLMs
System Prompts: An Important Part of Prompt Engineering
Why is Prompt Engineering Important? [An Example]
Semantic Association: The Most Important Concept You Need to Understand
The Structured Prompt: Copy My Prompts
Instruction Prompting and some Cool Tricks
Role Prompting for LLMs
Shot Prompting: Zero-Shot, One-Shot & Few-Shot Prompts
Reverse Prompt Engineering and the "OK" Trick
Chain of Thought Prompting: Let's Think Step by Step
Tree of Thoughts (ToT) Prompting in LLMs
The Combination of Prompting Concepts
Creating Your Own Assistants in HuggingChat
Groq: Using Open-Source LLMs with a Fast LPU Chip Instead of a GPU
Recap: What You Should Remember
Function Calling, RAG, and Vector Databases with Open-Source LLMs
What Will Be Covered in This Section?
What is Function Calling in LLMs
Vector Databases, Embedding Models & Retrieval-Augmented Generation (RAG)
Installing Anything LLM and Setting Up a Local Server for a RAG Pipeline
Local RAG Chatbot with Anything LLM & LM Studio
Function Calling with Llama 3 & Anything LLM (Searching the Internet)
Function Calling, Summarizing Data, Storing & Creating Charts with Python
Other Features of Anything LLM: TTS and External APIs
Downloading Ollama & Llama 3, Creating & Linking a Local Server
Recap: Don't Forget This!
Optimizing RAG Apps: Tips for Data Preparation
What Will Be Covered in This Section: Better RAG, Data & Chunking
Tips for Better RAG Apps: Firecrawl for Your Data from Websites
More Efficient RAG with LlamaIndex & LlamaParse: Data Preparation for PDFs & More
LlamaIndex Update: LlamaParse made easy!
Chunk Size and Chunk Overlap for a Better RAG Application
Recap: What You Learned in This Section
Local AI Agents with Open-Source LLMs
What Will Be Covered in This Section on AI Agents
AI Agents: Definition & Available Tools for Creating Open-Source AI Agents
We Use LangChain with Flowise, Locally with Node.js
Installing Flowise with Node.js (JavaScript Runtime Environment)
The Flowise Interface for AI-Agents and RAG ChatBots
Local RAG Chatbot with Flowise, LLama3 & Ollama: A Local Langchain App
Our First AI Agent: Python Code & Documentation with Supervisor and 2 Workers
AI Agents with Function Calling, Internet and Three Experts for Social Media
Which AI Agent Should You Build & External Hosting with Render
Chatbot with Open-Source Models from Huggingface & Embeddings in HTML (Mixtral)
Insanely fast inference with the Groq API
Recap: What You Should Remember
Finetuning, Renting GPUs, Open-Source TTS, Finding the BEST LLM & More Tips
What Is This Section About?
Text-to-Speech (TTS) with Google Colab
Moshi: Talk to an Open-Source AI
Finetuning an Open-Source Model with Huggingface or Google Colab
Finetuning Open-Source LLMs with Google Colab, Alpaca + Llama-3 8b from Unsloth
What is the Best Open-Source LLM I Should Use?
Llama 3.1 Info and Which Models You Should Use
Grok from xAI
Renting a GPU with Runpod or Massed Compute if Your Local PC Isn't Enough
Recap: What You Should Remember!
Data Privacy, Security, and What Comes Next?
THE LAST SECTION: What is This About?
Jailbreaks: Security Risks from Attacks on LLMs with Prompts
Prompt Injections: Security Problem of LLMs
Data Poisoning and Backdoor Attacks
Data Privacy and Security: Is Your Data at Risk?
Commercial Use and Selling of AI-Generated Content
My Thanks and What's Next?
Bonus

Good to know

Know what's good, what to watch for, and possible dealbreakers
Appropriate for learners who want to expand their knowledge and skills in using open-source LLMs for various applications
Provides practical guidance and hands-on experience in running and utilizing open-source LLMs locally
Covers a wide range of topics, from the basics of open-source LLMs to advanced techniques like prompt engineering and function calling
Suitable for learners with some prior understanding of machine learning or natural language processing
Assumes familiarity with concepts such as GPUs and cloud computing, which may require additional learning for beginners
May require access to specialized hardware or software, such as GPUs or cloud computing services, which may incur additional costs

Save this course

Save Open-source LLMs: Uncensored & secure AI locally with RAG to your list so you can find it easily later.

Activities

Be better prepared before your course. Deepen your understanding during and after it. Supplement your coursework and achieve mastery of the topics covered in Open-source LLMs: Uncensored & secure AI locally with RAG with these activities:
Compile a list of LLM Resources
Keep your learning resources organized. Create a curated list of useful LLM-related resources (e.g., tutorials, documentation, articles)
  • Search for LLM-related resources online using search engines, social media, or online forums.
  • Evaluate the relevance and quality of each resource.
  • Organize your resources into categories (e.g., tutorials, documentation, articles).
  • Share your list with others who may find it useful.
Review Basic Python Concepts
Strengthen your foundation for working with LLMs by refreshing your knowledge of basic Python concepts.
  • Review Python tutorials and documentation.
  • Complete practice exercises and coding challenges.
  • Build a small Python project to apply your refreshed knowledge.
Follow Tutorials on Open-Source LLMs
Deepen your understanding of open-source LLMs by following tutorials that provide hands-on experience with their APIs and capabilities.
  • Search for tutorials on open-source LLMs using search engines, online forums, or social media.
  • Choose a tutorial that aligns with your learning goals and skill level.
  • Follow the tutorial step-by-step and experiment with the LLM API.
  • Build a small project or experiment to apply what you learned from the tutorial.
Attend an LLM Workshop
Learn from experts and connect with peers at an LLM workshop designed to further your understanding of the field.
  • Research and identify LLM workshops that align with your learning goals.
  • Register for the workshop and prepare any necessary materials.
  • Attend the workshop and actively participate in the sessions.
  • Take notes, ask questions, and engage with the workshop facilitators and other attendees.
  • Follow up after the workshop by applying what you learned to your own projects and initiatives.
Practice LLM Prompt Engineering
Mastering LLMs requires fluency in giving useful instructions. Sharpen your abilities by practicing LLM prompt engineering.
  • Identify the type of task you want to complete with the LLM (e.g., question answering, code generation).
  • Brainstorm a list of potential prompts that could be used to complete the task.
  • Choose the best prompt from your list and test it out with the LLM.
  • Evaluate the output of the LLM and make any necessary adjustments to your prompt.
  • Repeat steps 2-4 until you are satisfied with the output of the LLM.
Develop a Presentation on LLM Applications
Expand your understanding of LLM applications by researching, analyzing, and presenting on their potential use cases in various industries and domains.
  • Research and identify different industries and domains where LLMs can be applied.
  • Analyze the potential benefits and challenges of using LLMs in each industry or domain.
  • Develop a presentation that clearly communicates your findings.
  • Deliver your presentation to an audience of peers, instructors, or industry professionals.
Write a Blog Post on LLM Ethics
Develop a nuanced understanding of LLM ethics by researching, analyzing, and writing a comprehensive blog post on the topic.
  • Research different ethical considerations related to the use of LLMs.
  • Analyze the potential benefits and risks of LLMs in society.
  • Write a blog post that clearly presents your findings and perspectives on LLM ethics.
  • Publish your blog post on a relevant platform and share it with others.
Build an LLM-powered Chatbot
Solidify your understanding of LLMs by designing, developing, and deploying your very own LLM-powered chatbot.
  • Choose an LLM provider and API.
  • Design the chatbot's user interface and functionality.
  • Develop the chatbot's backend code.
  • Deploy the chatbot and test its functionality.
  • Monitor the chatbot's performance and make improvements as needed.

Career center

Learners who complete Open-source LLMs: Uncensored & secure AI locally with RAG will develop knowledge and skills that may be useful in a variety of careers.

Reading list

We haven't picked any books for this reading list yet.

Share

Help others find this course page by sharing it with your friends and followers:

Similar courses

Here are nine courses similar to Open-source LLMs: Uncensored & secure AI locally with RAG.
LLM Mastery: ChatGPT, Gemini, Claude, Llama3, OpenAI &...
Most relevant
AI-Agents: Automation & Business with LangChain & LLM Apps
Most relevant
Open-Source LLMs: Unzensierte & sichere KI lokal auf dem...
Most relevant
Getting Started with Mistral
Most relevant
Open Source LLMOps Solutions
Most relevant
Advanced LangChain Techniques: Mastering RAG Applications
Most relevant
Open Source LLMOps
Most relevant
Function-Calling and Data Extraction with LLMs
Most relevant
Azure Generative (OpenAI) + Predictive AI (23+ Hours)
Most relevant
Our mission

OpenCourser helps millions of learners each year. People visit us to learn workplace skills, ace their exams, and nurture their curiosity.

Our extensive catalog contains over 50,000 courses and twice as many books. Browse by search, by topic, or even by career interests. We'll match you to the right resources quickly.

Find this site helpful? Tell a friend about us.

Affiliate disclosure

We're supported by our community of learners. When you purchase or subscribe to courses and programs or purchase books, we may earn a commission from our partners.

Your purchases help us maintain our catalog and keep our servers humming without ads.

Thank you for supporting OpenCourser.

© 2016 - 2024 OpenCourser