Instructor: Fikrat Gasimov, PhD Researcher, AI & Robotics Scientist

This course dives into state-of-the-art scientific challenges in Generative AI. It helps you uncover ongoing problems and develop or customize your own large-model applications. The course is suitable for any candidate (student, engineer, or expert) with strong motivation to work on Large Language Models and today's ongoing challenges, as well as their deployment in Python- and JavaScript-based web applications and with the C/C++ programming languages. Candidates will gain deep knowledge of TensorFlow, PyTorch, and Keras models, and of Hugging Face with the Docker service.


In addition, you will be able to optimize and quantize models with the TensorRT framework for deployment in a variety of sectors. You will also learn how to deploy quantized LLMs to web pages built with React, JavaScript, and Flask, and how to integrate Reinforcement Learning (PPO) with Large Language Models in order to fine-tune them based on human feedback. Candidates will learn to code and debug in the C/C++ programming languages at an intermediate level or above.
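To make the deployment topic concrete, here is a minimal, illustrative sketch of a Flask REST endpoint serving a Hugging Face summarization pipeline. It assumes the flask and transformers packages are installed; the model name and route are placeholders, not the course's exact code.

    # Illustrative sketch, not the course's exact code: a Flask REST endpoint
    # that serves a Hugging Face summarization pipeline. Assumes `flask` and
    # `transformers` are installed; model name and route are placeholders.
    from flask import Flask, jsonify, request
    from transformers import pipeline

    app = Flask(__name__)
    summarizer = pipeline("summarization", model="google/flan-t5-base")  # any seq2seq model works

    @app.route("/summarize", methods=["POST"])
    def summarize():
        text = request.get_json(force=True).get("text", "")
        result = summarizer(text, max_length=128, min_length=16, do_sample=False)
        return jsonify({"summary": result[0]["summary_text"]})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)

A React front end would then POST to the endpoint with Axios and render the returned summary, which is the pattern the course's web-app sections build out.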

LLM Models used:

  • The Falcon,

  • LLaMA 2,

  • BLOOM,

  • MPT,

  • Vicuna,

  • FLAN-T5,

  • GPT-2/GPT-3, GPT-NeoX

  • ..

  1. Learning and Installation of Docker from scratch

  2. Knowledge of JavaScript

  3. Ready to solve any programming challenge with C/C++

  4. Ready to tackle deployment issues on edge devices as well as in the cloud

  5. Large Language Model Fine-Tuning

  6. Large Language Model Hands-On Practice: the FLAN-T5 family

  7. Large Language Model Training, Evaluation, and User-Defined Prompts: In-Context Learning / Online Learning

  8. Human Feedback Alignment of LLMs with Reinforcement Learning (PPO), using BERT and FLAN-T5

  9. How to Avoid Catastrophic Forgetting in Large Multi-Task LLM Models

  10. How to Prepare an LLM for Multi-Task Problems such as Code Generation, Summarization, Content Analysis, and Image Generation

  11. Quantization of Large Language Models with various existing state-of-the-art techniques (see the sketch after this section)

  • Important note: in this course there is nothing to copy and paste; you will put your hands on every line of the project to become a successful LLM and web application developer.

You do NOT need any special hardware. You can deliver the projects either in the cloud or on your local computer.
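As a taste of the quantization topic in item 11, the sketch below loads FLAN-T5 with 8-bit weights through bitsandbytes. This is an assumption-laden example, not the course's code: it presumes the transformers, accelerate, and bitsandbytes packages plus a CUDA GPU, and the argument names follow recent transformers releases.

    # Sketch only: 8-bit weight quantization of FLAN-T5 via bitsandbytes.
    # Assumes `transformers`, `accelerate`, and `bitsandbytes` plus a CUDA GPU;
    # argument names may differ across library versions.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

    model_name = "google/flan-t5-base"                 # illustrative model choice
    bnb_config = BitsAndBytesConfig(load_in_8bit=True) # 8-bit weights

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(
        model_name,
        quantization_config=bnb_config,
        device_map="auto",                             # place layers on available devices
    )

    prompt = "Summarize: large models are expensive to serve without quantization."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))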


What's inside

Learning objectives

  • What is Docker and how to use it
  • Advanced Docker usage
  • What are OpenCL and OpenGL, and when to use them?
  • (lab) TensorFlow and PyTorch installation and configuration with Docker
  • (lab) Dockerfile, Docker build, and Docker Compose debug file configuration
  • (lab) Different YOLO versions, comparisons, and which YOLO version to use for your problem
  • (lab) Jupyter Notebook editing as well as Visual Studio Code skills
  • (lab) Learn and prepare yourself for full-stack and C++ coding exercises
  • (lab) TensorRT float 32/16 precision model quantization
  • Key differences: explicit vs. implicit batch size
  • (lab) TensorRT int8 precision model quantization
  • (lab) Visual Studio Code setup and Docker debugging with VS Code and the GDB debugger
  • (lab) What the ONNX C++ framework is and how to apply ONNX to your custom C++ problems
  • (lab) What the TensorRT framework is and how to apply it to your custom problems
  • (lab) Custom detection, classification, and segmentation problems, with inference on images and videos
  • (lab) Basic C++ object-oriented programming
  • (lab) Advanced C++ object-oriented programming
  • (lab) Deep learning problem-solving skills on edge devices and in cloud computing with the C++ programming language
  • (lab) How to generate high-performance inference models on embedded devices in order to get high precision and FPS in detection as well as lower GPU memory consumption
  • (lab) Visual Studio Code with Docker
  • (lab) GDB debugger with SonarLint and SonarQube
  • (lab) YOLOv4 ONNX inference with OpenCV C++ DNN libraries
  • (lab) YOLOv5 ONNX inference with OpenCV C++ DNN libraries
  • (lab) YOLOv5 ONNX inference with dynamic C++ TensorRT libraries
  • (lab) C++ (11/14/17) compiler programming exercises
  • Key differences: OpenCV and CUDA / OpenCV and TensorRT
  • (lab) Deep dive on React development with an Axios front-end REST API
  • (lab) Deep dive on a Flask REST API with React and MySQL
  • (lab) Deep dive on text summarization inference in a web app
  • (lab) Deep dive on BERT (LLM) fine-tuning and emotion analysis in a web app
  • (lab) Deep dive on distributed GPU programming with natural language processing (Large Language Models)
  • (lab) Deep dive on generative AI use cases, the project lifecycle, and model pre-training
  • (lab) Fine-tuning and evaluating Large Language Models
  • (lab) Reinforcement learning and LLM-powered applications; aligning fine-tuning with user feedback
  • (lab) Quantization of Large Language Models with modern NVIDIA GPUs
  • (lab) C++ OOP TensorRT quantization and fast inference
  • (lab) Deep dive on the Hugging Face library
  • (lab) Translation, text summarization, and question answering
  • (lab) Sequence-to-sequence models, encoder-only models, and decoder-only models
  • (lab) Define the terms generative AI, Large Language Model, and prompt, and describe the transformer architecture that powers LLMs
  • (lab) Discuss computational challenges during model pre-training and determine how to efficiently reduce the memory footprint
  • (lab) Describe how fine-tuning with instructions using prompt datasets can improve performance on one or more tasks
  • (lab) Explain how PEFT decreases computational cost and overcomes catastrophic forgetting (see the sketch after this list)
  • (lab) Describe how RLHF uses human feedback to improve the performance and alignment of Large Language Models
  • (lab) Discuss the challenges that LLMs face with knowledge cut-offs, and explain how information retrieval and augmentation techniques can overcome these challenges
  • Recognize and understand the various strategies and techniques used in fine-tuning language models for specialized applications
  • Master the skills necessary to preprocess datasets effectively, ensuring they are in the ideal format for AI training
  • Investigate the vast potential of fine-tuned AI models in practical, real-world scenarios across multiple industries
  • Acquire knowledge on how to estimate and manage the costs associated with AI model training, making the process efficient and economical
  • Distributed computing with DDP (Distributed Data Parallel) and Fully Sharded Data Parallel across multiple GPUs/CPUs with PyTorch, together with retrieval augmentation
  • The RoBERTa model was proposed in "RoBERTa: A Robustly Optimized BERT Pretraining Approach"
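For the PEFT objective above, here is a minimal sketch of attaching LoRA adapters to FLAN-T5 with the peft library so that only a small set of adapter weights is trained. The rank, alpha, and target module names are illustrative choices, not the course's settings.

    # Sketch only: parameter-efficient fine-tuning (LoRA) on FLAN-T5 with `peft`.
    # Hyperparameters and target modules below are illustrative, not prescriptive.
    from transformers import AutoModelForSeq2SeqLM
    from peft import LoraConfig, TaskType, get_peft_model

    base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

    lora_config = LoraConfig(
        task_type=TaskType.SEQ_2_SEQ_LM,  # sequence-to-sequence fine-tuning
        r=8,                              # adapter rank
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q", "v"],        # T5 attention projections
    )

    peft_model = get_peft_model(base_model, lora_config)
    peft_model.print_trainable_parameters()  # typically well under 1% of the base model
    # peft_model can then be passed to a transformers Seq2SeqTrainer as usual.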

Syllabus

All course summary

Course GitHub link:


https://github.com/fikratgasimovsoftwareengineer/FullStack_Web_APP/tree/main

React Hooks
Course Overview by Me
React DOM
React REST API & Axios
Flask REST API
JavaScript Basic Concepts
JavaScript Advanced Concepts
Course Description and What You Will Learn
Recommended Course - Deep Learning
Some Demos

Please find all projects in the repository:

https://github.com/fikratgasimovsoftwareengineer/FullStack_Web_APP


Web App Object Detection Demo
Set Up Docker Images, Containers, and Visual Studio Code

My Github Profile:
https://github.com/fikratgasimovsoftwareengineer/FullStack_Web_APP

Dockerfile Configuration
Docker Build and Setup
How to Run docker run
Configuration of a Docker Container with Visual Studio Code
Prepare YoloV7 Fast Precision Server Side
Yolov7 Start Implementation
Yolov7 Server Implementation 2
Yolov7 Server Implementation 3
Yolov7 Server Implementation 4
Yolov7 Server Implementation 5
Yolov7 Server Implementation 6
Flask Server Implementation for High Security Web App
Flask Server Implementation 1
Flask Server Implementation 2
Flask Server Sign In Implementation
Flask Server Registration Implementation
Flask Server with YoloV7 Deep Learning Integration
Flask Server & Yolov7 Integration
Flask Server & Yolov7 Integration part 2
Flask Server & Yolov7 Integration part 3
Flask Server Web APP Design
Flask Server & Web APP design part 1
Flask Web App DL Inference
Flask Web App DL Image Inference
Flow Diagram for Back-End & Front-End
React Web App Inference with Emotion Detection NLP
Custom Web App Emotion Detection: BERT, Hugging Face, React JS, Flask, MySQL
How to Start Prototyping a Large Language Model with a Web App and Flask
BERT & Hugging Face Feature Engineering Part 1
Feature Engineering and Preprocessing part 2
Feature Engineering and Preprocessing part 3
Feature Engineering and Preprocessing part 4
PyTorch DataLoader & Hugging Face Framework (Large Language Models)
DataLoader, Hugging Face Integration
DataLoader, Hugging Face Integration Part 2
DataLoader, Hugging Face Integration Part 3
BERT NLP Transformer: Model Freezing
BERT_FINE Part 1
Prepare Training and Validation Step with BERT

The BERT model has been saved in a zip folder.

Training Part 2
Train and Validation Part 3
Train & Val Successful
React, Flask, Bert Emotion Inference

This is a pretrained model zip file that contains both the pretrained model and the tokenizer. Please unzip it and place it in your local working directory.
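Once the archive is unzipped, reloading the model and tokenizer takes only a few lines. The sketch below assumes the transformers library and uses a placeholder folder name for wherever you extract the files.

    # Sketch only: reload the saved BERT model and tokenizer from a local folder.
    # The directory name is a placeholder for wherever the zip was extracted.
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    model_dir = "./pretrained_bert_emotion"   # hypothetical local folder from the zip
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSequenceClassification.from_pretrained(model_dir)
    model.eval()                              # inference mode for the Flask/React demo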

Where We Are and Where We Have to Go
Preprocessing Setup
Model Backbone Setup
Model Inference Part 1
Model Inference Part 2
Flask Server Integration with Model Pretrained
Flask Server & Inference Part 1
Flask Server & Inference Part 2
Flask Server & Inference Part 3
React Development Web App
React Familiarity
React Installation
React Setup Part 1
React Successful Installation
Main React Component
Evaluate Implementation
Emotion Analysis component
User Feedback Route API
Non-User Feedback Route API
Emotion Analysis Implementation Return
Demo: Emotion Analysis Successfully Implemented
Question & Answering React Web App and LLM Transformer-Based PDF Analyzer
Demo Transformer-React
React Question Answer Component
React Question Answer Component 2
LLM Transformer Explanation
Flask Route Based Implementation
C++ TensorRT Tutorial & Demo
C++ TensorRT & ONNX with YoloV4
How to Implement ONNX C++ with YoloV5 Inference
Deep Dive into Generative AI and Large Language Models PART 1
Generative AI & LLM
LLM use cases and Tasks
Text generation before transformers
Transformer Architecture Part 1
Transformer Architecture Part 2
Transformer-Based Translation Task
Transformer Encoder-Decoder
Prompt & Prompt Engineering (see the prompting sketch below)
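To illustrate the prompt engineering and in-context learning lectures above, the sketch below contrasts a zero-shot and a few-shot prompt on an instruction-tuned model. It assumes the transformers library; the model choice and prompts are illustrative only.

    # Sketch only: zero-shot vs. few-shot (in-context) prompting with an
    # instruction-tuned model. Assumes `transformers`; prompts are illustrative.
    from transformers import pipeline

    generator = pipeline("text2text-generation", model="google/flan-t5-base")

    zero_shot = ("Classify the sentiment of this review as positive or negative: "
                 "'The battery died after a day.'")

    few_shot = (
        "Review: 'Absolutely loved it.' Sentiment: positive\n"
        "Review: 'Waste of money.' Sentiment: negative\n"
        "Review: 'The battery died after a day.' Sentiment:"
    )

    print(generator(zero_shot, max_new_tokens=5)[0]["generated_text"])
    print(generator(few_shot, max_new_tokens=5)[0]["generated_text"])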

Good to know

Know what's good, what to watch for, and possible dealbreakers:
  • Develops skills and knowledge in Generative AI and Large Language Models, which are key to tackling real-world problems in today's tech industry
  • Taught by Fikrat Gasimov, a PhD Researcher and AI & Robotics Scientist who is recognized for their work in AI and robotics
  • Examines various state-of-the-art Generative AI models such as Falcon, LLaMA 2, BLOOM, MPT, Vicuna, FLAN-T5, GPT-2/GPT-3, and GPT-NeoX
  • Provides hands-on practice with TensorFlow, PyTorch, Keras models, and Hugging Face with Docker services to gain practical experience with industry-standard tools
  • Covers how to quantize and optimize models with TensorRT for deployment in a variety of sectors, enhancing the efficiency and applicability of models
  • Teaches how to integrate Reinforcement Learning (PPO) with Large Language Models, enabling learners to customize and fine-tune models based on human feedback


Activities

Be better prepared before your course. Deepen your understanding during and after it. Supplement your coursework and achieve mastery of the topics covered in Learn Everything about Full-Stack Generative AI, LLM MODELS with these activities:
Organize Course Materials
Organize and prepare your learning materials to improve your understanding and engagement with the course.
  • Gather all materials, including syllabus, notes, assignments, and readings.
  • Create a system for organizing and storing your materials.
  • Review materials on a regular basis to familiarize yourself with the course content.
Read 'Deep Learning for Coders with Fastai and PyTorch'
Gain a deeper understanding of deep learning concepts by reading this comprehensive book.
  • Read through the book thoroughly, taking notes and highlighting important concepts.
  • Work through the exercises and examples provided in the book.
  • Apply the knowledge gained from the book to your own projects.
Join a Study Group
Enhance your understanding through peer collaboration and support.
  • Find or form a study group with fellow classmates or online learners.
  • Meet regularly to discuss course materials, share perspectives, and work on assignments together.
  • Engage in group discussions, brainstorming sessions, and peer review.
Practice Docker Commands
Reinforce your understanding of Docker commands by completing practice exercises.
  • Review the Docker documentation and tutorials.
  • Create a simple Dockerfile and build an image.
  • Run Docker commands to manage containers.
  • Troubleshoot common Docker issues.
  • Experiment with different Docker features.
Learn about TensorRT
Enhance your knowledge of TensorRT by following guided tutorials.
  • Find online tutorials or documentation on TensorRT.
  • Follow the tutorials to install and configure TensorRT.
  • Learn about TensorRT's key features and capabilities.
  • Apply TensorRT to your own projects or models (see the sketch after these steps).
  • Engage in online forums or communities to ask questions and learn from others.
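As a companion to this activity, here is a rough sketch of building an FP16 TensorRT engine from an ONNX file with the TensorRT Python API. It assumes a TensorRT 8.x installation; the file names are placeholders, and some flags are deprecated or renamed in newer releases.

    # Sketch only: build an FP16 TensorRT engine from an ONNX model.
    # Assumes the TensorRT 8.x Python API; file names are placeholders and
    # some flags differ in newer TensorRT releases.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)  # explicit batch dimension
    )
    parser = trt.OnnxParser(network, logger)

    with open("model.onnx", "rb") as f:                 # placeholder ONNX export
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise SystemExit("ONNX parsing failed")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)               # enable FP16 precision

    engine_bytes = builder.build_serialized_network(network, config)
    with open("model_fp16.engine", "wb") as f:
        f.write(engine_bytes)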
Literature on Real-World AI Applications
Expand your knowledge of real-world AI applications by compiling a collection of relevant resources.
  • Search for research papers, articles, and case studies on various industries where AI is being applied.
  • Organize and categorize the resources based on application area or industry.
  • Summarize the key findings and insights from the resources.
  • Share your compilation with classmates or online communities.
Build a Custom Web Application
Solidify your understanding of web application development by creating your own custom project.
  • Plan and design your web application.
  • Choose appropriate technologies and frameworks.
  • Implement the front-end and back-end components.
  • Test and debug your application.
  • Deploy and maintain your application.
Contribute to Open Source AI Projects
Gain practical experience and contribute to the AI community by working on open source projects.
  • Identify open source AI projects that align with your interests and skills.
  • Read the project documentation and familiarize yourself with the codebase.
  • Start contributing by fixing bugs, implementing new features, or improving documentation.
  • Engage with the project community through online forums or discussions.




Similar courses

Here are nine courses similar to Learn Everything about Full-Stack Generative AI, LLM MODELS:
  • Generative AI and LLMs on AWS
  • Generative AI Fundamentals with Google Cloud
  • Generative AI Fluency
  • Generative AI Architecture and Application Development
  • Complete AWS Bedrock Generative AI Course + Projects
  • LLMOps & ML Deployment: Bring LLMs and GenAI to Production
  • Llama for Python Programmers
  • Evaluating Large Language Model Outputs: A Practical Guide
  • NVIDIA-Certified Associate - Generative AI LLMs (NCA-GENL)