This course dives into state-of-the-art scientific challenges in Generative AI. It helps you uncover open problems and develop or customize your own large-model applications. The course suits any candidate (student, engineer, or expert) with strong motivation to work on Large Language Models and today's ongoing challenges, including their deployment in Python- and JavaScript-based web applications as well as with the C/C++ programming languages. Candidates will gain deep knowledge of TensorFlow, PyTorch, Keras, and Hugging Face models served with Docker.
In addition, you will learn to optimize and quantize models with TensorRT for deployment in a variety of sectors. You will also learn to deploy quantized LLMs to web pages built with React, JavaScript, and Flask, and to integrate Reinforcement Learning (PPO) with a Large Language Model in order to fine-tune it from human feedback. Candidates will learn to code and debug in C/C++ at least at an intermediate level.
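To give a flavor of the PPO fine-tuning step mentioned above, here is a minimal, framework-free sketch of PPO's clipped surrogate objective, the quantity that RLHF libraries optimize during human-feedback alignment. The function name and toy values are illustrative, not from the course code:

```python
import math

def ppo_clipped_loss(old_logprob, new_logprob, advantage, clip_eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017).

    The probability ratio between the new and old policy is clipped to
    [1 - eps, 1 + eps], which is what keeps RLHF fine-tuning from drifting
    too far from the base language model in a single update.
    """
    ratio = math.exp(new_logprob - old_logprob)  # pi_new(a|s) / pi_old(a|s)
    clipped = max(min(ratio, 1.0 + clip_eps), 1.0 - clip_eps)
    # PPO maximizes the minimum of the two surrogates; as a loss we negate it.
    return -min(ratio * advantage, clipped * advantage)
```

For example, if the new policy raises a token's log-probability by 1.0 while the advantage is positive, the ratio (about 2.72) is clipped at 1.2, so the update is bounded.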
LLM Models used:
Falcon,
LLaMA 2,
BLOOM,
MPT,
Vicuna,
FLAN-T5,
GPT-2/GPT-3, GPT-NeoX
Learning and Installation of Docker from scratch
Knowledge of JavaScript
Ready to solve any programming challenge with C/C++
Ready to tackle deployment issues on edge devices as well as in the cloud
Large Language Models Fine-Tuning
Large Language Models Hands-On Practice: FLAN-T5 family
Large Language Models Training, Evaluation, and User-Defined Prompt In-Context Learning/Online Learning
Human Feedback Alignment of LLMs with Reinforcement Learning (PPO), with Large Language Models: BERT and FLAN-T5
How to Avoid the Catastrophic Forgetting Problem in Large Multi-Task LLM Models
How to Prepare LLMs for Multi-Task Problems such as Code Generation, Summarization, Content Analysis, and Image Generation
Quantization of Large Language Models with various existing state-of-the-art techniques
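To illustrate the core idea behind the quantization topic above, here is a minimal sketch of symmetric per-tensor int8 quantization in plain Python. Real toolchains such as TensorRT use far more sophisticated schemes (per-channel scales, calibration), and the helper names here are ours, not the course's:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization of a list of float weights.

    Each weight w is mapped to round(w / scale), with the scale chosen so
    the largest-magnitude weight lands at 127. Assumes at least one
    nonzero weight (a zero tensor would make the scale degenerate).
    """
    scale = (max(abs(w) for w in weights) / 127.0) or 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [qi * scale for qi in q]
```

The round trip loses at most half a quantization step per weight, which is why 8-bit LLM weights usually preserve accuracy while cutting memory use roughly fourfold versus float32.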
Important Note: In this course there is nothing to copy & paste; you will put your hands on every line of the project to become a successful LLM and web application developer.
You DO NOT need any special hardware. You can deliver the project either in the cloud or on your local computer.
Course Github LINK:
https://github.com/fikratgasimovsoftwareengineer/FullStack_Web_APP/tree/main
Please find all projects in the repository:
https://github.com/fikratgasimovsoftwareengineer/FullStack_Web_APP
My Github Profile:
https://github.com/fikratgasimovsoftwareengineer/FullStack_Web_APP
The BERT model has been saved in a zip archive. This zip file contains both the pretrained model and its tokenizer. Please unzip it and place it in your local working directory!
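Once unzipped, the model and tokenizer can typically be loaded by pointing Hugging Face's `from_pretrained` at the local directory. The directory name below is a placeholder; check the actual folder name inside the zip:

```python
from pathlib import Path

# Placeholder path: replace with the actual folder name produced by unzipping.
MODEL_DIR = Path("./bert_pretrained")

def load_local_bert(model_dir: Path = MODEL_DIR):
    """Load a BERT model and its tokenizer from a local directory."""
    from transformers import AutoModel, AutoTokenizer  # pip install transformers
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModel.from_pretrained(model_dir)
    return model, tokenizer
```

Loading from a local path this way avoids any download from the Hugging Face Hub, which is exactly what you want when working from the course's bundled checkpoint.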