It’s always crucial to address and monitor safety and quality concerns in your applications. Building LLM applications poses special challenges.
It’s always crucial to address and monitor safety and quality concerns in your applications. Building LLM applications poses special challenges.
In this course, you’ll explore new metrics and best practices to monitor your LLM systems and ensure safety and quality. You’ll learn how to:
1. Identify hallucinations with methods like SelfCheckGPT.
2. Detect jailbreaks (prompts that attempt to manipulate LLM responses) using sentiment analysis and implicit toxicity detection models.
3. Identify data leakage using entity recognition and vector similarity analysis.
4. Build your own monitoring system to evaluate app safety and security over time.
Upon completing the course, you’ll have the ability to identify common security concerns in LLM-based applications, and be able to customize your safety and security evaluation tools to the LLM that you’re using for your application.
OpenCourser helps millions of learners each year. People visit us to learn workspace skills, ace their exams, and nurture their curiosity.
Our extensive catalog contains over 50,000 courses and twice as many books. Browse by search, by topic, or even by career interests. We'll match you to the right resources quickly.
Find this site helpful? Tell a friend about us.
We're supported by our community of learners. When you purchase or subscribe to courses and programs or purchase books, we may earn a commission from our partners.
Your purchases help us maintain our catalog and keep our servers humming without ads.
Thank you for supporting OpenCourser.