This course introduces the concepts of interpretability and explainability in machine learning applications. Learners will understand the difference between global, local, model-agnostic, and model-specific explanations. State-of-the-art explainability methods such as Permutation Feature Importance (PFI), Local Interpretable Model-agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP) are explained and applied to time-series classification. Subsequently, model-specific explanations such as Class Activation Mapping (CAM) and Gradient-weighted CAM (Grad-CAM) are explained and implemented. Learners will also understand axiomatic attributions and why they matter. Finally, attention mechanisms are incorporated after recurrent layers, and the attention weights are visualised to produce local explanations of the model.
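To give a flavour of the kind of method the course covers, the following is a minimal sketch (not taken from the course materials) of Permutation Feature Importance applied to a toy time-series classification task. The synthetic data, the RandomForestClassifier model, and the treatment of each time step as a feature are illustrative assumptions only.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy "time series": 300 samples of 20 time steps each (hypothetical data);
# the class label depends only on the value at time step 5.
X = rng.normal(size=(300, 20))
y = (X[:, 5] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# PFI: shuffle each time step in turn and measure the drop in test accuracy;
# a large drop marks that step as important for the model's predictions.
result = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)
for step in np.argsort(result.importances_mean)[::-1][:3]:
    print(f"time step {step}: importance {result.importances_mean[step]:.3f}")

Because PFI only needs predictions and a scoring function, it is model-agnostic and works equally well for the deep-learning classifiers the course builds later; methods such as CAM, Grad-CAM, and attention visualisation, by contrast, are model-specific.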