Brinnae Bent, PhD

As Artificial Intelligence (AI) becomes integrated into high-risk domains like healthcare, finance, and criminal justice, it is critical that those responsible for building these systems think outside the black box and develop systems that are not only accurate, but also transparent and trustworthy. This course is a comprehensive, hands-on guide to Explainable Machine Learning (XAI), empowering you to develop AI solutions that are aligned with responsible AI principles.

Through discussions, case studies, programming labs, and real-world examples, you will gain the following skills:


1. Implement local explainability techniques such as LIME, SHAP, and ICE plots using Python (see the sketch following this list).

2. Implement global explainability techniques such as Partial Dependence Plots (PDP) and Accumulated Local Effects (ALE) plots in Python.

3. Apply example-based explanation techniques to explain machine learning models using Python.

4. Visualize and explain neural network models using state-of-the-art (SOTA) techniques in Python.

5. Critically evaluate interpretable attention and saliency methods for transformer model explanations.

6. Explore emerging approaches to explainability for large language models (LLMs) and generative computer vision models.
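
To make the first skill concrete, here is a minimal sketch of a local LIME explanation for a scikit-learn classifier. The dataset, model, and the third-party lime package are illustrative choices for this page, not necessarily what the course's labs use.

```python
# Minimal LIME sketch (assumes the third-party `lime` package and
# scikit-learn; the Iris dataset and random forest are stand-ins).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    mode="classification",
)
# Explain one prediction: LIME fits a small local surrogate model around
# this single instance and reports per-feature contributions.
exp = explainer.explain_instance(iris.data[0], model.predict_proba,
                                 num_features=4, labels=(0,))
print(exp.as_list(label=0))
```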

This course is ideal for data scientists or machine learning engineers who have a firm grasp of machine learning but have had little exposure to XAI concepts. By mastering XAI approaches, you'll be equipped to create AI solutions that are not only powerful but also interpretable, ethical, and trustworthy, solving critical challenges in domains like healthcare, finance, and criminal justice.

To succeed in this course, you should have an intermediate understanding of machine learning concepts like supervised learning and neural networks.


What's inside

Syllabus

Model-Agnostic Explainability
In this module, you will be introduced to the concept of model-agnostic explainability and will explore techniques and approaches for local and global explanations. You will learn how to explain and implement the local explainability techniques LIME, SHAP, and ICE plots; global explainability techniques including functional decomposition, PDP, and ALE plots; and example-based explanations, all in Python. You will apply these learnings through discussions, guided programming labs, and a quiz assessment.
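
As a rough preview of the PDP and ICE techniques in this module, the sketch below uses scikit-learn's built-in PartialDependenceDisplay; the dataset and model are stand-ins, and ALE would require a third-party package as noted in the comments.

```python
# PDP + ICE sketch using scikit-learn only (ALE is not in scikit-learn;
# third-party packages such as `alibi` or `PyALE` provide it).
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" overlays per-sample ICE curves (local) with their average,
# the partial dependence curve (global), for each listed feature.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "s5"],
                                        kind="both")
plt.show()
```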

Traffic lights

Read about what's good, what should give you pause, and possible dealbreakers:
  • Covers LIME, SHAP, ICE plots, PDP, and ALE plots, which are standard techniques used in industry for interpreting model predictions and ensuring fairness
  • Requires an intermediate understanding of machine learning concepts like supervised learning and neural networks, so learners should come prepared with a solid foundation
  • Explores emerging approaches to explainability for large language models (LLMs) and generative computer vision models, which are cutting-edge topics in the field
  • Includes hands-on programming labs using Python, allowing learners to immediately apply XAI techniques to real-world problems and datasets
  • Presented by Duke University, which is known for its research and educational programs in artificial intelligence and machine learning
  • Examines interpretable attention and saliency methods for transformer model explanations, which are crucial for understanding the decisions of complex models


Reviews summary

Explainable ML: Techniques and Practice

According to learners, this course provides a strong foundation in Explainable Machine Learning. Students particularly value the hands-on programming labs, which let them implement key techniques like LIME and SHAP, and reviewers frequently mention the clear explanations of complex concepts. The course is seen as having a practical focus, emphasizing application over deep theory, and is kept up to date with modern topics such as explaining large language models and generative AI. Learners do advise making sure you have the required intermediate ML and Python prerequisites, as the course moves at a solid pace.
Assumes strong prior machine learning knowledge.
"You definitely need an intermediate ML and Python background for this course."
"Some parts were challenging without solid prerequisites in scikit-learn and neural networks."
"Assumes a certain comfort level with core ML concepts and coding."
Includes modern topics like generative AI explainability.
"Really appreciated the module on explaining LLMs and generative models. Very current topics."
"Nice to see coverage of generative computer vision model explainability."
"Includes cutting-edge XAI topics which are highly relevant today."
Concepts are explained clearly and effectively.
"The instructor explained complex XAI ideas very well, making them accessible."
"Lectures were easy to follow and provided good intuition."
"Helped clarify often confusing concepts around model interpretability."
Covers essential XAI techniques like LIME, SHAP.
"Excellent coverage of LIME and SHAP. The way they were explained and implemented was very clear."
"Finally understood how to apply PDP and ALE plots thanks to this course."
"Provides a good overview and practical implementation of the main XAI methods."
Hands-on programming labs are highly valued.
"The hands-on coding exercises were invaluable for truly grasping the techniques."
"I appreciated the practical implementation focus in the labs."
"The programming labs are the strongest part of the course for me."

Activities

Be better prepared before your course. Deepen your understanding during and after it. Supplement your coursework and achieve mastery of the topics covered in Explainable Machine Learning (XAI) with these activities:
Review Python Fundamentals
Strengthen your Python programming skills, which are essential for implementing XAI techniques in this course.
Steps:
  • Review basic syntax, data structures, and control flow in Python.
  • Practice writing functions and classes in Python.
  • Familiarize yourself with common Python libraries like NumPy and Pandas.
Review Machine Learning Concepts
Solidify your understanding of machine learning concepts, including supervised learning and neural networks, which are prerequisites for this course.
Steps:
  • Review the fundamentals of supervised learning algorithms.
  • Study the architecture and training of neural networks.
  • Understand the concepts of model evaluation and hyperparameter tuning.
Read 'Interpretable Machine Learning' by Christoph Molnar
Gain a deeper understanding of XAI techniques by reading a comprehensive book on the subject.
Steps:
  • Obtain a copy of 'Interpretable Machine Learning' by Christoph Molnar.
  • Read the chapters relevant to the techniques covered in the course.
  • Take notes on key concepts and examples.
Implement LIME and SHAP
Reinforce your understanding of LIME and SHAP by implementing them on different datasets; a minimal SHAP sketch follows the steps below.
Steps:
  • Select a dataset and a pre-trained machine learning model.
  • Implement LIME to explain individual predictions of the model.
  • Implement SHAP to understand the feature importance for the model.
  • Compare and contrast the explanations provided by LIME and SHAP.
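
A minimal SHAP sketch for step 3, under the stated assumptions; you can set its output beside the LIME sketch earlier on this page to practice the comparison in step 4.

```python
# Minimal SHAP sketch (assumes the third-party `shap` package; the
# dataset and model are illustrative stand-ins).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # (n_samples, n_features) attributions

# Local view: additive per-feature contributions for one prediction...
print(dict(zip(X.columns, shap_values[0].round(2))))
# ...and a global view aggregated from all the local explanations.
shap.summary_plot(shap_values, X)
```
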
XAI for Healthcare Diagnosis
Apply XAI techniques to a real-world healthcare diagnosis problem to gain practical experience; a hedged starter sketch follows the steps below.
Steps:
  • Choose a healthcare dataset with diagnostic information.
  • Train a machine learning model to predict diagnoses.
  • Apply XAI techniques to explain the model's predictions.
  • Evaluate the usefulness of the explanations for healthcare professionals.
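
A hedged starter for the first three steps, with scikit-learn's built-in breast-cancer dataset standing in for a real clinical dataset and permutation importance as a simple, model-agnostic explanation; a real project would substitute a governed healthcare dataset and the course's XAI techniques.

```python
# Starter sketch: train a diagnostic classifier and explain it with
# permutation importance (scikit-learn only; dataset is a stand-in).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in accuracy:
# the features whose shuffling hurts most are driving the diagnosis.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```
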
Blog Post: Explainable Generative AI
Deepen your understanding of explainable generative AI by writing a blog post summarizing the key concepts and challenges; a small attention-inspection sketch follows the steps below.
Steps:
  • Research emerging approaches to explainability in LLMs and generative computer vision.
  • Summarize the key concepts and challenges in a blog post.
  • Provide examples of how XAI can be applied to generative models.
  • Share your blog post on social media and relevant online communities.
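
As one concrete starting point for the research step, the sketch below extracts raw attention weights from a small pretrained transformer via the Hugging Face transformers package; the checkpoint and sentence are arbitrary choices, and attention-as-explanation is itself debated, which is exactly the kind of nuance worth covering in the post.

```python
# Attention-inspection sketch (assumes the `transformers` package and the
# public distilbert-base-uncased checkpoint). Attention weights are a
# contested explanation signal: treat them as raw material for critical
# evaluation, not as ground truth.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased",
                                  output_attentions=True)

inputs = tokenizer("Explainability builds trust in models.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions holds one (batch, heads, seq, seq) tensor per layer;
# average the final layer's heads to get a token-to-token attention map.
attn = outputs.attentions[-1][0].mean(dim=0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, weights in zip(tokens, attn):
    print(f"{token:>15} attends most to {tokens[int(weights.argmax())]}")
```
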
Read 'Explainable AI: Interpreting, Explaining and Visualizing Deep Learning' by Samek et al.
Expand your knowledge of XAI by reading a comprehensive book that covers a wide range of techniques.
Steps:
  • Obtain a copy of 'Explainable AI: Interpreting, Explaining and Visualizing Deep Learning', edited by Samek, Montavon, Vedaldi, Hansen, and Müller.
  • Read the chapters that cover techniques not fully explored in the course.
  • Experiment with the code examples provided in the book.

Career center

Learners who complete Explainable Machine Learning (XAI) will develop knowledge and skills that may be useful to these careers:
Machine Learning Engineer
A Machine Learning Engineer is responsible for the design, development, and deployment of machine learning models. This course helps you develop an understanding of explainable AI (XAI) techniques that are vital for ensuring transparency and trust in these models, especially in high-stakes domains. The course covers both local and global explainability methods, including LIME, SHAP, ICE, PDP, and ALE plots, which are essential tools for a Machine Learning Engineer. The ability to visualize and explain neural networks, and to explore emerging approaches for large language models and generative computer vision models, translates directly to the practical work a Machine Learning Engineer does. This course provides a solid foundation for building and deploying AI models that are not only accurate but also understandable and ethical. With hands-on programming labs focused on practical implementation in Python, it builds skills a Machine Learning Engineer needs.
Data Scientist
A Data Scientist uses data analysis and machine learning to derive insights and build predictive models. This course directly addresses the crucial need for explainability in AI models, an essential aspect of a Data Scientist's responsibilities. It introduces a range of XAI techniques, including local and global explainability methods such as LIME, SHAP, ICE, PDP, and ALE plots. By gaining hands-on experience visualizing and explaining neural network models, a Data Scientist will be better equipped to present findings and defend their modeling choices. The course helps Data Scientists build trust in AI models, especially in sensitive fields like healthcare or finance, through its study of interpretable attention and saliency methods for transformer model explanations. Its attention to emerging approaches for large language models and generative models is especially important for Data Scientists who need to stay current with the state of the art.
Artificial Intelligence Researcher
An Artificial Intelligence Researcher explores the frontiers of AI, developing new algorithms and techniques for machine learning. This course provides a deep dive into model-agnostic and model-specific explainability, a core component of responsible AI development. An Artificial Intelligence Researcher will gain hands-on experience implementing XAI techniques for both local and global explainability using methods like LIME, SHAP, PDP, and ALE plots. The course is essential for researchers who are developing new neural network architectures, and it introduces emerging approaches to explainability for large language models and generative computer vision models. An Artificial Intelligence Researcher needs to understand how to explain models, and that understanding can lead to more innovation and improved performance.
AI Ethicist
An AI Ethicist specializes in the responsible and ethical development and deployment of artificial intelligence technologies. This course will help an AI Ethicist understand the techniques that help provide model transparency and address bias in AI systems. The course explores model-agnostic and model-specific explainability methods using LIME, SHAP, PDP, and ALE plots. These techniques are essential to understanding how AI models work. By investigating methods used to explain neural networks and large language models, an AI Ethicist can identify potential ethical issues and propose solutions to improve these systems. The ability to critically evaluate interpretable attention and saliency methods also helps an AI Ethicist develop a thorough understanding of current AI limitations.
Machine Learning Consultant
A Machine Learning Consultant advises businesses on the application of machine learning and artificial intelligence to their operational and strategic challenges. The course helps a Machine Learning Consultant by providing expertise in explainable AI (XAI), which is increasingly vital for client adoption of AI solutions. By understanding the practical implementation of local explainability techniques such as LIME, SHAP, and ICE plots, as well as global techniques such as Partial Dependence Plots and Accumulated Local Effects plots, a Machine Learning Consultant can help clients understand the inner workings of a model, building trust in the solution. A Machine Learning Consultant needs to explain models to stakeholders, and this course provides the tools to do just that.
Data Analyst
Data Analysts gather, process, and analyze data to identify trends and insights that support business decisions. This course on explainable machine learning may help Data Analysts who are looking to expand their skill set into machine learning and model interpretation. It helps a Data Analyst understand the inner workings of machine learning models, including neural networks and other complex models, using techniques such as LIME, SHAP, ICE, PDP, and ALE plots. The ability to interpret models will help the Data Analyst use AI to better understand data and communicate results to stakeholders. The course is especially useful for Data Analysts who want not only to build AI models but also to describe how they work.
Software Engineer
A Software Engineer designs, develops, and maintains software applications and systems. This course can help a Software Engineer looking to specialize in machine learning and artificial intelligence. By covering Python implementations of local and global explainability techniques, such as LIME, SHAP, ICE, PDP, and ALE plots, it gives a Software Engineer critical hands-on knowledge. The exploration of neural network visualization techniques and model explanations helps the Software Engineer deploy models that are transparent and understandable. This course is especially useful for Software Engineers who are responsible for implementing and maintaining AI systems, particularly in high-stakes domains.
AI Product Manager
An AI Product Manager is responsible for the strategy, roadmap, and execution of AI-powered products. This course helps an AI Product Manager gain the understanding needed to build trust in AI systems. It dives into practical implementations of local and global explainability techniques, using LIME, SHAP, ICE, PDP, and ALE plots. An AI Product Manager will especially benefit from the lessons on explaining neural networks and large language models, as these relate directly to many current AI products and to understanding their limitations and potential biases. By mastering the practical aspects of XAI, the AI Product Manager can better guide the development of responsible and user-friendly AI tools.
Quantitative Analyst
A Quantitative Analyst applies mathematical and statistical methods to financial and risk management problems. This course may be useful for a Quantitative Analyst, particularly if they are involved in developing and using machine learning models for trading or risk assessment. An understanding of XAI methods and techniques learned in the course will help a Quantitative Analyst understand the decision-making processes of complex models, especially when using techniques like LIME, SHAP, ICE, PDP, and ALE. Explainability is particularly important in finance, for compliance and risk management. The practical approach of the course, including hands-on implementation, provides a Quantitative Analyst with the tools to explain and defend model choices.
Bioinformatician
A Bioinformatician develops and uses computational tools to analyze biological and genomic data. This course may be valuable for a Bioinformatician who applies machine learning techniques to analyze complex datasets. The course will help the Bioinformatician understand and explain their AI models using local and global explainability techniques, including LIME, SHAP, ICE, PDP, and ALE plots. The course's emphasis on neural network visualization and explanation also directly benefits the kinds of models commonly used in bioinformatics. The need for transparency and trust is critical in this field. The ability to explain AI models is particularly important for clinical applications, and this course may provide some of those skills.
Research Scientist
A Research Scientist conducts research and experimentation to advance knowledge in a scientific field. This course may help a Research Scientist who is researching the use of AI or machine learning models. An understanding of model interpretability, obtained through exploring local and global explainability techniques such as LIME, SHAP, ICE, PDP, and ALE, allows a Research Scientist to better understand a model's behavior and limitations. Moreover, the course's treatment of neural network visualization techniques and explanations may help a Research Scientist develop sound research methods when it comes to data analysis. By learning about explainable AI, the Research Scientist may be able to develop innovative approaches to solve difficult problems.
Statistician
A Statistician applies statistical methods to collect, analyze, and interpret quantitative data. This course may be beneficial to a Statistician, expanding the tools they use to build and interpret models. Its in-depth study of local and global explainability techniques via LIME, SHAP, ICE, PDP, and ALE plots can help a Statistician better understand the inner workings of complex machine learning models. By enhancing their ability to analyze model behavior, statisticians can draw stronger and more reliable conclusions. This course is particularly helpful for statisticians who are interested in extending their skill set toward machine learning.
Financial Analyst
A Financial Analyst analyzes financial data, prepares reports, and provides recommendations to support business decisions. This course may be useful for Financial Analysts, particularly those who use machine learning models for forecasting or risk analysis. It will enable a Financial Analyst to understand the inner workings of AI models: its exploration of local and global explainability techniques, such as LIME, SHAP, ICE, PDP, and ALE plots, illuminates the decision processes of these complex models. This focus on explanation is particularly important in finance, and the course may help a Financial Analyst better communicate the results of model analysis to management.
Business Intelligence Analyst
A Business Intelligence Analyst (BI Analyst) uses data to drive business strategy and to improve decision-making. This course may help a BI Analyst who is looking to understand and utilize AI for their work. The course introduces local and global explainability techniques, which are implemented in Python. The study of neural network visualization and explanation helps the BI Analyst understand the kinds of complex models that are increasingly used in business. Through LIME, SHAP, ICE, PDP, and ALE plots, a BI Analyst can effectively integrate AI into their work. This course may provide skills that a BI Analyst can use to better communicate model insights to stakeholders.
Policy Analyst
A Policy Analyst researches and analyzes policy issues, providing recommendations to inform government and organizational decision-making. This course may be valuable for understanding how AI systems work and how they might affect policy. By exploring model explainability, including methods such as LIME, SHAP, ICE, PDP, and ALE plots, a Policy Analyst may be better able to evaluate the transparency and potential biases of AI systems, especially in high-stakes domains like criminal justice and healthcare. This course may help a Policy Analyst develop policies that address ethical concerns for emerging technologies such as machine learning.

Reading list

We've selected two books that we think will supplement your learning. Use these to develop background knowledge, enrich your coursework, and gain a deeper understanding of the topics covered in Explainable Machine Learning (XAI).
'Interpretable Machine Learning' by Christoph Molnar provides a comprehensive overview of interpretable machine learning techniques. It covers many of the methods discussed in the course, such as LIME, SHAP, and PDP, and serves as an excellent reference for understanding the theoretical foundations and practical applications of XAI. This book is highly recommended for anyone looking to deepen their knowledge of interpretable machine learning.
'Explainable AI: Interpreting, Explaining and Visualizing Deep Learning' (edited by Samek, Montavon, Vedaldi, Hansen, and Müller) provides a comprehensive overview of explainable AI techniques, covering both model-agnostic and model-specific methods. It delves into the theoretical foundations of XAI and offers practical guidance on implementing these techniques. It is a valuable resource for understanding the nuances of XAI and applying it effectively, and it is commonly used as a reference by both academics and industry professionals.


Affiliate disclosure

We're supported by our community of learners. When you purchase or subscribe to courses and programs or purchase books, we may earn a commission from our partners.

Your purchases help us maintain our catalog and keep our servers humming without ads.

Thank you for supporting OpenCourser.

© 2016 - 2025 OpenCourser