This course will help us to evaluate and compare the models we have developed in previous courses. So far we have developed techniques for regression and classification, but how low should a classifier's error be, for example, before we decide that it is "good enough"? And how do we decide which of two regression algorithms is better?
By the end of this course you will be familiar with diagnostic techniques that allow you to evaluate and compare classifiers, as well as performance measures that can be used in different regression and classification scenarios. We will also study the training/validation/test pipeline, which can be used to ensure that the models you develop will generalize well to new (or "unseen") data.
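To make the training/validation/test pipeline concrete, here is a minimal sketch in Python using scikit-learn. The 80/10/10 split proportions, the synthetic dataset from make_classification, and the choice of logistic regression as the candidate model are illustrative assumptions, not part of the course material.

# A minimal sketch of a train/validation/test pipeline (illustrative assumptions:
# 80/10/10 split, synthetic data, logistic regression as the candidate model).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Generate a synthetic classification dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# First split off the test set (10% of the data), then carve a
# validation set (10% of the data) out of the remaining portion.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=len(y) // 10, random_state=0)

# Fit a candidate model on the training set only.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Use the validation set to compare candidate models or tune hyperparameters;
# look at the test-set score only once, for the final chosen model, to get an
# unbiased estimate of how well it generalizes to unseen data.
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
print("test accuracy:      ", accuracy_score(y_test, model.predict(X_test)))

The key design point is that the test set is held out entirely during model selection, so the reported test accuracy reflects performance on genuinely unseen data.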