Are you looking forward to developing interesting computer vision applications? If yes, then this Learning Path is for you.
Packt’s Video Learning Paths are a series of individual video products put together in a logical and stepwise manner such that each video builds on the skills learned in the video before it.
Computer vision and machine learning concepts are frequently used together in practical projects based on computer vision. Whether you are completely new to the concept of computer vision or have a basic understanding of it, this Learning Path will be your guide to understanding the basic OpenCV concepts and algorithms through amazing real-world examples and projects.
OpenCV is a cross-platform, open source library that is used for face recognition, object tracking, and image and video processing. By learning the basic concepts of computer vision algorithms, models, and OpenCV’s API, you will be able to develop different types of real-world applications.
Starting from the installation of OpenCV on your system and understanding the basics of image processing, we swiftly move on to creating optical flow video analysis and text recognition in complex scenes. You’ll explore the commonly used computer vision techniques to build your own OpenCV projects from scratch. Next, we’ll teach you how to work with the various OpenCV modules for statistical modeling and machine learning. You’ll start by preparing your data for analysis, learn about supervised and unsupervised learning, and see how to use them. Finally, you’ll learn to implement efficient models using the popular machine learning techniques such as classification, regression, decision trees, K-nearest neighbors, boosting, and neural networks with the aid of C++ and OpenCV.
By the end of this Learning Path, you will be familiar with the basics of OpenCV such as matrix operations, filters, and histograms, as well as more advanced concepts such as segmentation, machine learning, complex video analysis, and text recognition.
Meet Your Experts:
We have combined the best works of the following esteemed authors to ensure that your learning journey is smooth:
David Millán Escrivá was eight years old when he wrote his first program on an 8086 PC in Basic, which enabled the 2D plotting of basic equations. In 2005, he finished his studies in IT at the Universitat Politécnica de Valencia with honors in human-computer interaction supported by computer vision with OpenCV (v0.96).
Prateek Joshi is an artificial intelligence researcher, published author of five books, and TEDx speaker. He is the founder of Pluto AI, a venture-funded Silicon Valley startup building an analytics platform for smart water management powered by deep learning.
Joe Minichino is a computer vision engineer for Hoolux Medical by day and a developer of the NoSQL database LokiJS by night. At Hoolux, he leads the development of an Android computer vision-based advertising platform for the medical industry.
Before we jump into OpenCV functionalities, we need to understand why those functions were built. Let’s understand how the human visual system works so that we can develop the right algorithms.
Real-life problems require us to use many blocks together to achieve the desired result. So, we need to know what modules and functions to use. Let's understand what OpenCV can do out of the box.
Now that we know what tasks we can do with OpenCV, let's see how to get OpenCV up and running on various operating systems, namely Windows, Mac, and Linux.
We are going to use CMake to configure and check all the required dependencies of our project. So, let's learn about basic CMake configuration files and how to create a library.
CMake can search for our dependencies and external libraries, which lets us build complex projects that depend on external components simply by adding a few requirements. One of the most important dependencies is, of course, OpenCV. Let's learn how to add it to our projects.
Now that we know how to manage dependencies, let's take a look at a slightly more complex script. This video will show us a script that includes subfolders, libraries, and executables, all in only two files and a few lines.
The most important structure in computer vision is, without any doubt, the image. An image in computer vision is a representation of the physical world captured with a digital device. Let's now learn about images and matrices.
After this introduction to matrices, we are ready to start with the basics of the OpenCV code. This video will show us how to read and write images.
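As a rough illustration (not the exact code from the video), a minimal sketch of reading and writing an image with OpenCV's C++ API might look like this; the file names are placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Read an image from disk (the path is a placeholder).
    cv::Mat image = cv::imread("lena.jpg", cv::IMREAD_COLOR);
    if (image.empty()) {
        std::cerr << "Could not read the image" << std::endl;
        return 1;
    }

    // Convert it to grayscale and write the result back to disk.
    cv::Mat gray;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
    cv::imwrite("lena_gray.jpg", gray);
    return 0;
}
```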
We now know how to read and write images, but reading video can be a bit tricky. This video introduces reading from a video file and a camera with a simple example.
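A minimal sketch of that idea, assuming a camera at index 0 (pass a file name instead to read a video):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Open the default camera; pass a file path to read a video instead.
    cv::VideoCapture capture(0);
    if (!capture.isOpened())
        return 1;

    cv::Mat frame;
    while (capture.read(frame)) {      // grab frames until the stream ends
        cv::imshow("Frame", frame);
        if (cv::waitKey(30) >= 0)      // stop when any key is pressed
            break;
    }
    return 0;
}
```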
We have learned about the Mat and Vec3b classes, but we need to learn about other classes as well. In this video, we will learn about the most basic object types required in most of the projects.
In many applications, such as calibration or machine learning, when we are done with the calculations, we need to save the results in order to retrieve them in the next executions. Before we finish this section, we will explore the OpenCV functions to store and read our data.
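A minimal sketch of saving and restoring data with cv::FileStorage; the file name and keys are placeholders:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Write a matrix and a scalar to a YAML file (the file name is a placeholder).
    cv::Mat calibration = cv::Mat::eye(3, 3, CV_64F);
    cv::FileStorage out("results.yml", cv::FileStorage::WRITE);
    out << "cameraMatrix" << calibration << "iterations" << 42;
    out.release();

    // Read them back in a later execution.
    cv::Mat restored;
    int iterations = 0;
    cv::FileStorage in("results.yml", cv::FileStorage::READ);
    in["cameraMatrix"] >> restored;
    in["iterations"] >> iterations;
    in.release();
    return 0;
}
```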
OpenCV has its own cross-operating-system user interface that allows developers to create their own applications without the need to learn complex user interface libraries. This video will introduce the OpenCV user interface and help us create a basic UI with OpenCV.
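A minimal sketch of a basic OpenCV UI: create a window, show an image, and wait for a key press (the file and window names are placeholders):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat image = cv::imread("lena.jpg");   // placeholder file name
    if (image.empty())
        return 1;

    // Create a resizable window, show the image, and wait for a key press.
    cv::namedWindow("Lena", cv::WINDOW_NORMAL);
    cv::imshow("Lena", image);
    cv::waitKey(0);
    cv::destroyWindow("Lena");
    return 0;
}
```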
The Qt user interface gives more control and options to work with images. Let's explore the interface and learn how to use it.
Mouse events and slider controls are very useful in Computer Vision and OpenCV. Using these controls, users can interact directly with the interface and change the properties of their input images or variables. However, using these controls can be a bit tricky. Let’s see how to use them.
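A minimal sketch of wiring up a trackbar and a mouse callback; the window name, callback names, and parameters are placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

cv::Mat img;

// Trackbar callback: blur the image with the kernel size taken from the slider.
void onBlurChange(int value, void*) {
    cv::Mat result;
    cv::blur(img, result, cv::Size(value + 1, value + 1));
    cv::imshow("Demo", result);
}

// Mouse callback: print the pixel coordinates of every left click.
void onMouse(int event, int x, int y, int, void*) {
    if (event == cv::EVENT_LBUTTONDOWN)
        std::cout << "Clicked at (" << x << ", " << y << ")" << std::endl;
}

int main() {
    img = cv::imread("lena.jpg");              // placeholder file name
    if (img.empty())
        return 1;

    cv::namedWindow("Demo");
    int blurValue = 5;
    cv::createTrackbar("Blur", "Demo", &blurValue, 30, onBlurChange);
    cv::setMouseCallback("Demo", onMouse);

    onBlurChange(blurValue, nullptr);          // draw the initial result
    cv::waitKey(0);
    return 0;
}
```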
Now that we have learned how to create normal or Qt interfaces and interact with them using a mouse and slider, let's see how we can create different types of buttons to add more interactivity.
OpenCV includes support for OpenGL, a graphics library that is integrated into graphics cards as a standard. OpenGL allows us to draw everything from 2D primitives to complex 3D scenes. This video shows us how to use OpenGL support.
Prepare a CMake script file that enables us to compile our project structure and executable.
The main graphical user interface can be used in the application to create single buttons.
A histogram is a statistical graphical representation of a variable's distribution that allows us to understand the density estimation and probability distribution of the data.
Image equalization transforms an image so that its histogram has a uniform distribution of values.
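A minimal sketch of both ideas, computing a 256-bin grayscale histogram with cv::calcHist and equalizing the image with cv::equalizeHist (the file names are placeholders):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat gray = cv::imread("lena.jpg", cv::IMREAD_GRAYSCALE);  // placeholder
    if (gray.empty())
        return 1;

    // Compute a 256-bin histogram of the grayscale values.
    int histSize = 256;
    int channels[] = {0};
    float range[] = {0, 256};
    const float* ranges[] = {range};
    cv::Mat hist;
    cv::calcHist(&gray, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);

    // Equalize the histogram to spread the intensity values uniformly.
    cv::Mat equalized;
    cv::equalizeHist(gray, equalized);
    cv::imwrite("equalized.jpg", equalized);
    return 0;
}
```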
Lomography is a photographic effect used in different mobile applications, such as Google Camera or Instagram.
The Cartoonize effect creates an image that looks like a cartoon.
Isolating different parts or objects in a scene.
Create our new application.
Extract the information from the image.
Extract each region of interest of our image where our target objects appear.
Pattern recognition and learning theory in artificial intelligence are related to computational statistics.
We will learn how to implement our own application that uses machine learning to classify objects in a slide tape.
We will be able to recognize different objects to send notifications to a robot or put each one in different boxes.
To extract the features of each object.
A cascade classifier is simply a concatenation of a set of weak classifiers that can be used to create a strong classifier.
Computing areas over many overlapping regions involves huge redundancy; to avoid this, we can use integral images.
You have to load the cascade file and use it to detect the faces in an image.
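A minimal sketch of loading a pretrained cascade and detecting faces; the XML and image paths are placeholders (OpenCV ships the Haar cascades under its data/haarcascades folder):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Load a pretrained frontal-face cascade (path is a placeholder).
    cv::CascadeClassifier faceCascade;
    if (!faceCascade.load("haarcascade_frontalface_default.xml"))
        return 1;

    cv::Mat image = cv::imread("people.jpg");  // placeholder file name
    if (image.empty())
        return 1;

    cv::Mat gray;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    // Detect faces and draw a rectangle around each one.
    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces, 1.1, 3);
    for (const cv::Rect& face : faces)
        cv::rectangle(image, face, cv::Scalar(0, 255, 0), 2);

    cv::imwrite("faces.jpg", image);
    return 0;
}
```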
You have to overlay sunglasses on a face.
You have to track the nose, mouth, and ears.
The background subtraction technique performs really well where we need to detect moving objects in a static scene.
We cannot keep a static background image that can be used to detect objects.
Formulating and implementing a mixture of Gaussians.
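A minimal sketch of background subtraction with OpenCV's mixture-of-Gaussians implementation (cv::createBackgroundSubtractorMOG2), assuming a camera at index 0:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture capture(0);               // camera index 0, or a video file
    if (!capture.isOpened())
        return 1;

    // MOG2 models each pixel as a mixture of Gaussians and adapts over time.
    cv::Ptr<cv::BackgroundSubtractor> subtractor =
        cv::createBackgroundSubtractorMOG2();

    cv::Mat frame, foregroundMask;
    while (capture.read(frame)) {
        subtractor->apply(frame, foregroundMask);   // moving pixels become white
        cv::imshow("Foreground mask", foregroundMask);
        if (cv::waitKey(30) >= 0)
            break;
    }
    return 0;
}
```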
Morphological image processing is used to process the shapes of features in an image.
To apply various morphological operators on an image.
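A minimal sketch of applying erosion, dilation, and opening with a rectangular structuring element; the file names are placeholders:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat image = cv::imread("shapes.png", cv::IMREAD_GRAYSCALE);  // placeholder
    if (image.empty())
        return 1;

    // A 5x5 rectangular structuring element.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));

    cv::Mat eroded, dilated, opened;
    cv::erode(image, eroded, kernel);                        // shrink bright regions
    cv::dilate(image, dilated, kernel);                      // grow bright regions
    cv::morphologyEx(image, opened, cv::MORPH_OPEN, kernel); // erosion then dilation

    cv::imwrite("opened.png", opened);
    return 0;
}
```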
Understand what characteristics can be used to make our tracking robust and accurate.
We want to randomly pick an object, learn the characteristics of the selected object and track it automatically.
Detect interest points in the image.
Improve the overall quality of the image.
Tracking individual feature points across successive frames in the video.
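A minimal sketch of both steps, detecting Shi-Tomasi corners with cv::goodFeaturesToTrack and tracking them across frames with pyramidal Lucas-Kanade optical flow; the parameters are illustrative, not the video's exact settings:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture capture(0);               // camera index 0, or a video file
    if (!capture.isOpened())
        return 1;

    cv::Mat prevFrame, prevGray;
    if (!capture.read(prevFrame))
        return 1;
    cv::cvtColor(prevFrame, prevGray, cv::COLOR_BGR2GRAY);

    // Detect up to 200 corners (Shi-Tomasi interest points) to track.
    std::vector<cv::Point2f> prevPoints;
    cv::goodFeaturesToTrack(prevGray, prevPoints, 200, 0.01, 10);

    cv::Mat frame, gray;
    while (capture.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // Track the points from the previous frame with pyramidal Lucas-Kanade.
        std::vector<cv::Point2f> nextPoints;
        std::vector<uchar> status;
        std::vector<float> errors;
        cv::calcOpticalFlowPyrLK(prevGray, gray, prevPoints, nextPoints,
                                 status, errors);

        // Draw the successfully tracked points.
        for (size_t i = 0; i < nextPoints.size(); ++i)
            if (status[i])
                cv::circle(frame, nextPoints[i], 3, cv::Scalar(0, 255, 0), -1);

        cv::imshow("Tracked features", frame);
        if (cv::waitKey(30) >= 0)
            break;

        prevGray = gray.clone();
        prevPoints = nextPoints;
    }
    return 0;
}
```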
Basics of OCR.
Classification results can be improved greatly if the input text is clear, so we adjust the text.
Install Tesseract on Windows or Mac.
Studying the Tesseract API.
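A minimal sketch of calling the Tesseract C++ API on an image loaded with OpenCV, assuming Tesseract and its English language data are installed; the file name is a placeholder:

```cpp
#include <opencv2/opencv.hpp>
#include <tesseract/baseapi.h>
#include <iostream>

int main() {
    cv::Mat image = cv::imread("sign.jpg", cv::IMREAD_GRAYSCALE);  // placeholder
    if (image.empty())
        return 1;

    // Initialize Tesseract for English; tessdata must be available on the system.
    tesseract::TessBaseAPI ocr;
    if (ocr.Init(nullptr, "eng") != 0)
        return 1;

    // Pass the OpenCV buffer directly to Tesseract (single-channel image).
    ocr.SetImage(image.data, image.cols, image.rows, 1,
                 static_cast<int>(image.step));

    char* text = ocr.GetUTF8Text();
    std::cout << text << std::endl;

    delete[] text;
    ocr.End();
    return 0;
}
```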
This video gives an overview of the entire course.
Learn about machine learning.
Learn how to extract features.
Extracting features from an image.
Recognize handwritten digits.
Detect pictures containing cars.
Perform car detection.
Perform face recognition.
Perform flower recognition.
Perform color quantization (see the sketch below).
Perform handwritten digit recognition.
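To illustrate the color quantization item above, here is a minimal sketch using cv::kmeans to reduce an image to K representative colors; the file name and K are placeholders:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat image = cv::imread("flower.jpg");  // placeholder file name
    if (image.empty())
        return 1;

    // Reshape the pixels into an Nx3 float matrix of BGR samples.
    cv::Mat samples;
    image.reshape(1, image.rows * image.cols).convertTo(samples, CV_32F);

    // Cluster the colors into K groups with k-means.
    int K = 8;
    cv::Mat labels, centers;
    cv::kmeans(samples, K, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
               3, cv::KMEANS_PP_CENTERS, centers);

    // Replace every pixel with the center of its cluster.
    cv::Mat quantized(image.size(), image.type());
    for (int i = 0; i < image.rows * image.cols; ++i) {
        int cluster = labels.at<int>(i);
        cv::Vec3b& pixel = quantized.at<cv::Vec3b>(i / image.cols, i % image.cols);
        pixel[0] = static_cast<uchar>(centers.at<float>(cluster, 0));
        pixel[1] = static_cast<uchar>(centers.at<float>(cluster, 1));
        pixel[2] = static_cast<uchar>(centers.at<float>(cluster, 2));
    }

    cv::imwrite("quantized.jpg", quantized);
    return 0;
}
```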