Seyedali Mirjalili

Machine learning is an extremely hot area in Artificial Intelligence and Data Science. There is no doubt that Neural Networks are the most well-regarded and widely used machine learning techniques.

A lot of Data Scientists use Neural Networks without understanding their internal structure. However, understanding the internal structure and mechanism of such machine learning techniques will allow them to solve problems more efficiently. This also allows them to tune, tweak, and even design new Neural Networks for different projects.

This course is the easiest way to understand how Neural Networks work in detail. It also puts you ahead of a lot of data scientists. You will potentially have a higher chance of joining a small pool of well-paid data scientists.

Why learn Neural Networks as a Data Scientist?

Machine learning is becoming more popular across industries every month, mainly as a way to improve revenue and decrease costs. Neural Networks are extremely practical machine learning techniques for a wide range of projects. You can use them to automate and optimize the process of solving challenging tasks.

What does a data scientist need to learn about Neural Networks?  

The first thing you need to learn is the mathematical models behind them. You will be surprised by how easy and intuitive these models and equations are. This course starts with intuitive examples to take you through the most fundamental mathematical models of all Neural Networks. There is no equation in this course without an in-depth explanation and visual examples. If you hate math, then sit back, relax, and enjoy the videos to learn the math behind Neural Networks with minimal effort.

It is also important to know what types of problems can be solved with Neural Networks. This course shows the different types of problems you can solve using Neural Networks, including classification, regression, and prediction. There are also several examples for practicing how to solve such problems.

What does this course cover?

As discussed above, this course starts with an intuitive example of a single Neuron, the most fundamental component of Neural Networks. It also shows you the mathematical and conceptual model of a Neuron. After learning how simple the mathematical model of a single Neuron is, you will see how it performs in a live demonstration.
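To make this concrete, here is a minimal sketch of a single artificial neuron in Java (illustrative only, not code from the course): it multiplies each input by a weight, adds a bias, and passes the result through a step transfer function. The weight and bias values below are made up for illustration.

```java
// Illustrative sketch: a single artificial neuron.
// It computes a weighted sum of its inputs plus a bias, then applies a
// step transfer function to produce a binary output.
public class SingleNeuron {
    static double step(double x) {
        return x >= 0 ? 1.0 : 0.0;   // fires (1) when the weighted sum is non-negative
    }

    static double neuron(double[] inputs, double[] weights, double bias) {
        double sum = bias;
        for (int i = 0; i < inputs.length; i++) {
            sum += weights[i] * inputs[i]; // weighted sum: w1*x1 + w2*x2 + ... + b
        }
        return step(sum);
    }

    public static void main(String[] args) {
        double[] weights = {1.0, 1.0};   // example values only
        double bias = -1.5;              // with these values the neuron acts like a logical AND
        System.out.println(neuron(new double[]{0, 0}, weights, bias)); // 0.0
        System.out.println(neuron(new double[]{1, 1}, weights, bias)); // 1.0
    }
}
```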

The second part of this course covers terminology in the field of machine learning, the mathematical model of a special type of neuron called the Perceptron, and its inspiration. We will go through the main components of a perceptron as well.

In the third part, we work through the process of training and learning in Neural Networks. This includes different error/cost functions, optimizing the cost function with the gradient descent algorithm, the impact of the learning rate, and the main challenges in this area.
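As a rough illustration of that optimization step (not course material), the sketch below applies the gradient descent update rule to a toy one-variable cost function. The same rule, new weight equals old weight minus the learning rate times the cost gradient, is what adjusts every weight and bias in a neural network.

```java
// Illustrative sketch: gradient descent on a toy cost function
// C(w) = (w - 3)^2, whose minimum is at w = 3.
public class GradientDescentDemo {
    public static void main(String[] args) {
        double w = 0.0;               // arbitrary starting point
        double learningRate = 0.1;    // try 0.9 or 1.1 to see oscillation or divergence
        for (int iter = 0; iter < 50; iter++) {
            double gradient = 2 * (w - 3);      // dC/dw for C(w) = (w - 3)^2
            w -= learningRate * gradient;       // step downhill
        }
        System.out.println("w after training: " + w); // close to 3.0
    }
}
```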

In the first three parts of this course, you master how a single neuron (e.g. a Perceptron) works. This prepares you for the fourth part of the course, where we learn how to make a network of these neurons. You will see how powerful even connecting two neurons is. We will learn the impact of multiple neurons and multiple layers on the outputs of a Neural Network. The main model here is the Multi-Layer Perceptron (MLP), one of the most well-regarded Neural Networks in both science and industry. This part of the course also covers Deep Neural Networks (DNNs).
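For a feel of why connecting neurons is so powerful, here is an illustrative sketch (not from the course) of a tiny hand-wired MLP that solves the XOR problem, something a single neuron with a straight-line decision boundary cannot do. The weights are chosen by hand purely for demonstration.

```java
// Illustrative sketch: a tiny hand-wired Multi-Layer Perceptron with two
// hidden neurons and one output neuron. One hidden neuron acts as OR, the
// other as AND, and the output combines them into XOR.
public class TinyMlp {
    static double step(double x) { return x >= 0 ? 1.0 : 0.0; }

    static double xor(double x1, double x2) {
        double hOr  = step(x1 + x2 - 0.5);        // fires if at least one input is 1
        double hAnd = step(x1 + x2 - 1.5);        // fires only if both inputs are 1
        return step(hOr - hAnd - 0.5);            // OR but not AND -> XOR
    }

    public static void main(String[] args) {
        System.out.println(xor(0, 0)); // 0.0
        System.out.println(xor(0, 1)); // 1.0
        System.out.println(xor(1, 0)); // 1.0
        System.out.println(xor(1, 1)); // 0.0
    }
}
```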

In the fifth section of this course, we will learn about the Backpropagation (BP) algorithm to train a multi-layer perceptron. The theory, mathematical model, and numerical example of this algorithm will be discussed in detail.
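The sketch below (illustrative only, not the course's numerical example) performs one backpropagation update on a network with 2 inputs, 2 hidden sigmoid neurons, and 1 sigmoid output, using a squared-error cost. All weights and the training pair are made-up values.

```java
// Illustrative sketch: one backpropagation update for a 2-2-1 network with
// sigmoid units and a squared-error cost.
public class BackpropStep {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        double[] x = {1.0, 0.0};          // one training input
        double target = 1.0;              // its desired output
        double lr = 0.5;                  // learning rate

        double[][] wh = {{0.2, -0.4}, {0.7, 0.1}};  // hidden-layer weights
        double[] bh = {0.0, 0.0};                    // hidden-layer biases
        double[] wo = {0.5, -0.3};                   // output-layer weights
        double bo = 0.0;                             // output bias

        // Forward pass
        double[] h = new double[2];
        for (int j = 0; j < 2; j++) {
            h[j] = sigmoid(wh[j][0] * x[0] + wh[j][1] * x[1] + bh[j]);
        }
        double y = sigmoid(wo[0] * h[0] + wo[1] * h[1] + bo);

        // Backward pass: propagate the error from the output back through the layers
        double deltaOut = (y - target) * y * (1 - y);        // dCost/d(output net input)
        double[] deltaHidden = new double[2];
        for (int j = 0; j < 2; j++) {
            deltaHidden[j] = deltaOut * wo[j] * h[j] * (1 - h[j]);
        }

        // Gradient-descent updates for all weights and biases
        for (int j = 0; j < 2; j++) {
            wo[j] -= lr * deltaOut * h[j];
            bh[j] -= lr * deltaHidden[j];
            for (int i = 0; i < 2; i++) {
                wh[j][i] -= lr * deltaHidden[j] * x[i];
            }
        }
        bo -= lr * deltaOut;

        // Forward pass again: the output has moved toward the target
        for (int j = 0; j < 2; j++) {
            h[j] = sigmoid(wh[j][0] * x[0] + wh[j][1] * x[1] + bh[j]);
        }
        double yAfter = sigmoid(wo[0] * h[0] + wo[1] * h[1] + bo);
        System.out.println("output before update: " + y + ", after: " + yAfter);
    }
}
```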

All the problems used in Sections 1-5 are classification problems, a very important task with a wide range of real-world applications. For instance, you can classify customers based on their interest in a certain product category. However, some problems require prediction. Such problems are solved by regression models, and Neural Networks can play the role of a regression method as well. This is exactly what we learn in Section 6 of this course. We start with an intuitive example of doing regression using a single neuron. There is a live demo as well, showing how a neuron plays the role of a regression model. Other topics in this section include linear regression, logistic (non-linear) regression, regression examples and issues, multiple regression, and an MLP with three layers that can solve any type of regression problem.
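As a hint of how a single neuron can act as a regression model, here is an illustrative sketch (not course code) that fits the line y = w*x + b to a few made-up points by gradient descent on a mean-squared-error cost.

```java
// Illustrative sketch: a single neuron with a linear (identity) transfer
// function acting as a linear-regression model y = w*x + b, trained by
// gradient descent. The data points are made up and follow y = 2x + 1.
public class NeuronRegression {
    public static void main(String[] args) {
        double[] xs = {0, 1, 2, 3, 4};
        double[] ys = {1, 3, 5, 7, 9};      // generated from y = 2x + 1
        double w = 0.0, b = 0.0, lr = 0.02;

        for (int epoch = 0; epoch < 5000; epoch++) {
            double gradW = 0, gradB = 0;
            for (int k = 0; k < xs.length; k++) {
                double error = (w * xs[k] + b) - ys[k];  // prediction minus target
                gradW += error * xs[k];
                gradB += error;
            }
            w -= lr * gradW / xs.length;    // average gradient of the MSE cost
            b -= lr * gradB / xs.length;
        }
        System.out.printf("learned model: y = %.2f*x + %.2f%n", w, b); // about 2x + 1
    }
}
```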

The last part of this course covers problem-solving using Neural Networks. We will be using Neuroph, a Java-based program, to see examples of Neural Networks in the areas of hand-written character recognition and image processing. If you have never used Neuroph before, there is nothing to worry about. There are several videos showing you the steps for creating and running projects in Neuroph.

By the end of this course, you will have a comprehensive understanding of Neural Networks and be able to use them easily in your projects. You will also be able to analyze, tune, and improve the performance of Neural Networks for your own projects.

Does this course suit you?

This course is an introduction to Neural Networks, so you need absolutely no prior knowledge of Artificial Intelligence or Machine Learning. However, you do need a basic understanding of programming, especially in Java, to easily follow the coding videos. If you just want to learn the mathematical models and the problem-solving process using Neural Networks, you can skip the coding videos.

Who is the instructor?

I am a leading researcher in the field of Machine Learning with expertise in Neural Networks and Optimization. I have more than 150 publications, including 80 journal articles, 3 books, and 20 conference papers, which have been cited over 13,000 times around the world. As a leading researcher in this field with over 10 years of experience, I have prepared this course to make everything easy for those interested in Machine Learning and Neural Networks. I have also consulted for big companies like Facebook and Google during my career. As a rising-star Udemy instructor with more than 5,000 students and 1,000 5-star reviews, I have designed and developed this course to facilitate the process of learning Neural Networks for those who are interested in this area. You will have my full support throughout your Neural Networks journey in this course.

There is no RISK.

I have some preview videos, so make sure to watch them to see if this course is for you. This course comes with a full 30-day money-back guarantee, which means that if you are not happy after your purchase, you can get a 100% refund, no questions asked.

What are you waiting for?

Enroll now using the “Add to Cart” button on the right and get started today.


What's inside

Syllabus

Preliminaries and Essential Definitions in Artificial Neural Networks

Let's start with a quick and intuitive analogy to see what the purpose of a neuron is.
PLEASE NOTE: In this video, I use a simplified example to explain classification using neural networks. The example of 'boys love gaming and girls love shopping' was based on personal experiences with my partner and was intended purely for illustration. However, I understand that interests vary greatly across individuals, regardless of gender. I value inclusivity and diversity, and this example was not meant to reinforce any stereotypes.


This lesson shows the mathematical model of an artificial neuron.

This lesson shows how we turn the mathematical equations from the last video into a model of a neuron.

This lecture shows how an artificial neuron works in action. I have written a program that allows you to interactively change the weights and bias of a neuron to see how they change the shape of the output line.

This lecture introduces the terminologies in the areas of Machine Learning and Neural Networks.

Perceptron is a neuron with a special transfer or activation function. This lecture shows how to mathematically model a perceptron with more than 2 inputs.

Now you know how a single neuron and a perceptron work with two or more inputs. It is time to learn about their inspiration.

This lecture covers the concepts of training and learning in Neural Networks. You will learn that the problem of training/learning in a Neural Network is to minimize the cost function.

In the last lecture, we realized that we have to minimize the cost function in Neural Networks to classify a data set. There are different cost functions in the field of Neural Networks, and we will learn about the most popular ones in this video.

This video shows you the process of minimizing a cost function using the Gradient Descent algorithm.

To better understand the Gradient Descent algorithm, this video takes you through a numerical example. In this lecture, we focus on finding optimal values for the connection weights.

This lecture shows how to find the optimal values for the biases in Neural Networks using the Gradient Descent algorithm.

The learning rate has a significant impact on the performance of the Gradient Descent algorithm. This video shows the impact of the learning rate and gives some recommendations for choosing a good value for it.

There are several challenges when training Neural Networks. This video discusses the most important ones to be considered when solving real-world problems.

This lecture takes you through the steps of implementing a Perceptron in Java.

We have been using one transfer function so far for our models. The step transfer function is good for binary classification problems. For other types of problems, we might need different transfer functions. This lecture introduces a wide range of transfer functions.
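For reference, a few commonly used transfer functions look like this in code (an illustrative sketch; the course presents its own selection of functions):

```java
// Illustrative sketch: common transfer (activation) functions. The step
// function suits binary classification with a perceptron; smooth functions
// such as the sigmoid are needed for gradient-based training.
public class TransferFunctions {
    static double step(double x)    { return x >= 0 ? 1.0 : 0.0; }
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }  // output in (0, 1)
    static double tanh(double x)    { return Math.tanh(x); }                // output in (-1, 1)
    static double relu(double x)    { return Math.max(0.0, x); }            // common in deep networks

    public static void main(String[] args) {
        double net = 0.8;  // an example weighted-sum value
        System.out.println(step(net) + " " + sigmoid(net) + " " + tanh(net) + " " + relu(net));
    }
}
```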

After mastering the Perceptron and the process of training it, it is time to see how to make a network of neurons. Yes, this is what we call a Neural Network. We see what happens when we add one more neuron.

In this lecture, we will learn the impact of adding a new layer in a Multi-Layer Perceptron (MLP).

This lecture shows you the impact of changing the weights and biases of an MLP on the shape of its output.

In the MLP model, we can add as many layers as we want, but the question is: what happens when we include more layers? Let's find out the answer by watching this video.

The Backpropagation (BP) algorithm is a gradient-based algorithm for training MLPs. This video takes you through the theory and steps of the algorithm.

Momentum is an additional parameter in the BP algorithm alongside the learning rate. It helps BP jump out of locally optimal solutions. In this video, we see its impact on the performance of BP.
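A rough sketch of the momentum update (illustrative, not course code): a velocity term accumulates past gradients, and the weight moves by that velocity rather than by the raw gradient step, which smooths the search and helps it roll past small local optima.

```java
// Illustrative sketch: the momentum variant of the weight update, applied
// to the same toy cost C(w) = (w - 3)^2 used in the earlier gradient
// descent sketch. All values are made up for illustration.
public class MomentumUpdate {
    public static void main(String[] args) {
        double w = 0.0;
        double velocity = 0.0;
        double lr = 0.1, momentum = 0.9;
        for (int iter = 0; iter < 100; iter++) {
            double gradient = 2 * (w - 3);               // dC/dw
            velocity = momentum * velocity - lr * gradient;
            w += velocity;                               // plain BP would use w -= lr * gradient
        }
        System.out.println("w: " + w);  // close to 3.0
    }
}
```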

This lecture takes you through the process of solving regression and prediction problems using MLPs. We will be learning about both linear and logistic regression using MLPs.

This video shows you a live example of an MLP designed for doing regression. You will learn about the impact of the connection weights and biases on the output shape of MLPs.

This lesson covers examples and issues in the process of doing regression using MLPs.

In the previous video, we learned about regression problems with one independent variable, which require an MLP with one input in the first layer. But what if we have more than one independent variable? This video answers that question.

Do you want to see a live demo of how an MLP solves problems that require multiple regression? Well, let's watch this video then.

There is a theorem in the field of Neural Networks known as the Universal Approximation Theorem, which establishes MLPs as universal approximators. This video is about this theorem.

This video shows the Neuroph website and its user interface.

This lesson covers the steps to create, train, and test an artificial neuron in Neuroph.

This lesson shows the steps of designing, training, and testing MLPs in Neuroph.
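For readers who want a preview of what Neuroph code looks like outside the GUI, here is a rough sketch of training an MLP on the XOR problem. It assumes the Neuroph 2.x Java API (MultiLayerPerceptron, DataSet, DataSetRow, TransferFunctionType); check the Neuroph documentation and sample projects for the exact classes and signatures in your version.

```java
// Rough sketch (assumed Neuroph 2.x API): train an MLP on XOR.
import org.neuroph.core.data.DataSet;
import org.neuroph.core.data.DataSetRow;
import org.neuroph.nnet.MultiLayerPerceptron;
import org.neuroph.util.TransferFunctionType;

public class NeurophXorExample {
    public static void main(String[] args) {
        // Training data: 2 inputs, 1 output
        DataSet trainingSet = new DataSet(2, 1);
        trainingSet.add(new DataSetRow(new double[]{0, 0}, new double[]{0}));
        trainingSet.add(new DataSetRow(new double[]{0, 1}, new double[]{1}));
        trainingSet.add(new DataSetRow(new double[]{1, 0}, new double[]{1}));
        trainingSet.add(new DataSetRow(new double[]{1, 1}, new double[]{0}));

        // MLP with 2 inputs, 3 hidden sigmoid neurons, 1 output
        MultiLayerPerceptron mlp =
                new MultiLayerPerceptron(TransferFunctionType.SIGMOID, 2, 3, 1);

        mlp.learn(trainingSet);   // trains with the network's default backpropagation-based rule

        mlp.setInput(1, 0);
        mlp.calculate();
        System.out.println("output for (1, 0): " + mlp.getOutput()[0]); // should be close to 1 once trained
    }
}
```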

Neuroph has a large number of sample projects, which are very good for learning. This video shows where to find and how to use them.

Neuroph offers a wide range of visualization methods and is very user-friendly. This video takes you through the steps of using one of the visualization tools to see the output of an MLP.

This lesson shows the process of hand-written character recognition in Neuroph.

In this lesson, you will learn how to recognize images using MLPs in Neuroph.

Download my book on NNs below.

Traffic lights

Read about what's good, what should give you pause, and possible dealbreakers:
Covers the mathematical models behind neural networks, which is essential for data scientists to tune, tweak, and design new networks for different projects
Starts with intuitive examples and visual aids to explain the fundamental mathematical models of neural networks, making it accessible even for those who dislike math
Explores different types of problems that can be solved with neural networks, including classification, regression, and prediction, with several examples for practice
Includes coding videos that demonstrate the implementation of neural networks, requiring a basic understanding of programming, especially in Java, to follow along
Uses Neuroph, a Java-based program, to provide examples of neural networks in areas like hand-character recognition and image processing, offering practical problem-solving experience
Taught by an experienced researcher in machine learning with expertise in neural networks and optimization, offering insights from over 150 publications and practical experience


Reviews summary

Foundational neural network concepts

According to learners, this course offers a clear and intuitive introduction to the fundamental mathematical models and core concepts of Neural Networks and Deep Learning. Students appreciated the step-by-step explanations of topics like perceptrons, MLPs, gradient descent, and backpropagation. However, some note that while the theoretical grounding is strong, the practical application relies on Neuroph, a Java-based tool, which may feel outdated or less relevant compared to popular Python frameworks like TensorFlow or PyTorch for many current industry applications. The course is often described as ideal for beginners seeking to understand *how* NNs work, less so for those needing immediate production-level coding skills in modern libraries.
Strong theoretical basis; less code-heavy.
"This course is excellent for understanding the 'why' and 'how' of NNs from a theoretical standpoint."
"I appreciate the focus on the underlying mathematical models rather than just using libraries blindly."
"If you want a deep dive into the concepts and math, this is a great place to start."
"It provided the necessary theoretical background before diving into any practical application."
Instructor is knowledgeable and clear.
"The instructor is very knowledgeable and explains complex ideas clearly."
"You can tell the instructor is an expert in the field."
"Appreciated the instructor's enthusiasm and effort to make the math intuitive."
"The lecturer's pace and style were well-suited for a beginner's course."
Accessible even with no prior AI/ML.
"As someone completely new to AI, this course was a perfect starting point."
"Requires no prior knowledge of neural networks, exactly as advertised."
"It builds understanding from the ground up, making complex topics approachable for newcomers."
"A very gentle introduction to a potentially intimidating subject area."
Explains core NN concepts intuitively.
"The explanations of perceptrons, MLPs, and backpropagation were very clear and easy to follow."
"I finally understand the math behind neural networks thanks to the instructor's intuitive approach."
"This course provides a solid introduction to the basic building blocks and learning processes of NNs."
"The way the gradient descent algorithm was broken down numerically made it much easier to grasp."
Uses Neuroph, which can feel outdated.
"While Neuroph is simple to use, I wish the course included examples in Python with TensorFlow or PyTorch."
"Using a Java-based tool like Neuroph feels less relevant for current data science jobs."
"The Neuroph section felt a bit disconnected from the core theoretical content and less practical."
"It would be much more valuable with examples in a more widely used framework."

Activities

Be better prepared before your course. Deepen your understanding during and after it. Supplement your coursework and achieve mastery of the topics covered in Introduction to Artificial Neural Network and Deep Learning with these activities:
Review Linear Algebra Fundamentals
Strengthen your understanding of linear algebra concepts, which are foundational to understanding the mathematical underpinnings of neural networks.
Browse courses on Linear Algebra
Show steps
  • Review matrix operations such as addition, subtraction, and multiplication.
  • Practice solving systems of linear equations.
  • Understand vector spaces and linear transformations.
Brush Up on Calculus Concepts
Revisit key calculus concepts like derivatives and gradient descent, essential for understanding how neural networks learn.
Browse courses on Calculus
Show steps
  • Review the concept of derivatives and their applications.
  • Understand the gradient descent algorithm and its role in optimization.
  • Practice calculating derivatives of simple functions.
Read 'Neural Networks and Deep Learning' by Michael Nielsen
Supplement your learning with a widely respected book that provides a strong foundation in neural networks.
View Neural Networks and Deep Learning on Amazon
Show steps
  • Read the chapters on backpropagation and convolutional neural networks.
  • Work through the examples and exercises provided in the book.
Follow TensorFlow Tutorials
Gain hands-on experience by working through TensorFlow tutorials, a popular deep learning framework.
Show steps
  • Complete the TensorFlow tutorials on building basic neural networks.
  • Experiment with different network architectures and hyperparameters.
  • Try implementing a simple image classification model.
Build a Simple Image Classifier
Apply your knowledge by building a project that classifies images using a neural network.
Show steps
  • Gather a dataset of images for classification.
  • Design and train a convolutional neural network (CNN) model.
  • Evaluate the performance of your model on a test dataset.
  • Fine-tune the model to improve accuracy.
Write a Blog Post on Backpropagation
Solidify your understanding of backpropagation by explaining it in your own words in a blog post.
Show steps
  • Research and gather information on the backpropagation algorithm.
  • Write a clear and concise explanation of the algorithm.
  • Include diagrams and examples to illustrate the concepts.
  • Publish your blog post online.
Read 'Deep Learning' by Goodfellow, Bengio, and Courville
Expand your knowledge with a comprehensive textbook that delves into the theoretical aspects of deep learning.
View Deep Learning on Amazon
Show steps
  • Read the chapters on advanced topics such as recurrent neural networks and generative models.
  • Study the mathematical derivations and proofs presented in the book.

Career center

Learners who complete Introduction to Artificial Neural Network and Deep Learning will develop knowledge and skills that may be useful to these careers:
Machine Learning Engineer
A Machine Learning Engineer develops and implements machine learning algorithms. This course is designed to help develop a foundational understanding of neural networks, an important area in machine learning. The course covers the mathematical models behind neural networks, training, and optimization, which are essential for a machine learning engineer to efficiently solve problems. The course also covers problem-solving using neural networks, including classification, regression, and prediction. A future Machine Learning Engineer should enroll in this course to understand neural networks in detail.
Deep Learning Engineer
Deep Learning Engineers design, build, and train deep learning models. This course may be useful in helping develop a comprehensive understanding of neural networks and their applications. The course covers both Multi Layer Perceptrons and Deep Neural Networks, and also covers the Backpropagation Algorithm. By the end of the course, a Deep Learning Engineer will hopefully be able to analyze, tune, and improve the performance of Neural Networks based on project requirements.
Data Scientist
Data Scientists analyze large datasets to extract meaningful insights. Many data scientists use neural networks, and this course on artificial neural networks and deep learning may be particularly helpful. The course helps develop an understanding of the internal structure of neural networks, which is crucial for tuning, tweaking, and even designing new neural networks. The course also covers different types of problems to solve with neural networks, including classification and regression. A Data Scientist may benefit from this course to improve their expertise.
AI Application Developer
AI Application Developers create AI-powered applications. This course may be directly relevant to this career, helping provide a solid understanding of neural networks. The course covers the fundamental principles, mathematical models, and practical implementation of neural networks. The course includes hands-on experience using Neuroph, a Java-based program, to solve problems in areas such as hand-written character recognition and image processing.
Artificial Intelligence Researcher
An Artificial Intelligence Researcher explores new approaches to AI. This course on artificial neural networks may be very useful in understanding the underlying functionality and mathematics that drive neural networks. The course covers the mathematical models behind neural networks, training, and optimization. The course also covers the Backpropagation algorithm used to train multi layer perceptrons. An Artificial Intelligence Researcher who takes this course may benefit from a practical understanding of neural networks.
Computer Vision Engineer
Computer Vision Engineers develop algorithms that allow computers to "see" and interpret images. Since neural networks are essential in computer vision, this course may be particularly relevant for understanding the use of Multi Layer Perceptrons to recognize images. The course includes a section covering hand-written character recognition and image processing. The course also covers various aspects of neural networks, including the mathematical models and training processes.
Natural Language Processing Engineer
Natural Language Processing Engineers develop algorithms that allow computers to understand and generate human language. Neural networks are used in natural language processing for tasks like language modeling and machine translation. This course helps provide a foundational understanding of neural networks, including their mathematical models and training processes. The course also covers different types of problems that can be solved with neural networks. A future Natural Language Processing Engineer may find this course useful.
Robotics Engineer
Robotics Engineers design, build, and program robots. Robots often use machine learning algorithms, including neural networks, for tasks such as perception and control. Understanding the mathematical models behind Neural Networks can allow a Robotics Engineer to solve problems more efficiently. This course provides an introduction to neural networks, covering the mathematical models, training processes, and different types of problems that can be solved.
Research Scientist
Research Scientists conduct research in various fields. Those in machine learning or artificial intelligence would study neural networks. This course helps build a foundational knowledge of neural networks, including their mathematical models and applications. The course also covers different types of problems that can be solved with neural networks. A Research Scientist hopefully benefits from this introductory course, and may also need an advanced degree (master's or PhD).
Quantitative Analyst
Quantitative Analysts develop and implement mathematical models for financial analysis and risk management. Neural networks can be used in quantitative finance for tasks such as time series forecasting and algorithmic trading. This course helps develop a comprehensive understanding of neural networks, including their mathematical models and training processes. The course also covers regression models that can be used for prediction. A Quantitative Analyst hopefully benefits from this introductory course.
AI Product Manager
An AI Product Manager defines the vision, strategy, and roadmap for AI products. The course dives into the mathematical principles and the core components of neural networks, including the perceptron and the multi-layer perceptron. The course helps provide a solid foundation for anyone looking to work with AI-powered products. An AI Product Manager who seeks to lead their team to success may find this course helpful.
Data Analyst
Data Analysts interpret data and identify trends. While not always essential, knowledge of neural networks can be valuable for certain types of data analysis. This course may be useful in developing a basic understanding of neural networks and their applications in areas like classification and prediction. The course covers the mathematical models behind neural networks, as well as the training processes. A Data Analyst who wishes to expand their skill set may want to take this course.
Software Developer
A Software Developer writes and maintains code for software applications. While not always directly related, knowledge of neural networks can be beneficial for software developers working on AI-related projects. This course may be useful in providing an introduction to neural networks and their applications. A Software Developer who wants to learn about Artificial Intelligence and Machine Learning may find this course a helpful starting point.
Business Intelligence Analyst
Business Intelligence Analysts analyze business data to identify trends and insights. While neural networks are not always essential for this role, they can be used for advanced analytics and prediction. This course helps build a foundational knowledge of neural networks, including their mathematical models and applications. The course also covers different types of problems that can be solved with neural networks. A Business Intelligence Analyst hopefully benefits from this introductory course.
Data Engineer
Data Engineers build and maintain the infrastructure for data storage and processing. While they may not directly work with neural networks, understanding machine learning concepts can be beneficial. This course may be helpful in providing an overview of neural networks and their applications. It covers the mathematical models behind neural networks and training processes. A Data Engineer who wants to broaden their knowledge may benefit from this course.

Reading list

We've selected two books that we think will supplement your learning. Use these to develop background knowledge, enrich your coursework, and gain a deeper understanding of the topics covered in Introduction to Artificial Neural Network and Deep Learning.
Provides a comprehensive and theoretical treatment of deep learning. It covers a wide range of topics, including convolutional neural networks, recurrent neural networks, and generative models. It is more suitable for those who want a deeper understanding of the mathematical foundations of deep learning. This book is often used as a textbook in graduate-level courses.


Our mission

OpenCourser helps millions of learners each year. People visit us to learn workspace skills, ace their exams, and nurture their curiosity.

Our extensive catalog contains over 50,000 courses and twice as many books. Browse by search, by topic, or even by career interests. We'll match you to the right resources quickly.

Find this site helpful? Tell a friend about us.

Affiliate disclosure

We're supported by our community of learners. When you purchase or subscribe to courses and programs or purchase books, we may earn a commission from our partners.

Your purchases help us maintain our catalog and keep our servers humming without ads.

Thank you for supporting OpenCourser.

© 2016 - 2025 OpenCourser