Prof. Dr. Bastian Leibe and Christian Schmidt M.Sc.

Artificial neural networks form the foundation of modern AI systems. “Deep Learning” offers participants a comprehensive introduction to the core principles and fundamental building blocks used in today’s neural networks. The course covers the most important types of neural networks, such as MLPs, CNNs, RNNs, and Transformers, as well as practical techniques for efficient training and the reuse of large pre-trained models.

Throughout the course, students will gain a robust understanding of the general training process and the key differences between network types, as well as practical knowledge through hands-on programming exercises.

By the end of the course, students will be equipped with the knowledge and skills to understand, train, and apply deep neural networks to a variety of problems, laying a strong foundation for advanced exploration of the field.

What's inside

Learning objectives

  • Multi-layer perceptrons
  • Efficient optimization methods
  • Convolutional neural networks
  • Recurrent neural networks
  • Attention & transformers
  • Large-scale learning & efficient fine-tuning

Syllabus

Week 1 – Introduction to Deep Learning
In the first week, we will give an overview of the history of deep learning, covering important milestones and the factors behind the rapid progress the field has experienced in recent years. In addition to this overview, you will learn about the essentials of neural network training: multi-layer perceptrons, activation functions, error functions, and the backpropagation algorithm.
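To give a flavour of what these essentials look like in code, here is a minimal sketch of a two-layer perceptron trained with backpropagation in plain NumPy. It is an illustrative example rather than course material; the layer sizes, learning rate, and toy dataset are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 points with 2 features; the label is the sign of their product.
X = rng.normal(size=(100, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Two-layer MLP: 2 -> 16 -> 1, with a sigmoid output for binary classification.
W1 = rng.normal(scale=0.5, size=(2, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)          # hidden activations
    y_hat = sigmoid(h @ W2 + b2)      # predicted probabilities

    # Binary cross-entropy error function.
    loss = -np.mean(y * np.log(y_hat + 1e-9) + (1 - y) * np.log(1 - y_hat + 1e-9))

    # Backward pass (backpropagation): apply the chain rule layer by layer.
    d_out = (y_hat - y) / len(X)      # gradient at the output pre-activation
    dW2 = h.T @ d_out
    db2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_h
    db1 = d_h.sum(0)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final training loss: {loss:.3f}")
```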
Week 2 – Practical Deep Learning
In week two, we will have a closer look at some of the important practical aspects of training neural networks. In particular, we will cover data preprocessing and weight initialization, adaptive first-order optimization methods, and some of the typical tricks that enable reliable training and good task performance.
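As an illustration of one such adaptive first-order method, here is a small, self-contained sketch of the Adam update rule applied to a single parameter vector. This is not course code; the hyperparameter values are simply the commonly used defaults.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: maintain running averages of the gradient (m) and its square (v)."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0, 3.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 501):
    grad = 2 * w
    w, m, v = adam_step(w, grad, m, v, t, lr=0.05)
print(w)  # close to the minimum at the origin
```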
Week 3 - Convolutional Neural Networks
Week 3 introduces Convolutional Neural Networks (CNNs) as a practical way to process image data with neural networks. We will motivate CNNs as parameter-efficient learnable image filters, present typical CNN operators such as pooling layers, and give an overview of tried-and-tested architectures for image classification. Additionally, we introduce the encoder-decoder architecture, which is not only a popular pattern for tackling dense image processing tasks, but also a generally useful way to map inputs to outputs.
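The sketch below shows what a small CNN built from convolution and pooling layers might look like in PyTorch. The framework choice and layer sizes are illustrative assumptions, not something prescribed by the course.

```python
import torch
import torch.nn as nn

# A small CNN for 32x32 RGB images: stacks of (convolution -> ReLU -> pooling),
# followed by a fully connected classification head.
class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learnable image filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

x = torch.randn(4, 3, 32, 32)   # a batch of 4 random "images"
print(SmallCNN()(x).shape)      # torch.Size([4, 10])
```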
Week 4 - Recurrent Neural Networks
In week 4, we will present network architectures for natural language processing. In particular, we will introduce Recurrent Neural Networks (RNNs) for sequence processing and analyse their training behaviour, uncovering reasons for unstable training. We will also study Long Short-Term Memory (LSTM) networks, which exhibit more stable training behaviour, and the attention mechanism as a way to improve network architectures for sequence-to-sequence processing.
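To make the gating idea concrete, here is a minimal NumPy sketch of a single LSTM time step. The weight shapes and inputs are placeholders; this is an illustration of the mechanism, not the course's own code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step. W maps [x_t, h_prev] to the four gate pre-activations."""
    z = np.concatenate([x_t, h_prev]) @ W + b      # shape: (4 * hidden,)
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
    g = np.tanh(g)                                 # candidate cell update
    c_t = f * c_prev + i * g                       # additive cell-state update
    h_t = o * np.tanh(c_t)                         # new hidden state
    return h_t, c_t

# Run a random 5-step sequence through the cell.
rng = np.random.default_rng(0)
input_dim, hidden = 3, 8
W = rng.normal(scale=0.1, size=(input_dim + hidden, 4 * hidden))
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
for x_t in rng.normal(size=(5, input_dim)):
    h, c = lstm_step(x_t, h, c, W, b)
print(h.shape)  # (8,)
```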
Week 5 - Transformers
In week 5, we will introduce the fundamental architecture behind modern deep neural networks, the Transformer. Starting from a learnable key-value storage mechanism as a motivating example, we present the basic building blocks of the architecture, covering self- and cross-attention, positional encodings, Transformer Encoder and Decoder, as well as three examples of powerful applications of this versatile architecture.
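The core retrieval mechanism can be written down compactly. Below is an illustrative NumPy sketch of single-head scaled dot-product self-attention; the random projection matrices stand in for learned parameters.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of every query to every key
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted sum of the values

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 6, 16, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = [rng.normal(scale=0.1, size=(d_model, d_head)) for _ in range(3)]
print(self_attention(X, Wq, Wk, Wv).shape)  # (6, 8)
```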
Week 6 - Large-Scale Learning
In the final week, we will cover large-scale learning systems. Motivated by empirical findings on the benefits of scaling to large models and large amounts of data (so-called scaling laws), we present the typical pretraining and fine-tuning paradigm. Additionally, we show how these large-scale models enable the easy construction of multi-modal models from existing architectural building blocks.
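A common way this paradigm shows up in practice is freezing a pretrained backbone and training only a small task-specific head. The sketch below assumes a recent PyTorch and torchvision install (with downloadable ResNet-18 weights) and a hypothetical 5-class downstream task; it illustrates the pattern rather than the course's exact approach.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet (weights are downloaded on first use).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all pretrained parameters ...
for p in backbone.parameters():
    p.requires_grad = False

# ... and replace the classification head with a new, trainable layer
# for a hypothetical 5-class downstream task.
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

# Only the new head's parameters are passed to the optimizer.
optimizer = torch.optim.AdamW(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```

Training only the head is cheap; once it works, the backbone layers can optionally be unfrozen and fine-tuned with a lower learning rate.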

Good to know

Know what's good, what to watch for, and possible dealbreakers
  • Provides a comprehensive introduction to core principles and fundamental building blocks used in modern neural networks, making it suitable for those seeking a strong foundation
  • Covers important types of neural networks, including MLPs, CNNs, RNNs, and Transformers, which are essential for understanding modern AI systems
  • Explores practical techniques for efficient training and reuse of large pre-trained models, which are crucial skills for applying deep learning in real-world scenarios
  • Includes hands-on programming exercises, allowing students to gain practical knowledge and apply their understanding of deep neural networks
  • Examines efficient optimization methods, which are essential for training deep neural networks effectively and achieving good performance
  • Discusses large-scale learning systems and the pretraining and fine-tuning paradigm, which are important for working with large models and datasets

Activities

Be better prepared before your course. Deepen your understanding during and after it. Supplement your coursework and achieve mastery of the topics covered in Deep Learning with these activities:
Review Linear Algebra Fundamentals
Solidify your understanding of linear algebra concepts, which are essential for understanding neural network operations and optimization algorithms; a short NumPy snippet for checking your work follows the steps below.
  • Review matrix operations such as addition, multiplication, and transposition.
  • Study vector spaces, linear independence, and basis vectors.
  • Practice solving systems of linear equations.
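If you want to verify your answers as you review, the snippet below shows these operations in NumPy. It is an optional, illustrative aid, not part of the course.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(A + B)   # element-wise addition
print(A @ B)   # matrix multiplication
print(A.T)     # transpose

# Solve the linear system A x = b.
b = np.array([5.0, 10.0])
x = np.linalg.solve(A, b)
print(x, np.allclose(A @ x, b))

# The rank counts the linearly independent columns of A.
print(np.linalg.matrix_rank(A))  # 2 -> the columns form a basis of R^2
```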
Brush up on Calculus
Strengthen your calculus knowledge, focusing on derivatives and gradients, which are crucial for understanding backpropagation and optimization in deep learning; see the finite-difference sketch after the steps below.
  • Review differentiation rules and techniques.
  • Study partial derivatives and the concept of gradients.
  • Practice finding critical points of functions.
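A useful habit when practising is to check hand-derived gradients numerically. The illustrative snippet below approximates partial derivatives with central finite differences and compares them to the analytic gradient of an arbitrary example function.

```python
import numpy as np

def f(w):
    # A simple scalar function of two variables: f(w) = w0^2 + 3*w0*w1.
    return w[0] ** 2 + 3 * w[0] * w[1]

def numerical_gradient(f, w, eps=1e-6):
    """Central differences: df/dw_i ~ (f(w + eps*e_i) - f(w - eps*e_i)) / (2*eps)."""
    grad = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        grad[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return grad

w = np.array([1.0, 2.0])
analytic = np.array([2 * w[0] + 3 * w[1], 3 * w[0]])  # hand-derived partial derivatives
print(numerical_gradient(f, w))  # approximately [8.0, 3.0]
print(analytic)                  # [8.0, 3.0]
```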
Read 'Deep Learning' by Goodfellow, Bengio, and Courville
Supplement your learning with a comprehensive textbook that covers the theoretical foundations and practical applications of deep learning.
  • Read the chapters relevant to the current week's topics.
  • Work through the examples and exercises in the book.
  • Take notes on key concepts and definitions.
Implement Neural Networks from Scratch
Reinforce your understanding of neural networks by implementing them from scratch using a library like NumPy; a sketch of swappable activation functions follows the steps below.
  • Implement a multi-layer perceptron with backpropagation.
  • Experiment with different activation functions and optimization algorithms.
  • Test your implementation on a simple classification task.
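One way to support the "experiment with different activation functions" step is to make the activation pluggable, pairing each function with its derivative so the backpropagation code stays unchanged. The sketch below is an illustration of that idea with placeholder shapes, not a prescribed design.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pluggable activations: each entry pairs the function with its derivative, so you can
# swap activations in a from-scratch MLP without touching the backpropagation code.
ACTIVATIONS = {
    "tanh":    (np.tanh,                      lambda z: 1.0 - np.tanh(z) ** 2),
    "sigmoid": (sigmoid,                      lambda z: sigmoid(z) * (1.0 - sigmoid(z))),
    "relu":    (lambda z: np.maximum(0.0, z), lambda z: (z > 0).astype(float)),
}

def hidden_forward_backward(X, W, b, name, upstream_grad):
    """Forward through one hidden layer, then backpropagate the given upstream gradient."""
    act, act_grad = ACTIVATIONS[name]
    z = X @ W + b
    h = act(z)
    dz = upstream_grad * act_grad(z)   # chain rule through the activation
    return h, X.T @ dz, dz.sum(0)      # activations, dL/dW, dL/db

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
W, b = rng.normal(scale=0.5, size=(4, 8)), np.zeros(8)
upstream = rng.normal(size=(32, 8))    # stand-in gradient flowing back from the next layer
for name in ACTIVATIONS:
    h, dW, db = hidden_forward_backward(X, W, b, name, upstream)
    print(name, h.shape, dW.shape, db.shape)
```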
Read 'Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow' by Aurélien Géron
Gain practical experience with deep learning frameworks by working through the examples and exercises in a hands-on machine learning book.
  • Work through the chapters on neural networks and deep learning.
  • Implement the examples using TensorFlow or Keras.
  • Experiment with different datasets and architectures.
Build an Image Classifier with CNNs
Apply your knowledge of CNNs by building an image classifier using a framework like TensorFlow or PyTorch; a minimal PyTorch starting point follows the steps below.
  • Choose a suitable dataset for image classification (e.g., CIFAR-10).
  • Design and implement a CNN architecture.
  • Train and evaluate your model.
  • Experiment with different hyperparameters and architectures to improve performance.
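As a possible starting point, the sketch below trains a deliberately small CNN on CIFAR-10 using PyTorch and torchvision. The architecture and hyperparameters are placeholder choices to iterate on, not recommendations from the course.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# CIFAR-10: 32x32 colour images in 10 classes (downloaded on first run).
train_set = datasets.CIFAR10(root="data", train=True, download=True,
                             transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

# A deliberately small CNN to get a first baseline.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(2):                  # a couple of epochs for a quick baseline
    correct, total = 0, 0
    for images, labels in train_loader:
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    print(f"epoch {epoch}: train accuracy {correct / total:.2%}")
```

From here you can swap in different architectures, add data augmentation, and evaluate on the held-out test split to compare hyperparameter choices.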
Write a Blog Post on Transformers
Deepen your understanding of Transformers by writing a blog post explaining their architecture and applications.
  • Research the Transformer architecture and its key components.
  • Write a clear and concise explanation of the self-attention mechanism.
  • Discuss the applications of Transformers in natural language processing.
  • Include diagrams and examples to illustrate your points.

Career center

Learners who complete Deep Learning will develop knowledge and skills that may be useful to these careers:
Deep Learning Researcher
A deep learning researcher focuses on the theoretical underpinnings of neural networks and develops new methods for training and applying them. A course on deep learning is directly relevant to this career, especially for those who wish to pursue an advanced degree. This course provides a comprehensive introduction to the core principles and fundamental building blocks of neural networks. Someone working in research would find the discussion of various network types, including MLPs, CNNs, RNNs, and Transformers, particularly useful, as well as the section on large-scale learning systems. This course offers practical techniques for efficient training and the reuse of large pre-trained models. The insights into the training process and the key differences between network types would be directly relevant to a career in research. Understanding, training, and applying deep networks, which are all components of this course, are critical skills for a deep learning researcher.
Computer Vision Engineer
A computer vision engineer develops systems that can 'see' and interpret images, typically using neural networks, and this course provides a strong foundation for this career. The course introduces Convolutional Neural Networks, which are essential for processing image data, and covers CNN operators, pooling layers, and architectures often used in image classification. A computer vision engineer would find the knowledge of encoder-decoder architectures, which can assist in dense image processing tasks, particularly relevant. This course will also help build an understanding of the training process and the key differences between network types, which is especially useful for learning how to optimize a computer vision system. The hands-on exercises are useful for gaining practical experience.
Natural Language Processing Engineer
A natural language processing engineer designs systems that can understand and process human language, and would therefore benefit from the training this course provides on Recurrent Neural Networks and Transformers. This course covers RNNs for sequence processing and analyzes their training behavior. It then moves on to the more stable Long Short-Term Memory networks, as well as the attention mechanism, which will be beneficial to an NLP engineer. Furthermore, it introduces the Transformer architecture, which is a fundamental part of modern NLP systems. Deep learning is a must for those who wish to understand the basics of language processing. The knowledge and skills gained in this course will equip an engineer to work with deep neural networks on language-based problems.
Artificial Intelligence Specialist
An Artificial Intelligence specialist designs and develops AI solutions, and this course will be valuable as it offers a comprehensive introduction to neural networks. The core principles covered in this course will help someone in this role to understand how AI systems are built. The course covers the most important types of neural networks, including MLPs, CNNs, RNNs, and Transformers, providing a specialist with a broad understanding of the field. Furthermore, the practical knowledge gained through hands-on programming will help solidify the concepts for those working in this career. An AI specialist would be well-equipped with the knowledge and skills taught in the course, particularly when it comes to understanding, training, and applying deep neural networks to diverse problems.
Machine Learning Engineer
A machine learning engineer focuses on developing and implementing machine learning models, and this course in deep learning provides a helpful introduction to the core principles of neural networks, which are a central component of many modern machine learning systems. This role requires the ability to understand, train, and apply various network types such as MLPs, CNNs, RNNs, and Transformers, all of which are covered in the course. The practical techniques for efficient training taught in the course are essential for the effective implementation of machine learning models. Understanding the training process and the key differences between network types, along with hands-on programming exercises, will significantly help a machine learning engineer. This course may also be particularly helpful for anyone looking to work with large pre-trained models.
Data Scientist
A data scientist uses data to gain insights, often using machine learning techniques, and this course on deep learning will be helpful as it provides a comprehensive introduction to neural networks. A data scientist will find the knowledge of different network types, and the ability to understand, train, and apply deep neural networks, valuable. The course teaches practical techniques for efficient training, which are essential for the effective application of machine learning models to data. This course may also help a data scientist to work with large pre-trained models where appropriate. By the end of this course, a data scientist will have built a solid foundation for advanced exploration of the field and for applying these techniques to a diverse range of problems.
Image Processing Specialist
An image processing specialist focuses on the manipulation and analysis of images, work that frequently relies on deep learning methods, and this course will provide helpful background. The course's introduction of Convolutional Neural Networks as a method for processing image data will be particularly useful. The coverage of typical CNN operators, pooling layers, and architectures for image classification is relevant to this field. This course also introduces the encoder-decoder architecture, which is useful for dense image processing tasks, and may help a specialist gain practical knowledge through the hands-on programming exercises. This work will involve understanding the differences between network types and efficient training methods.
Robotics Engineer
A robotics engineer designs and develops robots, often using machine learning for perception and control, and a course in deep learning provides an introduction to the core concepts needed in this career. This deep learning course helps the robotics engineer to understand the neural network concepts that are a component of many robotic systems. An engineer in this field will find topics such as MLPs, CNNs, RNNs, and Transformers to be particularly valuable. Robotics design can also make use of the practical techniques for efficient training and the reuse of large pre-trained models that are taught in this course. The hands-on programming exercises and the coverage of key differences between network types provide important background knowledge. This course may help a robotics engineer build a strong foundation for advanced work.
Research Scientist
A research scientist conducts experiments and analyzes results, often with the goal of advancing science or technology, and this course will help those whose research focus coincides with the topic of deep learning. The course provides a comprehensive introduction to the core principles and fundamental building blocks of neural networks. Research scientists can use this knowledge to explore and contribute to theoretical and practical advancements in artificial intelligence. The course covers various types of neural networks such as MLPs, CNNs, RNNs, and Transformers, as well as practical techniques for efficient training and the reuse of large pre-trained models. This course will help a scientist to build a foundation for advanced exploration.
Algorithm Developer
An algorithm developer creates and optimizes algorithms for various applications, and those wishing to develop algorithms in this field may find this course in deep learning helpful. The course provides a comprehensive introduction to neural networks, including MLPs, CNNs, RNNs, and Transformers. The practical techniques for efficient training and the reuse of large pre-trained models will also be beneficial for an algorithm developer. The hands-on programming exercises will help in understanding how the algorithms work in practice. This course will help an algorithm developer who wants to learn about machine learning systems.
Data Analyst
A data analyst examines data to identify trends, often using machine learning techniques, and this course in deep learning may be useful as an introduction to neural networks, which are part of the machine learning skill set. A data analyst may find that the key network types mentioned in the course, such as MLPs, CNNs, RNNs, and Transformers, provide useful background on complex data processing. The course provides practical techniques for efficient training and the reuse of large pre-trained models. An analyst who wishes to incorporate machine learning techniques will find that this course may help with that goal, especially with regards to understanding the general training process and key differences between network types. This course may help with understanding a subset of techniques important to data analysis.
Software Developer
A software developer designs and builds software applications, and this course will help someone who wishes to develop applications related to deep learning. This course provides an important foundation in the core principles and building blocks used in today's neural networks. The course covers the most important types of neural networks, such as MLPs, CNNs, RNNs, and Transformers, as well as efficient training. The hands-on programming exercises are particularly useful for a software developer. As a software developer becomes more familiar with the field, these skills will help them better understand how to train models and integrate them into their software. This course will help a software developer gain valuable knowledge about this important field.
Academic Researcher
An academic researcher investigates a focused subject, usually at a university. Depending on that subject, a course in deep learning can be helpful, as this course provides a comprehensive introduction to the core principles and fundamental building blocks of modern neural networks. The course covers a variety of neural network types, such as MLPs, CNNs, RNNs, and Transformers, and also discusses practical techniques for efficient training and the reuse of large pre-trained models. This course may be helpful for an academic researcher who wishes to delve into machine learning. It is important to note that academic research roles typically require a PhD.
Quantitative Analyst
A quantitative analyst develops and implements mathematical and statistical models, often in the finance sector, and a deep learning course may be valuable for those who wish to apply machine learning to their work. This course will provide a good introduction to the core principles and building blocks of neural networks, as well as training techniques. The course may be particularly helpful for those wishing to learn about MLPs, CNNs, RNNs, and Transformers. A quantitative analyst may want to understand how to work with data and the training process. It is important to note that a typical role as a quantitative analyst often requires an advanced degree.
Bioinformatics Specialist
A bioinformatics specialist uses computational methods to understand biological data, and this deep learning course may be useful for those interested in applying machine learning to this field. A bioinformatics specialist will find the information on neural networks particularly valuable, as these methods can be applied to a wide variety of biological data sets. The course introduces Multi-Layer Perceptrons, Convolutional Neural Networks, Recurrent Neural Networks, and Transformers. This can be a challenging career, and it often requires an advanced degree.

Reading list

We've selected two books that we think will supplement your learning. Use these to develop background knowledge, enrich your coursework, and gain a deeper understanding of the topics covered in Deep Learning.
Goodfellow, Bengio, and Courville's 'Deep Learning' provides a comprehensive overview of deep learning concepts, covering everything from basic neural networks to advanced architectures like CNNs, RNNs, and Transformers. It is widely used as a textbook in deep learning courses and offers a strong theoretical foundation. Reading this book concurrently with the course will greatly enhance your understanding of the material. It serves as an excellent reference for understanding the underlying principles and mathematical foundations of deep learning.
Aurélien Géron's 'Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow' provides a practical introduction to machine learning and deep learning using popular frameworks like Scikit-Learn, Keras, and TensorFlow. It covers a wide range of topics, including neural networks, CNNs, and RNNs, with hands-on examples and exercises. This book is particularly useful for those who prefer a more practical, code-focused approach to learning. It can be used as a reference for implementing and experimenting with different deep learning models.
