Module 2: Convolutional Neural Networks (CNNs)
In module 2, we will discuss Convolutional Neural Networks (CNNs). A CNN, also known as a ConvNet, is a specialized type of deep learning model designed mainly for tasks that require object recognition, such as image classification, detection, and segmentation. In particular, we will discuss the important layers in CNNs, such as convolution and pooling layers. We will also show different CNN applications.
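As a minimal sketch of these layers (in PyTorch; the architecture and layer sizes are illustrative, not taken from the course), a small CNN for image classification might look like this:

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal CNN: two convolution + pooling stages, then a classifier head."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer halves H and W
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes 32x32 inputs

    def forward(self, x):
        x = self.features(x)                 # (N, 32, 8, 8) for 32x32 RGB inputs
        return self.classifier(x.flatten(1))

logits = SmallCNN()(torch.randn(4, 3, 32, 32))  # e.g. a batch of CIFAR-sized images
```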
Module 3: Deep Learning Tips
In module 3, we will provide important practical deep learning tips, including the choice of activation function, adaptive gradient descent methods, regularization, and dropout.
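A minimal sketch of how these tips combine in practice (PyTorch assumed; all hyperparameter values are illustrative):

```python
import torch
import torch.nn as nn

# Illustrative choices only: ReLU as the activation function, dropout for
# regularization, Adam as an adaptive gradient method, and weight decay
# as L2 regularization.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),          # activation choice: ReLU avoids saturating gradients
    nn.Dropout(p=0.5),  # dropout randomly zeroes units during training
    nn.Linear(256, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```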
Module 4: Recurrent Neural Networks (RNNs)
In module 4, we will discuss Recurrent Neural Networks (RNNs), which are used for sequential data. An RNN is a type of neural network in which the output from the previous step is fed as input to the current step. In particular, we will discuss vanilla RNNs and Long Short-Term Memory (LSTM) networks. We will also discuss the learning problems of RNNs, such as vanishing and exploding gradients.
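A minimal sketch of both recurrent variants (PyTorch assumed; the sequence length and feature sizes are illustrative):

```python
import torch
import torch.nn as nn

# Vanilla RNN recurrence: the previous hidden state feeds into the current step,
# i.e. h_t = tanh(W_ih x_t + W_hh h_{t-1} + b).
x = torch.randn(5, 1, 8)          # sequence of 5 steps, batch size 1, 8 features
rnn = nn.RNN(input_size=8, hidden_size=16)
out, h_n = rnn(x)                 # out: hidden state at every step

# LSTM adds a gated cell state to mitigate vanishing gradients over long sequences.
lstm = nn.LSTM(input_size=8, hidden_size=16)
out, (h_n, c_n) = lstm(x)
```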
Module 5: Generative Models (GANs) and Diffusion Models (DMs)
In module 5, we will discuss generative models, in particular Generative Adversarial Networks (GANs) and Diffusion Models (DMs). GANs train a generative model by framing the problem as a supervised learning problem with two sub-models: a generator model that is trained to generate new examples, and a discriminator model that tries to classify examples as either real or fake. DMs define a Markov chain of diffusion steps that slowly add random noise to data, and then learn to reverse the diffusion process to construct desired data samples from the noise.
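A minimal sketch of one GAN training step under this two-sub-model framing (PyTorch assumed; the toy 1-D data and network sizes are illustrative, and the diffusion side is omitted for brevity):

```python
import torch
import torch.nn as nn

# Hypothetical toy setup: 1-D data, tiny MLPs for generator and discriminator.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 1) * 2 + 3   # stand-in for a batch of real data
z = torch.randn(64, 16)             # random noise fed to the generator

# Discriminator step: classify real samples as 1, generated samples as 0.
d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(G(z).detach()), torch.zeros(64, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to fool the discriminator into outputting 1 for fakes.
g_loss = bce(D(G(z)), torch.ones(64, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```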
Module 6: Self-attention and Transformers
In module 6, we will discuss a powerful deep learning model: the transformer. The transformer is a neural network component that can be used to learn useful representations of sequences or sets of data points. Transformers have driven recent advances in natural language processing, computer vision, and spatio-temporal modelling.
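A minimal sketch of the self-attention operation at the heart of the transformer (PyTorch assumed; single head, no masking, and random projection matrices for illustration):

```python
import torch
import torch.nn.functional as F

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence x of shape (n, d)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / k.shape[-1] ** 0.5  # pairwise similarities, scaled by sqrt(d_k)
    weights = F.softmax(scores, dim=-1)    # each position attends to all positions
    return weights @ v                     # weighted sum of value vectors

d = 8
x = torch.randn(5, d)                      # 5 tokens, d-dimensional embeddings
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)        # shape (5, d)
```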
Module 7: Neural Network Compression
In module 7, we will discuss neural network compression. Model compression reduces the size of a neural network without significantly compromising accuracy. This size reduction is important because larger neural networks are difficult to deploy on resource-constrained devices.
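As one illustrative compression technique, here is a minimal magnitude-pruning sketch (PyTorch assumed; pruning is only one of several compression approaches, and the sparsity level is illustrative):

```python
import torch
import torch.nn as nn

def prune_by_magnitude(layer: nn.Linear, sparsity: float = 0.5):
    """Zero out the smallest-magnitude weights in a layer (magnitude pruning)."""
    with torch.no_grad():
        w = layer.weight
        # Threshold below which weights are considered unimportant.
        threshold = w.abs().flatten().kthvalue(int(sparsity * w.numel())).values
        mask = w.abs() > threshold
        w.mul_(mask)  # zero out small weights in place
    return mask

layer = nn.Linear(128, 64)
mask = prune_by_magnitude(layer, sparsity=0.9)
print(f"remaining nonzero weights: {mask.float().mean().item():.0%}")
```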
Module 8: Transfer Learning
In module 8, we will discuss transfer learning. Transfer learning is a machine learning technique that reuses a model developed for one task as the starting point for a model on a new task. In particular, we will discuss fine-tuning, multitask learning, domain adversarial training, and zero-shot learning.
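A minimal fine-tuning sketch (assuming torchvision's pretrained ResNet-18 as the starting model; the backbone choice and target class count are illustrative):

```python
import torch.nn as nn
from torchvision import models

# Reuse a model trained on ImageNet as the starting point for a new task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():
    param.requires_grad = False  # freeze the pretrained feature extractor

# Replace the classification head for a hypothetical 5-class target task.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head's parameters are updated when training on the new task.
trainable = [p for p in model.parameters() if p.requires_grad]
```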
Summative Course Assessment
This module contains the summative course assessment that has been designed to evaluate your understanding of the course material and assess your ability to apply the knowledge you have acquired throughout the course.