Business with Deep Learning & Machine Learning
The second module “Business with Deep Learning & Machine Learning” first focuses on various business considerations based on the changes to come due to DL (Deep Learning) and ML (Machine Learning) technology in the lecture “Business Considerations in the Machine Learning Era.” The following lecture “Business Strategy with Machine Learning & Deep Learning” explains the changes needed to be more successful in business, and provides an example of business strategy modeling based on the three stages of preparation, business modeling, and model rechecking & adaptation. The next lecture “Why is Deep Learning Popular Now?” explains the changes in recent technology and support systems that enable DL systems to perform with amazing speed, accuracy, and reliability. The last lecture “Characteristics of Businesses with DL & ML” first explains the characteristics of DL- and ML-based businesses according to data types, and then introduces DL & ML deployment options, the competitive landscape, and future opportunities.
Deep Learning Computing Systems & Software
The third module “Deep Learning Computing Systems & Software” focuses on the most significant DL (Deep Learning) and ML (Machine Learning) systems and software. Except for the NVIDIA DGX-1, the DL systems and software introduced in this module are not for sale, and therefore may not seem important for business at first glance. In reality, however, the companies that created these systems and software are the true leaders of the future DL and ML business era. Therefore, this module introduces the true state-of-the-art level of DL and ML technology. The first lecture introduces the most popular DL open source software packages, TensorFlow, CNTK (Cognitive Toolkit), Keras, Caffe, and Theano, and their characteristics. Due to their popularity, strong influence, and diverse capabilities, the following lectures introduce the details of Google TensorFlow and Microsoft CNTK. Next, NVIDIA’s supercomputer DGX-1, which has fully integrated customized DL hardware and software, is introduced. In the following lectures, the most interesting competition of human versus machine is introduced in the Google AlphaGo lecture, and in the ILSVRC (ImageNet Large Scale Visual Recognition Challenge) lecture, the results of competitions between cutting-edge DL systems are introduced and the winning performance for each year is compared.
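To give a sense of what working with one of these open-source frameworks looks like, the sketch below defines and trains a tiny classifier with TensorFlow's Keras API. This is a minimal illustration only; the layer sizes and the synthetic data are arbitrary assumptions, not examples taken from the lectures.

```python
# Minimal sketch of the TensorFlow (Keras) API mentioned above.
# Layer sizes and the synthetic data are illustrative assumptions only.
import numpy as np
import tensorflow as tf

# Synthetic two-class data: 200 samples with 4 features each (hypothetical values).
x = np.random.rand(200, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("int32")

# A small two-layer classifier built with the Keras API that ships with TensorFlow.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)
print(model.evaluate(x, y, verbose=0))  # [loss, accuracy] on the training data
```

The other frameworks named above (CNTK, Caffe, Theano) expose similar build-compile-train workflows, which is part of why they spread so quickly.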
Basics of Deep Learning Neural Networks
The module “Basics of Deep Learning Neural Networks” first focuses on explaining the technical differences between AI (Artificial Intelligence), ML (Machine Learning), and DL (Deep Learning) in the first lecture, titled “What is DL (Deep Learning) and ML (Machine Learning).” In addition, the characteristics of CPUs (Central Processing Units) and GPUs (Graphics Processing Units) used in DL, as well as the representative computer performance units FLOPS (FLoating-point Operations Per Second) and IPS (Instructions Per Second), are introduced. Next, in the NN (Neural Network) lecture, the biological neuron (nerve cell) and its signal transfer are introduced, followed by an ANN (Artificial Neural Network) model of a neuron based on a threshold logic unit and soft output activation functions. Then the extended NN technologies that use MLP (Multi-Layer Perceptron), SoftMax, and AutoEncoder are explained. In the last lecture of the module, NN learning based on backpropagation is introduced along with the learning method types, which include supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.
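As a minimal sketch of the two neuron models described above, the code below contrasts a threshold logic unit (hard on/off output) with a soft output activation (a sigmoid). The weights, bias, threshold, and inputs are arbitrary illustrative values, not figures from the lectures.

```python
# A single artificial neuron in two flavors: a threshold logic unit and a
# sigmoid ("soft output") neuron. All numbers are illustrative assumptions.
import numpy as np

def threshold_logic_unit(x, w, threshold):
    """Hard output: fire (1) only if the weighted input sum reaches the threshold."""
    return 1 if np.dot(w, x) >= threshold else 0

def sigmoid_neuron(x, w, b):
    """Soft output: a smooth value between 0 and 1, which backpropagation can use."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

x = np.array([0.5, 0.8])      # example inputs
w = np.array([0.9, -0.4])     # example synaptic weights (weighted sum = 0.13)

print(threshold_logic_unit(x, w, threshold=0.1))  # -> 1, since 0.13 >= 0.1
print(sigmoid_neuron(x, w, b=0.0))                # -> about 0.53
```

Stacking many such soft-output neurons into layers gives the MLP, and adding a SoftMax output layer turns the raw scores into class probabilities.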
Deep Learning with CNN & RNN
The module “Deep Learning with CNN & RNN” focuses on the CNN (Convolutional Neural Network) and RNN (Recurrent Neural Network) technologies that enable DL (Deep Learning). First, the lectures introduce how CNNs used in image/video recognition, recommender systems, natural language processing, and games (like Chess and Go) are made possible through processing in the convolutional layer and feature maps. The lectures also introduce how CNNs use subsampling (pooling), LCN (Local Contrast Normalization), dropout, ensemble, and bagging techniques to become more efficient, reliable, robust, and accurate. Next, the lectures introduce how DL with RNNs is used in speech recognition (as in Apple's Siri, Google’s Voice Search, and Samsung's S Voice), handwriting recognition, sequence data analysis, and program code generation. Then the details of RNN technologies are introduced, which include S2S (Sequence to Sequence) learning, forward RNN, backward RNN, representation techniques, context based projection, and representation with attention. In the last part of the module, the early RNN model, the FRNN (Fully Recurrent NN), and the currently popular RNN model, the LSTM (Long Short-Term Memory), are introduced.
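The sketch below shows, in Keras, what the two network families look like in code: a small CNN with a convolutional layer, pooling (subsampling), and dropout, and a small LSTM-based RNN for sequence data. The layer counts, filter sizes, and input shapes are arbitrary assumptions for illustration, not architectures taken from the lectures.

```python
# Minimal sketches of a CNN and an LSTM-based RNN using TensorFlow's Keras API.
# All layer sizes and input shapes are illustrative assumptions.
import tensorflow as tf

# CNN: the convolutional layer produces feature maps, pooling subsamples them,
# and dropout randomly disables units during training to improve robustness.
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),             # e.g. a small grayscale image
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),               # subsampling (pooling)
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# RNN (LSTM): processes a sequence step by step while keeping a memory state,
# the kind of model used for speech or handwriting recognition.
rnn = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 8)),               # variable-length sequence of 8-dim features
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(10, activation="softmax"),
])

cnn.summary()
rnn.summary()
```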
Deep Learning Project with TensorFlow Playground
The module “Deep Learning Project with TensorFlow Playground” focuses on four NN (Neural Network) design projects, where experience in designing DL (Deep Learning) NNs can be gained using a fun and powerful application called the TensorFlow Playground. The lectures first teach how to use the TensorFlow Playground, followed by guidance on three projects so you can easily build up experience with the TensorFlow Playground system. Then, in Project 4, a “DL NN Design Challenge” is given, where you will need to make the NN “Deeper” by adding hidden layers and neurons to satisfy the classification objectives. The knowledge you obtained in the lectures of Modules 1~5 will be used in these projects.
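TensorFlow Playground itself runs in the browser, so the sketch below only mirrors the idea behind the design challenge in code: start with a shallow classifier, then make it “deeper” by adding hidden layers and neurons until it can separate a non-linear pattern. The synthetic circular data and all layer sizes are assumptions for illustration only.

```python
# Mirrors the "make it deeper" exercise: compare a shallow and a deeper MLP
# on data that is not linearly separable. All sizes and data are illustrative.
import numpy as np
import tensorflow as tf

# Two-class data: points inside a circle (class 1) vs. points outside it (class 0).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(1000, 2)).astype("float32")
y = (np.sqrt((x ** 2).sum(axis=1)) < 0.5).astype("int32")

def build_model(hidden_layers, neurons):
    """Build an MLP with the requested number of hidden layers and neurons per layer."""
    parts = [tf.keras.Input(shape=(2,))]
    for _ in range(hidden_layers):
        parts.append(tf.keras.layers.Dense(neurons, activation="tanh"))
    parts.append(tf.keras.layers.Dense(1, activation="sigmoid"))
    model = tf.keras.Sequential(parts)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

shallow = build_model(hidden_layers=1, neurons=2)   # a small, shallow network
deeper = build_model(hidden_layers=3, neurons=8)    # "deeper": more layers and neurons
shallow.fit(x, y, epochs=30, verbose=0)
deeper.fit(x, y, epochs=30, verbose=0)
print("shallow accuracy:", shallow.evaluate(x, y, verbose=0)[1])
print("deeper accuracy:", deeper.evaluate(x, y, verbose=0)[1])
```

The same trade-off you will explore in the Playground shows up here: adding depth and neurons gives the network more capacity to fit the classification objective, at the cost of longer training.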