What if you could build a character that could learn while it played? Think about the types of gameplay you could develop where the enemies started to outsmart the player. This is what machine learning in games is all about. In this course, we will discover the fascinating world of artificial intelligence beyond the simple stuff and examine the increasingly popular domain of machines that learn to think for themselves.
In this course, Penny introduces the popular machine learning techniques of genetic algorithms and neural networks using her internationally acclaimed teaching style and knowledge from a Ph.D. in game character AI and over 25 years' experience working with games and computer graphics. In addition, she has written two award-winning books on games AI and two other best sellers on Unity game development. Throughout the course you will follow along with hands-on workshops designed to teach you the fundamental machine learning techniques, distilling the mathematics so that the topic becomes accessible to even the greenest of novices.
Learn how to program and work with:
genetic algorithms
neural networks
human player captured training sets
reinforcement learning
Unity's ML-Agent plugin
TensorFlow
Contents and Overview
The course starts with a thorough examination of genetic algorithms that will ease you into one of the simplest machine learning techniques that is capable of extraordinary learning. You'll develop an agent that learns to camouflage, a Flappy Bird inspired application in which the birds learn to make it through a maze and environment-sensing bots that learn to stay on a platform.
Following this, you'll dive right into creating your very own neural network in C# from scratch. With this basic neural network, you will find out how to train behaviour, capture and use human player data to train an agent and teach a bot to drive. In the same section you'll have the Q-learning algorithm explained, before integrating it into your own applications.
By this stage, you'll feel confident with the terminology and techniques used throughout the deep learning community and be ready to tackle Unity's experimental ML-Agents. Together with TensorFlow, you'll be throwing agents in at the deep end and reinforcing their knowledge to stay alive in a variety of game environment scenarios.
By the end of the course, you'll have a solid toolset of fundamental machine learning algorithms and applications that will let you decipher the latest research publications and integrate new developments into your work, while keeping abreast of Unity's ML-Agents as they evolve from experimental to production release.
What students are saying about this course:
Absolutely the best beginner to Advanced course for Neural Networks/ Machine Learning if you are a game developer that uses C# and Unity.
A perfect course with great math examples and demonstration of the TensorFlow power inside Unity. After this course, you will get the strong basic background in the Machine Learning.
The instructor is very engaging and knowledgeable. I started learning from the first lesson and it never stopped. If you are interested in Machine Learning , take this course.
This lecture is a welcome to the course and introduction to the instructor and an overview of the course content.
In this lecture students will learn about the types of learning models implemented in machine learning.
This article provides guidance on how to study this course.
This article addresses common general questions about my courses.
Here's an overview of machine learning I gave at the Unity 2019 Conference in Sydney. It's a brief overview of the domain without the jargon or mathematics.
Genetic Algorithms are one technique classified under the larger umbrella of evolutionary computing. In this domain, researchers use biological systems as the basis for designing code. Genetic algorithms are simple in design but are capable of producing extraordinary learned behaviours.
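The evolutionary loop behind a genetic algorithm (evaluate fitness, select the fittest, crossover, mutate) fits in a few lines. The course builds its examples in C# inside Unity; the following is only a minimal, language-agnostic sketch in Python, with a made-up RGB "favourite colour" as the target. All names and parameter values here are illustrative, not the course's.

```python
import random

TARGET = [0.2, 0.6, 0.9]          # hypothetical "favourite colour" as RGB in [0, 1]
POP_SIZE, MUTATION_RATE = 20, 0.1

def fitness(genes):
    # Higher fitness = closer to the target colour (negated squared distance).
    return -sum((g - t) ** 2 for g, t in zip(genes, TARGET))

def breed(a, b):
    # Single-point crossover followed by per-gene mutation.
    cut = random.randrange(1, len(a))
    child = a[:cut] + b[cut:]
    return [random.random() if random.random() < MUTATION_RATE else g
            for g in child]

random.seed(42)
population = [[random.random() for _ in range(3)] for _ in range(POP_SIZE)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]       # keep the fittest half (elitism)
    population = parents + [breed(random.choice(parents), random.choice(parents))
                            for _ in range(POP_SIZE - len(parents))]

best = max(population, key=fitness)
print([round(g, 2) for g in best])             # should be close to TARGET
```

Because the fittest half survives unchanged each generation, the best fitness never decreases; mutation supplies the variation that lets children occasionally beat their parents.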
In this lecture we begin creating a very simple genetic algorithm that will learn a player's colour preference.
This lecture completes the code for producing your first genetic algorithm application that will breed a set of sprites set to your favourite colour.
Test your knowledge of Genetic Algorithms.
The values stored in the genes can be used to code anything from colour, to movement, to speed. In this lecture we will examine how we can use a single gene to control movement and teach a population to walk along a beam.
In this lecture we will finish the movement strategy example of implementing a genetic algorithm to determine the best way to move to stay alive the longest.
Modify the single-gene example code from the previous lectures so that, instead of testing fitness on how long each bot survives, you test for distance travelled. The result should be a population that prefers to walk along the beam, getting as far from its starting position as possible without falling off.
This article is a short note about issues with Unity version compatibility with solution package files from the course.
In these next few lectures we will build a new genetic algorithm series that can train a group of bots to stay on a platform by teaching them when to turn and when to move forward.
In this lecture we finish the first phase of the genetic algorithm that trains bots to stay on a platform. We then make a few tweaks to how it senses the environment and discuss some improvements.
In this lecture we will finalise the training for the genetic algorithm before adding the Ethan third-person character into the scene to replace the capsule bot.
In this challenge you will be asked to create a genetic algorithm to traverse a maze. The video shows you the initial setup of the environment and gives you the opportunity to pause and build the application yourself before one solution is given.
This lecture concludes stepping through the solution of the maze walking challenge.
In this lecture we will explore genetic algorithms further by creating a longer gene sequence and use it to train 2D birds to get through an obstacle course.
In this second half we will complete the application by setting up the bird prefabs and adding the PopulationManager.
Additional resources to help expand your knowledge of Genetic Algorithms beyond the scope of the course.
A perceptron is the smallest functioning unit of a neural network. However, by itself it can still produce some stunning results. Development of this fundamental algorithm will introduce students to the nature of neural nets and how they function.
Produce a line by line perceptron using a spreadsheet.
In this workshop students will follow along in Unity to create a perceptron in C#.
An exercise focused on improving your knowledge of perceptrons.
In this lecture you will learn how the weights are used by the perceptron to define a decision boundary that helps it classify inputs.
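The weights and bias of a trained perceptron define the line (the decision boundary) that separates one class of inputs from the other. The course implements this in C#; below is a minimal Python sketch of the classic perceptron update rule on a toy, hypothetical dataset where the true boundary is x + y = 1.

```python
# Toy linearly separable data: label 1 when x + y > 1, else 0.
# After training, the weights and bias define the decision boundary
# w[0]*x + w[1]*y + bias = 0.
data = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.3, 0.3), 0),
        ((0.8, 0.9), 1), ((0.2, 0.7), 0), ((0.7, 0.6), 1)]

w = [0.0, 0.0]
bias = 0.0
LEARNING_RATE = 0.1

def predict(x):
    activation = w[0] * x[0] + w[1] * x[1] + bias
    return 1 if activation > 0 else 0          # step activation function

# Perceptron learning rule: nudge the weights by error * input.
# 1000 epochs is far more than this tiny set needs to converge.
for epoch in range(1000):
    for x, label in data:
        error = label - predict(x)
        w[0] += LEARNING_RATE * error * x[0]
        w[1] += LEARNING_RATE * error * x[1]
        bias += LEARNING_RATE * error

print([predict(x) for x, _ in data])           # → [0, 1, 0, 1, 0, 1]
```

Because the data is linearly separable, the perceptron convergence theorem guarantees the rule eventually stops making mistakes.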
In this lecture you will create a perceptron to act as the brain of an NPC as you teach it to dodge balls.
After a perceptron is trained, all its 'knowledge' is contained in the weights. By saving these final weights you are essentially saving the artificial brain. The saved values can be reloaded to create an instantly trained perceptron.
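The save-and-reload idea can be shown in a few lines. In Unity you would typically write the weights to PlayerPrefs or a file; this is just an illustrative Python sketch using JSON, and the weight values are invented for the example.

```python
import json

# Hypothetical trained perceptron weights (illustrative values only).
trained = {"weights": [0.42, -1.3], "bias": 0.7}

# Saving the weights is saving the artificial "brain"...
saved = json.dumps(trained)

# ...and reloading them yields an instantly trained perceptron.
brain = json.loads(saved)

def predict(x, w, b):
    # Step-activation perceptron using whichever weights are supplied.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# The reloaded brain classifies exactly as the original did.
sample = [1.0, 0.5]
print(predict(sample, brain["weights"], brain["bias"]))
```

No retraining is needed: the weights are the entirety of what the perceptron learned.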
This lecture provides a brief overview of artificial neural networks along with their architecture and uses.
In this lecture we will begin to program our own artificial neural network from scratch.
In the second part of this workshop to build a neural network we will finish creating the code and give it some training examples.
Having programmed an Artificial Neural Network, we will now put it through its paces and discuss training variables.
This lecture addresses three of the most frequently asked questions in neural network development:
1) what activation function should I use;
2) how many layers do I need, and
3) how many neurons do I need?
Once you've got your neural network code set up, it's a simple matter to add and use more activation functions. In this lecture you will learn how to add more activation functions to your code and analyse their usefulness.
Take a look at the Sinusoid, ArcTan and SoftSign. Write the code to program these into your ANN.
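For reference, the three functions from this exercise are one-liners, though your ANN will also need their derivatives for backpropagation. The course expects them in C#; this is a hedged Python sketch of the standard definitions, not the course's solution.

```python
import math

# The three activation functions from the exercise.
def sinusoid(x):
    return math.sin(x)

def arctan(x):
    return math.atan(x)

def softsign(x):
    return x / (1 + abs(x))

# Backpropagation also needs each function's derivative.
def sinusoid_deriv(x):
    return math.cos(x)

def arctan_deriv(x):
    return 1 / (1 + x * x)

def softsign_deriv(x):
    return 1 / (1 + abs(x)) ** 2

print(round(softsign(2.0), 3))   # → 0.667
```

Note that Sinusoid is unbounded in its oscillation while ArcTan and SoftSign both squash large inputs toward a flat asymptote, which affects how quickly gradients vanish.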
Additional resources to help expand your knowledge of Artificial Neural Networks beyond the scope of the course.
In this lecture we will start using the ANN for something game-related, and what better way than to create an NPC that plays Pong?
In the second part of the Pong playing neural network workshop, we will complete the code and examine the NPCs performance.
In the final part of creating an ANN that plays Pong we look at extending the training set by including more complex data that involves reflections.
Extend the game of Pong and add another paddle to act as the other player.
Training a neural network with data gathered from the real world can introduce problems that don't show up in purely academic examples. In these next few videos we will create a simple racing scenario, gather data from the game player's racing, and feed this data into a neural-network-driven player to train it to drive the track.
We continue on from the previous lecture by finishing our capture of player data to use in a neural network training set. We will examine a way to normalise and compress the large amount of collected information into something more suitable for a neural network.
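One common way to squash raw sensor readings into a range a neural network handles well is min-max normalisation. The course's exact scheme may differ; this is a minimal Python sketch of the general idea, with invented raycast distances as stand-in data.

```python
# Min-max normalisation: rescale each value into [0, 1] relative to
# the smallest and largest readings in the batch.
def normalise(values):
    lo, hi = min(values), max(values)
    if hi == lo:                   # guard against a flat signal
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical raycast hit distances captured from the player's kart.
distances = [0.5, 3.2, 7.8, 12.0, 2.1]
result = normalise(distances)
print(result)
```

Keeping inputs on a comparable scale stops one large-magnitude sensor (say, a long raycast) from dominating the weighted sums inside the network.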
Once you've collected the training data from the player you can begin training the neural network. In this lecture we will start writing the script to attach to the ANN driven kart.
This short note discusses how gathering data from different sensor angles isn't an issue.
In this lecture we will complete writing the code required to train an NPC ANN to navigate a racing track circuit given the collected user data as well as discuss some of the nuances of training in complex problem spaces with real data.
This lecture completes the training exercise for the go kart racing scenario by adding in code to load previously trained weights and finishes by examining ways to optimise the trained data even further to get the SSE down.
Deep learning can be achieved through the reinforcement learning technique called Q-Learning. In this lecture we will explore the algorithm based on this theory getting ready to implement it with our own neural network.
This lecture begins our integration of Q-Learning into the existing neural network code. We will examine Q-Learning in this context to train a platform to balance a ball.
In this second part we will continue coding the Brain for the system and work through the integration of the critical Q-Learning algorithm and Bellman's equation to create the reward feedback system.
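The reward-feedback update at the heart of Q-Learning is Q(s,a) ← Q(s,a) + α[r + γ·max Q(s',a') − Q(s,a)]. The course applies it inside a neural network for the ball-balancing platform; the sketch below is instead a tabular Python toy, a tiny one-dimensional corridor invented purely to show the same Bellman update in its simplest form.

```python
import random

# Tabular Q-Learning on a toy 1-D corridor: states 0..4, with a reward
# for reaching state 4. Actions: 0 = left, 1 = right. (A hypothetical
# environment, not the course's scene, but driven by the same update.)
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

random.seed(0)
for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        action = random.randrange(2) if random.random() < EPSILON else \
                 (1 if Q[state][1] >= Q[state][0] else 0)
        nxt, reward = step(state, action)
        # Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

# The learned policy should prefer "right" (1) in every non-goal state.
print([1 if Q[s][1] > Q[s][0] else 0 for s in range(GOAL)])
```

The discount factor γ makes distant rewards worth less than immediate ones, so the learned Q-values fall off geometrically with distance from the goal.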
In the final part of this series we will complete the Brain code and run it to explore how well the platform balances the ball.
Modify the balance beam ANN to create a Flappy Bird that learns to hover.
Additional resources to help expand your knowledge beyond the scope of the course.
In this lecture we will cover setting up the Python and TensorFlow environment essential for training the ML-Agents.
In this video we will take a look at an overview of the ML-Agent Project structure and step through building and training an example project.
This document outlines the changes you'll need to take into consideration in migrating ML-Agents 0.2 to 0.3.
In this article I will address the more common and simple questions raised by students with regards to the ML-Agent use and setup.
Now that you have TensorFlow and the Unity ML-Agents working, it's time to create your own - from scratch. In these next lectures we will take a closer look at what makes the ML-Agents system tick and explore its settings and options.
This lecture completes the development of a simple ML-Agents application and steps through the training process.
A cheat sheet of quick help for working with ML-Agents.
In this lecture we will start developing an agent in a 2D environment that will learn to avoid an enclosing boundary, discuss more ML-Agent settings and examine discrete and continuous actions.
In this lecture we'll complete the avoidance agent by examining the difference between training with continuous and discrete values.
Modify the cat agent developed in the previous lecture to use raycasts to sense the proximity of the border colliders instead of the method currently being used.
In this lecture I'll share with you the top ten things I have learnt using the ML-Agents system, to help you design your agents better and debug issues that may arise.
In this lecture we will revisit the 2D floating cat example, train it to use raycast sensors and throw in some moving dogs to dodge.
In this lecture we will train an agent to reach a goal position.
We will extend the abilities of our agent from the last lecture here and teach it to jump over a wall.
Why this section is deprecated.
This lecture takes students step by step through the setup of TensorFlow.
This article is a step-by-step setup guide for Windows-specific TensorFlow and ML-Agents issues.
This article is a step-by-step setup guide for Mac-specific TensorFlow and ML-Agents issues.
In this lecture we will use a sample Unity application to demonstrate the steps involved in training a working brain with TensorFlow.
A short video with some final words from Penny.
This link provides further information on the courses you can look at taking based on your interests and skill level.