For a long time, web applications were usually a single application that handled everything—in other words, a monolithic application. This monolith handled user authentication, logging, sending email, and everything else. While this is still a popular (and useful) approach, many larger scale applications today tend to break things up into microservices, and most large organizations build their web applications this way, with good reason.
The microservice architecture is an architectural style that structures an application as a loosely coupled collection of smaller services. It allows for the rapid and reliable delivery of large, complex applications. Some of the most common characteristics of a microservice are:
it is maintainable and testable;
it is loosely coupled with other parts of the application;
it can be deployed by itself;
it is organized around business capabilities;
it is often owned by a small team.
In this course, we'll develop a number of small, self-contained, loosely coupled microservices that will communicate with one another and with a simple front-end application using a REST API, RPC, gRPC, and by sending and consuming messages over AMQP, the Advanced Message Queuing Protocol. The microservices we build will include the following functionality:
A Front End service, which just displays web pages;
An Authentication service, with a Postgres database;
A Logging service, with a MongoDB database;
A Listener service, which receives messages from RabbitMQ and acts upon them;
A Broker service, which is an optional single point of entry into the microservice cluster;
A Mail service, which takes a JSON payload, converts it into a formatted email, and sends it out.
All of these services will be written in Go, commonly referred to as Golang, a language which is particularly well suited to building distributed web applications.
We'll also learn how to deploy our distributed application to a Docker Swarm and to Kubernetes, how to scale services up and down as necessary, and how to update individual microservices with little or no downtime.
An overview of Microservices and what we'll cover in this course.
Just a bit of information about me and my background.
Obviously we'll need Go installed on our system, so let's make sure we have it, and that it's the latest version.
If you don't have an IDE, Visual Studio Code will do the job.
Installing make will make our lives easier, so let's take care of that now.
We'll be using Docker extensively in this course, so let's get it installed.
I don't mind helping at all, but make it easy for me to help you.
Mistakes are part of the software development process. I'll make some, and I won't hide them.
Just an overview of our goals for this section: create a front end application, set up a broker service, and make running that service in Docker very simple.
Let's install some starter code, and set up a Workspace in Visual Studio Code.
Let's take a quick run through of the front end source code and see how it works.
Let's get started writing the code for our Broker microservice.
We have to build a Docker image for our Broker service, so let's create a Dockerfile for it, and then get it running in Docker by writing and running a docker-compose.yml file.
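As a rough idea of what that Dockerfile might look like, here is a minimal two-stage build. The binary name and the ./cmd/api source path are assumptions for illustration, not necessarily the exact layout used in the course.

```dockerfile
# build stage: compile the broker binary inside a Go image
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o brokerApp ./cmd/api

# final stage: a small image containing only the compiled binary
FROM alpine:latest
COPY --from=builder /app/brokerApp /app/brokerApp
CMD ["/app/brokerApp"]
```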
Let's go back to the front end application and write the necessary HTML and JavaScript to hit the Broker microservice, just to make sure that everything works as expected.
Let's make it easier to work with JSON by building helper functions to read and write JSON, and one to send an error message back as JSON when things go wrong.
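For a sense of what such helpers look like, here is a minimal sketch; the function names and the jsonResponse shape are illustrative assumptions, not necessarily what the course uses.

```go
package main

import (
	"encoding/json"
	"net/http"
)

// jsonResponse is a simple envelope for every JSON reply we send.
type jsonResponse struct {
	Error   bool   `json:"error"`
	Message string `json:"message"`
	Data    any    `json:"data,omitempty"`
}

// readJSON decodes a JSON body from the request into data.
func readJSON(w http.ResponseWriter, r *http.Request, data any) error {
	return json.NewDecoder(r.Body).Decode(data)
}

// writeJSON marshals data and writes it to the response with the given status code.
func writeJSON(w http.ResponseWriter, status int, data any) error {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	return json.NewEncoder(w).Encode(data)
}

// errorJSON sends an error message back to the client as JSON.
func errorJSON(w http.ResponseWriter, err error, status int) error {
	return writeJSON(w, status, jsonResponse{Error: true, Message: err.Error()})
}
```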
Let's set up a Makefile that will make it simple to bring our Docker images up, take them down, build our front end, build our microservices, and start and stop the front end.
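To illustrate the idea, a fragment of such a Makefile might look like the following; the target names and directory paths are assumptions.

```makefile
## up: start all containers in the background
up:
	docker-compose up -d

## down: stop all containers
down:
	docker-compose down

## build_broker: build the broker binary as a Linux executable
build_broker:
	cd ../broker-service && env GOOS=linux CGO_ENABLED=0 go build -o brokerApp ./cmd/api
```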
Let's get a Makefile up and running for Windows users.
An overview of our goals for this section: implementing an Authentication microservice which is backed by a Postgres database, modifying the Broker service to accept a standard request payload, and making sure that everything works as expected.
Let's get started writing a minimal version of our authentication service.
Let's write the code necessary to connect our Authentication service to a Postgres database.
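A minimal sketch of that connection code is shown below, using database/sql with the pgx stdlib driver; the driver choice, DSN, and host name are assumptions for illustration.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" driver with database/sql
)

// openDB opens a connection pool to Postgres and verifies it with a ping.
func openDB(dsn string) (*sql.DB, error) {
	db, err := sql.Open("pgx", dsn)
	if err != nil {
		return nil, err
	}
	if err := db.Ping(); err != nil {
		return nil, err
	}
	return db, nil
}

func main() {
	dsn := "host=postgres port=5432 user=postgres password=password dbname=users sslmode=disable"
	db, err := openDB(dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	log.Println("connected to Postgres")
}
```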
We need to add the Authentication microservice, as well as Postgres, to our docker-compose.yml file. Let's take care of that now.
Let's put some data in our database, so that we have something to authenticate against.
Our service requires both a route and a handler in order to be useful, so let's write that now.
We need to make some changes to our Broker microservice: first, we have to design a standard JSON format that we'll use for every call to the service; and second, we need to implement a route and handler that will take care of having the broker contact the Authentication service, get a response, process it, and send the appropriate response back to the end user.
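One way to model that standard JSON payload is a single request type with an action field plus optional sub-payloads, as in the sketch below; the field and type names are illustrative assumptions.

```go
// RequestPayload is the single format the broker accepts. Action tells the
// broker what to do, and only the matching sub-payload needs to be populated.
type RequestPayload struct {
	Action string      `json:"action"`
	Auth   AuthPayload `json:"auth,omitempty"`
}

// AuthPayload carries the credentials forwarded to the Authentication service.
type AuthPayload struct {
	Email    string `json:"email"`
	Password string `json:"password"`
}
```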
The moment of truth is upon us: let's update the front end to hit our authentication service through the broker, and see if it works as expected.
Let's get started writing some code for our Logger microservice.
We'll need some code that will allow Go to interact with our Mongo database, so let's get started writing a data package for our logger microservice.
Let's finish up writing the database functions that will allow us to interact with the logs collection in our Mongo database.
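As an example of one such function, here is a sketch of inserting a log entry with the official Go Mongo driver; the database, collection, and struct names are assumptions.

```go
package data

import (
	"context"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
)

// LogEntry is one document in the logs collection.
type LogEntry struct {
	Name      string    `bson:"name" json:"name"`
	Data      string    `bson:"data" json:"data"`
	CreatedAt time.Time `bson:"created_at" json:"created_at"`
}

// Insert writes a single log entry to the "logs" collection in the "logs" database.
func Insert(client *mongo.Client, entry LogEntry) error {
	collection := client.Database("logs").Collection("logs")

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	entry.CreatedAt = time.Now()
	_, err := collection.InsertOne(ctx, entry)
	return err
}
```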
Let's get our Logger microservice to the point where it can accept and process requests.
Let's get a local instance of MongoDB running through Docker, and try to compile and run the logger service.
Let's modify our Makefile and docker-compose.yml files to get the logger-service up and running in Docker.
We need to update the broker to handle requests that log information to MongoDB through the Logger microservice. Let's take care of that now.
Now that we have the Broker microservice updated, let's write some JavaScript on the front end application in order to test things out.
While we're at it, we might as well implement logging in the Authentication microservice, so that we can log authentication requests.
Let's try out our new functionality and make sure that it works as expected.
In this section, we'll set up a mail microservice that allows us to send email from any of our other services, and to use custom email templates when doing so.
We're going to need some kind of mail server to send email through, and Mailhog will do the job. Let's add it to our docker-compose.yml file.
Let's get the basic code in place for our Mailer microservice.
Let's take care of the code that actually sends email. We'll build a system that allows us to send a message with two versions: plain text, and formatted HTML.
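For illustration, here is a rough standard-library sketch of sending a message with both a plain-text and an HTML part as multipart/alternative; the Message type, field names, and the MailHog address are assumptions, and the course may well use a dedicated mail package instead.

```go
package main

import (
	"bytes"
	"fmt"
	"mime/multipart"
	"net/smtp"
	"net/textproto"
)

// Message holds both versions of the email body.
type Message struct {
	From, To, Subject   string
	PlainText, HTMLBody string
}

// send builds a multipart/alternative message and hands it to MailHog over SMTP.
func send(m Message) error {
	var body bytes.Buffer
	writer := multipart.NewWriter(&body)

	// multipart/alternative lets mail clients pick the richest part they support.
	headers := fmt.Sprintf(
		"From: %s\r\nTo: %s\r\nSubject: %s\r\nMIME-Version: 1.0\r\nContent-Type: multipart/alternative; boundary=%s\r\n\r\n",
		m.From, m.To, m.Subject, writer.Boundary())

	plain, _ := writer.CreatePart(textproto.MIMEHeader{"Content-Type": {"text/plain; charset=utf-8"}})
	plain.Write([]byte(m.PlainText))

	html, _ := writer.CreatePart(textproto.MIMEHeader{"Content-Type": {"text/html; charset=utf-8"}})
	html.Write([]byte(m.HTMLBody))
	writer.Close()

	// MailHog listens on port 1025 and needs no authentication.
	return smtp.SendMail("mailhog:1025", nil, m.From, []string{m.To}, []byte(headers+body.String()))
}
```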
In order to take advantage of the Mail and Message types, we'll need to modify main.go, add a route to routes.go, and create a handler. Let's take care of that now.
Let's apply what we've learned. I'm going to ask you to add the new Mail microservice to our docker-compose.yml and our Makefile, and to create the necessary Dockerfile. Give it a try.
Here's how I did the challenge.
Let's modify the Broker microservice to accept a JSON payload which will be sent off to the Mail microservice.
Let's add some JavaScript and a button to the front end, and make sure that everything works as expected.
In this section, we'll build a queue that implements the Advanced Message Queuing Protocol (AMQP), and push events to that queue. We'll also build a Listener microservice that receives events from the queue, consumes them, and calls a microservice based on the content found in that event.
Let's build a stub version of our Listener microservice.
We'll need an instance of RabbitMQ added to our docker-compose.yml file, so let's take care of that now.
Let's update our Listener microservice so that it connects to RabbitMQ.
In order to interact with RabbitMQ, we'll need to write a few functions. Let's take care of that now.
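Two of the typical helpers are sketched below, declaring a topic exchange and an anonymous queue for a consumer; the exchange name "logs_topic" and the amqp091-go package are assumptions.

```go
package event

import amqp "github.com/rabbitmq/amqp091-go"

// declareExchange makes sure the exchange we publish to exists.
func declareExchange(ch *amqp.Channel) error {
	return ch.ExchangeDeclare(
		"logs_topic", // name
		"topic",      // type
		true,         // durable
		false,        // auto-deleted
		false,        // internal
		false,        // no-wait
		nil,          // arguments
	)
}

// declareRandomQueue creates an exclusive, server-named queue for a consumer.
func declareRandomQueue(ch *amqp.Channel) (amqp.Queue, error) {
	return ch.QueueDeclare(
		"",    // an empty name lets RabbitMQ generate one
		false, // durable
		false, // delete when unused
		true,  // exclusive to this connection
		false, // no-wait
		nil,   // arguments
	)
}
```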
Let's write a function that will call the Logger microservice from the Listener microservice when we receive a message from RabbitMQ telling us to do so.
Let's update the URL to RabbitMQ in our Listener service's main.go file.
Let's create a Dockerfile for the listener service, update the Makefile, and bring up our images.
The Broker service has to be able to interact with RabbitMQ before it can emit events to the queue. Let's get started setting that up now.
Our Broker service is going to need some means of pushing (or publishing) events onto the queue, so let's take care of writing that functionality now.
Let's write a new function in the Broker that will push events (emit them) to RabbitMQ.
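A minimal sketch of that push function, again assuming amqp091-go and the hypothetical "logs_topic" exchange, might look like this:

```go
package event

import (
	"context"

	amqp "github.com/rabbitmq/amqp091-go"
)

// Emitter wraps an open connection to RabbitMQ; the name is illustrative.
type Emitter struct {
	connection *amqp.Connection
}

// Push publishes an event to the exchange, using severity as the routing key.
func (e *Emitter) Push(ctx context.Context, event, severity string) error {
	ch, err := e.connection.Channel()
	if err != nil {
		return err
	}
	defer ch.Close()

	return ch.PublishWithContext(ctx,
		"logs_topic", // exchange
		severity,     // routing key
		false,        // mandatory
		false,        // immediate
		amqp.Publishing{
			ContentType: "text/plain",
			Body:        []byte(event),
		},
	)
}
```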
Let's give things a try, and fix any mistakes that I might have made along the way.
In order to handle RPC calls, our Logger microservice has to be able to receive them. Let's get started.
Let's update our Logger microservice to listen for RPC connections.
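The standard library's net/rpc package makes this fairly compact; here is a sketch, where the RPCServer type, its LogInfo method, and port 5001 are assumptions.

```go
package main

import (
	"log"
	"net"
	"net/rpc"
)

// RPCServer is the type whose exported methods become RPC endpoints.
type RPCServer struct{}

// LogInfo is callable remotely as "RPCServer.LogInfo".
func (r *RPCServer) LogInfo(payload string, resp *string) error {
	log.Println("received via RPC:", payload)
	*resp = "Processed payload via RPC"
	return nil
}

// rpcListen registers the receiver and serves each incoming connection.
func rpcListen() error {
	if err := rpc.Register(new(RPCServer)); err != nil {
		return err
	}
	listener, err := net.Listen("tcp", "0.0.0.0:5001")
	if err != nil {
		return err
	}
	for {
		conn, err := listener.Accept()
		if err != nil {
			continue
		}
		go rpc.ServeConn(conn)
	}
}

func main() {
	log.Fatal(rpcListen())
}
```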
Before we can try things out, we'll need to modify the Broker service to send an RPC request to the Logger service. Let's take care of that now.
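The client side is the mirror image of the server sketch above; the host name, port, and method name are the same illustrative assumptions.

```go
package main

import (
	"log"
	"net/rpc"
)

// logViaRPC dials the logger service and calls its exported LogInfo method.
func logViaRPC(entry string) (string, error) {
	client, err := rpc.Dial("tcp", "logger-service:5001")
	if err != nil {
		return "", err
	}
	defer client.Close()

	var result string
	// The name passed to Call must match the type and method registered on the server.
	err = client.Call("RPCServer.LogInfo", entry, &result)
	return result, err
}

func main() {
	reply, err := logViaRPC("hello from the broker")
	if err != nil {
		log.Fatal(err)
	}
	log.Println(reply)
}
```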
Let's try things out, and see if our RPC client and server behave as expected.
An overview of what we are going to cover in this section.
Working with gRPC requires that we have the necessary tooling installed. Let's take care of that now.
The .proto file is at the heart of the gRPC process. It defines the kinds of data we are going to pass around, and it exposes the functions we want to be made available to gRPC. Let's write one for our Logger microservice.
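To make this concrete, a rough sketch of such a file is shown below; the message, service, and package names are illustrative, not the course's exact definitions.

```proto
syntax = "proto3";

package logs;

option go_package = "/logs";

// Log is the data we pass around.
message Log {
  string name = 1;
  string data = 2;
}

message LogRequest {
  Log logEntry = 1;
}

message LogResponse {
  string result = 1;
}

// LogService exposes the one function we want available over gRPC.
service LogService {
  rpc WriteLog(LogRequest) returns (LogResponse);
}
```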
Let's take advantage of the tools we installed and the .proto file we wrote to have gRPC automatically generate some source code for us.
Let's get started writing the code for our gRPC server. We'll start by implementing the one function we exposed in our logs.proto file: WriteLog().
Now that we have the server code in place, let's start listening for gRPC connections.
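Putting those two lectures together, a sketch of the server might look like the following. It assumes code was generated into a "logs" package from a .proto file like the one above, so the import path and generated type names (UnimplementedLogServiceServer, RegisterLogServiceServer) are assumptions that follow from that sketch.

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	"logger-service/logs" // hypothetical path to the generated package
)

// LogServer implements the generated LogServiceServer interface.
type LogServer struct {
	logs.UnimplementedLogServiceServer
}

// WriteLog satisfies the one RPC defined in the .proto sketch.
func (l *LogServer) WriteLog(ctx context.Context, req *logs.LogRequest) (*logs.LogResponse, error) {
	input := req.GetLogEntry()
	log.Println("gRPC log entry:", input.GetName(), input.GetData())
	return &logs.LogResponse{Result: "logged"}, nil
}

// gRPCListen starts the gRPC server on port 50001 (the port is an assumption).
func gRPCListen() {
	lis, err := net.Listen("tcp", ":50001")
	if err != nil {
		log.Fatalf("failed to listen for gRPC: %v", err)
	}

	srv := grpc.NewServer()
	logs.RegisterLogServiceServer(srv, &LogServer{})

	log.Println("gRPC server started on port 50001")
	if err := srv.Serve(lis); err != nil {
		log.Fatalf("failed to serve gRPC: %v", err)
	}
}

func main() {
	gRPCListen()
}
```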
We need to update the Broker microservice to make a gRPC request to the Logger microservice. Let's take care of that now.
In order to try things out, we'll need to add some HTML and JavaScript to the front end. Let's take care of that now.
Let's try things out, and see if our gRPC client and server behave as expected.
An overview of Docker Swarm, and why it is a good alternative to Kubernetes for small teams and individual developers.
In order to take full advantage of Swarm, we'll need to push our Docker images to Docker Hub. Let's take care of building and tagging the images now.
Swarm needs a deployment file, much like a docker-compose.yml file. Let's build one.
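A fragment of what such a deployment file might contain is shown below; the image name, port mapping, and replica count are illustrative.

```yaml
version: "3"

services:
  broker-service:
    image: your-dockerhub-user/broker-service:1.0.0
    deploy:
      mode: replicated
      replicas: 1
    ports:
      - "8080:80"
```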
Let's initialize Docker swarm, and start up our deployment.
Let's start up the front end application, and try hitting our swarm.
One of the great advantages of Docker swarm, or any container orchestration service, is that we can have multiple instances of our services running at the same time. Let's try scaling a few services up and down.
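Scaling is a single command per service; the stack name "myapp" and the service names below are assumptions.

```sh
# scale the listener service to 3 replicas, then back down to 1
docker service scale myapp_listener-service=3
docker service scale myapp_listener-service=1

# see how many replicas of each service are running
docker service ls
```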
When you make an update to your code base for any microservice, you'll need to update the docker image for that service, and then update your Docker swarm. You can actually do that with no downtime in most cases. Let's give it a try.
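The rolling update itself boils down to pushing a new image tag and pointing the service at it; the image tag, Dockerfile name, and service name here are assumptions.

```sh
# build and push the new image version
docker build -t your-dockerhub-user/logger-service:1.0.1 -f logger-service.dockerfile .
docker push your-dockerhub-user/logger-service:1.0.1

# tell the swarm to roll the running service over to the new image
docker service update --image your-dockerhub-user/logger-service:1.0.1 myapp_logger-service
```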
Here is how you can stop, and optionally entirely delete, your Docker swarm.
In order to put everything in our Docker swarm, we'll need to make a few changes to the front end and the Broker, and build a docker image for the front end. Let's get started.
Here is how I solved the challenge.
Let's add our new front end Docker image to the Docker Swarm file as a new microservice.
Right now, we have no means of accessing the front end or the Broker service. Let's fix that by building a custom Caddy Dockerfile, and adding it to the Docker swarm.
Let's add an entry to our local hosts file, and try bringing up the Docker Swarm.
A challenge: let's fix a problem with a URL in our application.
Here's how I solved the challenge.
There are lots of providers out there who offer Virtual Private Servers, including Digital Ocean, Linode, Vultr, and many more. I'm going to use Linode, but the process is nearly identical regardless of the provider you choose. Let's set up two new servers on Linode.