PPL2021 — Docker Orchestration — Tupperware For Your Code

Photo by Dominik Lückmann on Unsplash

Deploying your program to a server is hard, but with Docker, it’s easier

Have you ever wondered how your code runs on the server? I clearly remember the first time I deployed my first project. I had to install Python first, then pip, then all the dependencies, then the database, and if you use NPM libraries for your front end, the list goes on and on. You can’t expect the server to have the same programs and libraries as your own machine, and setting everything up by hand is both time-consuming and resource-consuming.

What is Docker?

Docker is a platform built on container technology. Yes, you heard that right: it’s basically a container, like your mom’s Tupperware. Docker will ease your development pain by bundling all your dependencies and software libraries in one place, so you can deploy your software on any computer (or server) without having to install every dependency again and again, which is exhausting.

But Docker is not a VM (virtual machine). A VM-based cloud must include a whole guest operating system beneath your app, whereas containers on the same host share that host’s operating system and its resources. This is why container-based technology saves a lot of memory on the server and is easier to run on a client machine. It is also why Docker builds image upon image: a Docker image stores your app and libraries as layers on top of a shared operating-system base. See the illustration below for a better picture of container-based vs VM-based.

Why Docker?

The main advantage of using Docker for your project is fewer errors. Like I said before, Docker packages all your libraries and dependencies. There are so many cases where your code runs well on your local machine, but when you deploy the application it becomes a mess because you forgot to declare a dependency in your Gradle file, requirements.txt, XML file, or one of the many other configuration files out there.

Because Docker also packages all your dependencies and libraries, every time another developer wants to test the code, all they have to do is set up the Docker configuration, and it will run without a problem. Less stress, more productivity for you and your team.

Docker Architecture

Source: https://docs.docker.com/get-started/overview/

Docker can be separated into several components: the Docker client, the Docker daemon (engine), Docker containers, Docker images, and the Docker registry. Docker uses a client-server architecture that connects the client and the daemon, and the two communicate with each other over a REST API.

  • Docker Daemon & Docker Client

The Docker daemon and client are the brains of the whole system. The daemon does the heavy lifting on the Docker host: it builds images, runs containers, and manages their dependencies, reusing an existing image when one is available and building a new one when it isn’t. Docker is developer- and community-powered, meaning that if someone has already published a public image that does what you need, you can use it without having to do the same work again. The client (the Docker CLI) is how you talk to the daemon: it sends your commands as API requests and shows you the daemon’s responses.
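As a quick sketch, this is what talking to the daemon through the client looks like on the command line (the image name here is just an example):

```shell
# Check that the client can reach the daemon, and see both versions
docker version

# Ask the daemon to pull a public image from Docker Hub instead of building one
docker pull python:3.8-slim

# List the images the daemon has stored locally
docker images
```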

  • Docker Image

A Docker image is where your library and app will live. You describe how to build it in a file usually called a Dockerfile, which can end by running a startup script (the “magic” start.sh command) that sets up Gunicorn and chooses which port to use. An image bundles your app and its libraries on top of a base operating-system layer, and you can build several Docker containers from one image too, as long as they can share the same operating system. For example, you can’t compose one image for two different apps if the first one only runs on the Linux kernel and the other only runs on the Windows kernel.
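For illustration, a start.sh of the kind mentioned above might look like this. This is a sketch, not our exact script, and the project module name `myproject` is an assumption:

```shell
#!/bin/sh
# start.sh (sketch): launch Gunicorn for a Django project, binding to the
# port given by the environment, with 8000 as a fallback.
exec gunicorn myproject.wsgi:application --bind 0.0.0.0:${PORT:-8000}
```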

  • Docker Container

A container is a running instance of your image, and your “lean” operating-system layer also lives inside it. In practice, you can run multiple containers from images that share the same operating system. Below I attach how we build the container for our project. Because we already leverage Heroku’s PostgreSQL, and our front end uses classic HTML templates and legacy JavaScript, our setup only needs a Python image. Your images will vary with your app and dependencies.

So what happens there? We tell the Docker CLI to download the latest Python 3.8 image. After that, we run our app in a container bound to 0.0.0.0, listening on port 8000 (the default port for Django).
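The steps described above roughly correspond to these two commands (the image tag `ppl-app` is a made-up example):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t ppl-app .

# Run a container from it, mapping port 8000 on the host to port 8000 inside
docker run -p 8000:8000 ppl-app
```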

  • Docker Registry

A Docker registry is a library of Docker images (public images or private repositories). The public registry is called Docker Hub (see https://hub.docker.com/). Docker Hub behaves much like Git version control, because you can pull existing images or push your own image for other people to use.
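The pull/push workflow mirrors working with a Git remote; here is a sketch, where the local image name `ppl-app` and the username `yourname` are placeholders:

```shell
# Pull an existing public image from Docker Hub
docker pull python:3.8-slim

# Tag a local image under your own namespace, then push it for others to use
docker tag ppl-app yourname/ppl-app:latest
docker push yourname/ppl-app:latest
```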

Setting up the environment

Please note that because we are using Heroku as our server, we can’t use docker-compose: Heroku runs containers with its own tooling. Also, because we use Heroku’s PostgreSQL, we don’t have to declare which database port to listen to. But even without docker-compose, we still have to set up the Django environment. Because we already have our requirements written, all the Dockerfile has to do is make sure Python and pip are installed and install from that file.
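A minimal Dockerfile along those lines might look like this. This is a sketch, not our exact file; it only assumes a requirements.txt in the project root:

```dockerfile
# Start from a public Python image, so the container already has Python and pip
FROM python:3.8-slim

WORKDIR /app

# Install dependencies first, so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the rest of the Django project
COPY . .
```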

Deployment

Because Heroku runs the containers itself, all we have to do in Dockerfile.web is set up the environment and make sure the Heroku container runs the usual Django regime: makemigrations, migrate, and then runserver. Below is our Dockerfile.web. Then all you have to do is add a heroku.yml to tell the Heroku server to use this configuration every time it composes a new container for our project.
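As a sketch of what those two files could look like, based on the description above (the exact contents are assumptions):

```dockerfile
# Tail of Dockerfile.web (sketch): run migrations, then start the server on
# the port Heroku injects via $PORT (8000 is Django's local default)
CMD python manage.py makemigrations && \
    python manage.py migrate && \
    python manage.py runserver 0.0.0.0:$PORT
```

```yaml
# heroku.yml (sketch): tell Heroku to build the web dyno from Dockerfile.web
build:
  docker:
    web: Dockerfile.web
```

Remember that the app must be on Heroku’s container stack (`heroku stack:set container`) for heroku.yml to be read at all.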

Don’t forget to update your GitLab config to include your Docker image in the CD phase (see my other article about GitLab CI/CD). Below is an example of our GitLab configuration for the Docker image.
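One common shape for such a job is sketched below. This is an assumption rather than our exact pipeline: it builds the image and pushes it to Heroku’s container registry, and expects `HEROKU_API_KEY` and `HEROKU_APP_NAME` to be set in the GitLab CI/CD variables:

```yaml
deploy:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind            # Docker-in-Docker, so the job can run docker commands
  script:
    - docker login --username=_ --password=$HEROKU_API_KEY registry.heroku.com
    - docker build -f Dockerfile.web -t registry.heroku.com/$HEROKU_APP_NAME/web .
    - docker push registry.heroku.com/$HEROKU_APP_NAME/web
  only:
    - master
```

Depending on your setup, you may also need a release step (for example, `heroku container:release web`) before the new image actually serves traffic.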

And then you’re good to go!

Benefits of using Docker

  • Community Powered — Sure, some of the features require you to pay (Docker’s servers need money to run, after all). But most of the time, if your project is small enough and you don’t need too many images or dependencies, the free Docker plan is good enough for you.
  • Sharing is caring — As I said in the point above, because Docker is community-powered, every time you publish a public image on Docker Hub, other people can use it too.
  • Less Error — The point of using a container is to minimise dependency errors, so your code runs the same on your local computer, on the server, on the user’s device, and on your teammate’s machine.

Undergraduate student majoring in Computer Science