Easy Development with Docker

Michael Susanto
10 min read · Apr 3, 2021
Source: https://www.docker.com/

This article is written as a part of Individual Review of Fasilkom UI’s Software Engineering Project Course 2021.

Containerization

Scenario 1: Imagine you are currently building codebases for your software. Your friend complains to you that your codebases don’t run on their machine. What will you do?

Scenario 2: Imagine you’ve written documentation on how to run your codebases locally. Your friend complains about how difficult it is just to run your multiple codebases on their machine. What would you say?

Docker comes to the rescue!

‘Wrapping’ or ‘packaging’ your codebases into Docker containers and orchestrating them to start and stop with a single command would be a big help to your friends and other developers!

Docker

Overview

Docker is an open platform for ‘wrapping’, ‘packaging’, ‘running’, and ‘delivering’ applications. Docker has changed the way developers write and ship server software, reducing the delay between writing code and running it in production.

Containers vs Virtual Machines (VMs)

Containers vs VMs (Source: https://www.docker.com/blog/containers-replacing-virtual-machines/)

Virtual Machines are an abstraction of physical hardware, turning one server into something that behaves like multiple servers. You can think of it as multiple Operating Systems running inside a single host Operating System (OS), each thinking it runs on its own server.

What’s good about Virtual Machines? We can take full control of them, because they are just like computers inside a computer. But can you imagine how costly it is to run many VMs on a single machine?

On the other hand, a Container is a standardized unit of software. It is not general-purpose like a VM; instead, it packages a specific piece of your application’s functionality. A container packages up code and all of its dependencies so the application runs quickly and reliably from one computing environment to another. Yes, you can run it anywhere without worrying about how to set up your dependencies! Moreover, containers are more flexible, lightweight, and scalable than VMs.

How does it work?

Docker’s Flow (Source: https://docs.docker.com/get-started/overview/)

Docker uses a client-server architecture. Your computer runs the Docker Client (for example, Docker Desktop or Docker Compose), which talks to the Docker Daemon. The Docker Daemon can be on the same host as your Docker Client, or your Docker Client can talk to a remote Docker Daemon.

Through the Docker Client, you tell the Docker Daemon to build, pull, or run something in a container. If the requested image isn’t on your system, the Docker Daemon will look it up in a registry and download it for you (the docker pull command). You can also build your own image from a Dockerfile (the docker build command). Then you can start a container from an image with the docker run command; if you don’t have the image yet, docker run will pull it for you first.

For example, let’s do this together.

  1. Run docker run hello-world
Run hello-world on Docker.

2. Since we don’t have the hello-world image on our machine yet, the Docker Daemon will pull it from the registry.

Pull hello-world image from registry.

3. Docker will create a container from the image for you and run it immediately.

Successfully run hello-world image on docker.

4. You can verify that Docker created a container for you.

Docker created hello-world container before running it.
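
Putting the walkthrough together, the whole example comes down to a few commands (the exact output and IDs will differ on your machine):

# run hello-world; the Docker Daemon pulls the image first if it is not already local
docker run hello-world
# list all containers (including stopped ones) to verify that a container was created
docker ps -a
# list the images that are now stored locally
docker images -a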

Images vs Containers

You may be wondering about the relation between images and containers. Docker Images are read-only files; we can think of them as templates or blueprints for building our containers, and they can’t run outside of a container. To put it simply, we can imagine a Docker Image as an Operating System and a Docker Container as our computer. Just as our Operating System runs on our computer, we run a Docker Image inside a Container.
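
To see this relationship in practice, note that a single read-only image can back many containers. The container names below are just illustrative:

# pull one read-only image and list it
docker pull hello-world
docker images
# start two separate containers from that same image
docker run --name hello-1 hello-world
docker run --name hello-2 hello-world
# both containers appear, each created from the same image
docker ps -a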

Move Docker Images to Another Machines

Can we move an Image used by a Container to another host? The answer is yes. It is just like an Operating System, where we create an .iso file so the OS can be installed on another machine. The simplest way is to use DockerHub: you can create your own images, push them, and pull them from another host to build containers. Let’s see an example:

  • Let’s create a simple image that prints out Hello World through a Dockerfile (a sketch of such a Dockerfile and the surrounding commands appears after this list).
Simple hello world image.
  • Run docker build -t <image_name> . to build our image according to the Dockerfile’s definition.
Build previously Dockerfile into an image called my-image.
Our image is listed in docker images command.
  • Let’s create a container with our own image.
We can see that our container is using my-image
  • Now, let’s push our own image into DockerHub. First, create a repository on DockerHub.
Create repository in DockerHub.
  • Fill in the description for your repository, then click Create.
Fill in fields when creating repository in DockerHub.
  • Now, let’s log in to DockerHub from our machine.
Log in to DockerHub from our terminal.
  • Let’s commit our container’s changes as our custom image.
docker commit <container_id> <dockerhub_username>/<image_name>
  • Push the image into DockerHub
docker push <dockerhub_username>/<image_name>
  • Now we can see that our DockerHub repository already contains my-image.
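
Since the screenshots above are not reproduced here, a minimal sketch of that Dockerfile might look as follows. The base image alpine:3.13 is an assumption for illustration; any small base image that can echo a string would do.

# Dockerfile (sketch): a tiny image that only prints Hello World
FROM alpine:3.13
CMD ["echo", "Hello World"]

Then we build it, run it, and push it under our DockerHub namespace. The container name my-container is also just illustrative; <container_id> and <dockerhub_username> are placeholders you substitute yourself.

# build the image from the Dockerfile in the current directory
docker build -t my-image .
# create and run a container from it
docker run --name my-container my-image
# save the container's state as an image under your DockerHub namespace
docker commit <container_id> <dockerhub_username>/my-image
# push it to your DockerHub repository
docker push <dockerhub_username>/my-image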

Now, from another machine, you can do:

# pull the image
docker pull <dockerhub_username>/<image_name>
docker pull michaelsusanto81/my-image
# check if the image is in our docker images list
docker images -a
Pull our image from DockerHub.
We can see that the image is in our docker images list.

Orchestration with Docker Compose

Imagine you have many images and containers in your system. It would be painful to pull, build, and run them one by one. Docker Compose can save us time for that!

Docker Compose is a tool to define and run multi-container applications. You just write a YAML file to configure each container, then run the docker-compose command against that file. We will see how to configure Docker containers with Docker Compose in the next section.

Implementation in Software Engineering Project Course 2021 Fasilkom UI

Our team, Magic People, orchestrates our project with Docker Compose to configure, build, and run multiple services on our backend project. There are 3 services (Web Server, Nginx, Database) in the backend project to be containerized and orchestrated. Here are the steps:

Setup Dockerfile on Web Server

# Dockerfile
# 1. pull official base image
FROM python:3.8.3-alpine
# 2. set work directory
WORKDIR /usr/src
# 3. set environment variables
# This prevents Python from writing out pyc files
ENV PYTHONDONTWRITEBYTECODE 1
# This keeps Python from buffering stdin/stdout
ENV PYTHONUNBUFFERED 1
# 4. install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev libffi-dev bash
# 5. install Pillow dependencies
RUN apk add jpeg-dev zlib-dev libjpeg
# 6. install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# 7. copy project
COPY . .

The Dockerfile above is used for the Development-level build of our Web Server. Here are the steps:

  1. Our Backend Web Server uses Django Framework, so we will create a container with Python-based image.
  2. We want to change our working directory in the container to /usr/src.
  3. We set some environment variables to configure our Python container.
  4. We will attach a PostgreSQL database to our Django Web Server (via psycopg2), so we need to install some dependencies, such as postgresql-dev, in our container.
  5. Our team’s project works with images, so we need some image-processing (Pillow) dependencies to be installed.
  6. Install Django dependencies.
  7. Copy the current project into our container.

For the Production level, we use a separate multi-stage Dockerfile:

# Dockerfile.prod
###########
# BUILDER #
###########
# 1. pull official base image
FROM python:3.8.3-alpine as builder
# 2. set work directory
WORKDIR /usr/src
# 3. set environment variables
# This prevents Python from writing out pyc files
ENV PYTHONDONTWRITEBYTECODE 1
# This keeps Python from buffering stdin/stdout
ENV PYTHONUNBUFFERED 1
# 4. install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev libffi-dev bash
# 5. upgrade pip
RUN pip install --upgrade pip
COPY . .
# 6. install dependencies
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/wheels -r requirements.txt
#########
# FINAL #
#########
# 1. pull official base image
FROM python:3.8.3-alpine
# 2. create directory for the app user
RUN mkdir -p /home/app
# 3. create the app user
RUN addgroup -S app && adduser -S app -G app
# 4. create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
RUN mkdir $APP_HOME/mediafiles
WORKDIR $APP_HOME
# 5. install dependencies
RUN apk update && apk add libpq postgresql-dev gcc python3-dev musl-dev libffi-dev bash
COPY --from=builder /usr/src/wheels /wheels
COPY --from=builder /usr/src/requirements.txt .
RUN pip install --no-cache /wheels/*
# 6. copy project
COPY . $APP_HOME
# 7. chown all the files to the app user
RUN chown -R app:app $APP_HOME
# 8. change to the app user
USER app

The Dockerfile.prod above is used for the Production-level build of our Web Server. It uses multi-stage builds to reduce the size of the production image: we separate the build process of our project into one stage, then copy the result into a final stage and run it there. Here are the steps:

The builder steps are similar to those in the Development-level Dockerfile. The difference is that we use Python’s wheel packaging to save our built dependencies so they can be copied into the final stage.

The final steps:

  1. Our Backend Web Server uses Django Framework, so we will create a container with Python-based image.
  2. Create a folder in the container to store our project.
  3. The folder from the previous step is owned by root, so we create a new group called app, create a new user called app, and add that user to the app group.
  4. Create mediafiles to store images and staticfiles to store CSS / JavaScript-related files, then switch to our newly created working directory.
  5. Copy and install the dependencies built in the previous stage into the current container (remember that in the previous stage we placed the built wheels in a folder called wheels).
  6. Copy the current project into our project directory in the container.
  7. Change the ownership of the Web Server folder from root to the app user created in step 3, using the chown command.
  8. Switch to the app user created in step 3.

Setup Dockerfile on Nginx

# nginx/Dockerfile
FROM nginx:1.19.0-alpine
# Remove default config of nginx
RUN rm /etc/nginx/conf.d/default.conf
# Replace it with our config
COPY nginx.conf /etc/nginx/conf.d

This Dockerfile is used to configure our nginx service to serve static and media files. In this Dockerfile script, we simply pull the nginx image from the registry, then replace the default configuration with our defined configuration.
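
The nginx.conf itself is not shown in this article, but a minimal sketch of what such a configuration might contain is below. The upstream service name web, port 8000, and the staticfiles / mediafiles paths are assumptions based on the Dockerfiles and the compose setup described later; adjust them to your own project.

# nginx.conf (sketch)
upstream django {
    server web:8000;
}

server {
    listen 80;

    # proxy application requests to the Django web service
    location / {
        proxy_pass http://django;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # serve collected static files directly from the shared volume
    location /static/ {
        alias /home/app/web/staticfiles/;
    }

    # serve uploaded media files directly from the shared volume
    location /media/ {
        alias /home/app/web/mediafiles/;
    }
}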

Orchestrate Web Server, Nginx, and PostgreSQL Database with Docker Compose.

Our team uses two different YAML configurations for Development-level and Production-level of our Web Server.

Development-level

docker-compose.yml for Development-level orchestration.

At the Development level, all of the static and media files are handled by the Django Web Server itself. Hence, we only need to configure two services: db and web.

The db service uses the postgres image to store data, with a persistent volume defined at the bottom of the file. We also need to configure Postgres by providing a database name, database user, and password.

The web service is built with the Development-level Dockerfile we defined previously. After the service is built, we need to run some commands, such as database migrations, and those commands are run by a script called devops/deployment.sh. We tell this service to forward port 8000 on our host to port 8000 in the web service container. Of course, this service depends on db to store data.
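
Because the compose file above is only shown as a screenshot, here is a minimal sketch of what such a Development-level docker-compose.yml might look like. The Postgres image tag, the environment values, and how the deployment script is invoked are assumptions of mine; the service names, the devops/deployment.sh script, port 8000, and the persistent volume come from the description above.

# docker-compose.yml (sketch)
version: "3.8"

services:
  db:
    image: postgres:12-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=app_db
      - POSTGRES_USER=app_user
      - POSTGRES_PASSWORD=app_password

  web:
    build: .
    command: sh devops/deployment.sh
    ports:
      - "8000:8000"
    depends_on:
      - db

volumes:
  postgres_data: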

Production-level

docker-compose.prod.yml for Production-level orchestration.

The Production-level YAML configuration is largely similar to the Development-level configuration, with a few key differences.

We don’t want to expose secrets at the Production level, so we store the database configuration in an env file, which the db service reads.

Our Production-level web service uses the Production-level Dockerfile.prod. We don’t keep static and media files inside the web container in production, so we need to store them separately. To achieve this, we create two volumes, one for static files and one for media files.

We define a new service called nginx to serve static and media files. It uses the nginx image (pulled from the registry via the Dockerfile above) and is served through port 80 on the host.
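
Again only as a sketch, a Production-level docker-compose.prod.yml along these lines would capture the differences described above. The env file name and image tag are assumptions, the volume mount paths follow APP_HOME from Dockerfile.prod, and the web service’s start command is omitted because it is not shown in this article.

# docker-compose.prod.yml (sketch)
version: "3.8"

services:
  db:
    image: postgres:12-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - .env.db

  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    # start command omitted: the production entrypoint is not shown in this article
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    depends_on:
      - db

  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - "80:80"
    depends_on:
      - web

volumes:
  postgres_data:
  static_volume:
  media_volume: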

Run

We can simply run our project with one command:

# To use default docker-compose.yml
docker-compose up -d --build
# To use specific yaml (example: docker-compose.prod.yml)
docker-compose -f docker-compose.prod.yml up -d --build

To stop the containers, we also only need one command. Awesome!

# Stop containers that were started with the default yaml configuration
docker-compose down -v
# Stop containers that were started with a specific yaml configuration (example: docker-compose.prod.yml)
docker-compose -f docker-compose.prod.yml down -v

My Thoughts

Docker has made our development process easier. Nowadays, Docker is widely used and has become a standard tool for developing and shipping server software.

We no longer need to worry about setting up and running our code on different machines. Just run one command and voilà!

I personally recommend using Docker to speed up your development process with your team, so you won’t be bothered by hardware or setup issues. I’ve applied orchestration with Docker Compose in my team’s project; how about you?

Thank you for reading and happy orchestrating!


Michael Susanto

Currently studying at Faculty of Computer Science, Universitas Indonesia.