How to Deploy Docker Containers to the Cloud

Docker and GCP make sharing your work with the world easy

James Briggs

Photo by Zoltan Tasi on Unsplash

Docker containers are brilliant little things. They are essentially self-contained applications that can run across any OS.

Imagine you have a Python application. You bundle it, along with everything needed to run it, into a Docker container, and that container can now run on any Windows, Linux, or macOS system with nothing more than Docker installed!

Another great benefit of Docker is the level of support for containers on Cloud platforms, such as Google Cloud Platform (GCP), which we will be using in this article.

We can quickly build an application, package it into a container with Docker, and deploy it across the globe with GCP.

That is what we will do in this article. We will take a simple Python API, package it with Docker, and deploy it with GCP, covering:

> Project Setup
> Dockerfile
– FROM
– WORKDIR and COPY
– RUN
– CMD
> Building the Docker Image
> Deploy with Google Cloud Platform
– GCloud SDK
– Cloud Build and Container Registry
– Cloud Run

We won’t be focusing on the Python code, simply because that is not the purpose of this article. Instead, bring your own code, or use something I made earlier.
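If you don’t have an app to hand, a minimal Flask stand-in like the sketch below will do. To be clear, this is my own illustrative example, not the exact code from the gist linked above; the only thing the rest of the article relies on is that app.py exposes a Flask object named app (which our gunicorn command will later reference as app:app):

# app.py
from flask import Flask, jsonify

app = Flask(__name__)  # gunicorn will serve this object as app:app

@app.route("/")
def index():
    # return a trivial JSON payload so we can confirm the deployment works
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    # local development only; inside the container, gunicorn runs the app
    app.run(host="0.0.0.0", port=8080)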

Our gcp-api directory should look like this:
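gcp-api/
├── app.py
├── Dockerfile
└── requirements.txt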

We store our code, under the name app.py, in a directory called gcp-api (call this anything you like). Alongside our script, we need:

  • A Dockerfile — the instruction manual for Docker
  • requirements.txt — a set of Python modules for our Dockerfile to install

Of course, we’re using Docker, so we need Docker itself too. It can be installed from here.

If you have any OS-specific issues installing Docker, the 1:46:21 mark in this video explains Windows installation, followed by the 1:53:22 mark for macOS.

The Dockerfile is our container building blueprint. It tells Docker exactly how to rearrange our scripts and files in a way that produces a self-contained application.

It’s like building a house.

Photo by Randy Fath on Unsplash

Our scripts and files are raw materials (timber, bricks, etc.). We create a set of instructions for what we want our house to be like (the Dockerfile), which we then give to our architect (Docker), who then does all the technical stuff to produce a house blueprint (the image).

Later, we will also give the blueprint to our builder (Cloud Build), who will construct the house (container) for us. But not yet.

So, our Dockerfile. It looks like this:

FROM python:3.6-slim-buster

WORKDIR /app
COPY . .

RUN pip install -r requirements.txt

CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 app:app

Initially, it may look confusing — but it’s incredibly simple.

FROM python:3.6-slim-buster

The very first line of our Dockerfile initializes our container image with another pre-built Docker image.

This pre-built image is, in essence, nothing more than a lightweight Linux OS containing Python 3.6.

But why ‘slim-buster’? Well, buster is the codename for all version 10 variations of Debian (a Linux distribution).

As for why they chose the word ‘buster’: Debian release codenames are taken from Toy Story characters, and Buster is Andy’s pet dog in the films.

Slim, on the other hand, does make sense. As you may have guessed, it means Debian 10.x trimmed down to the essentials, resulting in a smaller image size.

A full list of official Python images is available here.
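If you’d like to see the size difference for yourself, both of the following are standard Docker commands; the second lists your local python images along with their sizes:

docker pull python:3.6-slim-buster
docker images python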

Firestore (NoSQL database) is offered through Google’s Firebase platform. Image source.

Warning: It’s also worth noting that we’re using Python 3.6 here. You don’t need to stick to this version unless you plan on using Google Firebase (which we won’t be using here, but it’s good to be aware of).

If you do happen to use Google Firebase with Python, you will likely use the python-firebase module, which contains an import called async.

Unfortunately, Python 3.7 introduced that exact word as a keyword. We avoid the resultant SyntaxError by sticking with Python 3.6.
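To see the clash in isolation (independent of Firebase), try this one-liner on Python 3.7 or later; it fails before anything even runs, because the parser now treats async as a reserved keyword:

import async  # SyntaxError: invalid syntax on Python 3.7+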

WORKDIR and COPY

Next up is WORKDIR and COPY.

We use WORKDIR to set the active directory inside our image (the construction site of our container) to /app. From now on, . outside of our image refers to our current local directory (e.g., /gcp-api), while . inside our image refers to /app.

After WORKDIR, we COPY everything from our local directory /gcp-api into our image’s internal active directory /app.

The reason we copy app.py to /app inside our image is because this is the structure that our Google Cloud instance will expect. We can change this, but this is what we will use here.
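Once the image is built (we’ll do that in a later section), we can double-check that the COPY behaved as expected by overriding the container’s default command with a simple ls; this should list app.py, Dockerfile, and requirements.txt:

docker run --rm gcp-api ls /app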

RUN

Now, we have our pip install instructions. We use RUN to tell Docker to run the following command. That following command is pip install -r requirements.txt.

By writing pip install -r requirements.txt, we are telling Docker to run pip install with the -r (requirements file) flag, installing every package listed in requirements.txt.

So what does requirements.txt look like?

pandas==1.1.1
gunicorn==20.0.4
flask==1.1.2
flask-api==2.0

When requirements.txt is fed into our pip install instruction, it is translated into:

pip install pandas==1.1.1
pip install gunicorn==20.0.4
pip install flask==1.1.2
pip install flask-api==2.0

Which I’m sure is something everyone recognizes.

CMD

Our final instruction is, depending on our app, not necessarily required. In this case, it is used to host our API using the gunicorn Python package.

Nonetheless, the CMD instruction is the equivalent of opening our computer’s command-line interface (CLI) and typing whatever commands we provide, in this case exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 app:app. Unlike RUN, which executes while the image is being built, CMD specifies the command that runs when the container starts. Here, app:app tells gunicorn to serve the app object found inside app.py.

Earlier in the article, we described the house building metaphor for Docker containers. So far, we’ve acquired our raw materials (scripts and files) and written a set of instructions explaining what we want our house to be like (the Dockerfile).

Now, it’s time to create our blueprint — the Docker image.

We can create this by executing the command:

docker build -t gcp-api .

  • Here, the Docker image build command is docker build
  • Next, we use the -t flag to specify our image name — gcp-api
  • Finally, we tell Docker to include everything from the current directory with .
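Before heading to the Cloud, we can optionally sanity-check the image locally. Our Dockerfile reads its port from the $PORT environment variable (Cloud Run will set this for us later), so for a local test we supply one ourselves; 8080 is an arbitrary choice:

docker run --rm -e PORT=8080 -p 8080:8080 gcp-api

Visiting http://localhost:8080 should then return a response from the API.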

At this point, we have our blueprint, and all we need now is our builder, the Cloud — so let’s begin setting it up.

There are three steps to deploying our container to the Cloud. First, we download the Google Cloud SDK, which we will then use to:

  • Build our container with Cloud Build.
  • Upload the container to GCP’s Container Registry.
  • Deploy it with Cloud Run.

GCloud SDK

We can find the SDK installer here. Once it’s installed, we need to authenticate our GCloud SDK by opening a command prompt (or your equivalent CLI) and typing:

gcloud auth login

This command opens our web browser and allows us to log in to Google Cloud as usual. We configure Docker to use our GCP credentials with:

gcloud auth configure-docker

Finally, set the active project to your project ID (mine is medium-286319) with:

gcloud config set project medium-286319
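If you don’t remember your project ID, it’s listed alongside each project name in the output of:

gcloud projects list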

Cloud Build and Container Registry

Google’s Container Registry (GCR) service allows us to store Docker images, which we can use as a launchpad for our deployments.

We need an existing project to deploy our container. Image by Author.

Before we can use GCR (or any other GCP services), we need to create a project. We can do this by navigating to the project selector page in the GCP console and clicking Create Project.

Setting up a new project is incredibly simple. Image by Author.

All we need to do here is give our project a name. I use medium.

If this is our first time using GCR in a project, we will need to enable the GCR API. Image by Author.

Now that we have our project set up, we should be able to access Container Registry. Here, we should see the name of our newly created project in the top bar.

To use GCR, we need to click Enable Container Registry API in the center of the console window.

Finally, we can get our image into GCR by submitting our code to Cloud Build, the GCP service that builds Docker images.

To do this, we open our CLI in our project directory (gcp-api) and type:

gcloud builds submit --tag gcr.io/[PROJECT-ID]/gcp-api

  • gcloud builds submit sends the contents of our current directory to Cloud Build, where our Dockerfile is used to build the image before it is pushed to Container Registry.

Our Container Registry location is provided to the --tag flag, where:

  • gcr.io is the GCR hostname.
  • [PROJECT-ID] is our project ID; we saw this when creating our project — for me, it is medium-286319.
  • gcp-api is our image name.
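So, with my project ID, the full command becomes:

gcloud builds submit --tag gcr.io/medium-286319/gcp-api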

Our project gcp-api in Container Registry. Image by Author.

If we go back to our GCR window, we should be able to see our newly uploaded Docker image. If it isn’t there yet, it’s likely still in the build process — which we can find in our Cloud Build dashboard.

Cloud Run

Now that our Docker image is ready, we can deploy it with Cloud Run.

A step-by-step process for deploying containers with Cloud Run. Image by Author.

In the Cloud Run interface, we deploy by (1) clicking Create Service, (2) configuring our deployment, (3–4) selecting the container, and (5) creating our deployment!

An in-progress deployment in Cloud Run. Image by Author.

We will see the deployment status in the Cloud Run console; the deployment itself should take no longer than a few minutes.

On completion, a green tick and URL will appear next to our deployment name and region. Image by Author.

Once complete, we will see a green tick next to our deployment name and our deployment URL next to that.
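At this point, the API is publicly reachable, and we can test it straight from the command line. The URL below is a made-up placeholder; substitute the one Cloud Run shows for your deployment:

curl https://gcp-api-xxxxxxxxxx-uc.a.run.app/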

That’s it, we’ve taken our Python app, packaged it into a Docker container, and deployed it to the web with Google Cloud!

Thanks to some brilliant tools — namely Docker and GCP — the process is painless and (typically) results in flawless deployments time after time.

Now, more than ever before in the history of humanity, we can take the ideas and concepts in our minds and give them a tangible presence in the real world, which can result in some genuinely awe-inspiring creations.

I hope this article will help some of you out there — if you have any questions, feedback, or ideas, feel free to reach out via Twitter or in the comments below. Thanks for reading!

Interested in learning about SQL on the Cloud? Try Google’s brilliant MySQL, PostgreSQL, and SQL Server database services:
