Productive with Docker in 20 minutes

A better, more productive development workflow

Disclaimer: I talk a lot in real life. I’ve tried not to write that way.

How things used to be

Remember the day you first started at your job? IT gave you that nice, shiny new Mac and you discovered the joy of getting 8 repos onto your machine? One project used Node 0.12, another used Node 4.0, an iOS app, the Postgres database, and then there was the Java… Before you knew it you were installing for half a day. Remember the relief you felt when you were finally done, only to realize that you’d be syncing them all daily and need at least 8 terminal tabs open at all times?

Over time, we automated things and maybe we ended up with a few scripts: install.sh to clone all the repos and install their dependencies, start.sh and stop.sh scripts to bring everything up or tear it down. Better, but we still needed sleep commands to work around race conditions, had to keep curating new package installs over time, run database migrations, and so on. Sometimes just once a day and sometimes every time we started a new branch. Sounds like a lot of time you could be coding, right?

If you take anything from this post, I hope you take this: Docker frees you from this life.


How things can be

While Docker has so many useful features, we’re going to just focus on a few things to make us more productive. In this world, the only things we need to install on our machine are Git, Docker for Mac, and VS Code. You can use whatever IDE suits you but I’ll be using VS Code. From there we only need to understand a few core concepts to be dangerous:

Docker. Since we’ve established that we just wanna stop opening so many tabs, think of Docker as opening and managing them for you.

Docker Compose. A CLI, with its own configuration file (docker-compose.yml), that opens all of those tabs for you: installing everything, starting container processes, and so on.

Dockerfile. Just another name for a fancy bash script of commands Docker understands. Running it produces an image.

Image. The blueprint of what a project would look like if you freshly installed its dependencies and internal packages. Think of it as the Ubuntu CD you used to use: you only needed that single CD and you could install it on any number of computers.

Container. A running process that was started from an image. You can create and kill containers all day long without modifying the image they're based on.


Creating an Image with a Dockerfile

When we're creating an image, we first write a script of commands that Docker recognizes, called a Dockerfile. The output of those commands is an image that we'll start container processes from. It's important to remember that the commands run relative to the directory you build from and to the working directory inside the image. Here's an example that starts from a base Ubuntu image, installs node, copies a project's files into the image, and specifies the npm start command:
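(A minimal sketch; the Ubuntu and Node versions, the nvm installer URL, and the /usr/src/app path are assumptions, not the exact values from the original project.)

# Start from a base Ubuntu image
FROM ubuntu:16.04

# Tools we need in order to download nvm
RUN apt-get update && apt-get install -y curl ca-certificates

# Install nvm, then use nvm to install node (versions here are assumptions)
ENV NVM_DIR=/root/.nvm NODE_VERSION=8.11.3
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.33.11/install.sh | bash
RUN bash -c 'source "$NVM_DIR/nvm.sh" && nvm install "$NODE_VERSION" && nvm alias default "$NODE_VERSION"'
ENV PATH="$NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH"

# Copy the project's files into the image and install its dependencies
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .

# Make the app's port reachable and start the server when a container runs
EXPOSE 4000
CMD ["npm", "start"]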

Then we need a docker-compose.yml file so that the docker-compose CLI we’re going to use will know what to do with this Dockerfile:
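(A minimal sketch; the service name basic is used throughout this walkthrough, and the port mapping assumes the example server listens on 4000.)

version: '2.1'
services:
  basic:
    # Build this service's image from the Dockerfile in this directory
    build:
      context: .
      dockerfile: Dockerfile
    # Make the server reachable at localhost:4000 on our machine
    ports:
      - "4000:4000"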

The only 2 other files we need are our package.json and server.js.
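Minimal versions might look something like this sketch (the heartbeat logger and port 4000 are placeholders chosen to match the rest of the walkthrough, not the exact original files):

package.json:

{
  "name": "basic",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js"
  }
}

server.js:

// A tiny HTTP server plus a heartbeat log so we can see the container doing something
const http = require('http');

function logHeartbeat() {
  console.log('Hi, I am still here!');
}
setInterval(logHeartbeat, 5000);

http
  .createServer((req, res) => res.end('Hello from basic'))
  .listen(4000);

With those files in place, we can run 2 commands from our project's directory: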

docker-compose build basic. This command tells the Docker Compose CLI to build a new image for the service we've named basic. It'll look in our docker-compose.yml file and find the service definition so it knows where our Dockerfile is. It'll then run the Dockerfile as if we had typed all of it out ourselves into the terminal.

docker-compose up. Once the build command has finished, we have an image blueprint, and we can tell the CLI to "bring up" all the services; in this case we only have one, basic. This is similar to saying: for every service I've defined, open a tab, install everything I need to run the code, and then start that service.

This is great, but it has some downsides too! What if I want to add new code? Or fix a bug? I have to ctrl + c to stop the containers, rebuild the image with my new code, and then bring the containers back up. We can do better.


Pre-baked images and file watching

Notice that above we start from a bare Ubuntu image and install nvm, node, and so on from scratch. This takes a lot of time. Thankfully, there's Docker Hub, which is like GitHub but for Docker images, and it will definitely speed things up. Now we can just start FROM an image that already has NodeJS installed.
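The slimmed-down Dockerfile might look something like this (the node:8 tag is an assumption):

# Start from an image that already has NodeJS installed
FROM node:8

# Copy the project's files in and install its dependencies
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .

# Make the app's port reachable and start the server when a container runs
EXPOSE 4000
CMD ["npm", "start"]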

This new image saves us a lot of install time and code. Now for the other problem I mentioned: rebuilding every time we change code.

We can sync the files on our computer to those same files inside of a running container. This lets us change files on our computer and have those changes reflected in the container's version of those same files. The syncing happens by telling Docker Compose to mount the files we care about as volumes.

But that alone isn't enough to help us. We need to restart our node process inside the container whenever we change the files on our computer. We'll use Nodemon to do this by simply adding the dependency to our package.json. Then we add the volumes we want to watch for basic, as shown below.
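The relevant piece of docker-compose.yml might look like this (the /usr/src/app path matches the Dockerfile sketch above and is an assumption):

services:
  basic:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "4000:4000"
    volumes:
      # Keep server.js inside the container in sync with the copy on our machine
      - ./server.js:/usr/src/app/server.js

For the restart to actually happen, the start script in package.json also needs to run nodemon instead of node, e.g. "start": "nodemon server.js".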

Now when we build our image, Nodemon gets installed, and when we bring up our containers, Nodemon is watching the server.js file inside the container. Try it! Change what gets console.log-ed, and after you save it you should see the process restart and log what you changed.


Debugging effectively

One of the things we're constantly doing is debugging. We can be more effective at debugging by using breakpoints that attach to the node process in our container; sorry, console.log. NodeJS exposes a debugging port, 9229, when started with the --inspect flag, and we can attach to it and set breakpoints from inside VS Code. VS Code allows this by going to the "Debug" tab and creating a launch.json file with various tasks we can run. For our purposes we just need a task that attaches to our running container over port 9229. The localRoot is relative to your project structure, the remoteRoot to your container's file system:
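(A sketch; the localRoot and remoteRoot values here are assumptions matching the earlier Dockerfile sketch.)

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "attach",
      "name": "Attach to basic",
      "address": "localhost",
      "port": 9229,
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/usr/src/app",
      // Re-attach automatically when nodemon restarts the process
      "restart": true
    }
  ]
}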

Then, just like port 4000 from earlier, we need to expose port 9229 in our Dockerfile, map the port in our docker-compose.yml, and add an argument to our npm start command to use the debugging port. To keep the article from getting too long, I've posted the new files here:
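In short, the relevant changes look roughly like this sketch (not the exact posted files); binding the inspector to 0.0.0.0 is what makes the debugger reachable from outside the container:

# Dockerfile: expose the debug port alongside the app port
EXPOSE 4000 9229

# docker-compose.yml: map the debug port to our machine
    ports:
      - "4000:4000"
      - "9229:9229"

# package.json: have nodemon pass the inspect flag through to node
"start": "nodemon --inspect=0.0.0.0:9229 server.js"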

Now, if you docker-compose build your image and bring up the container, you're able to start that debug task inside of VS Code, and if you set a breakpoint on the console.log in server.js, VS Code will break on it. This makes us crazy productive because we can hover over variables, run commands in the VS Code debug console, follow code paths, etc.


Ok, but we have to do real work

Up to this point, we've done some nifty things, but we have to do real work around here! Typically that involves connecting to a database; let's use Postgres. Using Postgres locally is easy thanks to Docker. We can reference a base image for Postgres 9.6 inside our docker-compose.yml file; it will automatically get downloaded from Docker Hub and be the image we use to start a Postgres container with docker-compose up.
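The addition to docker-compose.yml might look like this small sketch (mapping 5432 to our machine is optional, but handy if you want to poke at the database with a local client):

  postgres:
    # Pulled from Docker Hub automatically the first time we bring it up
    image: postgres:9.6
    ports:
      - "5432:5432"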

This time we didn't need to make our own Dockerfile, and in just a few lines we added a base Postgres image. Now when we run docker-compose up, Docker will automatically download the Postgres image and start a new container from it. Next, let's connect to it from basic.

We can use the Knex.js package, along with the pg driver it needs, to have our server.js file connect to Postgres. Add these two packages to your package.json file, and then docker-compose build basic will rebuild our image. Let's connect to the database in server.js with:

const knex = require('knex')({
  client: 'postgresql',
  connection: {
    debug: 'true',
    host: 'localhost',
    port: 5432,
    user: 'postgres',
    password: '',
    database: 'docker-blog'
  }
});

and then we can test this out by trying to use Knex in our heartbeat function:

function logHeartbeat() {
  knex.select('*').from('pg_catalog.pg_tables').asCallback(
    function (err, result) {
      console.log("Hi I'm still here in this callback!");
    });
}

Uh oh, our logs are showing an error. Do you see Knex:warning — Connection Error: Error: connect ECONNREFUSED 127.0.0.1:5432? Hmmm, yeah, that's odd. Docker said it started Postgres, so what gives? This brings us to an important concept, and that is:

A container acts like an entirely separate computer, even though it's on your computer. So using the name localhost inside of a container is not the same as using localhost in code that is running directly on your machine.

We can clean this up by using environment variables and a little bit of Docker magic.

Environment variables with docker-compose.yml

We can add environment variables with the environment keyword in docker-compose.yml. We're going to make use of this in 2 ways. The Postgres image lets us use a POSTGRES_DB environment variable that will automatically create a database for us when the container starts. Then, instead of hard-coding the connection information in server.js, we'll put it in the docker-compose.yml file and just use process.env inside of server.js to get the values we need. Our new docker-compose.yml file looks like this:
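(A sketch; the PG* variable names are an assumption, chosen so server.js can read them with process.env, and the docker-blog database name comes from the earlier connection config.)

version: '2.1'
services:
  basic:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "4000:4000"
      - "9229:9229"
    volumes:
      - ./server.js:/usr/src/app/server.js
    environment:
      # server.js reads these with process.env instead of hard-coding them
      PGHOST: postgres
      PGPORT: "5432"
      PGUSER: postgres
      PGPASSWORD: ""
      PGDATABASE: docker-blog
  postgres:
    image: postgres:9.6
    ports:
      - "5432:5432"
    environment:
      # The postgres image creates this database the first time the container starts
      POSTGRES_DB: docker-blog

In server.js, the hard-coded values then become things like host: process.env.PGHOST and database: process.env.PGDATABASE.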

Once again, if you docker-compose build basic and then docker-compose up everything should be back to normal except that we have an actual connection to our Postgres container!

An important thing to note is that we told server.js that the host could be reached at the hostname postgres, which is what we named our Postgres service. Docker Compose puts the containers on a shared network and maps service names to container IP addresses for us, so basic can just say "connect me to postgres" and Docker takes care of it.

But our docker logs still show basic and postgres starting at once. And sometimes Postgres isn’t finished starting up before server.js tries to connect. Typically, we’d want the database to be ready before we bring up our server, right? Let’s set this up so that basic doesn't start until after postgres.

Healthchecks with Docker Compose

Docker Compose lets us wait on health checks by saying one container depends on another container (or more!) in our docker-compose.yml file. In our case, our basic container depends on our postgres container being ready.

There are 2 conditions you can specify: service_started and service_healthy. service_started simply means Docker was able to bring the container up, but the code running inside that container may not be ready yet; service_healthy means a health check you define must succeed first.

Typically, we're going to use service_healthy and specify a healthcheck. A health check is just a test command that runs inside the container after it starts; for a web server that's usually a curl to see if it's up and running, and for Postgres we can use pg_isready. You also specify how many times to retry the test and at what interval.

Our final docker-compose.yml file is below, and if you docker-compose build basic again and then docker-compose up, you'll see that all of the postgres logs happen before we start server.js!
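(A sketch of that final file. The depends_on condition syntax requires compose file format 2.1, and newer versions of Docker Compose support it again via the Compose Spec; the pg_isready test and retry settings are reasonable defaults, not the exact original values.)

version: '2.1'
services:
  basic:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "4000:4000"
      - "9229:9229"
    volumes:
      - ./server.js:/usr/src/app/server.js
    environment:
      PGHOST: postgres
      PGPORT: "5432"
      PGUSER: postgres
      PGPASSWORD: ""
      PGDATABASE: docker-blog
    depends_on:
      # Don't start basic until the postgres health check passes
      postgres:
        condition: service_healthy
  postgres:
    image: postgres:9.6
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: docker-blog
    healthcheck:
      # Ask Postgres whether it is ready to accept connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5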

Conclusion

Hopefully by now you're starting to see how Docker and container-based development will save you time in the future, as well as make writing and debugging code a breeze. We didn't need to install Postgres with Homebrew or install node_modules, etc. We didn't have to tweak and prune bash scripts for installing, starting, waiting, and stopping. We just installed Docker and started coding.

While our example is really simple, imagine the time we save if we've got multiple Postgres databases to maintain, a server for our web app, a server for our NodeJS API, a Java API that our NodeJS API talks to, some workers, etc. We could easily grow to tens of services that we have to keep in sync to do our job, but now we can sync and start everything with 2 commands and a little bit of configuration.

Hopefully in the future, I’ll be able to write a few more of these with some advanced topics. You’ll always be able to see everything here:
