
A guide to Docker in practice

Introduction

So you’ve decided to use Docker and need some help. Great! Then you’ve come to the right place. 

There are various ways you can use Docker to develop, package and deploy your application, and all of them are valid given the context you’re working in. We’re going to explore some of these, and show you how you can choose an option based on your workflow.

Docker allows you to package a part of your application into a container – and describe almost your entire application stack in a YAML file. You can then release these containers knowing that everything required for them to work is included. You can now ship not only your source code, but your software, with the correct dependencies – such as a given PHP version or your message-queue software.

Another benefit of Docker is that your application stack is ready to be developed against locally. For any developer this should be a dream come true (yes, developing against production-like environments is in vogue again). However, to know which option is a good fit for you, let me show you some scenarios, and the pros and cons of each.

This is by no means an exhaustive list. The situations demonstrated here are common amongst new starters. If you are a little impatient like myself, you should be pleasantly surprised by the speed at which you can start developing against a full architectural stack, without spinning up multiple virtual machines (VMs).

The three common Docker practices we’ll look at are:

  1. Accessing the container via IP address without port mapping
  2. Accessing the container by DNS
  3. Scaling your web services via Docker Compose

Accessing the container via IP address 

This practice is suitable when you are brand new to Docker, want to get started straight away, and you are using either OS X or Windows.

You start a web container and try to open the page in the browser. However, using the container’s IP won’t work, because on OS X and Windows you can’t access the containers directly – Docker runs inside a VM.

To access the web page you need to create port mappings between the VM and the container, so that the VM knows how to route your request to the correct container.

Let’s look at the setup of Docker on OS X. The environment is created with a virtual machine that has its own IP address. You’ll want to set up port mapping on the VM to forward all requests arriving on a given port on the VM to the given port on the Docker container. This results in a one-to-one mapping between the VM and your Docker container.

The downside of this technique is that each container is stuck with a single available port on the VM. To introduce another web container we would need to map another port (for example 81, 8080, or 8181) to the new container, and request it from the host VM by adding that port to the IP address.
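As a sketch, here is what running two web containers with explicit port mappings looks like (the VM IP 192.168.99.100 is an assumption – yours may differ):

```shell
# First web container: host port 8080 -> container port 80
docker run -d --name web1 -p 8080:80 nginx

# Second web container: we must pick a *different* host port
docker run -d --name web2 -p 8081:80 nginx

# Each container is then reached via the VM's IP and its own port
curl http://192.168.99.100:8080
curl http://192.168.99.100:8081
```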

As you can see, this can get cumbersome very quickly.

This approach, although valid, is still not great. You lose time mapping containers to ports, assigning ports, and manually managing that process. Yuck!

To improve on this approach, let me show you another way to access these containers: configure your laptop to route a given IP range through the VM’s network.

Let’s say that the Docker containers are in the IP range 10.0.0.0/24. We will use the VM’s IP as a gateway for this range, so we can talk to the containers directly.

You can set this up manually by following Tugdual’s blog post, or install Docker using Dock-CLI, which will configure your system for you.
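As a rough sketch of the manual route on OS X, you add a static route sending the container range via the VM (the VM IP 192.168.99.100 is an assumption – substitute your own):

```shell
# Route the 10.0.0.x container range through the VM acting as gateway
sudo route -n add -net 10.0.0.0/24 192.168.99.100
```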

Once you’ve set up the direct networking, you can simply run an nginx container, for instance, and cURL it on its own IP:

docker run -d --name=my_nginx nginx

docker inspect my_nginx | grep -i ipaddress 

This will give you the IP address of the Docker container directly:

$ curl 10.0.0.2
...Welcome to nginx!...
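As a shortcut, Docker’s built-in template formatting can extract just the IP, so you can skip the grep and curl the container in one go:

```shell
# Print only the container's IP address using a Go template
docker inspect -f '{{ .NetworkSettings.IPAddress }}' my_nginx

# Or combine the two steps:
curl "http://$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' my_nginx)"
```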

With this technique, you can run as many parallel containers exposing the same port as you like, without any problem!

Accessing the container by DNS

Although the above approach is better, you always need to look up the container’s IP address to access it, and this IP changes each time the container restarts. This can be a real headache when you want to test multiple infrastructure changes – you need to rebuild the container and rediscover its IP address after every rebuild.
    
We can use a special container that exposes a DNS server for all our running containers. You can set this up manually using dnsdock, or use Dock-CLI, which will install this DNS container and configure your system to use it. This DNS server manages DNS resolution of the containers based on their image and container names.
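For the manual route, starting dnsdock looks roughly like this (the image name and the 172.17.42.1 bridge IP are assumptions based on dnsdock’s documentation at the time – check the current README):

```shell
# Run dnsdock with access to the Docker socket, so it can watch
# containers start and stop, serving DNS on the Docker bridge IP
docker run -d --name dnsdock \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 172.17.42.1:53:53/udp \
    tonistiigi/dnsdock
```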

Taking the previous example of our running nginx container: because its image name is ‘nginx’, you can access it directly using curl http://nginx.docker.

If several containers from the same image are running at the same time, the DNS server will resolve to any one of them at random (round-robin DNS). You can also use the DNS name my_nginx.nginx.docker to access this particular container (‘my_nginx’ is the container name).
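A quick sketch of the round-robin behaviour – start a second container from the same image and curl the image-level name a few times:

```shell
# Two containers from the same image
docker run -d --name=my_nginx nginx
docker run -d --name=my_other_nginx nginx

# Resolves to either container, round-robin style
curl http://nginx.docker
curl http://nginx.docker

# Target one container specifically: <container>.<image>.docker
curl http://my_other_nginx.nginx.docker
```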

Scaling via Docker Compose

We have now chosen one of the many ways to develop using Docker. For me, one of the real benefits of Docker is the ability to develop and test your app locally against the containers, knowing that this will resemble the production environment.

The possibilities become endless. Infrastructure testing, application testing, and load testing all become a reality, allowing you to spot potential issues before pushing the containers live. 

You should use Docker Compose to ensure you set up your containers consistently, together with a Dockerfile – essentially the recipe for how your container image should be built.

The Dockerfile below is executed when the image is built, before your infrastructure is in place, and runs the commands that finish setting up your web server with all the settings required.



# Base image: official PHP 5.6 image with Apache bundled
FROM php:5.6-apache

# Enable Apache's rewrite module and add our server and PHP configuration
RUN a2enmod rewrite
ADD docker/apache/vhost.conf /etc/apache2/sites-enabled/default.conf
ADD docker/php/php.ini /usr/local/etc/php/php.ini

# Copy the application source into the image
ADD . /app
WORKDIR /app

# Start Apache via the project's run script
CMD ["/app/docker/apache/run.sh"]

The Docker Compose file below describes how your web and database containers are built, and how the web server is linked to the database server.


web:
    build: .
    links:
        - mysql
    expose:
        - 80
    volumes:
        - .:/app
mysql:
    image: mysql
    environment:
        MYSQL_ROOT_PASSWORD: root
    expose:
        - 3306

You can now run docker-compose build or dock-cli build and your images will be built; docker-compose up will then create and start the containers.

Now take a look at your running containers using dock-cli ps, and use the addresses listed to access your application.

Try experimenting with docker-compose scale web=2 to create two web servers. Try stopping one or both of them.
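Putting those steps together, a typical session might look like this:

```shell
# Build the images described by the Dockerfile and docker-compose.yml
docker-compose build

# Start the whole stack in the background
docker-compose up -d

# Scale the web service to two containers
docker-compose scale web=2

# List what's running
docker-compose ps
```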

You can use the round-robin DNS name http://dockerexample_web.docker to access either container, or access each container directly by its own DNS name, such as http://dockerexample_web1.dockerexample_web.docker and http://dockerexample_web2.dockerexample_web.docker.

The benefit is that you no longer have to configure or map ports yourself, or even worry about downloading the right image, as Docker Compose takes care of this consistently for you.

This is great. You now get to focus on developing.

Conclusion

We have taken a look at some of the common usage scenarios for setting up and using Docker.

Hopefully this will help you to use Docker and to be able to develop and initialise an environment quickly, without the complexity of spinning up multiple VMs.

We haven’t discussed the immutable-container concept, so here’s a good read on that. Stay tuned for future Inviqa posts about using a platform called ContinuousPipe to deploy and test your pull requests in real time using your Docker containers.