Your very own server with Docker
Mar 22, 2015
6 minutes read

Something really exciting about Docker is that, as time passes, it becomes easier and easier to deploy it virtually anywhere, with all the benefits of such a technology. The other day I wanted to change my private dedicated server, and while thinking of everything I would have to do to migrate from one machine to the other, I said to myself:

It would be so much simpler if it was running under Docker

So, here is my small guide to building your private server with Docker and Docker Compose, the tool that orchestrates how your containers are spawned.

Choose Your Base OS

The first thing you’ll have to do is choose a base OS for your server. While this may not seem like a complex task at all, you must be aware that it has to be able to run Docker and Docker Compose. While some distributions seem to be particularly Docker oriented, they might not suit our use case, where we only have one private server to work with. I say this because I previously tried to deploy all this stuff on a CoreOS distribution and I faced two major problems:

  • CoreOS does not allow you to install anything but containers. So you cannot deploy Docker Compose or Docker Swarm; you must go with their own ecosystem, that is: fleet, etcd, cloud-config, flannel and so on…

  • CoreOS is really not meant to deal with only one node in a cluster (we only have one machine). It does not make sense under three nodes; using such a technology otherwise is complete overkill.

So, for my part, after my struggles with CoreOS (by the way, I learned a lot trying to use it, it is a really interesting piece of technology and I managed to get everything working on it, it just doesn’t fit our use case), I decided to go with a classic Ubuntu Server, which does the job pretty well.

Install the required tools

For our small experiment we mainly need two tools: Docker and Docker Compose.

Docker will run all the containers on the server (here we want to serve a Ghost blog), and Docker Compose will take care of orchestrating how they are spawned. So why use Docker Compose and not just Docker? The reason is simple: simplicity. You could run all your Docker containers manually, but that is exhausting, and once you need to restart them all, it becomes unbearable. Docker Compose lets you describe all your containers in a single docker-compose.yml file; then, with a single command, everything is started in the right order with respect to the dependencies between the different parts.
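
As a rough illustration of the difference in day-to-day use (the container name and port mapping below are just for the example):

# by hand: remember and retype the right flags for every container, at every restart
$ docker run -d --name blog -p 80:2368 dockerfile/ghost

# with Compose: everything is declared once in docker-compose.yml, then
$ docker-compose up -d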

So, here is how to install these two tools:

  • To install Docker, simply run: curl -sSL https://get.docker.com/ubuntu/ | sudo sh

  • To install Docker Compose, run:

    $ curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    $ chmod +x /usr/local/bin/docker-compose
    

That’s all we need to install for now!
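
To make sure both tools are correctly in place, you can ask them for their versions (the numbers will of course vary depending on when you install):

$ docker --version
$ docker-compose --version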

Setup your user

Before we can finally start to play a bit with Docker on our server, there is one last thing to set up. In order to run properly, Docker requires some rights that your standard user does not have on a default Ubuntu system.

To fix this, there is a simple command to run, either as root or with sudo:

$ usermod -a -G docker yourSuperUsername

That is all! Now you must log out and log back in to your server for this change to be applied to your session; otherwise you’ll be stuck with your previous environment settings.
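
Once logged back in, you can check that the change was taken into account:

$ id -nG
# the docker group should now appear in the list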

Compose with Docker

We are now ready to play with Docker Compose and Docker. Everything that had to be set up was done in the previous part; now you only have to focus on what you want to serve on your server.

For this article, I’ll be serving my Ghost blog, the very blog that you are reading right now, which is running under Docker on my brand new server :p.

So, first, the idea is that I’ll be serving more than just a blog. I will start with this simple example, but I’ll need something that can handle multiple web services in the near future, so publishing the Ghost container’s port directly on port 80 of the host is definitely out. What I need instead is a web proxy, and Nginx is a good one. We also need Nginx to be able to do some service discovery so that it finds the spawned containers. Why? Simply because Docker does not guarantee which IP address a container will get when it spawns, so you cannot write in a static Nginx conf that this container will be at this address and so on.

This is where docker-gen comes in. This container listens to the Docker socket, writes the Nginx conf based on a template, and then notifies Nginx to reload it. It is the same type of problem that some CoreOS mechanisms try to address with etcd, except that our solution is local where etcd is distributed, and with etcd each container has to actively declare itself, whereas here a third party does the listening, so it is more passive.

So, here is the content of the docker-compose.yml file for this part:

globalnginx:
  image: nginx
  volumes:
    # conf.d is where docker-gen will write the generated configuration
    - "/tmp/nginx:/etc/nginx/conf.d"
    - "~/units/globalnginx/certs:/etc/nginx/certs"
  ports:
    # the only ports published on the host: everything goes through nginx
    - "80:80"
    - "443:443"

dockergen:
  image: jwilder/docker-gen
  # share globalnginx's volumes so the generated conf lands in its conf.d
  volumes_from:
    - globalnginx
  volumes:
    # the Docker socket lets docker-gen watch containers start and stop
    - "/var/run/docker.sock:/tmp/docker.sock"
    - "~/units/dockergen/:/etc/docker-gen/templates"
  # regenerate the conf from nginx.tmpl on each event, then SIGHUP nginx
  command: -notify-sighup units_globalnginx_1 -watch -only-exposed /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
  links:
    - globalnginx
Please note that units_globalnginx_1 must be adapted: unless you pass special parameters, Docker Compose names each container folder_service_number, where number starts at 1 (here my folder is named units).

So, since I now have an Nginx and a docker-gen container, I only need to spawn my Ghost container and my blog will be live. docker-gen, as stated with the -only-exposed parameter, will only serve containers with exposed ports, and they also need to have a VIRTUAL_HOST environment variable set.
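
For the curious, here is a minimal sketch of what the nginx.tmpl template could contain. It is a stripped-down version inspired by the example template shipped with jwilder/docker-gen (groupBy and .Addresses are part of docker-gen’s template context; my actual template has more in it):

{{ range $host, $containers := groupBy $ "Env.VIRTUAL_HOST" }}
upstream {{ $host }} {
    {{ range $container := $containers }}
        {{ range $address := $container.Addresses }}
    # route to the container's internal IP and exposed port
    server {{ $address.IP }}:{{ $address.Port }};
        {{ end }}
    {{ end }}
}

server {
    listen 80;
    server_name {{ $host }};
    location / {
        proxy_pass http://{{ $host }};
    }
}
{{ end }}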

So here is my Ghost container, also in the docker-compose.yml file:

ghosterwyn:
  image: dockerfile/ghost
  volumes:
    # keep the blog content and configuration outside the container
    - "~/units/ghosterwyn:/ghost-override"
  expose:
    # exposed, not published: only nginx talks to Ghost directly
    - "2368"
  environment:
    # docker-gen uses this to generate the nginx virtual host entry
    - VIRTUAL_HOST=erwyn.github.io
  links:
    - dockergen

Here we can see that I exposed port 2368 and set VIRTUAL_HOST to erwyn.github.io. I also put dockergen in the links so that this container cannot start before dockergen does, the same way dockergen cannot start until globalnginx does.

That’s all folks! We are at the end of this story; all you need now is to start the beast with a simple docker-compose up -d.
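
For the record, here are the few Docker Compose commands you will use every day, to run from the folder containing the docker-compose.yml (here named units):

$ docker-compose up -d    # create and start everything in dependency order
$ docker-compose ps       # list the containers of this project
$ docker-compose logs     # follow their output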

You’re done, your blog is now live, running on top of Docker. You can add as many other web services/applications as you want following this example; all you need is an exposed port (the one the webapp is listening on) and a VIRTUAL_HOST environment variable.
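
For instance, adding a second application would only mean appending a block like this one to the docker-compose.yml (the image name and domain are purely hypothetical):

mywebapp:
  image: someorg/somewebapp         # hypothetical image of an HTTP service
  expose:
    - "8080"                        # whatever port the app listens on
  environment:
    - VIRTUAL_HOST=app.example.com  # the domain nginx should route to it
  links:
    - dockergen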

And if you want proof, here is the output of docker ps:

CONTAINER ID        IMAGE                       COMMAND                CREATED             STATUS              PORTS                                      NAMES
fb9a5e7f5b87        dockerfile/ghost:latest     "bash /ghost-start"    20 hours ago        Up 20 hours         2368/tcp                                   units_ghosterwyn_1
5efe33f6ce8d        jwilder/docker-gen:latest   "/usr/local/bin/dock   20 hours ago        Up 20 hours                                                    units_dockergen_1
61e997f32ada        nginx:latest                "nginx -g 'daemon of   20 hours ago        Up 20 hours         0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   units_globalnginx_1

And as a matter of fact, you’re reading this blog, so it works :p

– Amike


[Drawing of the post]

