computing scientist and software programmer

Communicating docker-machines over networked physical hosts

The problem

Here at VivaReal in Brazil we're doing a lot of Scala and Akka development lately. We're also strong supporters of Docker. Usually this combination yields great results, but since most of our team programs on MacBooks, Docker sometimes gives us minor annoyances for not being native to OS X.

This time I wanted to build an Akka Cluster in a Docker-based fashion and develop/test it across independent physical hosts over a network. This would be no problem if I were using Linux, running Docker natively, but these were Macs, so we had a man in the middle: Docker Machine.


To make a Docker container running on docker-machine accessible to other nodes in your host network, you need not only to map your docker-machine ports to your container ports using Docker's -p or -P parameters, but also to create an SSH tunnel from your host's ports to your docker-machine VM ports.


A few definitions before we start:

host: your physical machine, in this case, your Mac

docker-machine: your virtual machine where Docker will be run

docker-container: the Docker container which holds your application, for this sample case, a stock NGINX.

The basics

First of all, you should make sure all hosts are connected to the same network and visible to each other. A simple ping will do. Also, you should check your firewall for any blocking rules that may prevent your machines from talking to each other in the ports you need. This is not Docker-specific, so I'm not going into detail on this.

Once you've got the basics covered, it's time to start our NGINX container.

Running NGINX

Assuming you have Docker up and running on your machine, running an NGINX server is as simple as:

> docker run --name web -d nginx

This will download the latest NGINX image from Docker Hub and get you a running server, but one accessible only from within the container's network. This is not what we want, so we need to map our container's ports to our docker-machine ports.

Accessing NGINX through docker-machine

To make your container accessible, you need to map its ports to the docker-machine running it. The easiest way to do that is using the -P flag.

> docker run --name web -P -d nginx

This will map the exposed ports from your Docker container to randomly assigned ports on your docker-machine. If we want to retain control over which ports are mapped, we can use the -p parameter, as follows.

> docker run --name web -p 8080:80 -d nginx

The -p parameter works in a docker-machine:docker-container fashion. Here we map port 8080 on our docker-machine to port 80 in our NGINX container. You can find out more about how -p and -P work by reading the official documentation.
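The order of the two sides trips people up, so here is a tiny illustrative sketch (the `mapping` variable exists purely for illustration) that splits a -p argument into its host and container halves with shell parameter expansion:

```shell
# A -p argument is HOST_PORT:CONTAINER_PORT; parameter expansion
# makes the two sides explicit.
mapping="8080:80"
echo "docker-machine port: ${mapping%%:*}"   # everything before the colon
echo "container port: ${mapping##*:}"        # everything after the colon
```

The left side always belongs to the docker-machine, the right side to the container.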

At this point we can already interact with our container, but with some limitations: it will only be accessible from your host machine, and only through your docker-machine IP. This is quite useful already, but your application still won't be visible to other physical clients over your network. Try this from different hosts and see what you get:

> curl -X GET http://<your-docker-machine-ip>:8080

You can find your docker-machine IP using > docker-machine ip ${docker_machine_name}. If you don't know your docker-machine's name, it's probably default. You can find out more about your docker-machines with > docker-machine ls.

Mapping ports from your docker-machine to your physical host

SSH tunnels to the rescue. Since we are able to SSH into our docker-machines, we can also do some local port forwarding over that connection. My first attempt was something like:

> docker_machine_name="default";
> docker_machine_ip=$(docker-machine ip $docker_machine_name);

> ssh -i ~/.docker/machine/machines/${docker_machine_name}/id_rsa docker@${docker_machine_ip} -N -L 8000:localhost:8080

Don't wait for this command to exit. It'll run indefinitely until you kill it.

With this, we can forget the docker-machine IP and access our NGINX server from our host through http://localhost:8000. But it's still not visible to the other physical machines on our network.

SSH's -L flag takes 4 parameters. The first one, bind_address, is optional and often left empty, but the behaviour when it's empty depends on another setting, GatewayPorts (see ssh_config for details). By default, it is set to prevent remote hosts from connecting to forwarded ports.
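Incidentally, the same knobs can be set per-host in your ssh client config instead of on the command line. Everything in the fragment below (the host alias, the IP, and the machine name in the key path) is hypothetical; GatewayPorts yes is the client option that lets the forwarded port bind on all interfaces instead of loopback only:

```
# ~/.ssh/config (client side) -- hypothetical entry
Host docker-machine-tunnel
    HostName        # your docker-machine IP
    User docker
    IdentityFile ~/.docker/machine/machines/default/id_rsa
    GatewayPorts yes                # allow remote hosts to use forwarded ports
    LocalForward 8000 localhost:8080
```

With this in place, `ssh -N docker-machine-tunnel` sets up the same forwarding.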

To allow remote hosts to connect without messing with global configuration, we can explicitly set bind_address to our host IP or to the wildcard '*'. Using the wildcard, we would have:

> docker_machine_name="default";
> docker_machine_ip=$(docker-machine ip $docker_machine_name);

> ssh -i ~/.docker/machine/machines/${docker_machine_name}/id_rsa docker@${docker_machine_ip} -N -L '*:8000:localhost:8080'

Yay! We are able to access our Docker-deployed NGINX from any other host in our network, at last. :-)
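If you find yourself typing this often, the invocation can be wrapped in a small helper. This is just a sketch, not part of the original setup: the function only builds and prints the command, so it can be checked without a running VM, and the DOCKER_MACHINE_IP environment variable is a hypothetical stand-in for the output of docker-machine ip:

```shell
# Build the tunnel command for a given docker-machine. In real use, set
# ip=$(docker-machine ip "$machine"); here it comes from an environment
# variable so the function works without a running VM.
tunnel_cmd() {
    machine="${1:-default}"
    host_port="${2:-8000}"
    vm_port="${3:-8080}"
    ip="${DOCKER_MACHINE_IP:-}"   # hypothetical placeholder IP
    key="$HOME/.docker/machine/machines/${machine}/id_rsa"
    echo "ssh -i ${key} docker@${ip} -N -L *:${host_port}:localhost:${vm_port}"
}

# Print the command; run it with eval "$(tunnel_cmd)" and kill it to stop.
tunnel_cmd default 8000 8080
```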

Making the forwarding permanent (for a given docker-machine)

The SSH solution works, but if you find yourself running it too often, it may be a good idea to make the forwarding permanent. Recalling that a docker-machine is pretty much a VirtualBox VM, we can use VBoxManage to configure it from our prompt. An empty host IP in the rule below means VirtualBox will listen on all host interfaces.

> docker_machine_name="default";
> docker_machine_ip=$(docker-machine ip $docker_machine_name);

> VBoxManage controlvm ${docker_machine_name} natpf1 "tcp-port8080,tcp,,8000,localhost,8080"
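The rule string packs six comma-separated fields: name, protocol, host IP, host port, guest IP, guest port (an empty host IP means all interfaces). A small sketch pulling the rule used above apart, just to make the fields explicit:

```shell
# Split a VirtualBox natpf rule into its fields:
# name,protocol,host-ip,host-port,guest-ip,guest-port
rule="tcp-port8080,tcp,,8000,localhost,8080"
IFS=, read -r name proto host_ip host_port guest_ip guest_port <<EOF
$rule
EOF
echo "host ${host_ip:-*}:${host_port} -> guest ${guest_ip}:${guest_port}"
```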

These settings will live for as long as the docker-machine running on VirtualBox lives. If, for any reason, you need to revert this forwarding, you can use:

> VBoxManage controlvm ${docker_machine_name} natpf1 delete "tcp-port8080"

If your docker-machine VM is not running, you need to change controlvm to modifyvm (note that modifyvm takes the rule through the --natpf1 option instead):

> VBoxManage modifyvm ${docker_machine_name} --natpf1 "tcp-port8080,tcp,,8000,localhost,8080"

That's all

With the networking issues out of the way, you can go on and build your great containerized applications on OS X. Myself, I'm diving right back into my Akka Cluster configuration with automatic seed node discovery, everything on top of Docker.

This post was cross-posted on VivaReal's Engineering Blog, where you can find more about how we're using technology to change the real estate market in Brazil.