
Using Docker Overlay Networks: Configuration Guide

06/05/20 10 min. read

Docker and containerisation have changed the way applications and services are deployed and are now mainstream in DevOps and SysOps. However, Docker's networking features remain largely unknown to many. In this article, you’ll learn about Docker’s network options and how to leverage Docker Swarm and overlay networks to enhance and simplify the deployment and connectivity of multi-host container setups.


What’s an overlay network?

An “overlay network” is a virtual network that runs on top of another network. Devices in the underlying network are unaware that they are part of an overlay. Traditional VPNs, for instance, are overlay networks running over the Internet.

The term “overlay” has come to be used extensively (instead of VPN) only since technologies other than PPTP or L2TP were developed to run virtual networks in cloud environments. For those environments, protocols like VXLAN or GENEVE were developed to address specific needs.

The topic covered in this document is described in full detail at https://docs.docker.com/network/. This document summarizes the key information available there.


Network drivers in docker

Docker provides different network drivers:

  • Bridge: The default network driver. Bridge networks are usually used when your applications run in standalone containers that need to communicate on the same host.
  • Host: Removes network isolation between the container and the Docker host, and uses the host’s networking directly.
  • Overlay: Connects multiple Docker daemons together to create a flat virtual network across hosts, where you can establish communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers.
  • Macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network.
  • None: Disables all networking.
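To see which of these drivers your daemon supports, and which default networks Docker has already created, you can query the CLI; a quick sketch:

```shell
# List the network drivers supported by this Docker daemon:
docker info --format '{{.Plugins.Network}}'

# List the existing networks; a fresh install typically shows bridge, host and none:
docker network ls
```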

Driver use cases

User-defined bridge networks are best when you need multiple containers to communicate on the same Docker host.

Host networks are best when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.

Overlay networks are best when you need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services.

Macvlan networks are best when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.

Networking is often an afterthought when designing deployments that run on multiple Docker daemons, yet such deployments can benefit greatly from overlays for a number of reasons:

  • All the overlay traffic between hosts can easily be encrypted.
  • All containers in an overlay share the same name space, so workarounds like extra_hosts entries or double-naming policies can be avoided.
  • There is no need to worry about routing management at the overlay level.
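As an illustration of the shared name space, here is a hedged sketch (container and image names are hypothetical) contrasting the manual host-mapping workaround with the built-in DNS of an overlay:

```shell
# Without a shared network, cross-host name resolution needs manual mappings:
docker run -d --name web --add-host db:192.0.2.10 nginx   # hard-coded address

# On a shared overlay network, Docker's built-in DNS resolves container
# names directly, so no extra hosts entries are needed:
docker run -d --name db --network my_net redis
docker run -d --name web --network my_net nginx           # "db" resolves by name
```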

Configuring docker network overlays

Prerequisites

  • Firewall rules for Docker daemons using overlay networks
    • You need the following ports open to traffic to and from each Docker host participating on an overlay network:
      • TCP port 2377 for cluster management communications
      • TCP and UDP port 7946 for communication among nodes
      • UDP port 4789 for overlay network traffic
    • On certain Linux systems you may need to open these ports in the host firewall. This can be done with firewalld on systems like Red Hat, or with iptables. Here is an example using firewalld on Red Hat:
                                                 $ sudo firewall-cmd --add-port=2377/tcp --permanent
                                                 $ sudo firewall-cmd --add-port=7946/tcp --permanent
                                                 $ sudo firewall-cmd --add-port=7946/udp --permanent
                                                 $ sudo firewall-cmd --add-port=4789/udp --permanent
                                                 $ sudo firewall-cmd --reload
  • Before you can create an overlay network, you need to either initialize your Docker daemon as a swarm manager using docker swarm init or join it to an existing swarm using docker swarm join. Either of these creates the default ingress overlay network which is used by swarm services by default. You need to do this even if you never plan to use swarm services. Afterward, you can create additional user-defined overlay networks.
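Assuming the daemon has been initialized or joined as described, you can verify the swarm state and the default ingress network before proceeding:

```shell
# Should print "active" once `docker swarm init` or `docker swarm join` has run:
docker info --format '{{.Swarm.LocalNodeState}}'

# The default ingress overlay network created by the swarm:
docker network ls --filter driver=overlay
```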

Setup

Let’s set up a network connecting three containers on three different Docker hosts. The goal here is not to build swarm services but to improve the networking of standalone containers running on different hosts.

  1. Create a swarm

On host1:

                                                  $ docker swarm init

Note the token that the command outputs, as you will need it later. Host1 becomes the swarm manager.

On host2 and host3:

                                                  $ docker swarm join --token <TOKEN> <IP-ADDRESS-OF-MANAGER>:2377
  2. On host1 (because it’s the manager), verify that the swarm is ready:
                                                  $ docker node ls
                                                  ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
                                                  d68ace5iraw6whp7llvgjpu48 *   ip-172-31-34-146    Ready               Active              Leader
                                                  nvp5rwavvb8lhdggo8fcf7plg     ip-172-31-35-151    Ready               Active
                                                  ouvx2l7qfcxisoyms8mtkgahw     ip-172-31-36-89     Ready               Active

  3. On host1 (because it’s the manager), verify the default networks:
                                                 $ docker network ls
                                                 NETWORK ID          NAME                DRIVER              SCOPE
                                                 495c570066be        bridge              bridge              local
                                                 961c6cae9945        docker_gwbridge     bridge              local
                                                 ff35ceda3643        host                host                local
                                                 trtnl4tqnc3n        ingress             overlay             swarm
                                                 c8357deec9cb        none                null                local
  4. If you do not need data encryption: on host1 (because it’s the manager), create the overlay network (for instance, my_net). The --attachable flag is needed to allow standalone containers to connect to my_net:
                                                  $ docker network create -d overlay --attachable my_net
  5. All swarm management traffic is encrypted by default using AES in GCM mode. To encrypt application data traffic as well, add --opt encrypted to the command above. This will build IPsec tunnels among the Docker hosts. Encryption also uses AES-GCM, with keys rotated every 12 hours. Overlay encryption is not supported on Windows hosts.
                                                  $ docker network create --opt encrypted -d overlay --attachable my_net
  6. After all these steps you will have successfully built an overlay network called my_net spanning three different Docker hosts.
  7. Give it a try. Create three test containers to confirm reachability and name resolution inside the overlay.
  8. On host1:
                                                 $ docker run -it --name alpine1 --network my_net alpine
                                                 / #
  9. On host2:
                                                 $ docker run -it --name alpine2 --network my_net alpine
                                                 / #
  10. On host3:
                                                  $ docker run -it --name alpine3 --network my_net alpine
                                                  / #
  11. Now check connectivity. You should receive replies to all these commands, demonstrating reachability and name resolution:

a) On host1:

                                                 / # ping alpine2
                                                 PING alpine2 (<IP>): 56 data bytes
                                                 64 bytes from <IP>: seq=0 ttl=64 time=0.500 ms
                                                 / # ping alpine3
                                                 PING alpine3 (<IP>): 56 data bytes
                                                 64 bytes from <IP>: seq=0 ttl=64 time=0.500 ms

b) On host2:

                                                 / # ping alpine1
                                                 PING alpine1 (<IP>): 56 data bytes
                                                 64 bytes from <IP>: seq=0 ttl=64 time=0.500 ms
                                                 / # ping alpine3
                                                 PING alpine3 (<IP>): 56 data bytes
                                                 64 bytes from <IP>: seq=0 ttl=64 time=0.500 ms

c) On host3:

                                                 / # ping alpine1
                                                 PING alpine1 (<IP>): 56 data bytes
                                                 64 bytes from <IP>: seq=0 ttl=64 time=0.500 ms
                                                 / # ping alpine2
                                                 PING alpine2 (<IP>): 56 data bytes
                                                 64 bytes from <IP>: seq=0 ttl=64 time=0.500 ms
  12. You can now exit from the alpine containers and remove them.
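Once the test is done, the overlay can be inspected and the test resources cleaned up; a sketch (run the container removal on the host where each container lives):

```shell
# On the manager: the "Peers" entry lists the hosts attached to the overlay
docker network inspect my_net

# Remove the test containers (alpine2/alpine3 on host2/host3 respectively):
docker rm -f alpine1

# Finally, remove the network once no containers are attached to it:
docker network rm my_net
```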

Architecting the swarm managers

This is a summary of manager architecture. You can find more information on swarm architecture at https://docs.docker.com/engine/swarm/admin_guide/

Managers and workers

Swarm manager nodes take care of managing the swarm and storing its state. If the managers are lost, swarm services keep on running, but further management and changes are no longer possible; to recover management you will need to create a new cluster.

So, for production you should always consider setting up several managers to provide high availability (HA) for swarm management.


Managers maintain a consistent view of the swarm state using a Raft implementation. As in our previous example, they can also run containers; they can be relieved of this task if needed to avoid resource starvation (see the documentation). They can also be demoted to workers with docker node demote.

Workers just run containers. They can be promoted to managers with docker node promote, for instance when a manager needs to be taken down for maintenance.
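These role changes map to a handful of commands; a sketch, where the node name is a placeholder:

```shell
NODE=worker-1   # placeholder hostname, as shown by `docker node ls`

# Promote a worker to manager before taking another manager down:
docker node promote "$NODE"

# Relieve a manager from running workloads so it only handles cluster state:
docker node update --availability drain "$NODE"

# Demote it back to a plain worker when no longer needed as a manager:
docker node demote "$NODE"
```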

HA of managers

Raft allows the swarm to remain manageable as long as a majority (quorum) of manager nodes is alive. So, it is recommended that you always use an odd number of manager nodes. See for instance the following table (figures = number of manager nodes).

Swarm Size   Majority   Fault Tolerance
    1            1             0
    2            2             0
    3            2             1
    4            3             1
    5            3             2
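The table follows from Raft's quorum rule: with N managers, a majority of floor(N/2)+1 must acknowledge each change, so floor((N-1)/2) failures can be tolerated. A quick sketch in shell arithmetic:

```shell
# Raft quorum math for a swarm with N manager nodes:
quorum()    { echo $(( $1 / 2 + 1 )); }    # majority needed to commit changes
tolerance() { echo $(( ($1 - 1) / 2 )); }  # managers that may fail safely

quorum 5       # prints 3
tolerance 5    # prints 2
```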

There is no hard limit on the number of manager nodes, but having many managers increases the time needed to commit changes to the swarm state, especially when there is some network latency between them, as more nodes must acknowledge each proposal.

When deploying the managers, you should also consider the physical topology.

For a single-region setup you should spread the managers across different availability zones (AZs): for instance, one manager per AZ for a 3-node setup, or two AZs with two nodes each and one AZ with a single node for a 5-node setup.

For a dual-region setup you could have one region with three nodes, one per AZ, and another region with two nodes (again, one per AZ).

While spreading the nodes across different regions improves your DR capabilities, take care not to place the nodes too far apart, as latency may impact their performance, as discussed previously.

How to add a manager

Adding a manager node is pretty much like adding a worker node.

On an existing manager node get the token required to enrol new managers:

                                                 $ docker swarm join-token manager
                                                 To add a manager to this swarm, run the following command:
                                                 docker swarm join \
                                                 --token SWMTKN-1-61ztec5kyafptydic6jfc1i33t37flcl4nuipzcusor96k7kby-5vy9t8u35tuqm7vh67lrz9xp6 \
                                                 <IP-ADDRESS-OF-MANAGER>:2377
Now, just run that command on the new docker node that needs to join as a manager to the swarm.
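You can then confirm the result from any existing manager; newly joined managers should show a MANAGER STATUS of Reachable (the current leader shows Leader):

```shell
# Run on any manager node to list all nodes and their manager status:
docker node ls
```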

Conclusion

These are the basics to deploy and use Docker overlay networks. Other than their creation procedure, from a container perspective they behave much like bridge networks, and they provide many features and a great deal of flexibility that simplify networking in Docker multi-host scenarios.

Please note that you can specify options when creating a new overlay network. Check docker network create --help and https://docs.docker.com/network/overlay/.
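For instance, a hedged sketch combining a few of those options (the subnet value here is arbitrary):

```shell
# Create an attachable, encrypted overlay with an explicit address range:
docker network create -d overlay \
  --attachable \
  --opt encrypted \
  --subnet 10.10.0.0/24 \
  my_net
```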

You may find many more explanations on the Internet if you search around a bit; there are, for instance, guides showing how to deploy a Hyperledger Fabric network across multiple hosts using Docker Swarm and overlay networks.


Jaime Gómez García


Architecture and IT & telecom infrastructure expert. I have been learning about the Internet, networks and applied cryptography every day since the mid-’90s.



