
Docker Networking 101 – User Defined Networks

In this post, I'd like to cover some of the new Docker network features. Docker 1.9 saw the release of user defined networks, and the most recent version, 1.10, added some additional features. In this post, we'll cover the basics of the new network model as well as show some examples of what these new features provide.

So what's new? Well – lots. To start with, let's take a look at a Docker host running the newest version of Docker (1.10).

Note: I'm running this demo on CentOS 7 boxes. The default repository had version 1.8 so I had to update to the latest by using the update method shown in a previous post here. Before you proceed, verify that 'docker version' shows you on the correct release.

You'll notice that the Docker CLI now provides a new option to interact with the network through the 'docker network' command…

Alright – so let's start with the basics and see what's already defined…


By default a base Docker installation has these three networks defined. The networks are permanent and cannot be modified. Taking a closer look, you likely notice that these predefined networks are the same as the network models we had in earlier versions of Docker. You could start a container in bridge mode by default, in host mode by specifying '--net=host', and without an interface by specifying '--net=none'. To level set – everything that was there before is still here. To make sure everything still works as expected, let's run through building a container under each network type.
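For reference, 'docker network ls' on a fresh install should return something along these lines (the IDs will differ on your host)…

docker network ls
NETWORK ID          NAME                DRIVER
<id>                bridge              bridge
<id>                host                host
<id>                none                null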

Note: These exact commands were taken out of my earlier series of Docker networking 101 posts to show that the command syntax has not changed with the addition of multi-host networking. Those posts can be found here.

Host mode

docker run -d --name web1 --net=host jonlangemak/docker:webinstance1

Executing the above command will spin up a test web server with the container's network stack being mapped directly to that of the host. Once the container is running, we should be able to access the web server through the Docker host's IP address…

Note: You either need to disable firewalld on the Docker host or add the appropriate rules for this to work.
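For example, assuming the Docker host's IP address is 10.20.30.230 (the address used for docker1 later in this post), a quick check from another machine might look like this…

curl http://10.20.30.230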

Bridge Mode

docker run -d --name web1 -p 8081:80 jonlangemak/docker:webinstance1
docker run -d --name web2 -p 8082:80 jonlangemak/docker:webinstance2

Here we're running the default bridge mode and mapping ports into the containers. Running those two containers should give you the web servers you're looking for on ports 8081 and 8082…

In addition, if we connect to the containers directly, we can see that communication between the two containers occurs directly across the docker0 bridge, never leaving the host…

Here we can see that web1 has an ARP entry for web2. Looking at web2, we can see the MAC address is identical…
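If you'd like to run the same checks yourself, something along these lines should do it, assuming the container image includes the iproute2 utilities…

docker exec web1 ip neighbor show
docker exec web2 ip link show eth0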

None Mode

docker run -d --name web1 --net=none jonlangemak/docker:webinstance1

In this example we can see that the container doesn't receive any interface at all…

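A quick way to confirm this, assuming the image includes the iproute2 utilities, is to list the container's interfaces – you should only see the loopback…

docker exec web1 ip addr show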
As you can see, all three modes work just as they had in previous versions of Docker. So now that we've covered the existing network functions, let's talk about the new user defined networks…

User defined bridge networks
The easiest user defined network to use is the bridge. Defining a new bridge is pretty easy. Here's a quick example…

docker network create --driver=bridge \
--subnet=192.168.127.0/24 --gateway=192.168.127.1 \
--ip-range=192.168.127.128/25 testbridge

Here I create a new bridge named 'testbridge' and provide the following attributes…

Gateway – In this case I set it to 192.168.127.1, which will be the IP of the bridge created on the Docker host. We can see this by looking at the Docker host's interfaces…

Subnet – I specified this as 192.168.127.0/24. We can see in the output above that this is the CIDR associated with the bridge.

IP-range – If you wish to define a smaller subnet from which Docker can allocate container IPs, you can use this flag. The subnet you specify here must exist within the bridge subnet itself. In my case, I specified the second half of the defined subnet. When I start a container, I'll get an IP out of that range if I assign the container to this bridge…
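For instance, the container for this test was started with something along these lines (a reconstruction – the image and port mapping mirror the earlier bridge mode example)…

docker run -d --net=testbridge -p 8081:80 --name web1 jonlangemak/docker:webinstance1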

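If you want to confirm the subnet, gateway, and IP range you passed in, 'docker network inspect' will echo them back. A rough sketch of the relevant piece of the output (the exact format varies a bit between Docker releases)…

docker network inspect testbridge
...
"IPAM": {
    "Driver": "default",
    "Config": [
        {
            "Subnet": "192.168.127.0/24",
            "IPRange": "192.168.127.128/25",
            "Gateway": "192.168.127.1"
        }
    ]
}
...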
Our new bridge acts much like the docker0 bridge. Ports can be mapped to the physical host in the same way. In the above example, we mapped port 8081 to port 80 in the container. Despite this container being on a different bridge, the connectivity still works…


We can make this example slightly more interesting by removing the existing container, removing the 'testbridge', and redefining it slightly differently…

docker stop web1
docker rm web1
docker network rm testbridge
docker network create --driver=bridge \
--subnet=192.168.127.0/24 --gateway=192.168.127.1 \
--ip-range=192.168.127.128/25 --internal testbridge

The only change here is the addition of the '--internal' flag. This prevents any external communication from the bridge. Let's check this out by defining the container like this…

docker run -d --net=testbridge -p 8081:80 --name web1 jonlangemak/docker:webinstance1

You'll note that in this case, we can no longer access the web server container through the exposed port…

It's obvious that the '--internal' flag prevents containers attached to the bridge from talking outside of the host. So while we can now define new bridges and associate newly spawned containers with them, that by itself is not terribly interesting. What would be more interesting is the ability to connect existing containers to these new bridges. As luck would have it, we can use the docker network 'connect' and 'disconnect' commands to add and remove containers from any defined bridge. Let's start by attaching the container web1 to the default docker0 bridge (bridge)…

docker network connect bridge web1

If we look at the network configuration of the container, we can see that it now has two NICs. One associated with 'bridge' (the docker0 bridge), and another associated with 'testbridge'…

If we check again, we'll see that we can now once again access the web server through the mapped port across the 'bridge' interface…

Next, let's spin up our web2 container and attach it to our default docker0 bridge…

docker run -d -p 8082:80 --name web2 jonlangemak/docker:webinstance2

Before we go too far – let's take a logical look at where we stand…

We have a physical host (docker1) with a NIC called 'ENS0' which sits on the physical network with the IP address of 10.20.30.230. That host has two Linux bridges called 'bridge' (docker0) and 'testbridge', each with their own defined IP addresses. We also have two containers, one called web1 which is associated with both bridges, and a second, web2, that's associated with only the native Docker bridge.

Given this diagram, you might assume that web1 and web2 would be able to communicate directly with each other since they are connected to the same bridge. However, if you recall our earlier posts, Docker has something called ICC (Inter Container Communication) mode. When ICC is set to false, containers can't communicate with each other directly across the docker0 bridge.

Note: There's a whole section on ICC and linking down below, so if you don't recall, don't worry!

In the case of this example, I have set ICC mode to false, meaning that web1 cannot talk to web2 across the docker0 bridge unless we define a link. However, ICC mode only applies to the default bridge (docker0). If we connect both containers to the bridge 'testbridge' they should be able to communicate directly across that bridge. Let's give it a try…

docker network connect testbridge web2

So let's try it from the container and see what happens…

Success. User defined bridges are pretty easy to define and map containers to. Before we move on to user defined overlays, I want to briefly talk about linking and how it's changed with the introduction of user defined networks.

Container Linking
Docker linking has been around since the early versions and was commonly mistaken for some kind of network feature or function. In reality, it has very little to do with network policy, especially in Docker's default configuration. Let's take a quick look at how linking worked before user defined networks.

In a default configuration, Docker has the ICC value set to true. In this mode, all containers on the docker0 bridge can talk directly to each other on any port they like. We saw this in action earlier with the bridge mode example where web1 and web2 were able to ping each other. If we change the default configuration and disable ICC, we'll see a different outcome. For instance, if we change the ICC value to 'false' in '/etc/sysconfig/docker', we'll notice that the above example no longer works…
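For reference, disabling ICC is just a daemon flag. On these CentOS hosts that means adding it to the OPTIONS line in '/etc/sysconfig/docker' and restarting Docker – a sketch of what that might look like (your other flags may differ)…

OPTIONS='-H unix:///var/run/docker.sock --icc=false'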

If we want web2 to be able to access web1, we can 'link' the containers. Linking a container to another container allows the containers to talk to each other on the containers' exposed ports.
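The legacy link is just an extra flag at run time. A hypothetical version of the web2 run command from earlier, this time linked to web1, would look something like this…

docker run -d -p 8082:80 --name web2 --link web1:web1 jonlangemak/docker:webinstance2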

Above, you can see that once the link is in place, I can't ping web1 from web2, but I can access web1 on its exposed port. In this case, that port is 80. So linking with ICC disabled only allows linked containers to talk to each other on their exposed ports. This is the only way in which linking intersects with network or security policy. The other feature linking gives you is name and service resolution. For instance, let's look at the environment variables on web2 once we link it to web1…

In addition to the environment variables, you'll also notice that web2's hosts file has been updated to include the IP address of the web1 container. This means that I can now access the container by name rather than by IP address. As you can see, linking in previous versions of Docker had its uses, and that same functionality is still available today.
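If you want to poke at these yourself, both artifacts are easy to pull from the host (assuming the image has the standard shell utilities)…

docker exec web2 cat /etc/hosts
docker exec web2 env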

That being said, user defined networks offer a pretty slick alternative to linking. So let's go back to our example above where web1 and web2 are communicating across the 'testbridge' bridge. At this point, we haven't defined any links at all, but let's try pinging web2 by name from web1…

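In case you're following along, the check was something along these lines (assuming the image ships with ping)…

docker exec -it web1 ping -c 2 web2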
Ok – so that's pretty cool, but how is that working? Let's check the environment variables and the hosts file on the container…

Nothing at all here that would statically map the name web2 to an IP address. So how is this working? Docker now has an embedded DNS server. Container names are now registered with the Docker daemon and resolvable by any other containers on the same host. However – this functionality ONLY exists on user defined networks. You'll note that the above ping returned an IP address associated with 'testbridge', not the default docker0 bridge.

That means I no longer need to statically link containers together in order for them to be able to communicate via name. In addition to this automatic behavior, you can also define global aliases and links on the user defined networks. For example, now try running these two commands…

docker network disconnect testbridge web1
docker network connect --alias=thebestserver --link=web2:webtwo testbridge web1

Above we removed web1 from 'testbridge' and then re-added it, specifying a link and an alias. When using the link flag with user defined networks, it functions much in the same way as it did in the legacy linking method. Web1 will be able to resolve the container web2 either by its name or its linked alias 'webtwo'. In addition, user defined networks also provide what are referred to as 'network-scoped aliases'. These aliases can be resolved by any container on the user defined network segment. So whereas links are defined by the container that wishes to use the link, aliases are defined by the container advertising the alias. Let's log into each container and try pinging via the link and the alias…

In the case of web1, it's able to ping both the defined link as well as the alias name. Let's try web2…

So we can see that links used with user defined networks are locally significant to the container. On the other hand, aliases are associated with a container when it joins a network, and are globally resolvable by any container on that same user defined network.

User defined overlay networks
The second and last built-in user defined network type is the overlay. Unlike the bridge type, the overlay requires an external key value store in order to store information such as the networks, endpoints, IP addresses, and discovery information. In most examples that key value store is generally Consul, but it can also be Etcd or ZooKeeper. So let's look at the lab we're going to be using for this example…
Here we have three Docker hosts. Docker1 and docker2 live on 10.20.30.0/24 and docker3 lives on 192.168.30.0/24. Starting with a blank slate, all of the hosts have the docker0 bridge defined but no other user defined networks or containers running.

The first thing we need to do is to tell all of the Docker hosts where to find the key value store. This is done by editing the Docker configuration settings in '/etc/sysconfig/docker' and adding some option flags. In my case, my 'OPTIONS' now look like this…

OPTIONS='-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://10.20.30.230:8500/network --cluster-advertise=ens18:2375'

Make sure that you adjust your options to account for the IP address of the Docker host running Consul and the interface name defined under the 'cluster-advertise' flag. Update these options on all hosts participating in the cluster, then make sure you restart the Docker service.
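Once the daemons come back up, it's worth a quick sanity check that the cluster store settings took effect – in this release 'docker info' reports them (the exact wording may vary)…

docker info | grep -i cluster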

Once Docker is up and running, we need to deploy the key value store for the overlay driver to use. As luck would have it, Consul offers their service as a container. So let's deploy that container on docker1…

docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap
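If you want to verify Consul is answering before moving on, a quick hit against its HTTP API from any of the hosts should return the node list (using docker1's address from my lab)…

curl http://10.20.30.230:8500/v1/catalog/nodes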

Once the Consul container is running, we're all set to start defining overlay networks. Let's head over to the docker3 host and define an overlay network…

docker network create -d overlay --subnet=10.10.10.0/24 testoverlay

Now if we look at docker1 or docker2, we should see the new overlay defined…

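On docker1, for instance, the network list should now include the overlay – a trimmed sketch of what that looks like (IDs omitted)…

docker network ls
NETWORK ID          NAME                DRIVER
<id>                bridge              bridge
<id>                host                host
<id>                none                null
<id>                testoverlay         overlay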
Perfect, so things are working as expected. Let's now run one of our web containers on the host docker3…

Note: Unlike bridges, overlay networks do not pre-create the required interfaces on the Docker host until they are used by a container. Don't be surprised if you don't see these generated the instant you create the network.

docker run -d --net=testoverlay -p 8081:80 --name web1 jonlangemak/docker:webinstance1

Nothing too exciting here. Much like our other examples, we can now access the web server by browsing to the host docker3 on port 8081…

Let's fire up the same container on docker2 and see what we get…

So it seems that container names across a user defined overlay can't be common. This makes sense, so let's instead load the second web instance on this host…

docker run -d --net=testoverlay -p 8082:80 --name web2 jonlangemak/docker:webinstance2

Once this container is running, let's test the overlay by pinging web1 from web2…

Very cool. If we look at the physical network between docker2 and docker3, we'll actually see the VXLAN encapsulated packets traversing the network between the two physical Docker hosts…
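If you want to see this for yourself, a capture on the host's physical interface filtered on the standard VXLAN UDP port should show the encapsulated traffic (the interface name comes from my lab – adjust it to match yours)…

tcpdump -nn -i ens18 udp port 4789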

It should be noted that there isn't a bridge associated with the overlay itself. However – there is a bridge defined on each host which can be used for mapping ports of the physical host to containers that are a member of an overlay network. For instance, let's look at the interfaces defined on the host docker3…

Notice that there's a 'docker_gwbridge' bridge defined. If we look at the interfaces of the container itself, we see that it also has two interfaces…

Eth0 is a member of the overlay network, and eth1 is a member of the gateway bridge we saw defined on the host. In the case that you need to expose a port from a container on an overlay network, you would need to use the 'docker_gwbridge' bridge. However, much like the user defined bridge, you can prevent external access by specifying the '--internal' flag during network creation. This will prevent the container from receiving an additional interface associated with the gateway bridge. It does not, however, prevent the 'docker_gwbridge' from being created on the host.

Since our last example is going to use an internal overlay, let's delete the web1 and web2 containers as well as the overlay network, and rebuild the overlay network using the internal flag…

#Docker2
docker stop web2
docker rm web2

#Docker3
docker stop web1
docker rm web1
docker network rm testoverlay
docker network create -d overlay --internal --subnet=10.10.10.0/24 internaloverlay
docker run -d --net=internaloverlay --name web1 jonlangemak/docker:webinstance1

#Docker2
docker run -d --net=internaloverlay --name web2 jonlangemak/docker:webinstance2

So now we have two containers, each with a single interface on the overlay network. Let's make sure they can talk to each other…

Perfect, the overlay is working. So at this point – our diagram sort of looks like this…

Not very exciting at this point, especially considering we have no means to access the web server running on either of these containers from the outside world. To remedy that, why don't we deploy a load balancer on docker1. To do this we're going to use HAProxy, so our first step will be coming up with a config file. The sample I'm using looks like this…

global
    log 127.0.0.1   local0
    log 127.0.0.1   local1 notice

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    option  forwardfor
    option  http-server-close
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    stats enable
    stats auth user:password
    stats uri /haproxyStats

frontend all
    bind *:80
    use_backend webservice

backend webservice
    balance roundrobin
    option httpclose
    option forwardfor
    server web1 web1:80 check
    server web2 web2:80 check
    option httpchk HEAD /index.html HTTP/1.0

For the sake of this test – let's just focus on the backend section, which defines two servers: one called web1 that's accessible at the address web1:80, and a second called web2 that's accessible at the address web2:80. Save this config file onto your host docker1; in my case, I put it in '/root/haproxy/haproxy.cfg'. Then we just fire up the container with this syntax…

docker run -d --net=internaloverlay --name haproxy -p 80:80 -v ~/haproxy:/usr/local/etc/haproxy/ haproxy

After this container kicks off, our topology now looks more like this…

So while the HAProxy container can now talk to both of the backend servers, we still can't talk to it on the frontend. If you recall, we defined the overlay as internal, so we need to find another way to access the frontend. This can easily be achieved by connecting the HAProxy container to the native docker0 bridge using the network connect command…

docker network connect bridge haproxy

Once this is done, you should be able to hit the front end of the HAProxy container by hitting the docker1 host on port 80, since that's the port we exposed.
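A few repeated requests against docker1 should come back alternating between the two web instances (a rough check using docker1's address)…

for i in $(seq 1 4); do curl -s http://10.20.30.230/; done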

And with any luck, you should see the HAProxy node load balancing requests between the two web server containers. Note that this also could have been accomplished by leaving the overlay network as an external network. In that case, the port 80 mapping we did with the HAProxy container would have been exposed through the 'docker_gwbridge' bridge and we wouldn't have needed to add a second bridge interface to the container. I did it that way just to show you that you have options.

Bottom line – lots of new features in the Docker networking space. I'm hoping to get another blog out shortly to discuss other network plugins. Thanks for reading!


Source: https://www.dasblinkenlichten.com/docker-networking-101-user-defined-networks/