2. GOAL
The purpose of these slides is to give some information on the use of multicast streams in a Dockerized context.
3. For Docker containers to communicate with each other, and with the outside world via the host machine, a networking layer is necessary. This layer also provides a degree of container isolation, which makes it possible to build Docker applications that work together securely.
Docker supports different types of networks, each suited to certain use cases, which we will cover in this chapter.
The Docker network system relies on drivers. Several drivers exist, each providing different functionality.
5. The bridge driver
When you install Docker for the first time, it automatically creates a bridge network named bridge, attached to the docker0 network interface (viewable with the ip addr show docker0 command). Each new Docker container is automatically connected to this network unless a custom network is specified.
The bridge network is the most commonly used type of network. It is limited to containers on a single host running the Docker Engine. Containers that use this driver can communicate with each other, but they are not reachable from the outside. Before containers on a bridge network can be reached from the outside world, you must configure port mapping, as shown below.
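For example, publishing a port with the -p option maps a container port onto a host port (the image name and port numbers below are just illustrative):
# Map host port 8080 to container port 80
$ docker run -d --rm -p 8080:80 --name web nginx
# The container is now reachable from outside through the host's IP
$ curl http://localhost:8080/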
6. The none driver
This is the ideal network type if you want to prohibit all internal and external communication with your container, because the container will be left without any network interface (except the loopback / lo interface).
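A quick way to observe this (alpine is just an example image):
$ docker run --rm --network none alpine ip addr show
# Typically only the loopback interface (lo) is listed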
7. The host driver
This type of network allows a container to use the same network interfaces as the host.
It therefore removes the network isolation between the container and the host, and the container's services are by default reachable from the outside.
As a result, the container uses the same IP address as your host machine.
8. The host driver
Network context from the host:
$ ip addr show eno8403
eno8403: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether dc:85:de:ce:04:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.11/24 brd 192.168.0.255 scope global dynamic noprefixroute eno8403
       valid_lft 54874sec preferred_lft 54874sec
    inet6 fe80::335:f1f5:127d:b62c/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Network context from a container using the host driver:
$ docker run -it --rm --network host --name net alpine ip addr show eno8403
eno8403: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether dc:85:de:ce:04:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.11/24 brd 192.168.0.255 scope global dynamic noprefixroute eno8403
       valid_lft 54874sec preferred_lft 54874sec
    inet6 fe80::335:f1f5:127d:b62c/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
→ Same result in both contexts!
9. The overlay driver
If you want native multi-host networking, you need to use the overlay driver.
It creates a distributed network spanning multiple hosts running the Docker Engine.
Docker transparently manages the routing of each packet to and from the right host and container.
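As a minimal sketch (overlay networks require Swarm mode to be active; the network and container names are arbitrary):
# Initialize Swarm mode on the first host
$ docker swarm init
# Create an attachable overlay network so standalone containers can join it
$ docker network create -d overlay --attachable my-overlay
# Containers started on any Swarm host with this network can reach each other by name
$ docker run -dit --name svc1 --network my-overlay alpine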
10. The macvlan driver
Using the macvlan driver is sometimes the best choice for applications that expect to be directly connected to the physical network, because the macvlan driver allows you to assign a MAC address to a container, making it appear as a physical device on your network.
The Docker Engine routes traffic to containers based on their MAC addresses.
12. The host driver
Managing multicast streams with Docker can be a bit tricky, because Docker containers are primarily designed for unicast network communication. Multicast requires special handling, since multicast packets are sent to a group of hosts rather than to a single host. While Docker doesn't natively support multicast, you can work with it using some workarounds and network configuration.
Here's a general approach to managing multicast streams with Docker:
Use Host Networking Mode:
When you run a Docker container, you can specify the network mode using the --network flag. To enable multicast within a container, you can use the host network mode, which lets the container share the host's network namespace. However, note that this approach is less isolated and may not be suitable for all use cases.
$ docker run --network host <your-image>
13. The host driver
Enable Multicast on the Host:
Ensure that multicast is enabled on your host machine. You might need to configure your host's network stack to accept multicast packets. On Linux, this usually involves setting kernel parameters. For example, you can enable IP forwarding (a prerequisite for multicast routing) and add multicast routes as needed:
# Enable IP forwarding (you may need to adjust this based on your requirements)
$ echo 1 > /proc/sys/net/ipv4/ip_forward
# Add a multicast route (replace <multicast-group> with the actual multicast group)
$ ip route add <multicast-group> dev <interface> scope link
14. The host driver
Configure the Multicast Application:
Your multicast application running inside the Docker container should be configured to send or receive multicast packets using the appropriate multicast group and port.
Specify the Multicast Group Address:
Ensure that your multicast application is set to use the specific multicast group address you intend to work with. This address must match the multicast group address you set up on the host.
Test Your Setup:
Run your Docker container in host network mode and test the multicast functionality within the container. This may involve sending or receiving multicast packets as per your application's requirements (see the sketch below).
Security Considerations:
Be mindful of the security implications of host networking mode, as it grants the container broader access to the host's network stack. Ensure that your Docker setup is secure and that you follow best practices for container security.
Monitoring and Troubleshooting:
Use tools like tcpdump or Wireshark to monitor multicast traffic on the host and within the container. This can help you diagnose any network-related issues.
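For illustration, here is one possible smoke test with host networking, using iperf (version 2, which supports multicast); the group address 239.255.1.1 and the image name are assumptions to adapt:
# On the host: listen on a multicast group (iperf 2 syntax)
$ iperf -s -u -B 239.255.1.1 -i 1
# From a container sharing the host's network stack: send to that group
# (assumes iperf is installed in the image; -T sets the multicast TTL)
$ docker run --rm --network host <your-image> iperf -c 239.255.1.1 -u -T 3 -t 5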
15. The bridge driver
Otherwise, in bridge mode it is also possible, by managing IGMP subscriptions by hand:
Multicast to/from a Docker bridge network is currently not possible out of the box. This is due to limitations in how the Linux kernel supports multicast routing: packets are forwarded to the Docker bridge using iptables and the unicast routing table, but multicast packets are handled differently by the kernel.
A workaround is to run a tool like smcrouted (https://github.com/troglobit/smcroute) on the host (or in a container with access to the host network). This process does the work of managing the Linux multicast forwarding cache, as sketched below.
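As an illustration, a minimal smcroute configuration might look like this (the interface names eth0 and docker0 and the group address 239.255.1.1 are assumptions to adapt to your setup):
# /etc/smcroute.conf
# Join the group on the physical interface so the upstream switch keeps forwarding it
mgroup from eth0 group 239.255.1.1
# Forward that group's traffic from the physical interface to the docker0 bridge
mroute from eth0 group 239.255.1.1 to docker0
# Then start the daemon:
$ sudo smcrouted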
16. The macvlan driver
Managing multicast streams with the macvlan driver in Docker can be more straightforward than with other networking modes. The macvlan driver gives each Docker container its own unique MAC address, so it appears as a separate device on the network. Here is how you can manage multicast streams using the macvlan driver:
Create a Macvlan Network:
$ docker network create -d macvlan \
    --subnet=<subnet> \
    --gateway=<gateway> \
    --ip-range=<ip-range> \
    -o parent=<physical-interface> \
    <network-name>
<subnet>: The subnet for your containers.
<gateway>: The gateway IP for your containers.
<ip-range>: The range of IPs that can be allocated to containers.
<physical-interface>: The name of your physical network interface.
<network-name>: The name of the macvlan network.
Replace the placeholders with your specific network configuration.
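For instance, a filled-in version might look like this (all values, including eth0 and the 192.168.0.0/24 subnet, are assumptions to adapt to your LAN):
$ docker network create -d macvlan \
    --subnet=192.168.0.0/24 \
    --gateway=192.168.0.1 \
    --ip-range=192.168.0.64/26 \
    -o parent=eth0 \
    macvlan-net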
17. The macvlan driver
Run Containers with Macvlan Networking:
$ docker run --network=<network-name> -itd --name=<container-name> <your-image>
<network-name>: The name of the macvlan network you created.
<container-name>: A name for your Docker container.
<your-image>: The Docker image you want to run.
Within your Docker containers, you can manage multicast streams as you would on a physical host. Configure your multicast application to use the macvlan network interface for sending and receiving multicast traffic. Then test your multicast streams to ensure that they function as expected within the Docker containers.
Please note the following considerations:
Containers connected to a macvlan network have direct access to the physical network and may require appropriate permissions and configuration on your network infrastructure.
Ensure that the multicast application inside the container is configured to use the macvlan network interface for multicast communication.
Depending on your network and router configuration, you may need to set up multicast routing or enable multicast support on your network infrastructure to ensure proper multicast traffic flow.
Always exercise caution when working with multicast traffic, as it can have complex interactions with network infrastructure and may require additional configuration and permissions.
19. Create and collect information from a Docker network
The command to create a Docker network is:
$ docker network create --driver <DRIVER TYPE> <NETWORK NAME>
In this example we will create a bridge-type network named mon-bridge:
$ docker network create --driver bridge mon-bridge
We will then list the Docker networks with the following command:
$ docker network ls
Result:
NETWORK ID     NAME                    DRIVER    SCOPE
58b8305ce041   bridge                  bridge    local
91d7f01dad50   host                    host      local
ccdbdbf708db   mon-bridge              bridge    local
10ee25f56420   myimagedocker_default   bridge    local
6851e9b8e06e   none                    null      local
20. It is possible to collect information about a Docker network, such as its network configuration, by typing the following command:
$ docker network inspect mon-bridge
Result:
[
{
"Name": "mon-bridge",
"Id": "ccdbdbf708db7fa901b512c8256bc7f700a7914dfaf6e8182bb5183a95f8dd9b",
...
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.21.0.0/16",
"Gateway": "172.21.0.1"
}
]
},
...
"Labels": {}
}
]
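As a side note, the inspect output can be filtered with a Go template, for example to extract just the subnet (a sketch using the standard --format option):
$ docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' mon-bridge
172.21.0.0/16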
21. You can override the Subnet and Gateway values by using the --subnet and --gateway options of the docker network create command, as follows:
$ docker network create --driver bridge --subnet=172.16.86.0/24 --gateway=172.16.86.1 my-bridge
For this example, we will connect two containers to our previously created bridge network:
$ docker run -dit --name alpine1 --network mon-bridge alpine
$ docker run -dit --name alpine2 --network mon-bridge alpine
22. If we inspect our mon-bridge network again, we will see our two new containers in the information returned:
$ docker network inspect mon-bridge
Result:
[
{
"Name": "mon-bridge",
"Id": "ccdbdbf708db7fa901b512c8256bc7f700a7914dfaf6e8182bb5183a95f8dd9b",
...
"Containers": {
"1ab5f1815d98cd492c69a63662419e0eba891c0cadb2cbdd0fb939ab25f94b33": {
"Name": "alpine1",
"EndpointID": "5f04963f9ec084df659cfc680b9ec32c44237dc89e96184fe4f2310ba6af7570",
"MacAddress": "02:42:ac:15:00:02",
"IPv4Address": "172.21.0.2/16",
"IPv6Address": ""
},
"a935d2e1ddf76fe49cdb1950653f4a093928020b49ebfea4130ff9d712ffb1d6": {
"Name": "alpine2",
"EndpointID": "3e009b56104a1bf9106bc622043a2ee06010b102279e24b4807c7b7ffec166dd",
"MacAddress": "02:42:ac:15:00:03",
"IPv4Address": "172.21.0.3/16",
"IPv6Address": ""
}
},
...
}
]
23. From the result, we can see that our alpine1 container has the IP address 172.21.0.2, and our alpine2 container has the IP address 172.21.0.3. Let's try to make them communicate using the ping command:
$ docker exec alpine1 ping -c 1 172.21.0.3
Result:
PING 172.21.0.3 (172.21.0.3): 56 data bytes
64 bytes from 172.21.0.3: seq=0 ttl=64 time=0.101 ms
$ docker exec alpine2 ping -c 1 172.21.0.2
Result:
PING 172.21.0.2 (172.21.0.2): 56 data bytes
64 bytes from 172.21.0.2: seq=0 ttl=64 time=0.153 ms
24. For information, you cannot create a host network, because it uses the network interface of your host machine directly. If you try to create one, you will receive the following error:
$ docker network create --driver host my-host
Error:
Error response from daemon: only one instance of "host" network is allowed
You can only use the host driver, not create new host networks. In this example we will start an Apache container on port 80 of the host machine. From a networking perspective, this is the same level of isolation as if the Apache process were running directly on the host machine rather than in a container. In every other respect, however, the process remains isolated from the host machine.
This procedure requires that port 80 be available on the host machine:
$ docker run --rm -d --network host --name my_httpd httpd
Without any port mapping, you can access the Apache server at http://localhost:80/, where you will see the message "It works!".
From your host machine, you can check which process is bound to port 80 using the netstat command:
$ sudo netstat -tulpn | grep :80
25. This is indeed the httpd process that uses port 80 without using port mapping:
tcp 0 0 127.0.0.1:8000 0.0.0.0:* LISTEN 5084/php
tcp6 0 0 :::80 :::* LISTEN 11133/httpd
tcp6 0 0 :::8080 :::* LISTEN 3122/docker-prox
Finally, stop the container; it will be deleted automatically because it was started with the --rm option:
$ docker container stop my_httpd
26. Remove, disconnect, and connect a Docker network
Before deleting your Docker network, you must first delete any container connected to it, or alternatively simply disconnect your containers from the network without deleting them.
We will choose the second method, disconnecting all containers from the mon-bridge Docker network:
$ docker network disconnect mon-bridge alpine1
$ docker network disconnect mon-bridge alpine2
Now, if you check the network interfaces of your alpine-based containers, you will only see the loopback interface, as with the none driver:
$ docker exec alpine1 ip a
Result:
lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
27. Once you have disconnected all your containers from the mon-bridge docker network, you can then delete it:
$ docker network rm mon-bridge
However, your containers are now without a bridge network interface, so you must reconnect your containers to
the default bridge network so that they can communicate with each other again:
$ docker network connect bridge alpine1
$ docker network connect bridge alpine2
Then check if your containers have received the correct IP:
$ docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)
Result:
/alpine2 - 172.17.0.3
/alpine1 - 172.17.0.2
28. You can create as many bridge networks as you want. This remains a good way to secure communication between your containers, because containers connected to bridge1 cannot communicate with containers on bridge2, thus limiting unnecessary communication, as the sketch below illustrates.
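A minimal sketch demonstrating this isolation (network and container names are arbitrary):
$ docker network create --driver bridge bridge1
$ docker network create --driver bridge bridge2
$ docker run -dit --name app1 --network bridge1 alpine
$ docker run -dit --name app2 --network bridge2 alpine
# app1 cannot reach app2, since they sit on different bridges;
# the ping below fails with: ping: bad address 'app2'
$ docker exec app1 ping -c 1 app2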
29. Summary
## Create a Docker network
docker network create --driver <DRIVER TYPE> <NETWORK NAME>
## List Docker networks
docker network ls
## Delete one or more Docker network(s)
docker network rm <NETWORK NAME>
## Collect information on a Docker network
docker network inspect <NETWORK NAME>
-v or --verbose: verbose mode for better diagnostics
## Delete all unused Docker networks
docker network prune
-f or --force: force deletion without confirmation
## Connect a container to a Docker network
docker network connect <NETWORK NAME> <CONTAINER NAME>
## Disconnect a container from a Docker network
docker network disconnect <NETWORK NAME> <CONTAINER NAME>
-f or --force: force disconnection
## Start a container and connect it to a Docker network
docker run --network <NETWORK NAME> <IMAGE NAME>