I've run some basic tests on Docker container network performance. I've had discussions with KVM fan-boys about Docker's network abstraction vs KVM's para-virtualized VirtIO network, and they explained to me that virtio lets KVM offer near-native network performance without the heavy cost of a fully virtualized network stack.
Docker doesn't actually virtualize the network; it uses a Linux Ethernet bridge, which also lets it offer some basic SDN facilities. Anyway, I wanted to check the impact.
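For the curious, here is a quick way to look at that bridge. This is just a sketch, assuming Docker's default docker0 bridge and the bridge-utils package; it is not part of the benchmark itself, and the output (which varies per host) is omitted:

$ ip addr show docker0     # the Linux bridge Docker attaches containers to
$ brctl show docker0       # lists the veth interfaces of running containers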
Disclaimer: I'm not a network engineer, and I learned the few commands I used for this micro-benchmark only a few hours before publishing this article.
I used iperf with default settings for this benchmark. I ran it on a MacBook running Ubuntu 14.04 natively, not as a VM (never use a VM for benchmarks). A graphics card bug on this machine crashes OSX, but Ubuntu runs fine, so the children can play Minecraft and I can run some native Linux tests :P
nicolas@MacBookLinux:~$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------

Host-to-host networking offers 26Gb/s:
nicolas@MacBookLinux:~$ iperf -c 192.168.1.23
------------------------------------------------------------
Client connecting to 192.168.1.23, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  3] local 192.168.1.23 port 38470 connected with 192.168.1.23 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  30.8 GBytes  26.5 Gbits/sec
I used an ubuntu:14.04 container (same as the host OS), installed iperf in it, and tagged the resulting image as "iperf" for further use.
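In case you want to reproduce this, here is a minimal sketch of how such an image can be built. The original post doesn't show these steps, so the container name and the commit-based workflow below are my own assumption (the root prompt shows a placeholder container ID):

nicolas@MacBookLinux:~$ docker run -it --name iperf-setup ubuntu:14.04 bash   # throwaway container
root@0123456789ab:/# apt-get update && apt-get install -y iperf               # install iperf inside it
root@0123456789ab:/# exit
nicolas@MacBookLinux:~$ docker commit iperf-setup iperf                       # save it as the "iperf" image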
I first used the default (bridge) network settings for my Docker container. As a result, I get 13Gb/s, a significant ~50% degradation in network performance.
nicolas@MacBookLinux:~$ docker run -it iperf bash
root@7256f0f1da2b:/# iperf -c 192.168.1.23
------------------------------------------------------------
Client connecting to 192.168.1.23, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 172.17.0.5 port 45568 connected with 192.168.1.23 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  15.6 GBytes  13.4 Gbits/sec
Switching to host network mode, I get 22Gb/s: there is still some impact on performance, but a reasonable one, offering near-native network performance together with the benefits of binary container image deployment.
nicolas@MacBookLinux:~$ docker run --net=host -it iperf bash
root@MacBookLinux:/# iperf -c 192.168.1.23
------------------------------------------------------------
Client connecting to 192.168.1.23, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  3] local 192.168.1.23 port 38478 connected with 192.168.1.23 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  26.0 GBytes  22.4 Gbits/sec
I also tried to run some tests with the boot2docker VM, but VirtualBox degrades network performance horribly. I have an Ubuntu VM running, configured to use a virtio para-virtualized network; here is the iperf result when accessing the host:
nicolas@ubuntu:~$ iperf -c 192.168.1.13
------------------------------------------------------------
Client connecting to 192.168.1.13, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.2.15 port 57202 connected with 192.168.1.13 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  84.8 MBytes  69.6 Mbits/sec
Please note this last test was run on my OSX MacBook, so it may have different settings that affect the results. Anyway, it seems to demonstrate that VirtualBox delivers network performance around 300x slower than native host-to-host, so don't rely on boot2docker for benchmarks. I got better, but still poor, numbers with VMware Fusion; I don't know that environment well, though, so I'm not sure how relevant they are.
Update: I have no idea how Docker network performance compares to a well-configured VM isolation layer based on KVM with virtio. I just wanted to check Docker's impact on network performance.
Update: I just noticed discussions on the docker-dev list about using Open vSwitch with Docker, which is comparable to the (legacy?) Linux bridge device but with a more modern design. There were also discussions about using macvlan (registering one MAC address per container on the same physical network device, IIUC).
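To give an idea of what macvlan means, here is a minimal sketch of the underlying Linux feature, independent of any Docker integration; the interface name and address below are made up for illustration:

$ sudo ip link add macvlan0 link eth0 type macvlan mode bridge   # new interface with its own MAC on eth0
$ sudo ip addr add 192.168.1.50/24 dev macvlan0                  # give it an address on the LAN
$ sudo ip link set macvlan0 up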