14 September 2014

Is Docker ready for production?

This article is a response to IS DOCKER READY FOR PRODUCTION.

"You embed a distro in a distro (or multiple distros in a distro)"

Sure, and that's the reason dedicated Docker Linux distros have been created, like Boot2Docker or CoreOS. They are designed to be lightweight (~100 MB) and focused on what production systems need: stability and maintenance. They don't even provide a package manager.

"your container will most likely weight more than 1GB" - not really. The base image might weigh a few hundred megabytes, but you only download it once, and most companies will try to standardize the distros they use for applications anyway, rather than letting developer creativity give esoteric distributions a try.

Initial setup of a Docker host requires downloading the base image and the application image layers. The application layers usually download quickly, but the base image(s) might take some time. For this reason, Docker hosts should be provisioned with the company's most common base images pre-installed. On AWS, a common performance improvement is to (re)create an AMI, rather than relying on a base AMI and running a configuration manager to fully set up the box (at least, that's the way we do it at CloudBees).
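As a sketch, pre-pulling common base images could be wired into the host provisioning itself, for example with a cloud-init user-data fragment baked into the AMI (the image names here are just examples, and `mycompany/base-jvm` is a hypothetical in-house base image):

```yaml
#cloud-config
runcmd:
  - docker pull ubuntu:14.04
  - docker pull busybox:latest
  - docker pull mycompany/base-jvm:latest
```

With the base layers already on disk, deploying an application only downloads its small top layers.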

"Dreaming of a statically build binary"

Building an image from scratch is a bit crazy: an interesting exercise, but I'd recommend relying on Busybox if you want to reduce image size. See David's Java 8 Dockerfile for a sample. It's a bit hackish, as Busybox's wget doesn't support HTTPS for downloading JDK 8 from Oracle's website, but it still results in a minimalist image.
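Such a Dockerfile could look roughly like this (this is an illustrative sketch, not David's actual file; the tarball name and install paths are assumptions):

```dockerfile
# Minimal Java image sketch on top of Busybox.
FROM busybox
# The JRE tarball is fetched outside the Dockerfile, since busybox
# wget cannot talk HTTPS to Oracle's download site.
# ADD auto-extracts a local tar.gz into the target directory.
ADD jre-8-linux-x64.tar.gz /opt/jre
ENV PATH $PATH:/opt/jre/bin
CMD ["java", "-version"]
```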

The complexity of building statically linked binaries depends on the target environment. For Ruby this seems to be painful, with lots of dependencies, resulting in a 450 MB install. I guess the Dockerfile installed some build tools, compiled, then deleted the build tools, but not within the same RUN command, so the layers of the Docker image still contain files that are actually deleted in the union filesystem.
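The fix is to keep install, build, and cleanup in a single RUN command, so the intermediate files never land in a committed layer. A hedged sketch (the `make` target and source path are hypothetical):

```dockerfile
FROM debian:wheezy
# Install build tools, compile, then purge the tools, all in ONE RUN:
# files deleted within the same command never reach a committed layer.
RUN apt-get update \
 && apt-get install -y build-essential \
 && make -C /usr/src/myapp install \
 && apt-get purge -y build-essential \
 && apt-get autoremove -y \
 && rm -rf /var/lib/apt/lists/*
```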

That's just an assumption. You could flatten the image by running:
docker export <container-id> | docker import - name:latest

For such a setup, you should either use David's approach of creating a one-liner RUN command, or look for a solution to build the binaries in one Dockerfile and only include the actual result in another (see feature request 7992).


"There’s no easy way logging with Docker" - there is one actually: just dump to stdout. docker logs can be used to retrieve the logs, so can the daemon API, and you can then plug in various management tools. Read for example this typical usage scenario for libswarm: http://blog.docker.com/2014/07/libswarm-demo-logging/.

For people who prefer the syslog approach, this doesn't break the "1 container, 1 process" philosophy ... until you try to package syslogd in your container. Read http://jpetazzo.github.io/2014/08/24/syslog-docker/ for a description of using Docker with syslog, the latter being yet another containerized service. I don't get the argument against container isolation: the application and syslog communicate using a unix socket, so what's wrong with letting them talk?
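To show that the unix-socket handshake is trivial, here is a self-contained sketch: one side plays the syslogd container by binding a datagram socket (which, in the Docker setup above, would be shared through a volume), and the application side logs through the standard SysLogHandler. The socket path is arbitrary:

```python
import logging
import logging.handlers
import os
import socket
import tempfile

# "syslogd" side: bind a unix datagram socket, as a syslog container
# would expose on a shared volume.
sock_path = os.path.join(tempfile.mkdtemp(), "log.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
server.bind(sock_path)

# Application side: the plain stdlib syslog handler pointed at the socket.
log = logging.getLogger("app")
log.addHandler(logging.handlers.SysLogHandler(address=sock_path))
log.warning("hello from the app container")

# The server receives the raw syslog datagram, e.g. b'<12>hello ...'.
data = server.recv(1024)
print(data)
```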

"admin nightmare"? Yes, you need your sysadmins to understand and manage Docker containers and how to orchestrate them. Did someone tell you Docker would replace all of them? They already had to manage applications communicating with various services and resources in classic deployment setups; that's not a new challenge.

"Network management"

Docker networking is actually complicated, and at first the documentation doesn't help you get confident with it, but it is useful once you've understood things and want to go further. Sorry, guys, for having detailed documentation :) http://blog.thestateofme.com/2014/09/12/docker-networking/ has very explicit diagrams to explain Docker networking, followed by cryptic iptables configuration samples ...

Virtual networking has never been a simple topic, and the Docker way only covers the simpler use cases. Weave or Pipework can handle more complex scenarios. Having discussed OpenStack's capabilities with some network engineers, this is definitely a topic that requires advanced skills. Anyway, most human beings will only need the --link option for their Docker applications, and that's pretty cool.
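For the record, --link works by injecting the linked container's address into the client container's environment (variables like DB_PORT_5432_TCP_ADDR) and /etc/hosts. A sketch of resolving such a link from application code; the alias "db" and the simulated values are assumptions:

```python
import os

def linked_service_address(alias, default_port):
    """Resolve a service linked with `docker run --link <name>:<alias>`.

    Docker injects variables like DB_PORT_5432_TCP_ADDR and
    DB_PORT_5432_TCP_PORT into the client container's environment.
    """
    prefix = "%s_PORT_%d_TCP" % (alias.upper(), default_port)
    host = os.environ.get(prefix + "_ADDR", "127.0.0.1")
    port = int(os.environ.get(prefix + "_PORT", default_port))
    return host, port

# Simulate what Docker would inject for `--link postgres:db`:
os.environ["DB_PORT_5432_TCP_ADDR"] = "172.17.0.2"
os.environ["DB_PORT_5432_TCP_PORT"] = "5432"
print(linked_service_address("db", 5432))  # -> ('172.17.0.2', 5432)
```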

"Provisioning is not perfect at all"

I agree Dockerfiles are level 0 of software management. But nobody said your whole process needs to rely on a Dockerfile. It's 100% valid to use a classic build system and then just package the application binaries with a Dockerfile to produce a deployable application. It's better if all elements can be managed with Dockerfiles, but you can combine the two approaches.

People who are used to the power of Puppet/Chef/Ansible will just create a base image with those tools set up and inherit from it for every application, the Dockerfile simply importing the cookbooks and running chef-solo. This is a nice way to migrate an existing infrastructure to Docker. As a result, the Docker image is created with the full power of the Chef DSL, but Chef only runs once, during image creation; the image is then immutable.
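A sketch of that layout, with hypothetical image and cookbook names:

```dockerfile
# Base image (built once elsewhere): a distro with Chef pre-installed,
# e.g. FROM ubuntu:14.04 plus the Chef omnibus installer.

# Application image: inherit the base, import cookbooks, run chef-solo once.
FROM mycompany/chef-base
ADD cookbooks /var/chef/cookbooks
ADD solo.rb node.json /etc/chef/
RUN chef-solo -c /etc/chef/solo.rb -j /etc/chef/node.json
```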

Packer is an alternative, and I guess we will see more tools emerge in the Docker ecosystem to offer a higher level of abstraction and more flexibility for building Docker images. You can also build a Docker image with just the plain old docker commit command, and integrate that with your build tools, as long as you automate it some way. The Dockerfile is just the common denominator that allows DockerHub to build any image from sources and distribute it to any developer.
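For reference, a minimal Packer template using its docker builder might look like this (the base image, provisioning commands, and repository name are assumptions):

```json
{
  "builders": [
    { "type": "docker", "image": "ubuntu:14.04", "commit": true }
  ],
  "provisioners": [
    { "type": "shell", "inline": ["apt-get update", "apt-get install -y nginx"] }
  ],
  "post-processors": [
    { "type": "docker-tag", "repository": "mycompany/webapp", "tag": "latest" }
  ]
}
```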

"Process monitoring? Don’t even think about it"

Containers require a new generation of monitoring agents; cAdvisor is one of them. For sure, migrating your existing monitoring system to embrace Docker containers is not trivial. For Nagios integration, there are a few nagios-docker plugins under development; I haven't experimented with any of them, so I can't speak to their maturity, but the metrics are available from the Docker daemon and the cgroups API. Read http://jpetazzo.github.io/2013/10/08/docker-containers-metrics/ to see how to use them.
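As a sketch of how simple those metrics are to consume, the cgroup pseudo-files are plain text with one key/value pair per line; a monitoring plugin could read a container's memory statistics like this (the cgroup mount point and path layout are assumptions and vary across distros and Docker versions):

```python
def parse_cgroup_stat(text):
    """Parse a cgroup stat pseudo-file: key/value pairs, one per line,
    e.g. the content of
    /sys/fs/cgroup/memory/docker/<container-id>/memory.stat."""
    stats = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if value:
            stats[key] = int(value)
    return stats

def read_container_memory_stat(container_id,
                               root="/sys/fs/cgroup/memory/docker"):
    # Path layout depends on distro and Docker version.
    with open("%s/%s/memory.stat" % (root, container_id)) as f:
        return parse_cgroup_stat(f.read())

sample = "cache 11492564992\nrss 1930993664\nmapped_file 306728960\n"
print(parse_cgroup_stat(sample)["rss"])  # -> 1930993664
```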

Right, this will require some effort to migrate your existing setup. Nobody ever promised Docker would fit your existing tools without any effort. libswarm especially focuses on this issue: it provides a neutral integration API, so you can plug your custom orchestration / audit / monitoring / whatever tools into a Docker environment. Right, this is just a prototype at this time.

"Porting your application to Docker increases complexity. Really."

Applying the "1 container, 1 process" rule from the very beginning is hard. Consider it an opportunity to rethink your architecture. First use Docker as "lightweight virtual machines" and drop your application, with its dozens of processes and daemons, into a Dockerfile. Docker's benefits (the ability to run a production-equivalent system locally) and the available orchestration tools (start with Fig, then consider alternatives for larger or more complex setups) will let you refactor your application and the related Docker containers into smaller, focused services that collaborate together.
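As an illustration, a Fig description of an application split into two focused, linked containers could be as short as this (service names and images are hypothetical):

```yaml
# fig.yml - hypothetical two-service setup
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: postgres:9.3
```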

Right, I don't know anything about Botify's infrastructure and its constraints. Reading Frédéric's blog, my feeling was that they experimented with Docker (for just two weeks?) as a major transition, tried to apply it far too strictly, and then were hurt by real-world constraints. I'd be very happy to have the opportunity to meet him (he's French like me, so that seems feasible) to discuss the issues they had in detail.

With a customer of mine (as part of my freelance activity) we are considering a baby-step migration to Docker, so that we can learn about Docker and the actual constraints of using it on a production infrastructure, as well as discover its benefits for development and continuous delivery. This is not a trivial transition, and for sure there are lots of points we will only discover with real-life usage.
