27 December 2014

Multiple Dockerfiles for project

When you start using Docker for the application you're developing, you need to choose whether the Dockerfile you write is designed to:

  • Build from source and produce a binary
  • Package the built binary as the production application
  • Build from source and hot-reload the source code for quick development cycles

Docker doesn't (yet) let you choose the file to use as a Dockerfile; it enforces the name Dockerfile.
A possible workaround is to define a "Build from source" Dockerfile at the project root, so it can ADD the source directory and build the binary into a target directory, then add to this directory another Dockerfile designed to produce the production image, just adding the binary and its runtime dependencies. You still miss the third use case, which requires the Docker image you run to allow hot-reload of the source code you bind-mount into the container.

After some experiments with various approaches, my preference is to build the Docker build context myself. So I have 3 Dockerfiles: Dockerfile.dev, Dockerfile.build and Dockerfile.prod (sketched below).


  • The first one uses a VOLUME to access the project source code and runs the app with hot-reload enabled (typically, play run). This lets you use your IDE to hack the code and see the resulting app running in a Docker container.
  • The second one builds the application and packages it for production execution (mvn package). This is the reference build environment, the one you probably use for CI as well. You can set up Jenkins to archive the resulting artifacts, or just run it and execute a cp or cat command to export them from the Docker container.
  • The last one takes the built artifact (from the previous step) and ADDs it to another image that only defines the required runtime dependencies, so the image is as small as possible. Such an image can't be built in isolation, as it relies on the build one, so for example it can't be used with trusted builds, until the Docker team offers some way to support non-trivial build scenarios.
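To make this concrete, here is a rough sketch of what the three files could look like on a JVM project. The base images, paths, file names and commands below are purely illustrative, not a reference implementation:

# Dockerfile.dev - mount the project sources and run with hot-reload
FROM some/dev-image                # illustrative: any image with your dev toolchain
VOLUME /usr/src/app
WORKDIR /usr/src/app
CMD ["play", "run"]

# Dockerfile.build - the reference build environment (CI uses the same one)
FROM maven:3                       # illustrative
ADD . /usr/src/app
WORKDIR /usr/src/app
RUN mvn package                    # artifact lands in target/

# Dockerfile.prod - runtime dependencies only, as small as possible
FROM java:7-jre                    # illustrative
ADD target/app.jar /app.jar        # assumes the artifact was exported from the build container into the context
CMD ["java", "-jar", "/app.jar"]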

To work around the lack of a --file option for the docker build command (#2112), I'm passing the build context explicitly as a tar.gz archive. There is no overhead in doing this, as the docker build command does the same with the current folder.

gtar --transform='s|Dockerfile.dev|Dockerfile|' -cz * | docker build -t dev -

As I'm running OSX, and the included tar command does not support the --transform option (sic), I had to install gnu-tar with Homebrew, hence the gtar command I'm using. As this is not a trivial command, it can be wrapped in a makefile, so you can just run make dev|build|prod.
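A minimal makefile sketch for this (the image tags are illustrative, and remember make recipes must be tab-indented):

dev:
	gtar --transform='s|Dockerfile.dev|Dockerfile|' -cz * | docker build -t dev -

build:
	gtar --transform='s|Dockerfile.build|Dockerfile|' -cz * | docker build -t build -

prod:
	gtar --transform='s|Dockerfile.prod|Dockerfile|' -cz * | docker build -t prod -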

Hope this will be useful for you as well.

10 December 2014

Do-Ker :: the Bretons' Docker conference

You may have noticed that Docker has been my hobby-horse for a while (it has to be said, it makes a change from Jenkins thread dumps). If you weren't aware of it, then you landed on this blog completely by chance, so go back to your Google search and try the next link.

I co-host the Rennes Docker Meetup, and with my friends from Rennes DevOps we had in mind to "do something"; well, it's now done (well, planned) with the announcement of the Do-Ker-Conf.

WTF
The Docker team (which includes quite a few Frenchies) is organizing a Docker tour of France throughout December, which unfortunately doesn't stop in Rennes. So we're organizing our own hand-made, 100% salted-butter event, on the afternoon of January 29th at the Cantine Numérique.

The program (so to speak)

14:00 - 15:00: feedback and debate on DockerCon Europe.
I'll try to convey the atmosphere and the announcements from the first European edition of DockerCon. I can't guarantee demos that hold up, and the goal is mostly to get your point of view and debate the topic.

15:00 - 17:00: four 30-minute "experience report" talks
For these four slots we've set up a call for papers: if you want to share how you use Docker in any context (your home automation, your dev team, your production service, etc.), contribute. We may not keep everything, but it will make topics for the next Docker Meetups :)

17:00 - 17:30: Q&A session with Jérôme Petazzoni
"Tinkerer extraordinaire" at Docker Inc, Jérôme will join us via Hangout from the Docker SF jungle/office to answer your questions.

17:30 - 18:30: an open space
During our hackathon attempt, it became clear that Docker makes people want to talk, exchange and debate, to the point of cannibalizing the event originally planned. And since that makes sense, we might as well do it properly. So we're proposing an open-space format, that is, discussions structured "just enough", with no agenda set in advance.

18:30: buffet (and more discussions)
Please don't register "just" for the buffet :-)

19:00 - 22:00: a 3-hour lab
for those who want to get their hands dirty without ever daring to ask. This is a topic we want to propose at DevoxxFrance; basically our first rehearsal.

And like any self-respecting event, we have a great logo, integrating all the key elements of our identity:

[logo]

09 December 2014

About Docker authentication

In my previous blog post I was experimenting with Docker Machine, which requires running a custom build of the Docker client with "identity-auth". I was wondering what this is about.

This is actually the implementation of a development topic introduced at DockerCon SF in June.



The authorization issue when using Docker is that the client talks to the daemon over a TCP connection, open to all possible abuses, letting you run arbitrary containers or do crazy things on the target host. Docker 1.3.0 introduced TLS authorization with a client and server certificate, so only well-identified clients can connect to the daemon. That's fine, but it won't scale. In such a setup, all clients need their certificate signed by the same authority as the daemon's, and revocation of a client certificate is a complex process.

SSH-based authentication with authorized_keys on the server is a common practice for managing enterprise infrastructures. The proposed change is to adopt this exact same practice for Docker.
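As a reminder, this is the SSH workflow the proposal mimics:

ssh-keygen -t rsa          # create your identity once, key pair stored under ~/.ssh
ssh-copy-id user@server    # append your public key to ~/.ssh/authorized_keys on the server
ssh user@server            # from now on you're authenticated by your key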

Each Docker instance (daemon or client) will get a unique identity, set the first time it is run and saved in .docker/. Surprisingly, JSON files are used to store keys, not OpenSSL PEM files. These are actually JSON Web Keys, a recent standard I had never heard about (admittedly not a topic I'm actively following). It's a bit surprising that such a fresh new standard was chosen over the long-established PEM format. The proposal says both formats are actually supported, so third-party client libraries will have to support both. Having seen Java's deplorable support for PEM files, I wonder if just adopting this fresh new format would make things simpler...
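For illustration, here is roughly what such a key file contains, an EC key in JWK format (field values are elided and the exact layout is a sketch from memory, not a reference):

cat ~/.docker/public-key.json
{
  "crv": "P-256",
  "kid": "ABCD:EFGH:...",
  "kty": "EC",
  "x": "...",
  "y": "..."
}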

The first time you establish a connection with a Docker daemon, you'll have to confirm the remote identity by checking the remote key fingerprint, just like you're used to doing with SSH. On the daemon side, authorized client keys are stored under some authorized_keys.d directory.

To sum up, this proposed authorization system is more or less just adopting SSH's successful model. This makes me wonder why they don't just use SSH directly as a transport layer, the way Git does, and simply require a secure communication channel with the daemon.


The detailed proposal is discussed in #7667.

08 December 2014

First experiment with Docker Machine

Docker "machine" is a new command created by Docker team to manage docker servers you can deploy containers to. Think about boot2docker you use to run on your development workstation, but applied to all possible Cloud and on-premises hosting services.

"machine" is actually an OSX / BSD command, so conflict with your installation (#2). To prevent this I've moved it to my go/bin and declared this first in PATH.

Docker Machine relies on SSH authentication to set up Docker host nodes. You won't be able to use it with a standard Docker client, so you need to download a custom 1.3.1-dev-identity-auth build - the related changes haven't been merged into 1.3.2 yet. I've moved this binary to go/bin as well, to get it as the default docker command during my test session. Run the docker command once to get your authentication set up (this creates the ~/.docker/public-key.json file).

Right, you're all set now. Let's first experiment locally, to compare with boot2docker:

➜ machine create -d virtualbox local
INFO[0000] Downloading boot2docker...                   
INFO[0039] Creating SSH key...                          
INFO[0039] Creating VirtualBox VM...                    
INFO[0044] Starting VirtualBox VM...                    
INFO[0044] Waiting for VM to start...                   
INFO[0075] "local" has been created and is now the active machine. Docker commands will now run against that machine. 
➜ echo $DOCKER_HOST
tcp://boot2docker:2376
➜ export DOCKER_HOST=`machine url` DOCKER_AUTH=identity
➜ echo $DOCKER_HOST
tcp://192.168.99.100:2376

I expected (according to "Docker commands will now run against that machine") that DOCKER_HOST would be set automatically, but I had to set it myself - maybe because it was already defined in my environment.


➜  machine ip local
192.168.99.100
➜  machine ssh local
                        ##        .
                  ## ## ##       ==
               ## ## ## ##      ===
           /""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
           \______ o          __/
             \    \        __/
              \____\______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
boot2docker: 1.2.0
             master : 8a06c1f - Fri Nov 28 17:03:52 UTC 2014
docker@boot2docker:~$ ls /Users/nicolas/
Applications/     Library/          bin/              
Boulot/           Movies/           
Desktop/          Music/            
Documents/        Pictures/         dotfiles/
Downloads/        Public/           go/
Dropbox/          VirtualBox VMs/   google-cloud-sdk/
docker@boot2docker:~$ exit
➜  docker ps
The authenticity of host "192.168.99.100:2376" can't be established.
Remote key ID EJF7:BPI4:GKOC:GR7H:RKZL:KH5J:LOBB:YZRU:HCWR:JYXZ:AOGH:OOEO
Are you sure you want to continue connecting (yes/no)? yes
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
➜  docker run --rm -ti ubuntu bash
Unable to find image 'ubuntu:latest' locally
ubuntu:latest: The image you are pulling has been verified
511136ea3c5a: Pull complete 
01bf15a18638: Downloading 9.731 MB/201.6 MB 20m43s
30541f8f3062: Download complete 
e1cdf371fbde: Download complete 
9bd07e480c5b: Download complete
...
 


So I get a classic boot2docker installation on the machine, with the /Users shared volume and all the expected setup, plus equivalents for the commands I used to run with the boot2docker client.

The user experience here is very comparable to boot2docker, so I have no doubt you'll quickly get used to it.

Let's now switch to a cloud provider. Google Compute Engine is not (yet) supported, so this was an opportunity to give Microsoft Azure a try, as it is terra incognita for me...

First you need to subscribe, then get your subscription ID. Authentication on Azure is based on a client certificate, so you need to create a cert file, which you then have to register on https://manage.windowsazure.com. As it took me some time to discover where to upload it, here is a screenshot:


[screenshot: uploading the certificate in the Azure management portal]
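For the record, the certificate itself can be created with openssl, something along these lines (key size and validity are illustrative):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout mycert.pem -out mycert.pem
openssl x509 -inform pem -in mycert.pem -outform der -out mycert.cer   # the .cer is the file you upload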


➜  machine create  -d azure --azure-subscription-id="c33.....ec5" --azure-subscription-cert="mycert.pem" azure
INFO[0000] Creating Azure host...                       
INFO[0072] Waiting for SSH...                           
INFO[0161] Waiting for docker daemon on host to be available... 
INFO[0202] "azure" has been created and is now the active machine. Docker commands will now run against that machine. 
➜  export DOCKER_HOST=`machine url` DOCKER_AUTH=identity
➜  docker run --rm -it debian bash
FATA[0000] TLS Handshake error: tls: oversized record received with length 20527 


WTF? This is actually a known issue. I have the same issue running Docker client 1.3.2. I'm still stuck here.

Also, provisioning the Azure host took a looooong time, "Waiting for SSH". I have no idea whether this is just me, due to some misconfiguration, or expected on Azure. I'm used to GCE giving me an SSH prompt within 10 seconds... :P #troll

Anyway, the question here is not whether provider A is better than provider B, but the option Docker Machine offers to change providers without changing the toolchain. Assuming I have a production delivery pipeline based on Docker containers, I'd be happy to run the exact same thing both locally and on cloud instances provisioned from whatever IaaS provider.
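A minimal sketch of what I mean, assuming machine url accepts a machine name like machine ip does, and myapp is a hypothetical application image:

export DOCKER_HOST=`machine url local` && docker run --rm myapp   # runs on the local VirtualBox VM
export DOCKER_HOST=`machine url azure` && docker run --rm myapp   # exact same command, runs on Azure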

Isn't that cool?



06 December 2014

About Docker monolithic design

The CoreOS Rocket announcement claims to fix a design / security issue within Docker. They didn't explain much, and as you can imagine this has been discussed during DockerCon.

Docker uses a one-does-all executable model, just like BusyBox does. BusyBox is a minimalist toolbox designed for embedded Linux use, and as such provides a single executable covering all common Unix commands, which significantly reduces the distribution size.

Docker adopted the same model and provides a single "docker" command for both client and daemon - which makes sense, as they share lots of code to communicate over the REST API. The design and security concern comes from the fact that the Docker daemon runs as root. The daemon has to be root to manage Linux namespaces, cgroups and a few other kernel-level things (network, ...). From a security perspective, having a root component exposed over HTTP(S) to receive client commands from the network is unpleasant, and the daemon does lots of things that don't require root (downloading image layers, for example), which makes for a larger attack surface. Consider the Apache httpd design as an example: the main daemon runs as root to bind port 80, but the worker processes run as non-root to limit the impact of possible security issues in HTTP handlers and modules.
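To illustrate, the very same binary plays both roles (1.3-era syntax, default ports):

sudo docker -d -H tcp://0.0.0.0:2375 &   # the daemon, running as root, listening on plain TCP
docker -H tcp://localhost:2375 ps        # the client, sending REST calls to that root daemon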

CoreOS's point of view is that systemd should be used to manage containers; the container manager then doesn't have to run as root, and delegates to systemd when kernel-level container management is required.

Solomon just tweeted this:

solomonstre
So who wants to help make Docker embeddable? Daemon mode would be optional if you prefer another central daemon to be in charge like systemd
06/12/2014 02:28
solomonstre
@solomonstre the difficulty is that some parts of "just run" require managing global system state, eg ip allocation. How do we do this?
06/12/2014 02:46
solomonstre
@solomonstre rocket sweeps this under the rug by putting it all in systemd. But I don't want to tie Docker to systemd, it's too monolithic
06/12/2014 02:48

That's the major point: systemd doesn't do enough for the Docker daemon to stop running as root, so you can't just say "let's make Docker rely on systemd". Also, Docker changed its design in 0.6 to make image persistence extensible, so you can use AUFS or device-mapper, and maybe alternative filesystem solutions (ZFS, for example) will later plug into Docker. The Docker team doesn't want systemd to be the one and only solution. So they'll do what they usually do: define a clean extension point with a neutral API, provide a default implementation (the current design, with the Docker daemon running as root), and offer extensibility so you can configure Docker to run a third-party implementation, maybe relying on systemd, maybe on other solutions.


Today the Docker team is organizing a hackathon for people still in Amsterdam after DockerCon (~80 hackers have registered, afaik). Not sure this will be enough to get this implemented over the weekend, but I expect it to be actively discussed, and maybe to end with some plan for a proposal.