December 10, 2014

Do-Ker :: the Breton Docker conference

As you may have noticed, Docker has been my hobby horse for a while now (it has to be said, it makes a nice change from Jenkins thread dumps). If you weren't aware of that, you must have landed on this blog completely by accident, so go back to your Google search and try the next link.

I co-run the Rennes Docker Meetup, and together with my friends from Rennes DevOps we had been meaning to "do something". Well, it's now done (or at least scheduled) with the announcement of the Do-Ker-Conf.

WTF
The Docker team (which includes quite a few Frenchies) is running a Docker tour de France all through December, which unfortunately doesn't stop in Rennes. So we're organizing our own hand-made, 100% salted-butter event, on the afternoon of January 29th at the Cantine Numérique.

The program (so to speak)

14:00 - 15:00: feedback and discussion about DockerCon.europe.
I'll try to convey the atmosphere and the announcements of the first European edition of DockerCon. I can't guarantee demos that hold up; the main goal is to hear your point of view and debate the topic.

15:00 - 17:00: four 30-minute "experience report" talks
For these four slots we have set up a call for papers: if you want to talk about using Docker in any context whatsoever (your home automation, your dev team, your production service, etc.), please contribute. We may not be able to keep everything, but the leftovers will make good topics for upcoming Docker Meetups :)

17:00 - 17:30: Q&A session with Jérôme Petazzoni
"Tinkerer extraordinaire" at Docker Inc, Jérôme will join us over Hangout from the Docker SF jungle/office to answer your questions.

17:30 - 18:30: an open space
During our hackathon attempt, it became clear that Docker makes people want to talk, exchange, and debate, to the point of cannibalizing the event we had originally planned. And since that makes sense, we might as well do it properly. So we're proposing an open-space format: discussions that are structured "just enough", with no agenda set in advance.

18:30: buffet (and more discussions)
Please don't register "just" for the buffet :-)

19:00 - 22:00: a 3-hour lab
for those who want to get their hands dirty but never dared to ask. This is a session we want to submit to DevoxxFrance, so basically our first rehearsal.

And like any self-respecting event, we have an awesome logo, featuring all the key elements of our identity:

  

December 9, 2014

About Docker authentication

In my previous blog post I was experimenting with Docker Machine, which requires running a custom build of the Docker client with "identity-auth". I was wondering what this is all about.

This is actually an implementation of a development topic introduced at DockerCon.SF in June.



The authorization issue with Docker is that the client talks to the daemon over a TCP connection, which is open to all possible abuses and lets anyone run arbitrary containers or do crazy things on the target host. Docker 1.3.0 introduced TLS authorization with client and server certificates, so only well-identified clients can connect to the daemon. That's fine, but it won't scale: in such a setup, every client needs to get its certificate signed by the authority, just like the daemon does, and revoking a client certificate is a complex process.
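To see concretely why this doesn't scale, here is a rough sketch of the per-client ceremony that certificate-based TLS implies (file and subject names like `client1` are purely illustrative): every new client has to repeat the key/CSR/signing dance against the same CA.

```shell
# One-time CA setup (the authority everyone must trust):
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -days 365 -key ca-key.pem -subj "/CN=docker-ca" -out ca.pem

# ...and then EVERY new client needs these three steps, plus access to the CA key:
openssl genrsa -out client-key.pem 2048
openssl req -new -key client-key.pem -subj "/CN=client1" -out client.csr
openssl x509 -req -days 365 -in client.csr -CA ca.pem -CAkey ca-key.pem \
        -CAcreateserial -out client-cert.pem

# Sanity check that the client cert chains back to the CA:
openssl verify -CAfile ca.pem client-cert.pem
```

Revocation is worse still: you need to maintain and distribute a CRL on top of all this, which is exactly the complexity the proposal tries to avoid.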

SSH-based authentication, with authorized_keys on the server, is a common way to manage enterprise infrastructures. The proposed change is to adopt this exact same practice for Docker.

Each Docker instance (daemon or client) will get a unique identity, set the first time it runs and saved in .docker/. Surprisingly, JSON files are used to store keys, not OpenSSL PEM files. These are actually JSON Web Keys, a recent standard I had never heard about (admittedly not a topic I actively follow). It's a bit surprising that such a fresh new standard was chosen over the long-established PEM format. The proposal says both formats are actually supported, so third-party client libraries will have to support both. Having seen Java's deplorable support for PEM files, I wonder whether just adopting this brand-new format would actually make things simpler...
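That simplicity argument is easy to see: a JSON Web Key is plain JSON, so any language can read it without ASN.1/PEM tooling. A minimal sketch, with fake key material, shaped roughly like what a ~/.docker/public-key.json file contains (the exact field set Docker writes may differ):

```python
import json

# Hypothetical JWK public key: the values below are fake, only the
# structure matters (named JSON fields instead of an opaque PEM blob).
jwk = """{
  "kty": "EC",
  "crv": "P-256",
  "x": "Jc6VLixDkYmnZ6NvBZVDsJ2xRVTYd7lDPqLbR1cPHkQ",
  "y": "vOWUrUCTKSnBwvStXmYYq3t_h4mrZ8sGiDxdbPxWZzE"
}"""

key = json.loads(jwk)  # no openssl, no BouncyCastle: just a JSON parser
print(key["kty"], key["crv"])
```

Compare that with parsing a PEM file in Java, where you typically end up pulling in a third-party crypto library just to read the key.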

The first time you establish a connection with a Docker daemon, you'll have to confirm the remote identity by checking the remote key fingerprint, just like you're used to doing with SSH. On the daemon side, authorized client keys are stored under an authorized_keys.d directory.

To sum up, this proposed authorization system is more or less adopting SSH's successful model. This makes me wonder why they don't just use SSH directly as the transport layer, like Git does, and simply require a secure communication channel with the daemon.


The detailed proposal is discussed in #7667.

December 8, 2014

First experiment with Docker Machine

Docker "machine" is a new command created by the Docker team to manage the Docker hosts you deploy containers to. Think of the boot2docker you're used to running on your development workstation, but applied to all possible cloud and on-premises hosting services.

"machine" is actually an existing OSX / BSD command, so it conflicts with your installation (#2). To avoid this I moved the binary to my go/bin and declared it first in PATH.

Docker Machine relies on SSH authentication to set up Docker host nodes. You won't be able to use it with a standard Docker client, so you need to download a custom 1.3.1-dev-identity-auth build, as the related changes haven't been merged into 1.3.2 yet. I moved this binary to go/bin as well, to make it the default docker command during my test session. Run the docker command once to get your authentication properly set up (the ~/.docker/public-key.json file).

Right, you're all set now. Let's first experiment locally, to compare with boot2docker:

➜ machine create -d virtualbox local
INFO[0000] Downloading boot2docker...                   
INFO[0039] Creating SSH key...                          
INFO[0039] Creating VirtualBox VM...                    
INFO[0044] Starting VirtualBox VM...                    
INFO[0044] Waiting for VM to start...                   
INFO[0075] "local" has been created and is now the active machine. Docker commands will now run against that machine. 
➜ echo $DOCKER_HOST
tcp://boot2docker:2376
➜ export DOCKER_HOST=`machine url` DOCKER_AUTH=identity
➜ echo $DOCKER_HOST
tcp://192.168.99.100:2376

I expected (according to "Docker commands will now run against that machine") that DOCKER_HOST would be set for me, but I had to set it myself; maybe that's because it was already defined in my environment.


➜  machine ip local
192.168.99.100
➜  machine ssh local
                        ##        .
                  ## ## ##       ==
               ## ## ## ##      ===
           /""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
           \______ o          __/
             \    \        __/
              \____\______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
boot2docker: 1.2.0
             master : 8a06c1f - Fri Nov 28 17:03:52 UTC 2014
docker@boot2docker:~$ ls /Users/nicolas/
Applications/     Library/          bin/              
Boulot/           Movies/           
Desktop/          Music/            
Documents/        Pictures/         dotfiles/
Downloads/        Public/           go/
Dropbox/          VirtualBox VMs/   google-cloud-sdk/
docker@boot2docker:~$ exit
➜  docker ps
The authenticity of host "192.168.99.100:2376" can't be established.
Remote key ID EJF7:BPI4:GKOC:GR7H:RKZL:KH5J:LOBB:YZRU:HCWR:JYXZ:AOGH:OOEO
Are you sure you want to continue connecting (yes/no)? yes
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
➜  docker run --rm -ti ubuntu bash
Unable to find image 'ubuntu:latest' locally
ubuntu:latest: The image you are pulling has been verified
511136ea3c5a: Pull complete 
01bf15a18638: Downloading 9.731 MB/201.6 MB 20m43s
30541f8f3062: Download complete 
e1cdf371fbde: Download complete 
9bd07e480c5b: Download complete
...
 


So I get a classic boot2docker installation on the machine, with the /Users shared volume and all the expected setup. I also get the equivalents of the commands I'm used to running with the boot2docker client.

The user experience here is very comparable to boot2docker's, so I have no doubt you'll quickly get used to it.

Let's now switch to a cloud provider. Google Compute Engine is not supported (yet), so this was an opportunity to give Microsoft Azure a try, as it's terra incognita for me...

You first need to subscribe, then get your subscription ID. Authentication on Azure is based on a client certificate, so you need to create a cert file, which you then have to register on https://manage.windowsazure.com. As it took me some time to discover where to upload it, here is a screenshot:


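For reference, the cert file itself can be produced with openssl. A sketch, with assumptions hedged: the subject name is arbitrary, the self-signed .pem (cert plus key in one file) is what you pass to --azure-subscription-cert, and if I read the Azure docs right, the portal upload expects the DER-encoded .cer form.

```shell
# Self-signed management certificate: key + cert together in mycert.pem
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -subj "/CN=docker-machine" -keyout mycert.pem -out mycert.pem

# DER-encoded copy (.cer), the format the Azure portal upload wants
openssl x509 -inform pem -in mycert.pem -outform der -out mycert.cer
```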


➜  machine create  -d azure --azure-subscription-id="c33.....ec5" --azure-subscription-cert="mycert.pem" azure
INFO[0000] Creating Azure host...                       
INFO[0072] Waiting for SSH...                           
INFO[0161] Waiting for docker daemon on host to be available... 
INFO[0202] "azure" has been created and is now the active machine. Docker commands will now run against that machine. 
➜  export DOCKER_HOST=`machine url` DOCKER_AUTH=identity
➜  docker run --rm -it debian bash
FATA[0000] TLS Handshake error: tls: oversized record received with length 20527 


wtf? This is actually a known issue. I have the same issue running docker client 1.3.2. I'm still stuck here.

Also, provisioning the Azure host took a looooong time in "Waiting for SSH". I have no idea whether this is just me, due to some misconfiguration, or expected on Azure. I'm used to GCE giving me an SSH prompt within 10 seconds... :P #troll

Anyway, the question here is not whether provider A is better than provider B, but the option Docker Machine offers to change providers without changing the toolchain. Assuming I had a production delivery pipeline based on Docker containers, I'd be happy to run the very same thing both locally and when provisioning cloud instances from whatever IaaS provider.

Isn't that cool?