September 14, 2015

Giving Windows Docker containers a try

I've been experimenting a bit with Windows Docker containers, aka "Windows Server Containers Technical Preview 3". Windows Server 2016 will offer kernel-level container capabilities and the glue code needed to expose the Docker API (actually, RunC), so we can use Docker to create and run containerized Windows applications.

Please note: this is all about Windows applications running on Windows Server 2016. Docker is not a virtual machine runtime, so you won't get existing Linux images running on Windows 2016, nor can you run Windows software on your Linux system.

So, I created a Windows 2016 VM on Azure (which was simpler than downloading 6 GB from MSDN), following https://msdn.microsoft.com/virtualization/windowscontainers



The VM comes with a single image pre-installed: windowsservercore. We will use this base image to create our own images, just like we used to start our Dockerfiles with FROM ubuntu. The most significant difference is that this base image is 9 GB large, but hopefully you will never have to download it, as it will come pre-installed on container-enabled Windows releases.
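For illustration, a minimal Windows Dockerfile looks just like its Linux counterpart (a sketch; the echo command and file path are arbitrary):

```dockerfile
# minimal sketch: same structure as a FROM ubuntu Dockerfile
FROM windowsservercore
RUN echo hello > C:\hello.txt
CMD ["cmd"]
```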

First thing I noticed: starting a new container takes significant time. Starting a Linux docker container takes a few tenths of a second, so you feel like your program started without delay. Running a Windows container took 14s in my experiment (on an Azure D3 box: 4 cores, 14 GB RAM).


Second thing: my plan was to experiment by creating a Windows jenkins-slave container, and for this purpose I needed to download a JDK. I had to google a bit, then switch to PowerShell so I could run the wget command to download the Oracle JDK Windows 64-bit installer.
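In PowerShell, wget is an alias for Invoke-WebRequest, so the download looks roughly like this (the URL here is a placeholder, not the actual Oracle link):

```powershell
# wget is a PowerShell alias for Invoke-WebRequest; the URL is illustrative
wget "http://download.oracle.com/.../jdk-8u60-windows-x64.exe" -OutFile C:\install\jdk.exe
```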

Then I used notepad.exe (sic) to edit a Dockerfile to install the JDK in a container. My experiments stopped here, as I couldn't find how to launch the installer, always getting this weird error:

The directory name is invalid.
The command 'cmd /S /C C:\install\jdk.exe' returned a non-zero code: 1

I also tried with a Unix-style path; same issue.
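For reference, the Dockerfile I was fighting with looked roughly like this (a reconstruction from the error message above, not a working recipe):

```dockerfile
# reconstruction of my failed attempt
FROM windowsservercore
ADD jdk.exe C:/install/jdk.exe
# this is the step that fails with "The directory name is invalid."
RUN C:\install\jdk.exe
```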

I also got a container I can't stop. I have no idea about this container's state, but it's annoying that I can't kill it: the docker daemon is supposed to have super-powers over all running containers and can force a SIGKILL (or the Windows equivalent), which seems to be only partially implemented. But let's remember we are running a beta preview here.



Conclusion: considering Microsoft's commitment to provide a container solution on Windows is just one year old, this is an encouraging preview. There's also a lot to learn to adapt the habits we have from Linux-based docker images to Windows; it seems the Windows docker runtime does use Unix paths, which may result in some confusion when running Windows commands in a Dockerfile. But the feeling I have after this experiment is that I'll come back in a few months when this gets polished a bit.


Update
As suggested by David, I've tried Chocolatey and used it to install a JDK. And this works well!
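The Dockerfile, as can be read back from the build output below, is just two RUN steps: the Chocolatey one-line install, then a chocolatey install of the jdk8 package:

```dockerfile
FROM windowsservercore
# install Chocolatey, then put its bin directory on the PATH
RUN @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin
# install the JDK 8 package
RUN chocolatey install -y jdk8
```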

C:\Users\nicolas\dock> docker build -t java .
Sending build context to Docker daemon 2.048 kB
Step 0 : FROM windowsservercore
 ---> 0d53944cb84d
Step 1 : RUN @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin
 ---> Using cache
 ---> 82076c2bad03
Step 2 : RUN chocolatey install -y jdk8
 ---> Using cache
 ---> 84f6a8356fe3
Successfully built 84f6a8356fe3

C:\Users\nicolas\dock> docker run -it --rm java cmd

...
C:\Windows\system32> java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)

C:\Windows\system32>

The next step for me is to use this to set up a JNLP Windows jenkins slave and check how it goes compared to a classic Windows VM.

Moving my demo to Docker

Last Thursday I went to my local Java User Group for the very first run of my new talk "(Docker, Jenkins) -> { Orchestrating Continuous Delivery }".

I'd been planning this talk since June, but actually started to work on it on ... Tuesday, and got the demo set up on Thursday morning :-\ As could be expected, the demos all failed, while the talk was mostly appreciated, afaict.

How to make my demo more reproducible? Hey, this is what the talk is about, after all: setting up reproducible build environments using Docker containers! So let's use the same technique to host the demo itself.

The demo uses the CloudBees Jenkins Enterprise docker image, jetty to deploy the built web apps, and a docker registry to deploy docker images. I could run them by hand, but it sounds better to rely on docker-compose to handle all the plumbing!

# Jenkins master, running on default http 80 port
jenkins:
  image: cloudbees/jenkins-enterprise
  ports:
    - "80:8080"
    - "50000:50000"
  volumes:
    # JENKINS_HOME, from host with pre-defined demo jobs
    - ./jenkins-home:/var/jenkins_home
    # Required host stuff so jenkins can run `docker` command to start containers
    - /var/run/docker.sock:/var/run/docker.sock
    - /usr/local/bin/docker:/usr/local/bin/docker
  volumes_from:
    # We "deploy" to jetty with a plain `cp`
    - webserver    
  links:
    - webserver
    - docker-registry
  user: jenkins:100 # 'docker' group, required to access docker.sock  

# Jetty web server to run tests, staging and production
webserver:
  image: jetty:9
  ports:
    - "8080:8080"
  volumes:
    - ./webapps:/var/lib/jetty/webapps

# Our private docker registry
docker-registry:
  image: registry
  ports:
    - "5000:5000"




The demo relies on a set of preconfigured jobs, so jenkins-home is bind-mounted from the demo root directory. I had to configure a bunch of .gitignore rules so I can share this config but not the build history and all the other local jenkins stuff.
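The rules look roughly like this (a sketch, not my exact file; what to ignore depends on the jobs and plugins involved):

```gitignore
# keep job configuration, ignore runtime state (illustrative)
jenkins-home/jobs/*/builds/
jenkins-home/jobs/*/workspace/
jenkins-home/jobs/*/lastStable
jenkins-home/jobs/*/lastSuccessful
jenkins-home/plugins/
jenkins-home/war/
jenkins-home/secrets/
```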

The webserver's webapps directory could have avoided the bind mount, relying on plain volumes instead, but then I get a permission error when a Jenkins job tries to copy the war into jetty's volume. This could be fixed once #7198 is resolved.

As the demo runs some docker tasks, I'm bind-mounting the docker Unix socket and cli executable into the jenkins container, so it can build and start new containers. The Docker Workflow plugin demo uses Docker-in-Docker for this, but I wanted to avoid that hack (also read this). A side effect is that I need to configure the jenkins user with adequate permissions so it can access the docker daemon socket, which means it has to be in the docker group. I'd like to use user: jenkins:docker, but the latter doesn't work, as --user expects either a numeric ID or a user/group name declared in the container, not a host user/group name (I haven't found a related docker issue).
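So the compose file ends up with the numeric GID (100 in my case). If you need to find the right number on your own host, something like this works (assuming a getent-capable system):

```shell
# resolve the numeric GID of the host's 'docker' group,
# to be used as `user: jenkins:<gid>` in docker-compose.yml
getent group docker | cut -d: -f3
```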

Last but not least, the docker demo failed. I eventually understood the issue comes from bind-mounted volumes.

JENKINS_HOME is set in the jenkins container as /var/jenkins_home; this path is bind-mounted from the current directory, /Users/nicolas/demo/jenkins-home. When docker-workflow creates a container to run continuous delivery pipeline steps, it tries to bind-mount this exact same directory, but with my setup it does this in a sibling container (not a nested one, as in the original demo). As a result, this build container gets some /var/jenkins_home/job/demo/workspace bind-mounted, expecting to find the project source files there; but no such path exists on the host, resulting in JENKINS-28821. This could be fixed if workflow could detect it's running inside a container and use --volumes-from. I'll investigate such a fix. In the meantime, I've created a symlink on the host as an ugly workaround.
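The workaround, roughly (paths from my setup; run on the host, possibly with sudo):

```shell
# make the container-side JENKINS_HOME path resolvable on the host,
# so the sibling build container can bind-mount it from there
ln -s /Users/nicolas/demo/jenkins-home /var/jenkins_home
```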

Ok, so this was not such a trivial thing, and there are a few things left to polish, but with this setup I can now share my demo, clean up my environment with git clean -fdx, and get it up and running with a simple docker-compose up.