September 22, 2015

As announced yesterday, we developed a new Jenkins plugin during Docker Hack Day.

This competition will elect a hacker team to be invited to DockerCon Europe, and we need your votes for this to happen!

Please go to https://www.docker.com/community/hackathon?destination=node/4606, search for "Jenkins Docker Slave" if the link doesn't point directly to our hack, and vote using the social media links.

Last but not least, share it with your friends!





Contributing to Docker

At some point, the hack you're building with Docker hits some limitation, and you have a few options:

  • give up
  • complain
  • work around
  • contribute

In this blog post I'll focus on the last one.

Working on the CloudBees Docker Custom Build Environment Plugin (aka "oki docki"), I needed docker exec to pass extra environment variables, but it doesn't offer such an option. To get my work done I used a workaround that assumes env is available in the target container, but I wasn't pleased with this option.
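For the record, the workaround relies on env(1), which sets variables only for the command it launches. A minimal sketch (the container name is illustrative):

```shell
# env(1) sets a variable just for the command it launches:
env FOO=BAR sh -c 'echo "$FOO"'

# The same trick through docker exec, assuming env exists in the
# target container's image (container "1234" is illustrative):
# docker exec 1234 env FOO=BAR sh -c 'echo "$FOO"'
```

This only works when the image actually ships env, which is exactly why a native `--env` option on docker exec would be nicer.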

So in my spare time (sic) I've looked into the Docker code, and understood this option isn't supported because ... nobody has asked for it so far. Actually, IIUC, the underlying RunC container engine fully supports passing environment variables from exec, as the plumbing code is shared with docker run.

Getting the project

I made a few mistakes before I understood Go project conventions. So I created ~/go, declared GOPATH accordingly, then cloned the docker git repo under $GOPATH/src/github.com/docker. With this setup, I can open the project root in IntelliJ IDEA with the Go plugin installed and get a nice development environment.
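Concretely, the setup amounts to something like this (paths follow the Go workspace convention; the clone URL is the upstream docker repository):

```shell
# Create a Go workspace and point GOPATH at it
mkdir -p ~/go/src/github.com/docker
export GOPATH=~/go

# Clone docker under the import-path location Go tooling expects
cd "$GOPATH/src/github.com/docker"
git clone https://github.com/docker/docker.git
```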

I'm far from fluent in Go, but the docker source code is modular, which makes it pretty simple to search, and IDEA can be used to look up method usages and such.



As a result, I added a few lines of code.

Building

Docker builds inside Docker - and on my machine, docker itself runs inside boot2docker. This Russian-dolls setup makes the build process a bit complex.

First, prepare a development environment. For this purpose, simply use the Dockerfile present at the docker project root:

docker build -t docker-dev .

Use this docker image, bind-mounting the project source, to cross-compile the docker binary:

docker run --privileged --rm -ti -v `pwd`:/go/src/github.com/docker/docker docker-dev hack/make.sh binary cross

You will get binaries for all platforms under bundles/1.9.0-dev/cross.

Testing

As I'm both lazy and a Java developer, I can't read ngrep output (and won't learn how to), so I installed Charles Proxy and ran:

unset DOCKER_TLS_VERIFY

http_proxy=http://127.0.0.1:8888 ./bundles/1.9.0-dev/cross/darwin/amd64/docker exec --env FOO=BAR 1234 bash


Cool. Now I have to build and run the docker daemon, and check how to get this new option passed down to the container engine. Time to get back to work :)

September 21, 2015

Introducing docker-slaves jenkins plugin

I was at Docker Hack Day with Yoann on Thursday, and together we implemented a hack we'd had in mind for a while without finding time to actually work on it. So, 48 hours later, we are proud to announce:

Jenkins Docker Slaves Plugin

Why yet another Jenkins / Docker plugin? Actually, there are at least three of them, including one I created last year, but all of them treat Docker containers as plain old virtual machines.



For this project, we wanted to embrace Docker and the container paradigm: don't run more than one process in a container; have containers communicate through links you explicitly set up; rely on volumes for persistent data.

The Docker Slaves plugin has to work around some Jenkins APIs that weren't designed to manage containers.

The most obvious of them is the Cloud API, which uses a NodeProvisioner to determine when a new node is required and when to shut it down. This API was designed for virtual machines: costly resources that are slow to set up and as such have to be kept online for a few builds. Containers are lightweight and start in milliseconds, so there's no reason to reuse one rather than create a fresh, dedicated one for each task.



Another API mismatch is how Jenkins launches commands on the build executor. Jenkins relies on the slave agent to, in some way, run `System.exec()`, so Jenkins remoting acts as a remote process executor. But what is Docker, after all? A remote process launcher (with some extra features)! So we just bypass the Jenkins remote launcher and run a plain `docker run` from the Jenkins master. In the future, we could run this in detached mode; then Jenkins wouldn't even need to stay online while the job is running, and could be restarted...
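To illustrate the idea (this is a sketch, not the plugin's actual code; the image name, workspace path, and build command are all hypothetical), a build step issued from the master then boils down to:

```shell
# Instead of shipping the command over Jenkins remoting, the master
# runs it directly in a container, with the job workspace bind-mounted:
docker run --rm \
    -v /var/jenkins/workspace/myjob:/workspace \
    -w /workspace \
    maven:3 \
    mvn package
```

Docker takes care of process launch, I/O streaming, and exit-code reporting - exactly the services Jenkins remoting otherwise provides.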

Last but not least, there's no need to use the same container to run all commands. Reusing one container actually prevents some plugins from applying environment variables set up by build wrappers, or requires some terrible hacks as a workaround. Our plugin launches a fresh new container for every command, so it can set up the adequate environment. The UI doesn't (yet) offer this option, but one could imagine running some build steps with one docker image and later steps with another.

This also means running a background process as part of the build - which used to be a hack in the build script, with various cleanup issues (Xvnc plugin, I'm looking at you) - isn't necessary anymore. If you want to run Selenium tests, just run a Selenium container side by side with your build container(s); thanks to the shared network setup, you can run the Selenium tests without any extra configuration.
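By hand, the side-by-side setup looks roughly like this (the build image name is hypothetical; selenium/standalone-firefox is a public Docker Hub image):

```shell
# Start a Selenium server in its own container
docker run -d --name selenium selenium/standalone-firefox

# Run the build container linked to it; inside the build, the server
# is reachable as http://selenium:4444/wd/hub - no extra configuration
docker run --rm --link selenium:selenium my-build-image mvn verify

# Clean up the background container when the build is done
docker rm -f selenium
```

The plugin does this wiring for you, including tearing the containers down at the end of the build.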




See the plugin repo README for more details, give it a try, and let us know how it goes!