At some point, the hacks you build with Docker hit some limitations, and you have a few options:
- give up
- complain
- work around
- contribute
In this blog post I'll focus on the last one.
Working on the CloudBees Docker Custom Build Environment Plugin (aka "oki docki"), I had to use docker exec and pass it extra environment variables, but it doesn't offer such an option. To get my work done I used a workaround that assumes `env` is available in the target container, but I wasn't pleased with this option.
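For the curious, the workaround looked something like this (a sketch with hypothetical container and variable names): `env(1)` runs the given command with extra variables added to its environment, so the injection happens inside the container rather than through `docker exec` itself.

```sh
# Workaround sketch: rely on env(1) inside the target container to inject
# variables, since `docker exec` couldn't. Names are hypothetical.
docker exec mycontainer env FOO=bar sh -c 'echo "FOO is $FOO"'
```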
So in my spare time (sic) I looked into the Docker code, and understood this option isn't supported because ... nobody had asked for it so far. Actually, IIUC, the underlying RunC container engine fully supports passing environment variables from exec, as the plumbing code is actually shared with docker run.
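(For the record, this gap has since been closed: current Docker releases accept environment variables on exec directly. Hypothetical names again:)

```sh
# Modern Docker supports this natively via the -e/--env flag
docker exec -e FOO=bar mycontainer sh -c 'echo "FOO is $FOO"'
```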
Getting the project
I made a few mistakes before I understood Go programming conventions. So I created ~/go and declared GOPATH accordingly, then cloned the docker git repo under $GOPATH/src/github.com/docker. With this setup, I can open the project root in IntelliJ IDEA with the Go plugin installed, and get a nice development environment.
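In shell terms, the setup boils down to something like this (a minimal sketch; note the repository has since moved to github.com/moby/moby, so adjust the clone URL to taste):

```sh
# Minimal GOPATH workspace following Go conventions
mkdir -p ~/go/src/github.com/docker
export GOPATH=$HOME/go                     # persist this in your shell profile
cd "$GOPATH/src/github.com/docker"
git clone https://github.com/docker/docker.git
```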
I'm far from fluent in the Go language, but the docker source code is modular, which makes it pretty simple to search, and IDEA can be used to look up method usages and such things.
I was at DockerHackDay with Yoann on Thursday, and together we implemented a hack we'd had in mind for a while without ever finding time to actually work on it. So, 48 hours later, we are proud to announce:
Jenkins Docker Slaves Plugin
Why yet another Jenkins / Docker plugin? There are actually at least three of them already, including one I created last year, but all of them treat Docker containers as plain old virtual machines.
For this project, we wanted to embrace Docker and the container paradigm: don't run more than one process in a container; have containers communicate through links you explicitly set up; rely on volumes for persistent data.
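As a purely illustrative example of that paradigm (image and volume names are hypothetical):

```sh
# One process per container, explicit links, volumes for persistent data
docker run -d --name db -v db-data:/var/lib/postgresql/data postgres
docker run -d --name app --link db:db my-app-image
```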
The Docker Slaves plugin works around some Jenkins APIs that weren't designed to manage containers.
The most obvious of them is the Cloud API, which uses a NodeProvisioner to determine when a new node is required and when to shut it down. This API was designed for virtual machines: costly resources that are slow to set up and as such have to be kept online for a few builds. Containers are lightweight and start in milliseconds, so there's no reason to reuse one rather than create a fresh, dedicated one for the next task.
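You can get a feel for how disposable containers are from your own shell (assuming a small image such as alpine is already pulled):

```sh
# Container startup is cheap enough to treat every container as throwaway
time docker run --rm alpine true
```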
Another API mismatch is how Jenkins launches commands on a build executor. Jenkins relies on the slave agent to, one way or another, run `System.exec()`, so Jenkins remoting acts as a remote process executor. But what is Docker, after all? A remote process launcher (with some extra features)! So we just bypass the Jenkins Remote Launcher and run a plain `docker run` from the Jenkins master. In the future, we could run this in detached mode; Jenkins would then not even need to stay online while the job is running, and could be restarted...
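To illustrate the idea (hypothetical host and image; this is not the plugin's actual code): point the Docker CLI at a remote daemon and it becomes exactly that, a remote process launcher.

```sh
# Run a process on a remote host through the Docker daemon's API
export DOCKER_HOST=tcp://build-host.example.com:2376
docker run --rm maven:3-jdk-8 mvn -version     # attached, much like remoting today
docker run -d maven:3-jdk-8 mvn verify         # detached: the master could disconnect
```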
Last but not least, there's no need to use the same container to run all commands. Reusing one actually prevents some plugins from applying environment variables newly set up by build wrappers, or requires some terrible hacks as a workaround. Our plugin just launches a fresh new container for every command, so it can set up the adequate environment each time. The UI does not (yet) offer this option, but one could imagine running some build steps with one docker image and later steps with another.
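Conceptually, each build step then looks like this (a sketch with hypothetical names; a shared named volume carries the workspace from step to step):

```sh
# Fresh container per step, each with its own environment
docker run --rm -e STAGE=compile -v workspace:/work -w /work maven:3-jdk-8 mvn compile
docker run --rm -e STAGE=test    -v workspace:/work -w /work maven:3-jdk-8 mvn test
```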
This also means that running a background process as part of the build, which used to be a hack in build scripts with various cleanup issues (Xvnc plugin, I'm looking at you), isn't necessary anymore. If you want to run Selenium tests, just run a Selenium container side by side with your build container(s); thanks to the shared network setup, you can run the Selenium tests without any extra configuration.
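Here is a rough sketch of the idea (hypothetical names; the plugin wires this up for you): containers joined to the same network namespace see each other on localhost.

```sh
# Selenium side by side with the build, sharing one network namespace
docker run -d --name pod alpine sleep 3600                       # placeholder holding the namespace
docker run -d --net container:pod selenium/standalone-firefox    # Selenium listens on :4444
docker run --rm --net container:pod -v workspace:/work -w /work my-build-image mvn verify
```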
See the plugin repo README for more details, give it a try, and let us know how it goes!