I was at DockerHackDay with Yoann on Thursday, and together we implemented a hack we'd had in mind for a while but never found time to actually work on. So, 48 hours later, we are proud to announce:
Jenkins Docker Slaves Plugin
Why yet another Jenkins / Docker plugin? There are actually at least three of them, including one I created last year, but all of them treat Docker containers as plain old virtual machines.
For this project, we wanted to embrace Docker and the container paradigm. Don't run more than one process in a container. Let containers communicate through links you explicitly set up. Rely on volumes for persistent data.
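Concretely, that paradigm looks like this on the Docker command line (a minimal sketch; the container and volume names here are purely illustrative, not taken from the plugin):

```shell
# One process per container: a database runs alone,
# with its persistent data on a named volume
docker run -d --name build-db -v db-data:/var/lib/postgresql/data postgres

# The build container talks to it through an explicitly declared link,
# and is thrown away when the command finishes
docker run --rm --link build-db:db my-build-image ./run-tests.sh
```
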
The Docker Slaves plugin works around some Jenkins APIs that weren't designed to manage containers.
The most obvious of them is the Cloud API, which uses a NodeProvisioner to determine when a new node is required, and when to shut it down. This API was designed for virtual machines: costly resources that are slow to set up and so have to be kept online for a few builds. Containers are lightweight and start in milliseconds, so there's no reason to reuse one rather than create a fresh, dedicated one for the next task.
Another API mismatch is how Jenkins launches commands on a build executor. Jenkins relies on the slave agent to, more or less, run `System.exec()`, so Jenkins remoting acts as a remote process executor. But what is Docker, after all? A remote process launcher (with some extra features)! So we just bypass the Jenkins remote launcher and run a plain `docker run` from the Jenkins master. In the future we could run this in detached mode, so Jenkins wouldn't even need to stay online while the job is running, and could be restarted...
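To give an idea of the bypass (a hedged sketch, not the plugin's actual invocation; the image and path are assumptions):

```shell
# What remoting normally does via the slave agent, expressed as
# a plain docker run issued directly from the master:
docker run --rm -v workspace:/home/jenkins/workspace -w /home/jenkins/workspace \
    maven:3 mvn -B clean install

# The detached variant mentioned above: Docker keeps the process
# running even if the master goes away, and the master can
# reattach later to collect the result
docker run -d maven:3 mvn -B clean install
```
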
Last but not least, there's no need to use the same container to run all commands. Reusing one actually prevents some plugins from applying environment variables set up by build wrappers, or requires terrible hacks as a workaround. Our plugin launches a fresh container for every command, so it can set up the adequate environment each time. The UI does not (yet) offer this option, but one could imagine running some build steps with one Docker image and later steps with another one.
This also means that running a background process as part of the build, which used to be a hack in the build script with various cleanup issues (Xvnc plugin, I'm looking at you), isn't necessary anymore. If you want to run Selenium tests, just run a Selenium container side by side with your build container(s); thanks to the shared network setup, you can run the Selenium tests without any extra configuration.
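Something like this, sketched by hand with the raw Docker CLI (the plugin wires this up for you; the image names and the `--net=container:` trick are my illustration, not necessarily what the plugin does internally):

```shell
# Start Selenium as a long-running side container
docker run -d --name selenium selenium/standalone-firefox

# Run the build in a container sharing the Selenium container's
# network namespace, so tests reach it on localhost:4444 with
# no extra configuration
docker run --rm --net=container:selenium my-build-image mvn -B verify

# When the build ends, the side container is simply removed:
# no leftover Xvnc/Selenium processes on the slave
docker rm -f selenium
```
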
See the plugin repo README for more details, give it a try, and let us know how it goes!