April 24, 2016

The Docker Slaves Jenkins plugin has been released!

At Docker Hack Day 2015, Yoann and I created Docker Slaves Plugin, an alternative way for Jenkins to rely on Docker for build nodes. That initial implementation summed up a summer spent hacking on Jenkins APIs and internals, and it helped us suggest required changes in jenkins-core and offer a more flexible approach.

How does Docker Slaves differ from the other Jenkins Docker plugins?

No prerequisites on Docker images

Jenkins uses a Java agent on the build machine so the master can manage build operations remotely. As a side effect, the docker, amazon-ecs and kubernetes plugins all require the Docker image configured to host builds to include a JVM, specific permissions, and sometimes even a running SSH daemon.

We think this is nonsense. You should not have to bake Jenkins-specific images. In particular, if you don't code in Java but use Jenkins for CI, growing your Docker images by 200MB of JDK is terrible.

We already explored this goal with the Docker Custom Build Environment Plugin, but that one comes with its own constraints: relying on bind mounts, it requires your "classic" Jenkins build nodes to have a local Docker daemon installed. It also suffers from some technical limitations :'(

Docker Slaves Plugin lets you use any arbitrary image.

More than just one Docker image

The docker, amazon-ecs and kubernetes plugins all rely on running a single Docker image for the build. In a way, they embrace a common misunderstanding about containers, treating them as lightweight virtual machines. As a result, you can find Docker images that bundle tons of build tools and even start a Selenium environment, like cloudbees/java-build-tools.

Why try to get all this shit into a single Docker image? Couldn't we combine a set of more specialized Docker images into a group ("pod") of containers configured to work together?

We took this exact approach. Every build executes with at least two containers:
  1. a plumbing 'jenkins-slave' container to run the required Jenkins slave agent
  2. your build container
  3. some optional additional containers, for example selenium/standalone-firefox to run browser-based tests, or a test database, or ... whatever resources your build requires.
All these containers are set up to share the build workspace and network, so they can work together without extra configuration, as sketched below.
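
To picture this at the Docker level, here is a rough sketch of how such a group of containers can share a workspace and a network namespace. This is illustrative only: the names are made up and the plugin's actual commands may differ.

# Illustrative sketch only; 'jenkins-slave-image' and 'build-42-remoting'
# are placeholder names, not the plugin's actual commands.
# 1. the plumbing container, owning the workspace volume and the network
docker run -d --name build-42-remoting -v /home/jenkins jenkins-slave-image
# 2. a side container joining the same network namespace
docker run -d --net=container:build-42-remoting selenium/standalone-firefox
# 3. the build container, sharing both workspace and network
docker run -i --volumes-from build-42-remoting \
  --net=container:build-42-remoting \
  maven:3.3.3-jdk-8 mvn clean test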



Docker Slaves Plugin lets you define your build environment and resources as a set of containers.

Build-specific Docker executor

Jenkins traditionally maintains a pool of slaves, which can be provisioned automatically by a Cloud provider. When a job is executed, such a slave gets the task assigned, creates a log for the build, and starts executing. After completion, the slave goes back to the available pool. The docker-plugin and a few others hack this lifecycle so the slave can't be reused, enforcing a freshly provisioned node.

This has an odd side effect: when the Docker infrastructure has an issue running your container, so the slave doesn't come online, Jenkins will try to provision another slave. Again and again. You won't get notified about the failure, as your build never even started. So, when you connect to Jenkins a few hours later, you'll find hundreds of disconnected slaves and your build still pending...

We wanted to reverse the Slave :: Build relation. We also wanted the slave environment to be defined by the job, or maybe even by the content of the job's repository at build time - typically, from a Dockerfile stored in SCM.

When docker-slaves is used by a job, a slave is created to host the build, but its actual startup is delayed until the job has been assigned and a build log created. We use this to pipe the container launch log into the build log, so you can immediately diagnose an issue with the Docker images or Dockerfile used for the build.

Docker Slaves Plugin creates a one-shot executor as a core element of your build.

Jenkins Remoting

Jenkins communicates with the slave agent using its own "remoting" library, comparable to Java RMI. The master relies on it to access the remote filesystem and start commands on the slave.

But we use Docker, and the Docker client can itself be considered a way to run and control remote commands, relying on the Docker daemon as the remote agent.

Docker Slaves bypasses Jenkins Remoting when the master has to run a command on the slave, relying on plain docker run for this purpose. We still need Remoting, as plugins also use it to send Java code closures to be executed on the slave. This is why a jenkins-slave container is attached to every build: you can ignore it, but it is required for all Jenkins plugins to work without a single change.
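
As a rough illustration, a shell build step then boils down to a plain Docker command instead of a Remoting call. Sketch only: the names are placeholders and the plugin's actual invocation may differ.

# each build step becomes a plain 'docker run'; stdout/stderr flow
# straight into the build log (container name is a placeholder)
docker run --rm \
  --volumes-from build-42-remoting \
  --net=container:build-42-remoting \
  maven:3.3.3-jdk-8 mvn clean test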

Docker Slaves Plugin reduces Jenkins Remoting usage.

Pipeline Support

Last but not least, Docker Slaves fully embraces Jenkins Pipeline. Pipeline being the main component of Jenkins 2.0, we could not just leave its integration for a later implementation effort.

Docker Slaves introduces the dockerNode Pipeline DSL as an alternative to node, which is used to assign a classic Jenkins slave. dockerNode takes as parameters the set of container images to be run, then acts as a node, so you can use all the Jenkins Pipeline constructs to script your CI/CD workflow.

dockerNode(image: "maven:3.3.3-jdk-8", sideContainers: ["selenium/standalone-firefox"]) {
  git "https://github.com/wakaleo/game-of-life"
  sh 'mvn clean test'
}

Docker Slaves Plugin embraces Pipeline.

What's next?

There are still some points we need to address, and probably some bugs as well, but the plugin is already working fine. If you give it a try, please send us feedback on your usage scenarios.

Something we also want to address is volume management. We re-attach the jenkins-slave container on later builds so we can retrieve a non-empty workspace, but we'd like to fully manage the workspace as a volume and control its lifecycle. In particular, we'd like to experiment with Docker volume plugins to improve the user experience. For example, using Flocker would allow us to snapshot the workspace on build completion. This could be useful to offer post-build browsing of the workspace (for diagnosis after a failure, for example) or to ensure a build starts from the last stable workspace state, which would offer a pre-populated SCM checkout and dependencies, without the risk of getting a corrupted environment from a failed build.
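
For illustration, managing the workspace as a named volume could look like the sketch below. This is the direction we have in mind, not what the plugin does today, and the names are made up.

# Sketch only: the plugin does not manage volumes this way yet;
# volume name and mount point are placeholders.
docker volume create --name game-of-life-workspace
docker run --rm -v game-of-life-workspace:/home/jenkins \
  maven:3.3.3-jdk-8 mvn clean test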

We would also like to investigate adapting this approach to container orchestrators like Kubernetes. We're not sure they offer adequate flexibility (maybe only a subset of the plugin could be enabled in such an environment), but it makes sense to give it a try.

To sum up: still some long hacking nights ahead :)

In the meantime, please give it a try, and let us know whether it helps with your Jenkins usage.



2 comments:

Unknown said…

I am running Jenkins inside a container and want to use this plugin. However, when I do, it doesn't appear to find the Docker host. In the Jenkins global config I set the URL to my Docker host as http://192.168.10.12:2375, and to be sure also set the environment variable DOCKER_HOST to the same value, but no luck. I know the host can be reached because I am using the same Jenkins instance with a configured Cloud and Template. The test of the connection is successful, and when referencing the label for this in a build job, a container is launched and the job runs (I'm using the jenkins-slave image that Max Stewart at RiotGames covers in his tutorial series).

So I must be missing the correct configuration for the Docker host in Jenkins somewhere. Can you describe how to do this, please?

Kind Regards

Fraser.

Unknown said…

To be clearer about my earlier comment: the specific console output is below. From within the Jenkins container I can curl to the Docker host, for example :-

curl -X GET http://192.168.10.12:2375/images/json

That successfully returns all of the images on the docker host via its remote API.

And, as I mentioned above, if I just use the standard docker plugin and point to a configured Docker Cloud, that works fine too.

I have tried on both Jenkins v1.651 and v2.7.1, same behaviour :-(

Started by user admin
FATAL: null
xyz.quoidneufdocker.jenkins.dockerslaves.api.OneShotExecutorProvisioningException
at xyz.quoidneufdocker.jenkins.dockerslaves.api.OneShotSlave.provisionFailed(OneShotSlave.java:146)
at xyz.quoidneufdocker.jenkins.dockerslaves.api.OneShotSlave.provision(OneShotSlave.java:130)
at xyz.quoidneufdocker.jenkins.dockerslaves.api.OneShotSlave.createLauncher(OneShotSlave.java:155)
at xyz.quoidneufdocker.jenkins.dockerslaves.DockerSlave.createLauncher(DockerSlave.java:90)
at hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:561)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:492)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Finished: FAILURE