This week is JenkinsWorld, and I won't be there ... but this doesn't mean I won't be part of it.
I'm pleased to announce Docker-slaves-plugin has reached 1.0.
Docker-slaves was created one year ago, when Yoann and I joined forces during DockerHackDay with complementary ideas for re-inventing Jenkins in a Docker world. We are proud that we developed a working prototype within 24 hours and won 3rd prize in the challenge!
What makes Docker-slaves such a revolution?
So far, Docker-related Jenkins plugins have used a single Docker container to provide the Jenkins agent (aka "slave") that hosts a build. Such a container runs a Jenkins remote agent, so it has to include a JRE and either accept a connection from the master or connect to the master by itself. In both cases, this puts significant constraints on the Docker image one can use. We want these constraints to disappear.
With Docker-slaves, you can use ANY docker image to run your build.
To communicate with a remote agent, Jenkins establishes a communication channel with the slave. For an SSH slave, this channel is tunneled over the SSH connection; other slave launchers use various tunneling techniques or explicit port allocation to create it. But Docker also has an interactive mode to run a container, which lets us establish a stable (encrypted) connection with a remote container. Running `docker run -it` is very comparable to running an SSH client to connect to a remote host, so we use this capability to create the remoting channel directly on top of Docker's interactive mode.
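To see the idea without a Docker daemon at hand, here is a minimal stand-in: an interactive container's stdin/stdout pair is just a bidirectional byte stream, which is all a remoting transport needs. Below, `cat` plays the role of `docker run -i <image> <command>`:

```shell
# `docker run -i image cmd` wires the container's stdin/stdout to ours,
# much like `ssh host cmd` does. Any such byte stream can carry the
# remoting channel. Locally, `cat` stands in for the remote process:
printf 'hello from master\n' | cat
# prints: hello from master
```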
Running sshd in a Docker container is evil. Docker-slaves doesn't need one, and doesn't require your master to expose JNLP on the Internet.
Docker is not just about a container, it's also about combining containers. docker-compose is a popular tool, and it demonstrates the need to combine containers to build useful stuff. We think a build environment has the same requirement: you need Maven to build your project, but you also need a Redis database and a web browser so you can run Selenium functional tests. Just create three containers and link them together! This is exactly what Docker-slaves does: you can define your build environment as a set of containers.
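In docker-compose terms, such a build environment might look like the following sketch (the service names and images are illustrative, not something the plugin generates):

```yaml
# hypothetical build environment: one build container plus two side containers
version: '2'
services:
  build:
    image: maven:3.3.3-jdk-8
    links:
      - redis
      - selenium
  redis:
    image: redis:3
  selenium:
    image: selenium/standalone-firefox
```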
Docker-slaves lets you compose your build environment
... and you can even define those containers using a Dockerfile you store in SCM! If you love infrastructure-as-code, you'll be happy to have your source code, your continuous-deployment pipeline script, and your build environment all stored in your project SCM as plain Dockerfiles. Docker-slaves will build the required image on demand and run the build inside the resulting container.
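For instance, a project could keep a Dockerfile like this one alongside its sources (the base image and extra packages are just an example):

```dockerfile
# build environment for this project, versioned together with the code
FROM maven:3.3.3-jdk-8
RUN apt-get update \
 && apt-get install -y --no-install-recommends make \
 && rm -rf /var/lib/apt/lists/*
```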
Docker-slaves brings Continuous-Delivery-as-Code to the next level
Docker-slaves also adds a transparent "remoting" container so the Jenkins plumbing is properly set up and all Jenkins plugins work without any change. This includes the Pipeline plugin, which used to require a classic Jenkins slave as a node() to run build steps, and that node had to host a Docker daemon if you wanted to run anything Docker-related. With Docker-slaves, just write your Jenkinsfile without the need for a single classic Jenkins executor:
```groovy
dockerNode("maven:3.3.3-jdk-8") {
    sh "mvn -version"
}
```
How does this compare to docker-pipeline's docker.inside? There's no need for a host Jenkins node here, nor for a local Docker daemon installed on your Jenkins slaves. You can rely on Docker Swarm to scale your build capacity.
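For comparison, the docker.inside approach is a sketch along these lines, assuming a classic slave labelled `docker` that runs a local Docker daemon:

```groovy
// docker-pipeline: requires a classic Jenkins node with a local Docker daemon
node('docker') {
    docker.image('maven:3.3.3-jdk-8').inside {
        sh 'mvn -version'
    }
}
```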
Docker-slaves does embrace Pipeline
Last but not least, Jenkins Docker-related plugins used to start each build with a fresh, new slave, which requires running a full SCM checkout and re-populating project dependencies. Docker-slaves uses a volume as the build workspace, so the workspace is persistent between builds. There is huge room for improvement here, with support for Docker's volume drivers, typically to provide snapshot capabilities so you can browse the workspace of a specific build (to diagnose a build failure) or replicate the workspace across a cluster.
The Docker-slaves workspace is persistent

If you get the opportunity to give it a try, please send me feedback @ndeloof
UPDATE:
For some odd reason (who said "conspiracy"?) docker-slaves 1.0 has not been published in the update center (yet). I will need to investigate with the Jenkins infra team, but with Jenkins World they are all highly busy this week.
> @ndeloof w00t! Best plugin ever! — Damien Duportal (@DamienDuportal), September 14, 2016