April 27, 2016

I'm now a Docker Captain


Captains is a Docker program that gathers their best advocates around the world, gives them first-class information and contacts within the Docker community, and helps them experiment and spread the word with great Docker-related content.

Thanks to the Docker Captain program, we got access to Docker4Desktop before it was publicly announced, we get online webinars with the engineering team, and we now have a cool logo!

I'm pleased to have been selected as a Captain, even though my YouTube channel Quoi d'Neuf Docker is limited to a French audience (maybe I should launch an English-speaking one?).

Java has the Java Champions program, which I've never been invited to join, despite my awesome Apache Maven book and talks at major conferences. It seems it's easier to become a Docker star :P






April 25, 2016

cfp.io

I'm very excited to announce the launch of a long-term project of mine: cfp.io


TL;DR: Are you an event organizer? Need a free call-for-papers? Contact us, and we will host it for you.

History

As a conference organizer, I've been handling call-for-papers for a while, and have actually worked on developing three of them (sic).


  • We started in 2012 with a simple Google Form to collect speaker proposals. This was a poor experience, but it did the job. As developers, we decided we needed a dedicated tool to offer a better service.
  • In 2013 we created our own CFP application and worked on it during the pre-event period so it matched our needs. The main issue came in 2014 when we tried to restart it from the initial collection phase, as the later changes had unexpected impacts.
  • In 2015, Devoxx France offered to share its custom CFP application codebase. We forked it and adapted it to our needs. The codebase is written in Scala, and for this reason alone I was hardly able to find volunteers to help us on this topic. Also, this application covers lots of Devoxx-specific things, hidden in the codebase. We ended up with a few speakers not being registered, as Devoxx only offers a pass to the first two speakers, while we have labs with up to four of them :'(
  • In 2016, we forked the DevFest Nantes Call for Papers application. This one was written in Java/Spring/Angular and had a nice look and feel, so it was a great candidate. We noticed many glitches in the backend, so we decided to fix them... and after a few months had it mostly rewritten. Still, this was a smooth migration, and four volunteers were able to make it a great app.

Our CFP worked like a charm, and was then forked by NCrafts.io for their own needs.

So, is this the definitive CFP we need?

No: the main issue here is that we each maintain forks of the others' code. For example, we discovered a security issue with our CFP and fixed it, but as this happened two weeks before the event, we didn't take the time to let the DevFest guys know about it (sorry guys). Also, people at NCrafts could introduce interesting features, and even though git makes it easy to backport such code, keeping everything in sync could quickly become a hard task.

We believe a better approach is to colocate our call-for-papers. A CFP is a low-traffic application, with limited data volume and limited complexity. It's very easy to make our codebase multi-tenant, and we plan to offer free hosting for call-for-papers on our server. Each tenant will get access at https://{{my_awesome_event}}.cfp.io and will be able to manage their own stuff, without needing to bother with infrastructure.
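
To make this concrete, here is a minimal sketch in Groovy of how a multi-tenant app can resolve the tenant from the request's Host header. This is not the actual cfp.io code: the tenantFrom helper and the sample assertions are purely illustrative.

String tenantFrom(String hostHeader) {
  String host = hostHeader.split(':')[0]   // drop an optional port
  String[] parts = host.split('\\.')
  // expect exactly {tenant}.cfp.io; anything else is not a tenant request
  (parts.length == 3 && parts[1] == 'cfp' && parts[2] == 'io') ? parts[0] : null
}

assert tenantFrom('my_awesome_event.cfp.io') == 'my_awesome_event'
assert tenantFrom('cfp.io') == null        // bare domain, no tenant

Each request would then be scoped to the resolved tenant's data, so a single deployment can serve every event.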


As the code is open source (AGPL), we welcome contributions offering additional features. If you have designed an integration with a third-party web service, please contribute this new feature. Other platform users will then get notified about it and can choose to use it as well. We expect this to be the best way for cfp.io to become the place to be for event organizers, covering all aspects of event management.

Emerging features

A side effect of co-hosting call-for-papers is building a sort of speaker social network:

  • As a speaker, I used to copy/paste my bio and favorite talk abstract into various events' call-for-papers. With cfp.io, I can just reuse my bio and select a talk I already proposed at eventA to apply to eventB. I can also discover events I hadn't heard about, just because their CFP is hosted on the platform.
  • As an event organizer, I can check whether a speaker already talked at other events, and maybe whether that specific talk was recorded and is available online. This will be a great help for the program committee when selecting speakers.

Business model

Do we plan to create a company here? No. We are a non-profit organization, and this is all just for fun! We are just geeks trying to make something cool and useful. Our "business plan" is to get a free pass from each hosted conference. Some of them will be used for our own pleasure to join great events; some might be sold to pay for the server, or to help us pay for travel.

If you are an event organizer, feel free to contact us to get your CFP set up on our infra. We plan to automate this on a self-service basis as well, but it isn't implemented yet.

April 24, 2016

Docker Slaves Jenkins plugin has been released!

At Docker Hack Day 2015, Yoann and I created the Docker Slaves Plugin, an alternative way for Jenkins to rely on Docker for build nodes. The initial implementation was the outcome of a summer spent hacking Jenkins APIs and internal design; it was very helpful for us to suggest required changes in jenkins-core and to offer a more flexible approach.

How does Docker Slaves differ from other Jenkins Docker plugins?

No prerequisite on Docker images

Jenkins uses a Java agent on the build machine to manage build operations remotely from the master. As a side effect, plugins like docker, amazon-ecs or kubernetes require the Docker image configured to host builds to have a JVM, the expected permissions, and sometimes even a running ssh daemon.

We think this is nonsense. You should not have to bake Jenkins-specific images. Especially if you don't code in Java but use Jenkins for CI, growing your Docker images by 200MB of JDK is terrible.

We already explored this goal with the Docker Custom Build Environment Plugin, but that one also has some constraints: relying on bind mounts, it requires your "classic" Jenkins build nodes to have a local docker daemon installed. It also suffers from some technical limitations :'(

Docker Slaves Plugin lets you use any arbitrary image.
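
For instance, a Go project can build inside the stock golang image, with no JVM and no ssh daemon baked in. A minimal sketch using the dockerNode Pipeline step described later in this post; the repository URL is just a placeholder:

dockerNode(image: "golang:1.6") {
  // a plain image from the Docker Hub, nothing Jenkins-specific inside
  git "https://github.com/example/my-go-project"
  sh "go build"
}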

More than just one Docker image

The docker, amazon-ecs and kubernetes plugins all rely on running a single docker image for the build. They somehow buy into a common misunderstanding about containers, considering them lightweight virtual machines. As a result, you can find docker images that include tons of build tools and also start a selenium environment, like cloudbees/java-build-tools.

Why try to get all this shit into a single docker image? Couldn't we combine a set of more specialized docker images into a group ("pod") of containers configured to work together?

We took this exact approach. Every build executes with at least two containers:
  1. a plumbing 'jenkins-slave' container to run the required Jenkins slave agent
  2. your build container
  3. some optional additional containers, for example selenium/standalone-firefox to run browser-based tests, or a test database, or ... whatever resources your build requires.
All those containers are set to share the build workspace and the network, so they can work together without extra configuration.



Docker Slaves Plugin lets you define your build environment and resources as a set of containers.
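
For example, a build that needs a real database can declare it as a side container. A sketch using the dockerNode Pipeline step described later in this post; the repository URL is a placeholder, and postgres:9.4 stands in for whatever service your tests need:

dockerNode(image: "maven:3.3.3-jdk-8", sideContainers: ["postgres:9.4"]) {
  // all containers share the workspace and the network, so the tests
  // can reach PostgreSQL on localhost without extra wiring
  git "https://github.com/example/my-webapp"
  sh "mvn verify"
}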

Build-specific Docker executor

Jenkins maintains a pool of slaves, which can be automatically provisioned by a cloud provider. When a job is executed, such a slave gets the task assigned, creates a log for the build, and starts executing. After completion, the slave goes back to the available pool. The docker-plugin and a few others hack this lifecycle so the slave can't be reused, enforcing a freshly provisioned node.

This has an odd effect: when the docker infrastructure has an issue running your container, so the slave doesn't come online, Jenkins will try to run another slave. Again and again. You won't get notified about the failure, as your build hasn't even started. So, when you connect to Jenkins after a few hours, you'll see a hundred disconnected slaves and your build still pending...

We wanted to reverse the Slave :: Build relation. We also wanted the slave environment to be defined by the job, or maybe even by the content of the job's repository at build time - typically, from a Dockerfile stored in SCM.

When docker-slaves is used by a job, a slave is created to host the build, but its actual startup is delayed until the job has been assigned and a build log created. We use this to pipe the container launch log into the build log, so you can immediately diagnose an issue with the docker images or Dockerfile you used for the build.

Docker Slaves Plugin creates a one-shot executor, as a first-class element of your build.

Jenkins Remoting

Jenkins communicates with the slave agent using a specific "remoting" library, comparable to Java RMI. It relies on it so the master can access the remote filesystem and start commands on the slave.

But we use Docker, and the docker client can itself be considered a way to run and control remote commands, relying on the docker daemon as the remote agent.

Docker Slaves bypasses Jenkins Remoting when the master has to run a command on the slave, relying on plain docker run for this purpose. We still need Remoting, as plugins also use it to send Java code closures to be executed on the slave. This is why a jenkins-slave container is attached to every build: you can ignore it, but it is required for all Jenkins plugins to work without a single change.

Docker Slaves Plugin reduces Jenkins Remoting usage.

Pipeline Support

Last but not least, Docker Slaves fully embraces Jenkins Pipeline. Pipeline being the main component of Jenkins 2.0, we could not just leave its integration for a later implementation effort.

Docker Slaves introduces the dockerNode Pipeline DSL as an alternative to node, which is used to assign a classic Jenkins slave. dockerNode takes as parameters the set of container images to be run, then acts as a node, and you can use all your Jenkins Pipeline constructs to script your CI/CD workflow.

dockerNode(image: "maven:3.3.3-jdk-8", sideContainers: ["selenium/standalone-firefox"]) {
  git "https://github.com/wakaleo/game-of-life"
  sh 'mvn clean test'
}

Docker Slaves Plugin embraces Pipeline.

What's next?

There are still some points we need to address, and probably some bugs as well, but the plugin already works fine. If you give it a try, please send us feedback on your usage scenario.

Something we also want to address is volume management. We re-attach the jenkins-slave container on later builds so we can retrieve a non-empty workspace, but we'd like to fully manage this as a volume, including its lifecycle. In particular, we'd like to experiment with docker volume plugins to improve the user experience. For example, using Flocker would allow us to snapshot the workspace on build completion. This could be useful to offer post-build browsing of the workspace (for diagnosis on failure, for example) or to ensure a build starts from the last stable workspace state, offering a pre-populated SCM checkout and dependencies without the risk of getting a corrupted environment from a failed build.

We would also like to investigate adapting this approach to container orchestrators like Kubernetes. We're not sure they offer adequate flexibility; maybe only a subset of the plugin could be enabled in such an environment, but it makes sense to give it a try.

To sum up: still some long hacking nights ahead :)

In the meantime, please give it a try, and let us know if it would be helpful for your Jenkins usage.