27 November 2015

A Dockerfile-based Continuous Delivery Pipeline

I've seen a bunch of projects with a Dockerfile to package some app into a production docker image. Most of them rely on a pre-existing binary they just ADD or curl into the image. This reveals a missing piece in the equation: that binary has to be built from source somewhere else, and as such you lose the traceability of your built image.
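
To make this concrete, here is the kind of Dockerfile I mean; the download URL is purely hypothetical, and the war comes from a build that ran somewhere else, outside the image:

FROM jetty:9
# nothing in this image records which sources or which build produced the binary
ADD https://artifacts.example.com/myapp/app.war /var/lib/jetty/webapps/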

On the other side, some try to set up a full "build from source" Dockerfile, but as a result end up with source code and many unnecessary dependencies and intermediate files in the image: compiler, test libraries, intermediate binary objects, etc. This isn't pleasant, and it makes your docker image bigger.

#7115 is a proposal to offer a multi-step Dockerfile, which could solve this issue but would introduce significant complexity in the Dockerfile syntax. It is also still being discussed, and I don't expect it to be implemented in the near future.

So, here's my way to handle such a Continuous Delivery Pipeline based on Dockerfiles.


Here is my "build phase" Dockerfile, which by convention I name Dockerfile.build:

FROM maven:3.3.3-jdk-8

# bring the project sources into the build environment
ADD . /work
WORKDIR /work

ENV NODE_VERSION 4.2.2

# install NodeJS alongside Maven for the frontend build
RUN curl -SL "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.gz" | tar xz -C /usr/local --strip-components=1

RUN npm install -g bower gulp
RUN npm install
RUN bower --allow-root install
RUN gulp build
RUN mvn package

# when a container is run, export the built war to a mounted /out directory
CMD cp /work/target/*.war /out/app.war

This Dockerfile defines my build environment, relying on Maven and NodeJS to build a modern Java application. Such a Dockerfile is designed to pin the build requirements and offer an isolated, reproducible build script, not to create an actual docker image to be run. But I also added a CMD, whose role is to let me export the built binary to a pre-defined location. So I can run:

docker build -f Dockerfile.build -t build .
docker run -v $(pwd):/out build

I also tried to output the war to stdout, so I could run docker run build > app.war, but I got some odd extra characters in the stream and didn't investigate further. I could also rely on docker cp to extract binaries from a container created from the build image.
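
For reference, that docker cp variant would look something like this; just a sketch, relying on the fact that /work/target already exists in the image since mvn package ran at build time:

# create a (stopped) container from the build image, copy the build output out of it, then discard it
container=$(docker create build)
docker cp "$container:/work/target" .
docker rm "$container"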

Doing this, I'm building from source and, after completion, I get the application binary for the next step. That next step is to create my deployment docker image, using Dockerfile.prod:

FROM jetty:9
ADD app.war /var/lib/jetty/webapps/

And my full pipeline execution script becomes:

docker build -f Dockerfile.build -t build .
docker run -v $(pwd):/out build
docker build -f Dockerfile.prod -t application .

I could just push the built application image to my registry, but I can also add further steps to run it with the required middleware (database, etc.) and execute some additional automated tests.

For this purpose I rely on docker-compose to set up the test environment and execute my test suite (the test service is built from a Dockerfile.tests, sketched after the compose file):

test:
  build: .
  dockerfile: Dockerfile.tests
  links: 
    - application
    - selenium

selenium:
  image: selenium/standalone-firefox

application:
  image: application
  ports:
    - "80:8080"
  links:
    - database

database:
  image: mysql
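
The Dockerfile.tests itself isn't shown here; as an illustration only, a minimal sketch of what it could look like, assuming the acceptance tests are a Maven suite run against the linked containers (the system properties are hypothetical names the tests would read):

FROM maven:3.3.3-jdk-8
ADD . /work
WORKDIR /work
# point the tests at the linked selenium and application containers
CMD mvn verify -Dselenium.url=http://selenium:4444/wd/hub -Dapplication.url=http://application:8080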

Running docker-compose up will (build and) run my test suite, based on Selenium acceptance tests.
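
In a CI script I also want the test container's exit code to decide whether the pipeline continues; a sketch of how I could wire that, using docker-compose run (which propagates the exit status) rather than docker-compose up:

docker-compose build
# run starts the linked application, selenium and database services as needed
docker-compose run --rm test
status=$?
docker-compose stop
exit $status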

The last step is just to tag and push the image to my registry.
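
Which is plain docker CLI; the registry host and the Jenkins-style $BUILD_NUMBER used as a version here are just placeholders for whatever your setup uses:

docker tag application registry.example.com/application:1.0.$BUILD_NUMBER
docker push registry.example.com/application:1.0.$BUILD_NUMBER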

A future improvement for me is to find a nice way to integrate this into Jenkins with a declarative syntax, rather than a plain shell script. Wait and see.