28 Aug

Let Jenkins build the Jenkins image

A properly designed and implemented automation system requires its own infrastructure. You need components like a code and configuration repository, agents responsible for executing the automated tests, acceptance test software, and a tool to define the automation processes. A leader in the last category is Jenkins. You can use it to automate the building, testing, and deployment processes, and many of its features are available as plugins. In my previous post, I recommended running the infrastructure components as Docker containers, and that includes Jenkins as well. You can use the official images available on Docker Hub, but you will very soon find that those images are missing many components, so you need to make your own. Of course not manually! Let Jenkins build the Jenkins image!

My own Jenkins image? Why?

Many of the features you can use in Jenkins are available through plugins, which makes it a very flexible and extendable product. Let’s say you want to use Jenkins to build a Docker image or test an application inside a container – you need to install a separate Docker plugin that provides the required feature in Jenkins!

There is one catch! Plugins can’t work if the host executing the task doesn’t have the required software or libraries installed. If you want to use Docker containers, you need to have Docker installed. If you’re going to trigger an Ansible playbook from a Jenkins script, you need to have Ansible installed.

If you use my docker-compose.yml configuration described in the previous post, then the only persistent storage for the container maps to /var/jenkins_home, which is the main Jenkins directory. It contains the configuration, project workspaces, plugins, and logs. That means every system library or application installed in the container will be lost when we remove the container and recreate it. Updating Jenkins this way means losing all additionally installed packages. There are two simple solutions:

  • Write a shell script that installs the required software and execute it in the container every time you recreate it
  • Build your own Docker image with all the required additional applications and libraries
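The first option can be as simple as a short script that you copy into the running container and execute as root. A minimal sketch, assuming the container is named jenkins and using a few example packages (not a complete list):

```shell
#!/bin/sh
# Sketch of option 1: generate an install script that can be executed
# inside the container after every recreation. Package names are
# examples only.
cat > install-extras.sh <<'EOF'
#!/bin/sh
apt-get update && apt-get install -y python-pip
pip install ansible jinja2 robotframework
EOF
chmod +x install-extras.sh

# Copy the script into the container and run it as root (container
# name "jenkins" is an assumption), e.g.:
#   docker cp install-extras.sh jenkins:/tmp/install-extras.sh
#   docker exec -u root jenkins /tmp/install-extras.sh
```

The obvious downside: you must remember to run it after every recreation, which is exactly what the second option automates away.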

Image source code

On GitHub, we can find the official Jenkins repository with the code for building the Docker images. The script that defines the build process is the Dockerfile. It uses the openjdk:8-jdk image as the base image for the build – remember, Jenkins is a Java-based application. Unless we provide new values for the build arguments JENKINS_VERSION and JENKINS_SHA, the script will use hardcoded defaults which point to the latest LTS release number.

Let’s say we don’t want to use the LTS release but focus on the weekly development versions. We need to provide new values for those parameters, but because we want full automation for this process, we need to get them from somewhere. In the official Jenkins artifact repository, we can find the maven-metadata.xml file that contains information about all release numbers. The identifier of the latest release sits between the <latest></latest> tags. We could read it using various CLI tools for XML parsing, but I prefer a different approach.

#!/bin/sh
curl -s https://repo.jenkins-ci.org/releases/org/jenkins-ci/main/jenkins-war/maven-metadata.xml | sed -ne '/latest/{s/.*<latest>\(.*\)<\/latest>.*/\1/p;q;}'

This simple shell script parses the XML file looking for the predefined tags. Why do I prefer this way? If we wanted to use any additional XML library or application, we would first need to install it. sed, on the other hand, is a core part of every Linux distribution. That means we can download the official Jenkins container from Docker Hub and use it for building our image without installing any additional software. It is handy in highly secured networks.
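To see what the sed expression actually does, you can feed it a hand-made sample of the metadata file (the version numbers below are made up for illustration):

```shell
#!/bin/sh
# Run the same sed extraction against an inline sample of maven-metadata.xml
xml='<metadata><versioning><latest>2.141</latest><release>2.138</release></versioning></metadata>'
printf '%s\n' "$xml" | sed -ne '/latest/{s/.*<latest>\(.*\)<\/latest>.*/\1/p;q;}'
# prints: 2.141
```

The `-n` flag suppresses default output, the substitution keeps only the captured group between the tags, `p` prints it, and `q` quits after the first match.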

The second script is very similar, but it requires the version number retrieved by the first script as its argument, and it fetches the SHA256 checksum of the Jenkins installation file.

#!/bin/bash
curl -q -fsSL "https://repo.jenkins-ci.org/releases/org/jenkins-ci/main/jenkins-war/$1/jenkins-war-$1.war.sha256"

Adding your own customization

We need to provide the configuration for our changes, but we should not edit the Dockerfile from the Jenkins repository. The content of this file may change between releases, and you cannot call a process automated if you need to update your configuration file manually every time. But hey! Nobody said we can’t append to the original content! This is the approach I prefer – I keep my additional configuration in a separate file, let’s call it Dockerfile.diff, containing the instructions that install my custom libraries and applications. All I need to do is append the code from my Dockerfile.diff to the original Dockerfile every time the Jenkins pipeline builds a new image.

You can find an example of such a file in my GitLab repository. All it does is install additional packages via apt-get and pip: Docker CE, Ansible, Jinja2, and Robot Framework. Please note there is no FROM declaration at the beginning, because we append to the original Dockerfile rather than create a new one. Alternatively, you can use the original Jenkins image from Docker Hub as the base for your own image and provide your own complete Dockerfile.
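For illustration, a Dockerfile.diff fragment could look roughly like this – a sketch, not the exact content of my repository file (the real Docker CE installation needs a few more steps than a single apt-get line):

```dockerfile
# Appended to the official Dockerfile, so no FROM line here.
# Package installation requires root; switch back to jenkins afterwards.
USER root
RUN apt-get update \
    && apt-get install -y python-pip \
    && pip install ansible jinja2 robotframework
USER jenkins
```

Switching back to the jenkins user at the end matters: the official image expects Jenkins to run unprivileged.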

Jenkins pipeline

Jenkins Pipeline Execution

If you are working on automation, Git is your friend. Everything you can, you should store in a Git repository and fetch the latest version every time you need the code. We should put both scripts, the Dockerfile.diff, and the Jenkins pipeline configuration in a Jenkinsfile together as a project in a Git repository. In the Jenkins project configuration, we just put a link to the repository. We can also use events on the Git server as triggers for project execution, thanks to webhooks. If we configure a commit as the trigger, the build process will start every time we update the Dockerfile.diff file in the repository. If we also add a recurring daily task as a trigger, we never have to touch Jenkins unless we experience errors.
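In declarative pipeline syntax, the daily trigger can sit directly in the Jenkinsfile – a sketch only, with a cron spread of my own choosing:

```groovy
pipeline {
    agent any
    // Rebuild once a day even if nobody pushed to the repository;
    // webhook-driven builds come on top of this.
    triggers {
        cron('H 2 * * *')
    }
    stages {
        // ... stages as in the full pipeline ...
    }
}
```

The `H` in the cron expression lets Jenkins spread the start time, so many daily jobs don’t all fire at the same minute.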

Here is my basic pipeline configuration for this process:

pipeline {
    agent any

    stages {
        stage('Clone repository') {
            steps {
                checkout scm
            }
        }
        stage('Check latest version') {
            steps {
                script {
                    env.JENKINS_VERSION = sh(
                        script: 'sh ./getLatestVersion.sh',
                        returnStdout: true
                    ).trim()
                }
                script {
                    env.JENKINS_SHA = sh(
                        script: "sh ./getLatestVersionSHA.sh ${env.JENKINS_VERSION}",
                        returnStdout: true
                    ).trim()
                }
            }
        }
        stage('Clone Jenkins repository') {
            steps {
                checkout changelog: false, poll: false, scm: [$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'Jenkins-GitLab-Docker_repo']], submoduleCfg: [], userRemoteConfigs: [[url: 'https://github.com/jenkinsci/docker.git']]]
            }
        }
        stage('Append Dockerfile') {
            steps {
                sh "cat Dockerfile.diff >> Jenkins-GitLab-Docker_repo/Dockerfile"
            }
        }
        stage('Build docker image and push to repository') {
            steps {
                script {
                    docker.withRegistry("http://172.16.10.41", 'jenkins-to-harbor') {
                        def customImage = docker.build("pwo-jenkins/jenkins", "--build-arg JENKINS_VERSION=${env.JENKINS_VERSION} --build-arg JENKINS_SHA=${env.JENKINS_SHA} -f Jenkins-GitLab-Docker_repo/Dockerfile Jenkins-GitLab-Docker_repo")
                        /* Push the container to the custom Registry */
                        customImage.push("${env.JENKINS_VERSION}")
                        customImage.push("latest")
                    }
                }
            }
        }
    }
}

What does the code do?

The code is straightforward, and you shouldn’t have any problems understanding it even if you have no experience with Jenkins. There are two things I would like to clarify.

When Jenkins executes checkout scm as the first action, it synchronizes a local copy of our project with the code from the repository provided in the Jenkins project configuration. Then, in the third stage, we use the checkout step again to clone the official Jenkins Docker repository next to our local copy, because we can provide only one repository as part of the project configuration.

To build the image, we call the Docker plugin. If we don’t provide additional parameters, it will use Docker Hub for all tasks we define inside. If we want to use our own registry, we need to call the plugin as docker.withRegistry, providing the registry URL and the credentials. We don’t put the login and password directly in the script; instead, we use the name of a credentials object. We create it once on the Jenkins server, and then we can use it in multiple projects. We execute two types of Docker actions – build for building the image and push for uploading the image to the registry with the tag provided as a parameter.

My script builds the image on the central Jenkins server in my network. That means Docker must be installed on the server before we execute the script. It will not work on the stock Jenkins image from the official Docker Hub repository – you would need to connect to the container and install the required packages manually first. Alternatively, you can use a remote Docker server, such as a Docker Swarm, and point to it with the docker.withServer method. This, by the way, is also why I prefer sed in the scripts over a dedicated XML parsing tool – sed is already there.

As with every project – “start small and grow big”. First, try to build the image without changes, then add your customization or additional steps. This is the best way to catch mistakes as you make them and learn new things.
