Jenkins + k8s: Building Docker Image without Docker
Build Docker Image in Jenkins pipeline without Docker
Published on 06/04/2022 by igor.kolomiyets in Technical Tips

As announced back in 2020, Kubernetes deprecated Docker as a container runtime in v1.20, and its support (the dockershim) is removed in v1.24, which is due for release soon. If your Jenkins pipelines use the Kubernetes plugin, it is highly likely that they rely on the node's underlying Docker daemon to build images, so you might have a problem.


So what are the options after the 1.24 release if you still want to use the Kubernetes plugin to build Docker images?

Use Docker

The most obvious option is to continue using Docker to build the images.

Yes, you can still have Docker installed on the Kubernetes Worker Node even if it is not used by kubelet as a container runtime. In this case, no changes to the pipelines are required.

However, bear in mind that this is only applicable to self-managed clusters where you have full control over the worker nodes.

What if you do not want to use Docker, or you are using managed Kubernetes, where the choice of container runtime is outside of your control?

Use BuildKit

Using BuildKit will require some minor changes to the pipeline Jenkinsfile.

Let’s go through the required changes.

Consider we have the following pipeline:

version="1.0.0"
repository="ikolomiyets/demo-frontend"
tag="latest"
image="${repository}:${version}.${env.BUILD_NUMBER}"
namespace="demo"

podTemplate(label: 'demo-customer-pod', cloud: 'kubernetes', serviceAccount: 'jenkins',
  containers: [
    containerTemplate(name: 'ng', image: 'iktech/angular-client-slave', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'docker', image: 'docker:dind', ttyEnabled: true, command: 'cat', privileged: true,
        envVars: [
            secretEnvVar(key: 'DOCKER_USERNAME', secretName: 'docker-hub-credentials', secretKey: 'username')
        ]),
    containerTemplate(name: 'sonarqube', image: 'iktech/sonarqube-scanner', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'kubectl', image: 'roffe/kubectl', ttyEnabled: true, command: 'cat'),
  ],
  volumes: [
    secretVolume(mountPath: '/etc/.ssh', secretName: 'ssh-home'),
    secretVolume(secretName: 'docker-hub-credentials', mountPath: '/etc/.secret'),
    hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')
  ]) {
    node('demo-customer-pod') {
        stage('Prepare') {
            checkout scm
        }

        stage('Build Docker Image') {
            container('docker') {
                sh """
                  docker build -t ${image} .
                  cat /etc/.secret/password | docker login --password-stdin --username \$DOCKER_USERNAME
                  docker push ${image}
                  docker tag ${image} ${repository}:${tag}
                  docker push ${repository}:${tag}
                """
                milestone(1)
            }
        }
        stage('Deploy Latest') {
            container('kubectl') {
                sh "kubectl patch -n ${namespace} deployment demo-frontend -p '{\"spec\": { \"template\" : {\"spec\" : {\"containers\" : [{ \"name\" : \"demo-frontend\", \"image\" : \"${image}\"}]}}}}'"
                milestone(2)
            }
        }
    }
}

properties([[
    $class: 'BuildDiscarderProperty',
    strategy: [
        $class: 'LogRotator',
        artifactDaysToKeepStr: '', artifactNumToKeepStr: '', daysToKeepStr: '', numToKeepStr: '10']
    ]
]);

This is a simplified version of the real pipeline, with all the test, static code analysis and tagging steps removed, so only two stages are left: build the Docker image and deploy it to the Kubernetes cluster.

To use BuildKit, you have to replace the docker:dind container with moby/buildkit:master. With this change, mounting /var/run/docker.sock is no longer required.

However, to let BuildKit push your image to the registry, you first need to create a new secret holding the Docker configuration file (if your registry requires authentication). In the following example we assume that we are pushing the image to Docker Hub and that our credentials are username user and password password.

First, encode the username/password pair into a base64 string. On Linux, run:

echo -n user:password | base64 -w 0

On a Mac, run echo -n user:password | base64 instead. Copy the resulting string (dXNlcjpwYXNzd29yZA==) and put it into the “auth” property of the following JSON file (config.json):

{
  "auths": {
     "https://index.docker.io/v1/": {
       "auth": "dXNlcjpwYXNzd29yZA=="
     }
  }
}
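
If you want to double-check the value, you can decode it back; with the example credentials above this prints user:password:

echo dXNlcjpwYXNzd29yZA== | base64 -d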

Next, convert the content of the config.json file into a base64 string:

cat config.json | base64 -w 0

Add the resulting string to the Kubernetes secret manifest, secret.yaml. We assume that your Jenkins Kubernetes plugin uses the jenkins namespace for the pipeline pods.

apiVersion: v1
kind: Secret
metadata:
  namespace: jenkins
  name: docker-config
data:
  config.json: ewogICJhdXRocyI6IHsKICAgICAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogewogICAgICAgImF1dGgiOiAiZFhObGNqcHdZWE56ZDI5eVpBPT0iCiAgICAgfQogIH0KfQo=

Then apply it to the Kubernetes cluster: kubectl apply -f secret.yaml.
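
Alternatively, if you prefer to skip the manual base64 step, the same secret can be created directly from the config.json file (a sketch; adjust the namespace and file path to match your setup):

kubectl create secret generic docker-config \
  --namespace jenkins \
  --from-file=config.json=./config.json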

Now we are ready to modify the podTemplate. It should look like the following:

podTemplate(label: 'demo-customer-pod', cloud: 'kubernetes', serviceAccount: 'jenkins',
  containers: [
    containerTemplate(name: 'ng', image: 'iktech/angular-client-slave', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'buildkit', image: 'moby/buildkit:master', ttyEnabled: true, privileged: true),
    containerTemplate(name: 'sonarqube', image: 'iktech/sonarqube-scanner', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'kubectl', image: 'roffe/kubectl', ttyEnabled: true, command: 'cat'),
  ],
  volumes: [
    secretVolume(mountPath: '/etc/.ssh', secretName: 'ssh-home'),
    secretVolume(secretName: 'docker-config', mountPath: '/root/.docker')
  ]) {

Note that we renamed the container from docker to buildkit and removed the command definition: with the image's default entrypoint left in place, the container starts the buildkitd daemon that buildctl talks to.

We also mounted the secret with Docker's config.json into the /root/.docker directory, so it is available inside the buildkit container as /root/.docker/config.json.
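
If you want to verify the mount before the first real build, an optional check inside the buildkit container could look like the following sketch (a hypothetical debugging stage, not part of the original pipeline; avoid printing the file itself, as it contains credentials):

        stage('Check Docker config') {
            container('buildkit') {
                // Confirm the secret is mounted where BuildKit expects it
                sh 'ls -l /root/.docker/config.json'
            }
        }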

And next, we replace the Build Docker Image stage with the following:

        stage('Build Docker Image') {
            container('buildkit') {
                sh """
                  buildctl build \
                    --frontend dockerfile.v0 \
                    --local context=. \
                    --local dockerfile=. \
                    --output type=image,name=${image},push=true
                  buildctl build \
                    --frontend dockerfile.v0 \
                    --local context=. \
                    --local dockerfile=. \
                    --output type=image,name=${repository}:${tag},push=true
                """
                milestone(1)
            }
        }
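
As a side note, BuildKit can push the same build under several names in a single invocation by passing a quoted, comma-separated list in the name field of the output option. A sketch of what that could look like here (verify the CSV quoting against the BuildKit version you run):

                sh """
                  buildctl build \
                    --frontend dockerfile.v0 \
                    --local context=. \
                    --local dockerfile=. \
                    --output 'type=image,"name=${image},${repository}:${tag}",push=true'
                """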

That is pretty much it.

Using AWS ECR

One final point concerns AWS ECR. In this case, your Docker config.json file will be different:


  "credHelpers": {
     "public.ecr.aws": "ecr-login",
     "<account_id>.dkr.ecr.<zone_id>.amazonaws.com": "ecr-login"
  }
}

This config file does not contain any secrets, hence it can be stored as a ConfigMap.

apiVersion: v1
kind: ConfigMap
metadata:
  name: docker-config
  namespace: jenkins
data:
  config.json: |
    {
      "credHelpers": {
        "public.ecr.aws": "ecr-login",
        "<account_id>.dkr.ecr.<zone_id>.amazonaws.com": "ecr-login"
      }
    }
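
Apply the manifest with kubectl apply as before or, as a sketch, create the ConfigMap straight from the local config.json file (the manifest file name configmap.yaml is assumed; adjust to your setup):

kubectl apply -f configmap.yaml
# or create it directly from the local file:
kubectl create configmap docker-config --namespace jenkins --from-file=config.json=./config.json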

If you use a ConfigMap, the volume mount in the podTemplate changes from secretVolume(secretName: 'docker-config', mountPath: '/root/.docker') to configMapVolume(configMapName: 'docker-config', mountPath: '/root/.docker'), as shown in the sketch below.
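
The volumes section of the podTemplate would then look like this (keeping the ssh-home secret volume from the earlier example):

  volumes: [
    secretVolume(mountPath: '/etc/.ssh', secretName: 'ssh-home'),
    configMapVolume(configMapName: 'docker-config', mountPath: '/root/.docker')
  ]) {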

However, in this case you need to have the ECR login helper available on the PATH, so the Build Docker Image stage will look like the following:

        stage('Build Docker Image') {
            container('buildkit') {
                sh """
                  wget https://amazon-ecr-credential-helper-releases.s3.us-east-2.amazonaws.com/0.6.0/linux-amd64/docker-credential-ecr-login -O /usr/local/bin/docker-credential-ecr-login
                  chmod 755 /usr/local/bin/docker-credential-ecr-login
                  buildctl build \
                    --frontend dockerfile.v0 \
                    --local context=. \
                    --local dockerfile=. \
                    --output type=image,name=${image},push=true
                  buildctl build \
                    --frontend dockerfile.v0 \
                    --local context=. \
                    --local dockerfile=. \
                    --output type=image,name=${repository}:${tag},push=true
                """
                milestone(1)
            }
        }
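
Note that the helper can only obtain a token if AWS credentials are available to the pod (for example via an instance profile on the worker node or an IAM role bound to the service account). As an optional sanity check, a hypothetical step not in the original pipeline, you can ask the helper for a token directly using the same registry host placeholder as above:

echo "<account_id>.dkr.ecr.<zone_id>.amazonaws.com" | docker-credential-ecr-login get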

There are a number of edge cases, such as a Kubernetes cluster behind a proxy or a container registry that uses self-signed certificates, which require a slightly more complex configuration; we will discuss these later.