Jenkins + k8s + Buildkit: life behind the corporate proxy
Using Buildkit in the corporate world
Published on 15/04/2022 by igor.kolomiyets in Technical Tips

In the previous article, we discussed the options available for building Docker images after Kubernetes removes Dockershim in v1.24.

But what if your on-premises Kubernetes cluster has no direct access to the internet and is forced to go through a proxy server? This scenario is fairly common in the corporate world.

Another common feature of this setup is a self-hosted Docker registry, which in some cases uses a self-signed certificate or a certificate signed by a local, untrusted Certificate Authority. Is it possible to use BuildKit in this setup?

The answer is yes, it is possible, but it requires a little extra effort to set up.


Self-Hosted Docker Registry

Let’s discuss each scenario, starting with the self-hosted Docker registry.

No changes are required if the self-hosted Docker registry uses an SSL certificate signed by a recognized and trusted Certificate Authority.

However, if the certificate is signed by an untrusted CA, or a self-signed certificate is used, the certificate must be added to the moby/buildkit image. Since we run BuildKit in daemon mode, it has to be baked into the image before the container starts.

To do so, create a new image extending moby/buildkit, placing your CA certificate or self-signed certificate into the /usr/local/share/ca-certificates/ directory (moby/buildkit is based on the Alpine distro) and running the update-ca-certificates command.

For example, the Dockerfile will look like this:

FROM moby/buildkit:master

# Add the internal CA (or self-signed) certificate and regenerate the trust bundle
ADD http://example.com/dl/ROOT-CA.crt /usr/local/share/ca-certificates/ROOT-CA.crt
RUN update-ca-certificates

Once you build it with docker build -t myorg/buildkit . and push it to the registry with docker push myorg/buildkit, it is ready to be used in the pipeline. Just replace moby/buildkit in the Pod Template with the name of the newly built image and that is it:

podTemplate(label: 'demo-customer-pod', cloud: 'kubernetes', serviceAccount: 'jenkins',
  containers: [
    containerTemplate(name: 'ng', image: 'iktech/angular-client-slave', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'buildkit', image: 'myorg/buildkit', ttyEnabled: true, privileged: true),
    containerTemplate(name: 'sonarqube', image: 'iktech/sonarqube-scanner', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'kubectl', image: 'roffe/kubectl', ttyEnabled: true, command: 'cat'),
  ],
  volumes: [
    secretVolume(mountPath: '/etc/.ssh', secretName: 'ssh-home'),
    secretVolume(secretName: 'docker-config', mountPath: '/root/.docker')
  ]) {
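
If the pipeline later fails with TLS errors when talking to the registry, it is worth double-checking that the certificate actually made it into the image's trust bundle. A rough check, assuming the image was built from the Dockerfile above:

# List the installed custom certificates and count the entries in the generated bundle
docker run --rm --entrypoint sh myorg/buildkit -c \
  "ls /usr/local/share/ca-certificates/ && grep -c 'BEGIN CERTIFICATE' /etc/ssl/certs/ca-certificates.crt"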

If you push the image to a private registry that requires authentication, you have to create a new secret of type kubernetes.io/dockerconfigjson in the namespace used by Jenkins.

To do so, first prepare the following JSON with the registry credentials:

{
   "auths":{
      "myprivateregistry.com":{
         "username":"user",
         "password":"password",
         "email":"deploy@example.com",
         "auth":"dXNlcjpwYXNzd29yZA=="
      }
   }
}

The content of the auth property is the base64 encoded username:password string.
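
For example, both values can be generated on the command line (a quick sketch; config.json is a hypothetical file holding the JSON above):

# Value for the auth property
echo -n 'user:password' | base64
# dXNlcjpwYXNzd29yZA==

# Base64-encode the whole JSON for the .dockerconfigjson property (-w0 disables line wrapping on GNU base64)
base64 -w0 config.json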

Encode the above JSON with base64 and add it as the .dockerconfigjson property in the following secret:

apiVersion: v1
data:
  .dockerconfigjson: <base64-encoded JSON from the previous step>
kind: Secret
metadata:
  name: mysecret
  namespace: jenkins
type: kubernetes.io/dockerconfigjson
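
As an aside, kubectl can generate an equivalent secret directly from the credentials instead of hand-encoding the JSON, and the stored value can be decoded back for a sanity check (a sketch using the example credentials above):

kubectl create secret docker-registry mysecret \
  --docker-server=myprivateregistry.com \
  --docker-username=user \
  --docker-password=password \
  --docker-email=deploy@example.com \
  --namespace=jenkins

# Decode the stored value to verify it
kubectl get secret mysecret -n jenkins \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d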

Run kubectl apply -f secret.yaml to create the secret in the namespace and then add its name to the Pod Template’s imagePullSecrets property:

podTemplate(label: 'demo-customer-pod', cloud: 'kubernetes', serviceAccount: 'jenkins', imagePullSecrets: ['mysecret'],
  containers: [
    containerTemplate(name: 'ng', image: 'iktech/angular-client-slave', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'buildkit', image: 'myprivateregistry.com/buildkit', ttyEnabled: true, privileged: true),
    containerTemplate(name: 'sonarqube', image: 'iktech/sonarqube-scanner', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'kubectl', image: 'roffe/kubectl', ttyEnabled: true, command: 'cat'),
  ],
  volumes: [
    secretVolume(mountPath: '/etc/.ssh', secretName: 'ssh-home'),
    secretVolume(secretName: 'docker-config', mountPath: '/root/.docker')
  ]) { 

Kubernetes Cluster Behind Proxy

If your Kubernetes cluster is deployed behind a proxy and the worker nodes have no direct internet access to pull images and/or dependencies, two additional settings are required.

To allow BuildKit to pull images when Kubernetes is behind a proxy, add the HTTP_PROXY, HTTPS_PROXY and NO_PROXY environment variables to the pod in question. In this case the container definition will look like this:

podTemplate(label: 'demo-customer-pod', cloud: 'kubernetes', serviceAccount: 'jenkins',
  containers: [
    containerTemplate(name: 'ng', image: 'iktech/angular-client-slave', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'buildkit', image: 'moby/buildkit:master', ttyEnabled: true, privileged: true, 
      envVars: [
        envVar(key: 'HTTP_PROXY', value: 'http://proxy.com:8080'),
        envVar(key: 'HTTPS_PROXY', value: 'http://proxy.com:8080'),
        envVar(key: 'NO_PROXY', value: 'localhost,domain.internal')
    ]),
    containerTemplate(name: 'sonarqube', image: 'iktech/sonarqube-scanner', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'kubectl', image: 'roffe/kubectl', ttyEnabled: true, command: 'cat'),
  ],
  volumes: [
    secretVolume(mountPath: '/etc/.ssh', secretName: 'ssh-home'),
    secretVolume(secretName: 'docker-config', mountPath: '/root/.docker')
  ]) {

To allow BuildKit to pull dependencies or any other network resources while building the image, pass the same settings to the build command as build args. In this case the build command will look like this:

        buildctl build \
          --frontend dockerfile.v0 \
          --local context=. \
          --local dockerfile=. \
          --output type=image,name=${image},push=true \
          --opt build-arg:http_proxy=http://proxy.com:8080 \
          --opt build-arg:https_proxy=http://proxy.com:8080 \
          --opt build-arg:no_proxy=localhost,domain.internal
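
These proxy variables are treated as predefined build args by the Dockerfile frontend, so they normally do not need a matching ARG declaration in the Dockerfile. If tooling inside the build only honours the uppercase variants, the same values can be appended to the command above as additional --opt flags (a sketch):

          --opt build-arg:HTTP_PROXY=http://proxy.com:8080 \
          --opt build-arg:HTTPS_PROXY=http://proxy.com:8080 \
          --opt build-arg:NO_PROXY=localhost,domain.internal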