Integrating Jenkins, Tanzu Build Service and ArgoCD

This post discusses how to integrate Jenkins, Tanzu Build Service, and ArgoCD.

Why Tanzu Build Service?

One of the most significant challenges in adopting Kubernetes is developing a workflow for building production-ready container images. Images need to be built frequently and in a repeatable manner.

Tanzu Build Service (TBS) leverages buildpacks to build containers. A buildpack is a standard approach to ingesting application source code and converting it into a runnable container image. Because buildpacks follow a standard specification, multiple vendors can provide them. VMware bundles an extended set of buildpacks based on the Paketo open-source project.

Normally, containers based on buildpacks are built using a CLI command called pack. The pack tool scans the source code to determine which buildpacks to use and applies their configuration to a base operating system image.
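For context, a buildpack-based build outside of TBS might look like this (the app name, path, and builder image here are illustrative, not from this post):

```shell
# Build a container from local source with a Paketo builder;
# pack detects the language and applies the matching buildpacks.
pack build my-app --path ./my-app --builder paketobuildpacks/builder:base
```

The resulting image can then be run directly with docker run, with no Dockerfile involved.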

Tanzu Build Service extends this model by creating builds from a declarative manifest. When newer buildpacks are imported into the cluster, TBS automatically rebases the containers it manages.
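Under the covers, TBS tracks each image as a declarative kpack Image resource. A minimal sketch of such a manifest follows; the names, registry, and exact field set are illustrative and may vary by TBS/kpack version:

```yaml
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: my-app
spec:
  tag: harbor.example.com/dev/my-app   # where built images are pushed
  serviceAccountName: tbs-service-account
  source:
    git:
      url: https://github.com/example/my-app.git
      revision: main
```

When a newer buildpack or stack is imported, TBS rebuilds or rebases every Image that depends on it, without any developer action.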

Developer Perspective

As a software developer:

  • I don't want to understand how to build Docker containers.
  • I don't want to be responsible for maintaining and patching containers.

Given some source code, I just want to be able to run it on Kubernetes.

CTO Perspective

As a CTO:

  • I want to ensure my organization's applications are always running the latest patches available.
  • I want my developers only to run software from a trustworthy repository.
  • I want my developers to spend more time developing new features and not patching operating systems.

Given a Kubernetes application, I want an easy way to update the container to fix the latest CVEs.

Development Iteration

[Diagram: development iteration flow from code commit through Jenkins, Tanzu Build Service, and ArgoCD]

The above diagram depicts the flow when a developer commits new code.

  1. Jenkins fires a new job when code is checked in to Git on the master branch. At this point, Jenkins can run unit tests, code coverage, and other static analysis tasks.
  2. After the compilation/test phase is complete, Jenkins submits the artifact to the Tanzu Build Service.
  3. After Tanzu Build Service completes the creation of the image, Jenkins updates the kustomization manifest automatically.
  4. ArgoCD applies the latest configuration to the development cluster.

Argo Out of Sync

[Screenshot: Argo CD showing the application out of sync]

Any differences between the Git baseline and what is deployed by Argo are reconciled automatically. The logging in Argo shows when and why artifacts changed.
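This reconciliation behavior comes from an Argo CD Application that points at the GitOps repository. A sketch of such a manifest follows; the application name, destination namespace, and sync options are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: spring-petclinic
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@github.com:jeffellin/spring-petclinic-gitops.git
    targetRevision: master
    path: app
  destination:
    server: https://kubernetes.default.svc
    namespace: spring-petclinic
  syncPolicy:
    automated:
      prune: true    # delete resources removed from Git
      selfHeal: true # revert manual drift in the cluster
```

With selfHeal enabled, any manual change in the cluster is reverted back to the Git baseline.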

Argo Visits Service Changed

[Screenshot: Argo CD showing the visits service change]

Implementing This

Although any continuous delivery tool could be used to implement this workflow, I chose Jenkins for its ubiquity. If you want to follow along and run my pipelines, you will want to make sure you have Jenkins configured to run builds inside Kubernetes containers.

Jenkins Prerequisites

The following plugins must be installed:

  • kubernetes:1.29.2
  • job-dsl:1.77
  • envinject:2.4.0
  • ssh-agent:1.22
  • kubernetes-credentials-provider:0.18-1

I have included the versions of the plugins I am using. The Jenkins plugin ecosystem is very volatile, so version numbers may vary for you over time.
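If you build your own Jenkins controller image, the same plugin set can be preinstalled with jenkins-plugin-cli, which ships in the official jenkins/jenkins images:

```shell
# Preinstall the required plugins at the pinned versions
jenkins-plugin-cli --plugins \
  kubernetes:1.29.2 \
  job-dsl:1.77 \
  envinject:2.4.0 \
  ssh-agent:1.22 \
  kubernetes-credentials-provider:0.18-1
```

Pinning versions this way keeps controller rebuilds repeatable despite the churn in the plugin ecosystem.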

App Seed

In the spirit of configuration as code, I have a Groovy script that seeds jobs within Jenkins. Adding a new job is as simple as adding a new entry to the script and committing the change to Git. In the code below, I map the name of each project to the pipeline file that should be used to build it. Since all my apps are Spring Boot, they can all use the same pipeline. A pipeline file is nothing more than a Groovy script that instructs Jenkins on how to execute a job.

def apps = [
    'spring-petclinic-vets-service': [
        buildPipeline: 'ci/jenkins/pipelines/spring-boot-app.pipeline'
    ],
    'spring-petclinic-api-gateway': [
        buildPipeline: 'ci/jenkins/pipelines/spring-boot-app.pipeline'
    ],
    'spring-petclinic-visits-service': [
        buildPipeline: 'ci/jenkins/pipelines/spring-boot-app.pipeline'
    ],
    'spring-petclinic-httpbin': [
        buildPipeline: 'ci/jenkins/pipelines/spring-boot-app.pipeline'
    ],
    'spring-petclinic-config-server': [
        buildPipeline: 'ci/jenkins/pipelines/spring-boot-app.pipeline'
    ]
]

apps.each { name, appInfo ->
    pipelineJob(name) {
        description("Job to build '$name'. Generated by the Seed Job, please do not change !!!")
        environmentVariables(
            APP_NAME: name
        )
        definition {
            cps {
                script(readFileFromWorkspace(appInfo.buildPipeline))
                sandbox()
            }
        }
        triggers {
            scm('* * * * *')
        }
        properties {
            disableConcurrentBuilds()
        }
    }
}

The complete script is available here

The Pipeline

The pipeline implements several stages.

  1. Fetch from GitHub
  2. Create Image
  3. Update Deployment Manifest

All of these steps are performed in a clean Docker container, as defined in the pod template.

apiVersion: v1
kind: Pod
metadata:
  labels:
    app.kubernetes.io/name: jenkins-build
    app.kubernetes.io/component: jenkins-build
    app.kubernetes.io/version: "1"
spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: pks-cicd
  hostAliases:
    - ip: 192.168.1.154
      hostnames:
        - "small.pks.ellin.net"
    - ip: 192.168.1.80
      hostnames:
        - "harbor.ellin.net"
  containers:
    - name: k8s
      image: harbor.ellin.net/library/docker-build
      command:
        - sleep
      env:
        - name: KUBECONFIG
          value: "/tmp/config/jenkins-sa"
      volumeMounts:
        - name: secret-volume
          readOnly: true
          mountPath: "/tmp/config"
      args:
        - infinity

Since our pod needs access to a remote Kubernetes cluster, I have mounted a service account KUBECONFIG into the pod as a secret.
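The pks-cicd secret referenced above can be created from an existing kubeconfig file. The key name must match the path the pipeline expects (/tmp/config/jenkins-sa); the local file path below is illustrative:

```shell
# Store the service-account kubeconfig under the key "jenkins-sa",
# so it mounts at /tmp/config/jenkins-sa inside the build pod
kubectl create secret generic pks-cicd \
  --from-file=jenkins-sa=./jenkins-sa-kubeconfig
```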

  1. Fetch from GitHub

stage('Fetch from GitHub') {
    steps {
        dir("app") {
            git(
                poll: true,
                changelog: true,
                branch: "main",
                credentialsId: "git-jenkins",
                url: "git@github.com:jeffellin/${APP_NAME}.git"
            )
            sh 'git rev-parse HEAD > git-commit.txt'
        }
    }
}
  2. Create an Image with TBS. Use the -w flag to wait until the build is complete.

stage('Create Image') {
    steps {
        container('k8s') {
            sh '''#!/bin/sh -e
            export GIT_COMMIT=$(cat app/git-commit.txt)
            kp image save ${APP_NAME} \
                --git git@github.com:jeffellin/${APP_NAME}.git \
                -t harbor.ellin.net/dev/${APP_NAME} \
                --env BP_GRADLE_BUILD_ARGUMENTS='--no-daemon build' \
                --git-revision ${GIT_COMMIT} -w
            '''
        }
    }
}

Images Monitored by TBS

[Screenshot: the list of images monitored by TBS]

  3. Update Deployment Manifest

Since we use Kustomize to maintain our deployment versions, we update the kustomization.yaml using kustomize itself.
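To illustrate what that step does: kustomize edit set image rewrites the images: stanza of kustomization.yaml in place, so none of the deployment manifests themselves need to change. A sketch of the file it maintains (the resource file name and tag are illustrative):

```yaml
# gitops/app/kustomization.yaml (sketch)
resources:
  - deployment.yaml
images:
  - name: spring-petclinic-vets-service
    newName: harbor.ellin.net/dev/spring-petclinic-vets-service
    newTag: b1.20210101.120000
```

Each CI run simply bumps the image reference here, and the commit becomes the new Git baseline that Argo reconciles against.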

stage('Update Deployment Manifest') {
    steps {
        container('k8s') {
            dir("gitops") {
                git(
                    poll: false,
                    changelog: false,
                    branch: "master",
                    credentialsId: "git-jenkins",
                    url: "git@github.com:jeffellin/spring-petclinic-gitops.git"
                )
            }
            sshagent(['git-jenkins']) {
                sh '''#!/bin/sh -e
                kubectl get image ${APP_NAME} -o json | jq -r .status.latestImage > containerversion.txt
                export CONTAINER_VERSION=$(cat containerversion.txt)
                cd gitops/app
                kustomize edit set image ${APP_NAME}=${CONTAINER_VERSION}
                git config --global user.name "jenkins CI"
                git config --global user.email "none@none.com"
                git add .
                git diff-index --quiet HEAD || git commit -m "update by ci"
                mkdir -p ~/.ssh
                ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts
                git pull -r origin master
                git push --set-upstream origin master
                '''
            }
        }
    }
}

All of these steps run within the "k8s" container, which was pulled from Harbor.

harbor.ellin.net/library/docker-build

This image is based on the following Dockerfile.

FROM docker:dind

ENV KUBE_VERSION 1.20.4
ENV HELM_VERSION 3.5.3
ENV KP_VERSION 0.2.0

RUN apk add --no-cache ca-certificates bash git openssh curl jq bind-tools subversion git-svn \
 && wget -q https://storage.googleapis.com/kubernetes-release/release/v${KUBE_VERSION}/bin/linux/amd64/kubectl -O /usr/local/bin/kubectl \
 && chmod +x /usr/local/bin/kubectl \
 && wget -q https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz -O - | tar -xzO linux-amd64/helm > /usr/local/bin/helm \
 && chmod +x /usr/local/bin/helm \
 && chmod g+rwx /root \
 && mkdir /config \
 && chmod g+rwx /config \
 && helm repo add "stable" "https://charts.helm.sh/stable" --force-update

RUN wget https://github.com/vmware-tanzu/kpack-cli/releases/download/v${KP_VERSION}/kp-linux-${KP_VERSION}
RUN mv kp-linux-${KP_VERSION} /usr/local/bin/kp
RUN chmod a+x /usr/local/bin/kp

RUN curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
RUN mv kustomize /usr/local/bin

#ADD ca.crt /usr/local/share/ca-certificates
#RUN update-ca-certificates

WORKDIR /config

CMD bash

It's a standard Docker image with some utilities that we commonly need:

  • bash
  • git
  • openssh
  • curl
  • jq
  • helm
  • kubectl
  • kustomize
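Building and pushing this utility image so the pod template can pull it is a standard Docker workflow, run from the directory containing the Dockerfile above:

```shell
# Build the utility image and push it to Harbor,
# where the Jenkins pod template pulls it from
docker build -t harbor.ellin.net/library/docker-build .
docker push harbor.ellin.net/library/docker-build
```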

The complete script is available here

All the source code for the pet-clinic and its deployment are available on GitHub
