
Running integration tests in Kubernetes

Linux containers have changed the way we run, build, and manage applications. As more and more platforms become cloud-native, containers are playing a more important role in every enterprise's infrastructure. Kubernetes (K8s) is currently the best-known solution for managing containers, whether they run in a private, public, or hybrid cloud.

With a container application platform, we can dynamically create a whole environment to run a task and discard it afterward. In an earlier post, we covered how to use Jenkins to run builds and unit tests in containers. Before reading further, I recommend taking a look at that post so you're familiar with the basic concepts of the solution.

Now let's look at how to run integration tests by starting multiple containers to provide a whole test environment.

Let's assume we have a backend application that depends on other services, such as databases, message brokers, or web services. During unit testing, we try to use embedded solutions or simply mock these endpoints to make sure no network connections are required. This requires changes in our code for the scope of the test.

The purpose of the integration test is to verify how the application behaves with other parts of the solution stack. Providing a service depends on more than just our codebase. The overall solution is a mix of modules (e.g., databases with stored procedures, message brokers, or distributed caches with server-side scripts) that must be wired together the right way to provide the expected functionality. This can only be tested by running all these parts next to each other and not enabling a "test mode" within our application.

Whether "unit test" and "integration test" are the right terms in this case is debatable. For simplicity's sake, I'll call tests that run within one process without any external dependencies "unit tests" and those running the app in production mode making network connections "integration tests."

Maintaining a static environment for such tests can be difficult and a waste of resources; this is where the ephemeral nature of dynamic containers comes in handy.

The codebase for this post can be found in my kubernetes-integration-test GitHub repository. It contains an example Red Hat Fuse 7 application (/app-users) that takes messages from AMQ, queries data from a MariaDB, and calls a REST API. The repo also contains the integration test project (/integration-test) and the different Jenkinsfiles explained in this post.

Here are the software versions used in this tutorial:

  • Red Hat Container Development Kit (CDK) v3.4
  • OpenShift v3.9
  • Kubernetes v1.9
  • Jenkins images v3.9
  • Jenkins kubernetes-plugin v1.7

A fresh start every time

We want to achieve the following goals with our integration test:

  • start the production-ready package of our app under test,
  • start an instance of all the dependency systems required,
  • run tests that interact with the app only via its public service endpoints,
  • make sure nothing persists between executions, so we don't have to worry about restoring the initial state, and
  • allocate resources only during test execution.

The solution is based on Jenkins and the jenkins-kubernetes-plugin. Jenkins can run tasks on different agent nodes, while the plugin makes it possible to create those nodes dynamically on Kubernetes. An agent node is created only for the task execution and is deleted afterward.

We need to define the agent node pod template first. The jenkins-master image for OpenShift comes with predefined podTemplates for Maven and NodeJS builds, and admins can add such "static" pod templates to the plugin configuration.

Fortunately, defining the pod template for our agent node directly in our project is possible if we use a Jenkins pipeline. This is obviously a more flexible way, as the whole execution environment can be maintained in code by the development team. Let's see an example:

podTemplate(
  label: 'app-users-it',
  cloud: 'openshift', //This must match the cloud name in the jenkins-kubernetes-plugin config
  containers: [
    //Jenkins agent. Also executes the integration test. Having a 'jnlp' container is mandatory.
    containerTemplate(name: 'jnlp',
                      image: '',
                      resourceLimitMemory: '512Mi',
                      args: '${computer.jnlpmac} ${computer.name}',
                      envVars: [
                        //Heap for the mvn and surefire processes is 1/4 of resourceLimitMemory by default
                        envVar(key: 'JNLP_MAX_HEAP_UPPER_BOUND_MB', value: '64')
                      ]),
    //App under test
    containerTemplate(name: 'app-users',
                      image: '',
                      resourceLimitMemory: '512Mi',
                      envVars: [
                        envVar(key: 'SPRING_PROFILES_ACTIVE', value: 'k8sit'),
                        envVar(key: 'SPRING_CLOUD_KUBERNETES_ENABLED', value: 'false')
                      ]),
    containerTemplate(name: 'mariadb',
                      image: '',
                      resourceLimitMemory: '256Mi',
                      envVars: [
                        envVar(key: 'MYSQL_USER', value: 'myuser'),
                        envVar(key: 'MYSQL_PASSWORD', value: 'mypassword'),
                        envVar(key: 'MYSQL_DATABASE', value: 'testdb'),
                        envVar(key: 'MYSQL_ROOT_PASSWORD', value: 'secret')
                      ]),
    containerTemplate(name: 'amq',
                      image: '',
                      resourceLimitMemory: '256Mi',
                      envVars: [
                        envVar(key: 'AMQ_USER', value: 'test'),
                        envVar(key: 'AMQ_PASSWORD', value: 'secret')
                      ]),
    //External REST API (provided by MockServer)
    containerTemplate(name: 'mockserver',
                      image: 'jamesdbloom/',
                      resourceLimitMemory: '256Mi',
                      envVars: [
                        envVar(key: 'LOG_LEVEL', value: 'INFO'),
                        envVar(key: 'JVM_OPTIONS', value: '-Xmx128m')
                      ])
  ]
)
This pipeline will create all the containers, pulling the given Docker images and running them within the same pod. This means the containers share the localhost interface, so the services can access one another's ports (but we have to think about port-binding collisions). This is how the running pod looks in the OpenShift web console:

The images are set by their Docker URL (OpenShift image streams aren't supported here), so the cluster must be able to access those registries. In the example above, we previously built the image of our app within the same Kubernetes cluster, and we now pull it from the internal registry (docker-registry.default.svc). This image is our release package that may be deployed to a dev, test, or prod environment. It's started with a k8sit application properties profile where the connection URLs point to localhost.
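As a rough illustration only, such a profile might contain localhost-based connection settings like the following (the property names and ports are assumptions, not taken from the repository):

```properties
# Hypothetical k8sit profile: every dependency is reached via 127.0.0.1
# because all containers of the test environment run in the same pod
spring.datasource.url=jdbc:mariadb://127.0.0.1:3306/testdb
amq.broker.url=tcp://127.0.0.1:61616
rest.api.url=http://127.0.0.1:1080
```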

It's important to think about memory usage for containers running Java processes. Current versions of Java (v1.8, v1.9) ignore the container memory limit by default and set a much bigger heap size. Version 3.9 jenkins-slave images support memory limits via environment variables much better than earlier versions. Setting JNLP_MAX_HEAP_UPPER_BOUND_MB=64 was enough for us to run Maven tasks with a 512MiB limit.

All containers within the pod have a shared emptyDir volume mounted at /home/jenkins (the default workingDir). This is used by the Jenkins agent to run pipeline step scripts within the container, and this is where we check out our integration test repository. This is also the current directory where the steps are executed unless they're within a dir('relative_dir') block. Here are the pipeline steps for the example above:

    node('app-users-it') { //must match the label in the podTemplate
        stage('Pull source')
        dir ("integration-test") { //In this example the integration test project is in a subdirectory
            stage('Prepare test')

            //These env vars are used by the tests to send messages to the queue
The pipeline steps run on the jnlp container unless they're within a container('container_name') block:

  • First, we check out the source of the integration project. In this case, it's in the integration-test subdirectory within the repo.
  • The sql/ script creates tables and loads test data into the database. It requires the mysql tool, so it must be run in the mariadb container.
  • Our application (app-users) calls a REST API. We have no image to start this service, so we use MockServer to bring up the HTTP endpoint. It's configured by the mockserver/
  • The integration tests are written in Java with JUnit and executed by Maven. It could be anything else—this is simply the stack we're familiar with.
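A minimal sketch of how such steps can be placed on specific containers (the container names come from the pod template above; the SQL file name and credentials are illustrative assumptions):

```groovy
node('app-users-it') {
    stage('Prepare test') {
        //Runs inside the mariadb container, which has the mysql client
        container('mariadb') {
            sh 'mysql -h 127.0.0.1 -u myuser -pmypassword testdb < sql/setup.sql'
        }
    }
    stage('Run test') {
        //No container block: runs on the jnlp (Maven-capable) container
        sh 'mvn verify'
    }
}
```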

There are a lot of configuration parameters for podTemplate and containerTemplate following the Kubernetes resource API, with a few differences. Environment variables, for example, can be defined at the container level as well as the pod level. Volumes can be added to the pod, but they're mounted on every container at the same mountPath:

  containers: [...],
  volumes: [
      configMapVolume(mountPath: '/etc/myconfig',
        configMapName: 'my-settings'),
      persistentVolumeClaim(mountPath: '/home/jenkins/myvolume',
        claimName: '...')
  ],
  envVars: [
     envVar(key: 'ENV_NAME', value: 'my-k8sit')
  ]

Sounds easy, but…

Running multiple containers in the same pod is a nice way to connect them, but there's an issue we can run into if our containers have entry points with different user IDs. Docker images used to run processes as root, but that's not recommended in production environments due to security concerns, so many images switch to a non-root user. Unfortunately, different images may use a different uid (USER in a Dockerfile), which can cause file-permission issues if they use the same volume.

In this case, the source of conflict is the Jenkins workspace on the workingDir volume (/home/jenkins/workspace/). This is used for pipeline execution and saving step outputs within each container. If we have steps in a container(…) block and the uid in that image is different (non-root) from the one in the jnlp container, we'll get the following error:

touch: cannot touch '/home/jenkins/workspace/k8sit-basic/integration-test@tmp/durable-aa8f5204/jenkins-log.txt': Permission denied

Let's take a look at the USER in the images used in our example:

The default umask in the jnlp container is 0022, so steps in containers with uid 185 and uid 27 will run into the permission issue. The workaround is to change the default umask in the jnlp container so the workspace is accessible to any uid:

containerTemplate(name: 'jnlp',
  image: '',
  resourceLimitMemory: '512Mi',
  command: '/bin/sh -c',
  //Change umask so any uid has permission to the Jenkins workspace
  args: '"umask 0000; /usr/local/bin/run-jnlp-client ${computer.jnlpmac} ${computer.name}"',
  envVars: [
    envVar(key: 'JNLP_MAX_HEAP_UPPER_BOUND_MB', value: '64')
  ])
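The effect of the umask change is easy to demonstrate in any shell (this quick check assumes GNU coreutils for stat):

```shell
cd "$(mktemp -d)"

# With the default umask 0022, new files are 644 (rw-r--r--):
# a step running under a different non-root uid cannot write next to them
umask 0022
touch default-umask-file
stat -c '%a' default-umask-file    # prints 644

# With umask 0000, new files are 666 (rw-rw-rw-): writable by any uid
umask 0000
touch open-umask-file
stat -c '%a' open-umask-file       # prints 666
```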

To see the complete Jenkinsfile that first builds the app and the Docker image before running the integration test, go to kubernetes-integration-test/Jenkinsfile.

In these examples, the integration test runs on the jnlp container because we picked Java and Maven for our test project and the jenkins-slave-maven image can execute that. This is, of course, not mandatory; we can use the jenkins-slave-base image as jnlp and have a separate container to execute the test. See the kubernetes-integration-test/Jenkinsfile-jnlp-base example, where we intentionally separate jnlp and use another container for Maven.

YAML template

The podTemplate and containerTemplate definitions support many configurations, but they lack a few parameters. For example:

  • They can't assign environment variables from a ConfigMap, only from a Secret.
  • They can't set readiness probes for the containers. Without them, Kubernetes reports the pod as running right after kicking off the containers, and Jenkins starts executing the steps before the processes are ready to accept requests. This can lead to failures caused by race conditions. These example pipelines usually work because checkout scm gives the containers enough time to start. A sleep helps, of course, but defining readiness probes is the proper way.
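For reference, a readiness probe in a Kubernetes pod definition looks like this sketch for the mariadb container (the port and timing values are assumptions):

```yaml
- name: mariadb
  readinessProbe:
    tcpSocket:
      port: 3306          #MariaDB default port
    initialDelaySeconds: 5
    periodSeconds: 5
```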

To solve these problems, a yaml parameter was added to podTemplate() in kubernetes-plugin (v1.5+). It supports a full Kubernetes pod resource definition, so we can define any configuration for the pod:

podTemplate(
  label: 'app-users-it',
  cloud: 'openshift',
  //YAML configuration inline. It's YAML, so indentation matters.
  yaml: '''
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: app-users
spec:
  containers:
  #Java agent, test executor
  - name: jnlp
    command:
    - /bin/sh
    - -c
    args:
    #Note the args and syntax for run-jnlp-client
    - umask 0000; /usr/local/bin/run-jnlp-client $(JENKINS_SECRET) $(JENKINS_NAME)
    resources:
      limits:
        memory: 512Mi
  #App under test
  - name: app-users
    ...
'''
  //Volumes, for example, can be defined in the YAML or as a parameter
)

Make sure you update the Kubernetes plugin in Jenkins to v1.5+; otherwise, the yaml parameter will be silently ignored.

The YAML definition and the other podTemplate parameters are supposed to be merged in a way, but it's less error-prone to use only one or the other. If the YAML defined inline in the pipeline is hard to read, see kubernetes-integration-test/Jenkinsfile-yaml, which is an example of loading it from a file.
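One way such file loading can be sketched (the file path is illustrative; readTrusted is a built-in step available to pipelines loaded from SCM):

```groovy
podTemplate(
  label: 'app-users-it',
  cloud: 'openshift',
  //Read the pod definition from a file kept next to the pipeline code
  yaml: readTrusted('integration-test/pod.yaml')
) {
  node('app-users-it') {
    //...test steps...
  }
}
```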

Declarative Pipeline syntax

All the example pipelines above used the Scripted Pipeline syntax, which is practically a Groovy script with pipeline steps. The Declarative Pipeline syntax is a newer approach that enforces more structure on the script by providing less flexibility and allowing no "Groovy hacks." It results in cleaner code, but you may have to switch back to the scripted syntax in complex scenarios.

In Declarative Pipelines, the kubernetes-plugin (v1.7+) supports only the YAML definition for defining the pod:

pipeline {
    agent {
        kubernetes {
            label 'app-users-it'
            yaml '''
apiVersion: v1
kind: Pod
...
'''
        }
    }
    stages {
        stage('Run integration test') {
            ...
        }
    }
}

Setting a different agent for each stage is also possible, as in kubernetes-integration-test/Jenkinsfile-declarative.

Try it on Minishift

If you'd like to try the solution described above, you'll need access to a Kubernetes cluster. At Red Hat, we use OpenShift, which is an enterprise-ready version of Kubernetes. There are several ways to get access to a full-scale cluster.

Running a small one-node cluster on your local machine is also possible, and that's probably the easiest way to try things. Let's see how to set up Red Hat CDK (or Minikube) to run our tests.

After downloading Red Hat CDK, prepare the Minishift environment:

  • Run setup: minishift setup-cdk
  • Set the internal Docker registry as insecure:
    minishift config set insecure-registry
    This is required because the kubernetes-plugin pulls the image directly from the internal registry, which is not HTTPS.
  • Start the Minishift virtual machine (use your free Red Hat account): minishift --username [email protected] --password ... --memory 4GB start
  • Note the console URL (or you can get it later by entering minishift console --url)
  • Add the oc tool to the path: eval $(minishift oc-env)
  • Log in to the OpenShift API (admin/admin):
    oc login

Start a Jenkins master within the cluster using the template available:
oc new-app --template=jenkins-persistent -p MEMORY_LIMIT=1024Mi

Once Jenkins is up, it should be available via a route created by the template. Login is integrated with OpenShift (admin/admin).

Create a new Pipeline project that takes the Pipeline script from SCM, pointing to a Git repository (e.g., kubernetes-integration-test.git) that has the Jenkinsfile to execute. Then simply Build Now.

The first run takes longer, as the images are downloaded from the Docker registries. If everything goes well, we can see the test execution in the Jenkins build's Console Output. The dynamically created pods can be seen on the OpenShift Console under My Project / Pods.

If something goes wrong, try to investigate by checking the following:

  • Jenkins build output
  • Jenkins master pod log
  • Jenkins kubernetes-plugin configuration
  • Events of the created pods (Maven or integration-test)
  • Logs of the created pods

If you'd like to make further executions faster, you can use a volume as a local Maven repository so Maven doesn't have to download dependencies every time. Create a PersistentVolumeClaim:

# oc create -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mavenlocalrepo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF

Add the volume to the podTemplate (and optionally to the Maven template in kubernetes-plugin). See kubernetes-integration-test/Jenkinsfile-mavenlocalrepo:

volumes: [
  persistentVolumeClaim( mountPath: '/home/jenkins/.m2/repository',
    claimName: 'mavenlocalrepo')
]

Note that Maven local repositories are claimed to be "non-thread-safe" and shouldn't be used by multiple builds at the same time. We use a ReadWriteOnce claim here, which will be mounted to only one pod at a time.

The jenkins-2-rhel7:v3.9 image has kubernetes-plugin v1.2 installed. To run the Jenkinsfile-declarative and Jenkinsfile-yaml examples, you must update the plugin in Jenkins to v1.7+.

To completely clean up after stopping Minishift, delete the ~/.minishift directory.


Limitations

Each project is different, so it's important to understand the impact of the following limitations and factors in your case:

  • Using the jenkins-kubernetes-plugin to create the test environment is independent of the integration test itself. The tests can be written in any language and executed with any test framework—which is a great power but also a great responsibility.
  • The whole test pod is created before the test execution and shut down afterward. No solution is provided here to manage the containers during test execution. It's possible to split your tests into different stages with different pod templates, but that adds a lot of complexity.
  • The containers start before the first pipeline steps are executed. Files from the integration test project aren't accessible at that point, so we can't run prepare scripts or provide configuration files for those processes.
  • All containers belong to the same pod, so they must run on the same node. If we need many containers and the pod requires too many resources, there may be no node available to run the pod.
  • The size and scale of the integration test environment should be kept low. Though it's possible to start up several microservices and run end-to-end tests within one pod, the number of required containers can quickly increase. This environment is also not ideal for testing high-availability and scalability requirements.
  • The test pod is re-created for each execution, but the state of the containers is kept during its run. This means the individual test cases aren't independent of each other. It's the test project's responsibility to do cleanup between them if needed.


Summary

Running integration tests in an environment created dynamically from code is relatively easy using a Jenkins pipeline and the kubernetes-plugin. We just need a Kubernetes cluster and some experience with containers. Fortunately, more and more platforms provide official Docker images on one of the public registries. In the worst-case scenario, we have to build some ourselves. The hassle of preparing the pipeline and integration tests pays off quickly, especially if you want to try different configurations or dependency version upgrades during your application's lifecycle.

This was originally published on Medium and is reprinted with permission.
