Most operations shops are well down the road to highly automated configuration and provisioning systems. Sometimes this transformation is part of a DevOps transformation, and other times it’s because automation is the best way to manage change in the environment.
These systems are very good at creating system artifacts that match our needs, but issues still arise the first time a development team deploys an application to the node. Problems aren’t caught until a human gets involved, and troubleshooting is a long, manual process involving checklists and ad hoc fixes. How do we smooth out that rocky road where operations and development artifacts meet?
Continuous integration and delivery (CI/CD) has been a watchword in IT shops for years, but mainly as a development process. CI is about automating the testing of changes made to the code and preventing changes introduced to the codebase from breaking the application. CD is about making sure that any resulting artifacts are suitable for use in a production environment. Any application build that makes it through the integration tests can be flagged for easy deployment.
From an operations point of view, our “application” is a fit server or container. Our “code” is the individual automation snippets that perform actions. Our “build” is the blueprint for stringing those snippets together to get a working system. Every change to an automation script or file needs to be tested to make sure it doesn’t cause problems.
Think of the tests of your automation like bench or smoke tests. Most automation doesn’t consist of one big blob of code for each system. Rather, we’ve adopted patterns like “don’t repeat yourself” (DRY) to build reusable chunks of automation that we can recombine to get the configuration we want. These kinds of tests let us find integration issues between roles, uncover problems before they show up in production, and generally prove that the system is fit for its purpose. A CI/CD pipeline is a tool designed to run and manage these kinds of tests and sign-offs.
Foundations for strong pipelines
We need to agree on a few principles to take advantage of a pipeline in operations:
- All infrastructure code is in version control
- Version control is important in a non-pipeline environment but critical to the operation of a pipeline. Ops needs to be aware of which changes “broke the build” and provide clear guidance on what’s deployable. This means you can be sure that a container image built and stored in a registry by the pipeline, or a virtual machine provisioned and configured by the automation, will be identical and functional.
- All infrastructure code changes get tested individually
- We make small changes to our codebase, and those changes are vetted for basic correctness. That includes syntax checking, functionality, dependencies, etc. This level of testing is like unit testing for an application.
- All infrastructure code gets tested as a combined system
- Infrastructure components are made up of discrete, smaller chunks and need to be tested as a whole. These tests are for the characteristics and behaviors of what we decide is a “working system.” Our automation may be correct and working but still be incomplete or have conflicting steps in different roles (e.g., we started MySQL but didn’t open the firewall, or we locked down the port in a security role).
This is all abstract, so I’ll walk through a simple example. The roles and the tests aren’t production quality, but hopefully they’re functional enough for you to use as a starting point for your own investigations. I’m also going to work with the tools I’m most familiar with. Your environment will vary, but the concepts should translate between any of the tools in your toolbox. If you’d like to see the example code, you can check out the GitHub repository.
Here’s what’s in my toolbox:
- Ansible: A popular automation engine written in Python that I’ve been using for several years, which I’ll use to build a single role for testing
- Molecule: A newer, role-based testing harness for Ansible that brings some test-driven design ideas to role development
- Testinfra: A Pytest-based framework for inspecting system states, which I’ll use to test the behavior of the role
- Jenkins Blue Ocean: A pipeline plugin for Jenkins that provides a new UI for pipelines and supports Jenkinsfile definitions of a pipeline
Here are some other details about the setup on a Fedora 28 system:
- Since Ansible, Molecule, and Testinfra are all distributed via PyPI, I’ve installed them all globally with pip.
- There’s a container for Jenkins with the new UI plugin, so I run that on the same Fedora 28 host.
- Molecule supports testing inside a container, and Jenkins can use that container as a builder in a pipeline. To get the Docker plugin in the Jenkins container talking to Docker on the host, I ran that container as `privileged`, mounted the Docker socket file, and changed the SELinux context on the host. You’ll need to decide what’s best for your environment, as this wouldn’t be the right choice for anything beyond this proof of concept.
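As a rough sketch, the Jenkins container invocation described above might look like the following. The image name, port mappings, and socket path are assumptions on my part, not taken from the article; only the `--privileged` flag and the mounted Docker socket come from the setup described.

```shell
# Hypothetical run command for the Jenkins Blue Ocean container.
# --privileged and the mounted Docker socket let the Jenkins Docker
# plugin reach the host daemon; an SELinux context change on the
# socket may also be required, depending on policy.
docker run --name jenkins -d --privileged \
  -p 8080:8080 -p 50000:50000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkinsci/blueocean
```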
Later, I’ll show you the CentOS 7 base image I built for Molecule that includes all the same dependencies as the Fedora 28 host where we developed the role.
Create the role directory
Let’s build a role to install an Apache web server. In the top-level project folder, we’ll have our inventory, a site playbook, and a roles directory. In the roles directory, we’ll use Molecule to initialize the role directory structure.
molecule init role -r webserver
--> Initializing new role webserver...
Initialized role in /root/iac-ci/blog/ansible/roles/webserver successfully.
In the newly created webserver directory, you’ll see something that looks like the result of an ansible-galaxy init command, with the addition of a molecule directory. I haven’t changed any of the defaults on the command line, which means Molecule will use Docker as a target to run playbooks and Testinfra as the verifier to run tests. You can look at molecule/default/molecule.yml for those details or to change options.
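For orientation, a minimal molecule.yml with those defaults looks roughly like this. Treat it as a sketch rather than the generated file: the exact keys vary between Molecule releases, and the image name here is just the base image we build later in this article.

```yaml
# Sketch of molecule/default/molecule.yml with the default
# driver and verifier; consult your generated file for the
# exact layout in your Molecule version.
driver:
  name: docker
platforms:
  - name: instance
    image: molecule-base
verifier:
  name: testinfra
```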
Write our role
Normally we would fire up our editor on tasks/main.yml and start writing Ansible tasks. But since we’re thinking ahead about tests, let’s start there (otherwise known as test-driven design). Since we need a working webserver, we have two requirements:
- is the service running?
- is there a page to serve?
So we can open the default Python script that Molecule created for Testinfra, molecule/default/tests/test_default.py, and add our checks after the existing test.
def test_webserver_state(host):
    httpd = host.service("httpd")
    assert httpd.is_running
    index = host.file("/var/www/html/index.html")
    assert index.exists
We’re using two built-in modules, Service and File, to check the state of the system after the Ansible role executes. We’ll use these same tests for our smoke testing, but in a live environment, you’ll want more sophisticated checks against expected behaviors.
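As an illustration of what “more sophisticated” could mean, the same Service and File modules expose richer attributes. The function names and the expected page content below are hypothetical, not from the example repository:

```python
# Hypothetical richer Testinfra checks; names and the expected
# index.html content are illustrative only.
def test_httpd_running_and_enabled(host):
    httpd = host.service("httpd")
    assert httpd.is_running
    assert httpd.is_enabled  # survives a reboot

def test_index_contents(host):
    index = host.file("/var/www/html/index.html")
    assert index.exists
    assert index.user == "root"
    assert index.contains("Welcome")  # assumed page content
```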
Now we can add our desired tasks and templates to the role to satisfy those requirements. We’ll install the package and create the templated index.html in tasks/main.yml, and you can see the rest in the repository.
- name: Install Apache
- name: Create index
- restart httpd
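The lines above are just the task and handler names; a minimal version of the full tasks might look like the following. The module arguments and template filename are assumptions based on the description, not copied from the repository:

```yaml
# tasks/main.yml (sketch; argument values are assumed)
- name: Install Apache
  yum:
    name: httpd
    state: present

- name: Create index
  template:
    src: index.html.j2   # assumed template name
    dest: /var/www/html/index.html
  notify: restart httpd

# handlers/main.yml (sketch)
- name: restart httpd
  service:
    name: httpd
    state: restarted
```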
Running a test
The final step before running any tests with Molecule or Testinfra is to create the base image we need. Not only do we need the dependencies for the frameworks, we also want to use a container that has an init system. This lets us test for the eventual target of a virtual machine without needing a second available VM.
RUN yum -y install epel-release && \
    yum -y install gcc python-pip python-devel openssl-devel docker openssh-clients && \
    pip install docker molecule testinfra ansible && \
    yum clean all
Give the image a name you’ll remember, since we’ll use it in our Jenkins pipeline.
docker build . -t molecule-base
You can run the tests on the host now, before you build the pipeline. From the roles/webserver directory, run molecule test, which will execute its default matrix along with the Testinfra tests. You can control the matrix, but when we build our pipeline, we’ll opt to run the steps individually instead of using the test command.
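For example, assuming Molecule v2 command names (your version’s matrix may differ), the individual steps can be run one at a time:

```shell
# Run the pieces of the default test matrix individually
# instead of the all-in-one `molecule test`.
cd roles/webserver
molecule lint        # check syntax and style of the role
molecule create      # start the Docker instance
molecule converge    # apply the role to the instance
molecule verify      # run the Testinfra tests
molecule destroy     # clean up the instance
```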
With our role written and our tests in place, we can create the pipeline to control our build.
Building the pipeline
The Jenkins installation guide shows you how to get the container and how to unlock Jenkins once it’s running. Alternatively, you can use this pipeline tutorial, which will also walk you through connecting your Jenkins instance to GitHub. The pipeline will check out your code every time it runs, so it’s important to have the Ansible roles and the Molecule and Testinfra tests under source control.
Head to the web UI for Jenkins Blue Ocean at localhost:8080/blue and click on New Pipeline. If you’re using a fork of my GitHub repository, Jenkins will detect the existing Jenkinsfile and start running the pipeline immediately, so you may want to choose a fresh repository without a Jenkinsfile.
On the new pipeline, you should see a Pipeline Settings column on the right side. Select Docker in the drop-down box and add the name of your base image to the box labeled Image. This will be the base image used for all Docker containers created by this pipeline.
Under Environment, click the blue + symbol and add ROLEDIR under Name and ansible/roles/webserver under Value. We’ll use this several times in the pipeline. Setting an environment variable at the top level means it can be accessed in any stage.
Click on the + in the center of the page to create a new stage. Stages are chunks of work performed by the pipeline job, and each stage can be made up of multiple sequential steps. For this pipeline, we’ll create a stage for each Molecule command we want to run, one for the Ansible playbook run against the VM, and one for the Testinfra tests run against the VM.
The Molecule stages will all be shell commands, so click Add Step and select Shell Script. In the box, add the following lines:
This makes sure we’re in the role directory within the local Jenkins working directory before calling Molecule. You can look at the test matrix to see which specific checks you want to run. You won’t need to create or destroy any instances, as Jenkins will manage those containers.
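The exact lines for each Molecule stage aren’t reproduced here; as a sketch, the shell step for a lint stage could look like this (the Molecule subcommand changes per stage):

```shell
# Change into the role inside the Jenkins workspace, then run one
# Molecule step; ROLEDIR is the pipeline environment variable set earlier.
cd $ROLEDIR
molecule lint
```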
Once you’ve added a few stages, you can hit Save. This will automatically commit the Jenkinsfile to the repository and start the pipeline job. You have the choice of committing to master or to a new branch, which means you can test new code without breaking production.
Alternatively, since the Jenkinsfile is committed to the same repository as the rest of our code, you can edit the file directly to duplicate the Molecule stages and use Git to commit the changes from the command line. Then have Jenkins scan the repository and pick up the new stages.
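If you do edit the Jenkinsfile by hand, a duplicated Molecule stage in the declarative syntax looks roughly like this hand-written sketch (not copied from the example repository; the agent/image wiring lives at the top level of the real file):

```groovy
// Sketch of one declarative-pipeline stage running a Molecule step.
stage('Molecule create') {
    steps {
        sh '''
            cd $ROLEDIR
            molecule create
        '''
    }
}
```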
For the Ansible stage, we need to make sure we have an entry for the test host in the inventory file and a site playbook that includes the role we want to run.
- hosts: all
  roles:
    - webserver
The step type for this stage is “Invoke an Ansible playbook.” Fill in all the values that are appropriate. For anything needing a path, like Playbook, use a relative path from the base of the repository, such as ansible/site.yml. You can import SSH keys or use an Ansible vault file for credentials.
Our last stage is the Testinfra stage, which will also be a shell script. To run Testinfra from the command line without invoking Molecule, we need to make sure to pass some variables. Testinfra can use Ansible as a connection backend, so we can use the same inventory and credentials as before.
In the Shell Script box, add the following:
testinfra --ssh-identity-file=$KEYFILE --connection=ansible --ansible-inventory=$MOLECULE_INVENTORY_FILE
In the Settings for the stage, create the MOLECULE_INVENTORY_FILE environment variable and point it at the inventory file. The KEYFILE variable is created by a variable binding of a credential. This needs to be done in the Jenkinsfile, as configuring that step isn’t yet supported in the interface. It makes the same SSH key configured for the Ansible stage available as a file for the duration of the stage.
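In the Jenkinsfile, that binding can be expressed with the Credentials Binding plugin’s sshUserPrivateKey step; here is a sketch in which the credentials ID is a placeholder, not a value from the article:

```groovy
// Bind an SSH private-key credential to KEYFILE for this stage only.
// 'ansible-ssh-key' is a placeholder credentials ID.
withCredentials([sshUserPrivateKey(credentialsId: 'ansible-ssh-key',
                                   keyFileVariable: 'KEYFILE')]) {
    sh 'testinfra --ssh-identity-file=$KEYFILE --connection=ansible --ansible-inventory=$MOLECULE_INVENTORY_FILE'
}
```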
Between the Jenkinsfile in the example repository and these steps, you should have a working pipeline. And hopefully you’ve gained a grasp of not only how it works, but why it’s worth the effort to test our infrastructure the same way our developer colleagues test application code changes. While the examples are simple, you can build a test suite that ensures the infrastructure code deploys a system the application code can rely on. In the spirit of DevOps, you’ll need to work with your development team to hash out those acceptance tests.