How to ‘Kubernetize’ an OpenStack service

Kuryr-Kubernetes is an OpenStack project, written in Python, that serves as a container network interface (CNI) plugin providing networking for Kubernetes pods by using OpenStack Neutron and Octavia. The project stepped out of its experimental phase and became a fully supported OpenStack ecosystem citizen in OpenStack's Queens release (the 17th version of the cloud infrastructure software).

One of Kuryr-Kubernetes' main advantages is that you don't need to use multiple software-defined networks (SDNs) for network management in OpenStack and Kubernetes. It also solves the issue of double encapsulation of network packets when running a Kubernetes cluster on an OpenStack cloud. Imagine using Calico for Kubernetes networking and Neutron for networking the Kubernetes cluster's virtual machines (VMs). With Kuryr-Kubernetes, you use just one SDN, Neutron, to provide connectivity for both the pods and the VMs those pods are running on.

You can also run Kuryr-Kubernetes on a bare-metal node as a normal OpenStack service. This way, you can provide interconnectivity between Kubernetes pods and OpenStack VMs, even if those clusters are separate, just by putting Neutron-agent and Kuryr-Kubernetes on your Kubernetes nodes.

Kuryr-Kubernetes consists of three components:

  • kuryr-controller observes Kubernetes resources, decides how to translate them into OpenStack resources, and creates those resources. Information about the OpenStack resources is saved in annotations on the corresponding Kubernetes resources (see the sketch after this list).
  • kuryr-cni is an executable run by the CNI that passes calls on to kuryr-daemon.
  • kuryr-daemon should be running on every Kubernetes node. It watches the pods created on the host and, when a CNI request comes in, wires the pods according to the Neutron ports recorded in the pod annotations.
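
As an illustration, here is roughly what such an annotated pod looks like. The annotation key matches the one Kuryr uses (openstack.org/kuryr-vif), but the payload, which in reality is a full JSON-serialized VIF object describing the Neutron port, is heavily abbreviated here:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    openstack.org/kuryr-vif: '{"port_id": "65f3b428-...", "vif_type": "ovs", "active": true}'
spec:
  containers:
  - name: app
    image: nginx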

In general, the control part of a CNI plugin (like Calico or Nuage) runs as a pod on the Kubernetes cluster where it provides networking, so, naturally, the Kuryr team decided to follow that model. But converting an OpenStack service into a Kubernetes app wasn't exactly a trivial task.

Kuryr-Kubernetes requirements

Kuryr-Kubernetes is just an application, and applications have requirements. Here is what each component needs from the environment and how those needs translate to Kubernetes primitives.

kuryr-controller

  • There should be exactly one instance of kuryr-controller (although that number may be higher with the A/P high-availability feature implemented in OpenStack Rocky). This is easy to achieve using Kubernetes' Deployment primitive.
  • Kubernetes ServiceAccounts can provide access to the Kubernetes API with a granular set of permissions (the supporting objects are sketched after this list).
  • Different SDNs provide access to the OpenStack API in different ways. API SSL certificates should also be provided, for example by mounting a Secret in the pod.
  • To avoid a chicken-and-egg problem, kuryr-controller should run with hostNetworking to bypass using Kuryr to get its IP.
  • Provide a kuryr.conf file, ideally by mounting it as a ConfigMap.
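
The Deployment below references a kuryr-controller ServiceAccount, a kuryr-config ConfigMap, and a kuryr-certificates Secret. As a minimal sketch (the RBAC rules and kuryr.conf contents here are illustrative, not the exact set Kuryr requires), the supporting objects might look like this:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kuryr-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kuryr-controller
rules:
# kuryr-controller watches Kubernetes resources and writes back annotations
- apiGroups: [""]
  resources: ["pods", "services", "endpoints", "namespaces"]
  verbs: ["get", "list", "watch", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kuryr-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kuryr-controller
subjects:
- kind: ServiceAccount
  name: kuryr-controller
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kuryr-config
  namespace: kube-system
data:
  kuryr.conf: |
    [DEFAULT]
    debug = false
    [neutron]
    # illustrative keystoneauth settings; fill in real credentials
    auth_url = https://keystone.example.com:5000/v3

The kuryr-certificates Secret can be created from the CA bundle your OpenStack endpoints use, for example with kubectl create secret generic kuryr-certificates --from-file=ca.crt -n kube-system.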

In the end, we get a Deployment manifest similar to this:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    name: kuryr-controller
  name: kuryr-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kuryr-controller
      name: kuryr-controller
    spec:
      serviceAccountName: kuryr-controller
      automountServiceAccountToken: true
      hostNetwork: true
      containers:
      - image: kuryr/controller:latest
        name: controller
        volumeMounts:
        - name: config-volume
          mountPath: "/etc/kuryr/kuryr.conf"
          subPath: kuryr.conf
        - name: certificates-volume
          mountPath: "/etc/ssl/certs"
          readOnly: true
      volumes:
      - name: config-volume
        configMap:
          name: kuryr-config
      - name: certificates-volume
        secret:
          secretName: kuryr-certificates
      restartPolicy: Always

kuryr-daemon and kuryr-cni

Both of these components should be present on every Kubernetes node. When the kuryr-daemon container starts on a Kubernetes node, it injects the kuryr-cni executable and reconfigures the CNI to use it. Let's break that down into requirements.

  • kuryr-daemon should run on every Kubernetes node. This means it can be represented as a DaemonSet.
  • It should be able to access the Kubernetes API. This can be implemented with ServiceAccounts.
  • It also needs a kuryr.conf file. Again, the best way is to use a ConfigMap.
  • To perform networking operations on the node, it must run with hostNetworking and as a privileged container.
  • As it needs to inject the kuryr-cni executable and the CNI configuration, the Kubernetes nodes' /opt/cni/bin and /etc/cni/net.d directories must be mounted on the pod.
  • It also needs access to the Kubernetes nodes' netns, so /proc must be mounted on the pod. (Note that you cannot use /proc as a mount destination, so it must be named differently and Kuryr must be configured to know that.)
  • If it is running with the Open vSwitch Neutron plugin, it must mount /var/run/openvswitch.
  • To identify pods running on its node, nodeName should be passed into the pod. This can be done using environment variables. (The same is true of the pod name, which will be explained below.)

This produces a more complicated manifest:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kuryr-cni
  namespace: kube-system
  labels:
    name: kuryr-cni
spec:
  template:
    metadata:
      labels:
        name: kuryr-cni
    spec:
      hostNetwork: true
      serviceAccountName: kuryr-controller
      containers:
      - name: kuryr-cni
        image: kuryr/cni:latest
        command: [ "cni_ds_init" ]
        env:
        - name: KUBERNETES_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: KURYR_CNI_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        securityContext:
          privileged: true
        volumeMounts:
        - name: bin
          mountPath: /opt/cni/bin
        - name: net-conf
          mountPath: /etc/cni/net.d
        - name: config-volume
          mountPath: /etc/kuryr/kuryr.conf
          subPath: kuryr-cni.conf
        - name: proc
          mountPath: /host_proc
        - name: openvswitch
          mountPath: /var/run/openvswitch
      volumes:
        - name: bin
          hostPath:
            path: /opt/cni/bin
        - name: net-conf
          hostPath:
            path: /etc/cni/net.d
        - name: config-volume
          configMap:
            name: kuryr-config
        - name: proc
          hostPath:
            path: /proc
        - name: openvswitch
          hostPath:
            path: /var/run/openvswitch

Injecting the kuryr-cni executable

This part took us the longest time. We went through four different approaches until everything worked. Our solution was to inject a Python application from the container onto the container's host, and to inject the CNI configuration files as well (though the latter is trivial). Most of the issues were related to the fact that Python applications aren't binaries, but scripts.
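
For context, the injected CNI configuration is just a small JSON file dropped into the host's /etc/cni/net.d directory that tells kubelet to invoke the kuryr-cni executable; it looks along these lines (the exact fields here are illustrative):

{
  "cniVersion": "0.3.1",
  "name": "kuryr",
  "type": "kuryr-cni",
  "kuryr_conf": "/etc/kuryr/kuryr.conf",
  "debug": true
}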

We first tried making our kuryr-cni script a binary using PyInstaller. Although this worked fairly well, it had serious disadvantages. For one thing, the build process was complicated: we had to create a container with PyInstaller and Kuryr-Kubernetes that generated the binary, then build the kuryr-daemon container image with that binary. Also, due to PyInstaller quirks, we ended up with a lot of misleading tracebacks in kubelet logs; that is, when an exception occurred, the logs would show the wrong traceback. The deciding factor was that PyInstaller changed the paths to the included Python modules. This meant that some checks in the os.vif library failed and broke our continuous integration (CI).

We also tried injecting a Python virtual environment (venv) containing a CPython binary, the kuryr-kubernetes package, and all its requirements. The problem is that Python venvs aren't designed to be portable. Even though there is a --relocatable option in the virtualenv command-line tool, it doesn't always work. We abandoned that approach.

Then we tried what we think is the "proper" way: injecting the host with an executable script that does docker exec -i on a kuryr-daemon container. Because the kuryr-kubernetes package is installed in that container, it can easily execute the kuryr-cni binary. All the CNI environment variables must be passed through the docker exec command, which has been possible since Docker API v1.24. Then, we only needed to identify the Docker container where it should be executed.

At first, we tried calling the Kubernetes API from the kuryr-daemon container's entry point to get its own container ID. We quickly discovered that this causes a race condition: sometimes the entry point runs before the Kubernetes API is updated with its container ID. So, instead of calling the Kubernetes API, we made the injected CNI script call the Docker API on the host. Then it is easy to identify the kuryr-daemon container using the labels added by Kubernetes.
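
To make that concrete, here is a hypothetical sketch of the injected script's lookup-and-exec logic, assuming the Docker SDK for Python is available on the host (the real script may talk to the Docker REST API directly). The daemon knows its own pod name from the KURYR_CNI_POD_NAME variable in the DaemonSet manifest above and can bake it into the script at injection time:

#!/usr/bin/env python
# Hypothetical sketch, not the actual Kuryr code: locate the kuryr-daemon
# container through Docker labels and re-exec the CNI call inside it.
import os
import docker

client = docker.APIClient(base_url='unix://var/run/docker.sock')

# kubelet labels every container it starts with the pod's name and
# namespace, so no Kubernetes API call (and no race condition) is needed.
filters = {'label': [
    'io.kubernetes.pod.name=kuryr-cni-abc12',  # baked in at injection time
    'io.kubernetes.pod.namespace=kube-system',
    'io.kubernetes.container.name=kuryr-cni',
]}
container_id = client.containers(filters=filters)[0]['Id']

# Forward the CNI_* environment variables that kubelet passes to CNI
# plugins; docker exec has accepted them since Docker API v1.24.
env_args = []
for key, value in os.environ.items():
    if key.startswith('CNI_'):
        env_args += ['-e', '{}={}'.format(key, value)]

# stdin (-i) carries the CNI network configuration JSON to the plugin.
os.execvp('docker', ['docker', 'exec', '-i'] + env_args +
          [container_id, 'kuryr-cni'])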

Lessons learned

In the end, we have a working system that is easy to deploy and manage because it runs on Kubernetes. We have proved that Kuryr-Kubernetes is just an application. While it took a lot of time and effort, the results are worth it: a "Kubernetized" application is much easier to manage and distribute.


Michał Dulko will present How to make a Kubernetes app from an OpenStack service at OpenStack Summit, November 13-15 in Berlin.
