Kubernetes GitOps Continuous Integration and Delivery with Fleet and Rancher

10 Jun, 2021


If you would like to know more about how to implement modern data and cloud technologies, such as Kubernetes, in your business, we at Digitalis do it all: from cloud and Kubernetes migration to fully managed services, we can help you modernize your operations, data, and applications. We provide consulting and managed services on Kubernetes, cloud, data, and DevOps.

Contact us today for more information or to learn more about each of our services.

Introduction

SUSE Rancher is a powerful, fully open-source tool for managing Kubernetes in the cloud, on-prem, or even on developers’ laptops. It provides a powerful and well-designed UI that gives you a view over all of your Kubernetes clusters.

Furthermore, from version 2.5 Rancher comes bundled with Fleet, another open-source SUSE tool, for GitOps-style continuous integration and delivery.

What is GitOps?

GitOps is a model for designing continuous integration and continuous delivery where the code you are deploying is stored and versioned in a Git repository. You can also control the process by enforcing peer review (pull requests) and ensure quality by unit testing the code.

GitOps with Fleet

The last step is the deployment to either development or production. Here is where you can take advantage of Fleet. A well-implemented GitOps environment will lead to increased productivity by improving the quality and reducing the time required to deploy.

Fleet

Fleet implements GitOps at scale, allowing you to manage up to one million clusters, yet it is small enough to run locally on a developer’s laptop using, for example, k3d (a lightweight wrapper to run k3s).

Rancher Continuous Delivery

I’ve always been a fierce advocate of Helm as the sole package manager for Kubernetes, and I go to the extreme of creating Helm charts for the smallest of deployments, such as a single secret. But I understand that not everyone is as strict as I am or has the same preferences. This is why with Fleet you can use all of the most common deployment methods:

  • Raw Kubernetes YAML: config as it is, simple deployments, secrets, etc.
  • Kustomize: you take raw YAML and apply some changes (patches) to it before installing
  • Helm: installs Helm charts from a local directory, Git repository or chart repository.
  • A combination of all three!

Setting up a Lab

Let’s set up a lab environment to learn about Rancher and Fleet.

Kubernetes

The instructions below show how to set up a locally running Kubernetes cluster so you can play with SUSE Rancher and Fleet. I’m going to use k3d (a wrapper around k3s).

The following command will create a Kubernetes cluster with one server and two agent nodes. We also map the cluster’s port 80 to local port 8081 and port 443 to 8443 to allow external access to the cluster.

~$ k3d cluster create --api-port 6550 \
  -p "8081:80@loadbalancer" \
  -p "8443:443@loadbalancer" \
  --agents 2 \
  rancher-lab

k3d installs the Traefik ingress controller by default, so we don’t need to do anything else. If you prefer minikube, you can use the script below to start it up and set up the load balancer using MetalLB.

#!/bin/bash

minikube start --memory 4096 --cpus=2 --driver=hyperkit
for a in metallb storage-provisioner ingress; do
  minikube addons enable $a
done

MINIKUBE_IP=$(minikube ip)
LB_START=${MINIKUBE_IP%.*}.200
LB_END=${MINIKUBE_IP%.*}.210

cat <<END | kubectl apply -n metallb-system -f -
apiVersion: v1
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - ${LB_START}-${LB_END}
kind: ConfigMap
metadata:
  name: config
END

Installing Rancher

Whilst you can install Fleet without Rancher, you will gain much more from the full installation. Check out the Rancher documentation for a full list of the available options. For this example, I’m going to use the defaults.

The first thing is to install cert-manager. Whether you use Let’s Encrypt or Rancher-generated SSL certificates, cert-manager is a prerequisite for installing Rancher. You can install it from its Helm chart using:

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager Helm chart
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true \
  --version v1.0.4

Now let’s install Rancher. My local IP address is 192.168.1.23, so I’m going to use nip.io as my DNS. This is pretty handy for lab work as it gives me an FQDN to work with when accessing Rancher.

# Add repo
helm repo add \
  rancher-latest \
  https://releases.rancher.com/server-charts/latest

# Install rancher with default settings
helm upgrade --install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --create-namespace \
  --set hostname=192.168.1.23.nip.io

Wait for Rancher to start up (kubectl get po -w -n cattle-system) and then you should be able to access it at the following URL (replace the IP with yours):

https://192.168.1.23.nip.io:8443/

If there are no issues you should be able to log in to Rancher and access the Cluster Explorer, from where you can select the Continuous Delivery tab.


Organizing your Git Repository

There is no right or wrong way to do this. At the end of the day, it comes down to preference and the level of complexity and control you would like to have.

Control Freak

If you want to maximize control over your deployments you will need several Git repositories. Each application you deploy will need a minimum of two:

  1. A repository holding the Fleet configuration (fleet.yaml) which you can branch and tag
  2. A repository for the application (helm, kustomize or raw yaml)
  3. If the application has multiple components you will also need one repository for each of them.

Pros: full control of your application versions and deployments, as you version the pipeline configs separately from the application configurations.
Cons: it adds overhead to your daily work, as you will end up with a lot of repositories to manage.
Who should use it? Control freaks and large DevOps teams that share resources.

Passive-aggressive

If you are not too bothered about the pipeline configuration because it hardly changes, you can decrease the number of Git repositories:

  1. A repository per application (helm, kustomize or raw yaml) together with the Fleet deployment configuration (fleet.yaml)

Pros: full control of the application versions as individual entities.
Cons: you are tying the pipeline code to the application code, which gives you less control over versions.
Who should use it? This is probably a middle-ground approach recommended for most teams.

Cool as a cucumber

The simplest option, but the one with the least control, is to use a single repository for all your applications. In this case you just need to organize the applications into directories.

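A single repository organized along these lines would work; this is just a hypothetical layout, and the application and file names are placeholders:

all-apps/
├── app1/
│   └── fleet.yaml        # Helm-based deployment
├── app2/
│   ├── kustomization.yaml
│   └── deployment.yaml   # Kustomize-based deployment
└── app3/
    └── manifests.yaml    # raw Kubernetes YAML

A single GitRepo pointing at the repository can then deploy all three applications, for instance by listing each directory under its paths field.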

Pros: very simple to manage, with a single repo to update and version control.
Cons: when you update one app and commit the changes, you drag along any pending changes to the other apps, which is likely to be undesirable.
Who should use it? I would only recommend it for very small teams with a couple of applications, and for lab work.

Authentication to Git Repositories

If you’re using the UI you will be given the option to configure how to access the Git repositories; the default is no authentication. If you use the command line you will need to create the secret manually before deploying the GitRepo configuration. See the two examples below; the first one uses SSH keys:

apiVersion: v1
kind: Secret
metadata:
  name: gitrepo-auth-ssh
  namespace: fleet-local
data:
  ssh-privatekey: <priv-key-base64>
  ssh-publickey: <pub-key-base64>
type: kubernetes.io/ssh-auth

And for HTTP authentication:

apiVersion: v1
kind: Secret
metadata:
  name: gitrepo-auth-https
  namespace: fleet-local
data:
  password: <password-base64>
  username: <username-base64>
type: kubernetes.io/basic-auth
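
Rather than base64-encoding the values by hand, you can create equivalent secrets with kubectl. This is only a sketch; the key paths and credentials are placeholders:

# SSH key authentication
kubectl create secret generic gitrepo-auth-ssh \
  --namespace fleet-local \
  --type=kubernetes.io/ssh-auth \
  --from-file=ssh-privatekey=/path/to/id_rsa \
  --from-file=ssh-publickey=/path/to/id_rsa.pub

# HTTP basic authentication
kubectl create secret generic gitrepo-auth-https \
  --namespace fleet-local \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=<username> \
  --from-literal=password=<password>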

Fleet.yaml

The fleet.yaml configuration file is the core of the GitOps pipeline used by Rancher. It describes how to deploy your application, whether via Helm, raw YAML or Kustomize. The format is simple to understand and write.

The example below shows how to install a Helm chart from an external Git repository:

namespace: sample-helm

# Custom helm options
helm:
  # The release name to use. If empty a generated release name will be used
  releaseName: httpbin

  # The directory of the chart in the repo.
  chart: "github.com/twingao/httpbin"

  # An HTTPS URL to a valid Helm repository to download the chart from
  repo: ""

  # Used if repo is set to look up the version of the chart
  version: "master"

  # Force-recreate resources that cannot be updated
  force: false

  # How long Helm waits for the release to become active. If the value
  # is less than or equal to zero, Helm will not wait
  timeoutSeconds: 0

  # Custom values that will be passed as values.yaml to the installation
  values:
    replicaCount: 1

As you can see, we are telling Fleet to download the Helm chart from a Git URL on the master branch and install it with an override setting the number of replicas to just one. It is worth mentioning that the chart URL can be in any format supported by go-getter.
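
For illustration, here are a couple of chart URL forms that go-getter understands; treat them as examples rather than an exhaustive list, and note that the repository and chart names are made up:

  # Chart in a subdirectory of a Git repository, pinned to a tag or branch
  chart: "github.com/example-org/charts//httpbin?ref=v1.2.3"

  # Chart packaged as a plain HTTPS tarball
  chart: "https://example.com/charts/httpbin-1.2.3.tgz"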

You can also move the values overrides out of the fleet.yaml configuration file into external files and reference them:

  valuesFiles:
    - values.yaml
    - other.yaml

The other deployment methods, such as Kustomize, are configured in a similar way. For Kustomize you just need a very basic configuration pointing to the directory where kustomization.yaml is stored:

namespace: sample-kust
kustomize:
  # To use a kustomization.yaml different from the one in the root folder
  dir: ""
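
The directory it points to simply holds a standard kustomization.yaml. As a minimal sketch (the file names are placeholders):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
patchesStrategicMerge:
  - replica-patch.yaml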

Raw YAML does not even need a fleet.yaml, unless you want to add filters for environments or overlay configurations.

Adding a Git Repo

Once you have the Git repository sorted with the fleet.yaml and all the components you’d like it to deploy, it’s time to add the configuration to Rancher.

Fleet UI Git Repo

You can use the UI or the command line. The screenshot above shows the options to use in the UI, whilst the code below shows the exact same configuration applied from the command line.

The repository is public, hence we don’t need to set up any authentication.

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: httpbin
  namespace: fleet-local
spec:
  branch: master
  # If you require authentication, add the secret name here
  # clientSecretName: gitrepo-auth-https
  insecureSkipTLSVerify: false
  repo: https://github.com/digitalis-io/fleet-examples.git
  targets:
  - clusterSelector: {}
type: fleet.cattle.io.gitrepo

If there are no errors, you should see the Helm chart downloaded and installed:

# shows the gitrepo added and the last commit applied
root@sergio-k3s:~# kubectl get gitrepo -n fleet-local
NAME      REPO                                                 COMMIT                                     BUNDLEDEPLOYMENTS-READY   STATUS
httpbin   https://github.com/digitalis-io/fleet-examples.git   c0196a016838dee41e5fa47066efc5cb95f427bb   2/2


root@sergio-k3s:~# kubectl get po -n sample-helm
NAME                      READY   STATUS    RESTARTS   AGE
httpbin-9f49d7f44-qznlw   1/1     Running   0          74s

You can also describe the GitRepo to get more details, such as the deployment status. The command is as follows, but I’m not copying the full output here as it’s quite long.

root@sergio-k3s:~# kubectl describe -n fleet-local gitrepo/httpbin
[...]
  Summary:
    Desired Ready:  2
    Ready:          2
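
Under the hood, Fleet turns each GitRepo into one or more Bundle resources, so listing them is another quick way to check the rollout. A sketch; the bundle names will vary with your repository layout:

root@sergio-k3s:~# kubectl get bundles -n fleet-local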

You can see we have the deployment complete and running in no time. Now, if we were to update the Git repository holding the fleet.yaml and commit the changes, Fleet will detect the changes and re-apply (in this case) the helm chart.

The screenshot below shows how, after we updated the value of replicaCount from 1 to 2 and committed the change, the Helm chart was redeployed:

Helm Chart Redeployed

We can confirm it by looking at the Helm values:

root@sergio-k3s:~# helm get -n sample-helm values httpbin
USER-SUPPLIED VALUES:
global:
  fleet:
    clusterLabels:
      name: local
replicaCount: 2

Cluster selectors

There will be many occasions where you want to deploy Helm charts to some clusters but not others. For this reason, Fleet offers a targets option.

The first thing you should do is label the clusters. You can do this from the UI or from the command line.

From the command line:

~$ kubectl label -n fleet-local clusters.fleet.cattle.io/local env=dev
cluster.fleet.cattle.io/local labeled
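
To double-check which labels a cluster carries, you can list the Fleet cluster objects with their labels (standard kubectl, shown here as a sketch):

~$ kubectl get clusters.fleet.cattle.io -n fleet-local --show-labels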

We can now use these labels as selectors for the deployments. The snippet below shows how we’re now targeting a single environment by making sure this deployment only goes to those clusters labelled as env=dev.

kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: helm
  namespace: fleet-default
spec:
  repo: https://github.com/digitalis-io/fleet-examples.git
  targets:
  - name: dev
    clusterSelector:
      matchLabels:
        env: dev

Managing environments

Another great thing about Rancher is that you can manage all your environments from a single place instead of having to duplicate your pipelines per environment (something I see quite often, unfortunately) or create complex deployments.

Labels become very important if you manage multiple clusters from Rancher, as you will be using them to decide where the deployments are going to be installed. They also provide a way to modify the configuration per cluster. Let’s look at the following example:

namespace: sample-helm
helm:
  releaseName: httpbin
  chart: "github.com/twingao/httpbin"
  repo: ""
  version: "master"
  force: false

  values:
    replicaCount: 1

targetCustomizations:
  - name: prod
    helm:
      values:
        replicaCount: 2
    clusterSelector:
      matchLabels:
        env: prod

  - name: dev
    helm:
      values:
        replicaCount: 1
    clusterSelector:
      matchLabels:
        env: dev

This is the fleet.yaml we used before, but we have now added two new sections at the bottom called dev and prod. This means that any cluster labelled env=dev will start up just one replica, whilst env=prod will start two.

Command line or UI?

The Rancher UI is great. It’s fast, feature-rich and very easy to use, but when working with CI/CD pipelines, should you use it at all? That’s an interesting question. The core principle of DevOps is infrastructure as code, so if you use the UI to set up the jobs and configure Rancher, are you still doing infrastructure as code?

The most likely answer is probably not. You should be keeping your GitOps configurations under Git control and versioning them in the same manner as any application you deploy to Kubernetes.

What should you do? Store the jobs themselves in a Git repository and treat them like any other application, with branching, version control, pull requests, etc.

If you’re having trouble creating the jobs manually, you can always:

  • Create the GitOps job in the UI
  • Select the job and click on Download YAML
  • Copy the downloaded file to your Git jobs repository
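
From there, applying the job is the same as applying any other manifest. For example (the file name below is just a placeholder):

~$ kubectl apply -f gitops-jobs/httpbin-gitrepo.yaml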

Conclusion

Fleet is a powerful addition to Rancher for managing deployments in your Kubernetes cluster. Its simple approach of describing the pipeline in a single file reduces the maintenance overhead.

At Digitalis we strive for repeatable infrastructure as code and, for this reason, we destroy and recreate all our development environments weekly to ensure the code is still sound. Being able to restore the pipelines by applying a few YAML configurations certainly appeals to us.

There are a few things we would like to see added in future versions of Fleet:

  • Secrets Management: we, as well as most of our customers, use a secrets management system, be it a cloud provider’s (AWS KMS, Google KMS, etc.) or Hashicorp Vault. Some GitOps tools, like FluxCD, provide a semblance of secrets management, whilst others, like ArgoCD, have poor support, with the Vault plugin as the only stable solution. Fleet lacks secrets management, and if you read my previous blog post you’ll know how much I care about this subject. I am currently discussing with the Fleet community the merits of integrating with vals, and I hope this will be taken into consideration as it would be very useful (I would appreciate some 👍 on the issue if you agree).
  • Dependencies are also on my wishlist for Fleet. Kubernetes has a de facto dependency system in the sense that if you deploy, for instance, a pod that requires a secret and the secret is not present, the pod will wait for it. The same applies to Fleet deployments, which will wait for the required components. But this is not true in all cases; often your pipeline will fail hard when it cannot be deployed. For example, if you try to set up an ingress endpoint before you deploy the ingress controller you’ll get a hard error. There is a semblance of this if you use Helm dependencies, but it would be good for Fleet to provide a fully-fledged dependency system.
  • Notifications are lower on my wishlist, but there are good reasons for them. When you manage multiple clusters for many customers and you are continuously deploying code, you need to know if a deployment failed and take action. I would like Fleet to be able to send alerts via common notification methods such as Slack, PagerDuty, etc.

At Digitalis we recommend Rancher and Fleet to any company that wishes to take advantage of all their great features, and many thanks to SUSE and the Rancher team for providing these open-source tools to the community. Digitalis is a SUSE Partner and a CNCF Kubernetes Certified Service Provider, so if you would like help adopting these practices and technologies, let us know.
