Rancher: break in case of emergency

SUSE Rancher is a multi-cluster management platform for Kubernetes. It allows you to build and manage Kubernetes clusters on cloud providers (AWS, Google, etc.), on-premises, and on bare metal. It’s extremely flexible and easy to use.
Clusters built with Rancher are also accessed through Rancher, both via the web UI and from your local computer using the kubeconfig provided by Rancher. This matters because one of Rancher’s best features is its RBAC model: you can authenticate to Rancher with your external identity provider and grant different levels of access to teams and team members.
If you want to know more, have a look at this previous blog post.
At Digitalis, we love it. We use it internally and we recommend it to our customers.
What’s the downside?
The recommendation is to deploy Rancher into its own Kubernetes cluster and to back it up. This will give you the resilience you need to ensure you stay in control and don’t lose access to the downstream clusters.
But what if the worst happens and Rancher is not available?

Emergency Access
We’re getting into the habit of building in an emergency access route. All you need to do is generate a kubeconfig
that bypasses the Rancher proxy and connects directly to the Kubernetes control plane nodes. Obviously, this is for emergencies only and it has the caveat of requiring that your firewall or security groups allow access to the control plane nodes.
Service Account and Cluster Role
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: emergency-admins-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: emergency-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin # it already exists in Rancher Clusters
subjects:
- kind: ServiceAccount
  name: emergency-admins-sa
  namespace: default
First, apply the above config to create a service account bound to the cluster-admin role. This role should already exist on Rancher-built clusters, but if it doesn’t, here is an example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
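Assuming you save the manifests above to a file (the name emergency-admin.yaml here is just an example), applying them and checking that everything is in place takes a couple of kubectl commands:
# Apply the ServiceAccount, the ClusterRoleBinding and, if needed, the ClusterRole
kubectl apply -f emergency-admin.yaml
# Verify the role, the binding and the service account exist
kubectl get clusterrole cluster-admin
kubectl get clusterrolebinding emergency-cluster-admin
kubectl -n default get serviceaccount emergency-admins-sa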
Then you can create a kubeconfig
from this service account. I use the following script:
#!/usr/bin/env bash
# Copyright 2017, Z Lab Corporation. All rights reserved.
# Copyright 2017, Kubernetes scripts contributors
#
# For the full copyright and license information, please view the LICENSE
# file that was distributed with this source code.

set -e

if [[ $# == 0 ]]; then
  echo "Usage: $0 SERVICEACCOUNT [kubectl options]" >&2
  echo "" >&2
  echo "This script creates a kubeconfig to access the apiserver with the specified serviceaccount and outputs it to stdout." >&2
  exit 1
fi

function _kubectl() {
  kubectl $@ $kubectl_options
}

serviceaccount="$1"
kubectl_options="${@:2}"

if ! secret="$(_kubectl get serviceaccount "$serviceaccount" -o 'jsonpath={.secrets[0].name}' 2>/dev/null)"; then
  echo "serviceaccounts \"$serviceaccount\" not found." >&2
  exit 2
fi

if [[ -z "$secret" ]]; then
  echo "serviceaccounts \"$serviceaccount\" doesn't have a serviceaccount token." >&2
  exit 2
fi

# context
context="$(_kubectl config current-context)"
# cluster
cluster="$(_kubectl config view -o "jsonpath={.contexts[?(@.name==\"$context\")].context.cluster}")"
server="$(_kubectl config view -o "jsonpath={.clusters[?(@.name==\"$cluster\")].cluster.server}")"
# token
ca_crt_data="$(_kubectl get secret "$secret" -o "jsonpath={.data.ca\.crt}" | openssl enc -d -base64 -A)"
namespace="$(_kubectl get secret "$secret" -o "jsonpath={.data.namespace}" | openssl enc -d -base64 -A)"
token="$(_kubectl get secret "$secret" -o "jsonpath={.data.token}" | openssl enc -d -base64 -A)"

export KUBECONFIG="$(mktemp)"
kubectl config set-credentials "$serviceaccount" --token="$token" >/dev/null
ca_crt="$(mktemp)"; echo "$ca_crt_data" > "$ca_crt"
kubectl config set-cluster "$cluster" --server="$server" --certificate-authority="$ca_crt" --embed-certs >/dev/null
kubectl config set-context "$context" --cluster="$cluster" --namespace="$namespace" --user="$serviceaccount" >/dev/null
kubectl config use-context "$context" >/dev/null

cat "$KUBECONFIG"
# vim: ft=sh :
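Assuming you saved the script as create-kubeconfig.sh (the name is arbitrary), generating the emergency kubeconfig and writing it to a file looks like this:
# Build a kubeconfig for the emergency service account in the default namespace
./create-kubeconfig.sh emergency-admins-sa -n default > emergency-kubeconfig
One caveat: on Kubernetes 1.24 and later, token Secrets are no longer created automatically for service accounts, so the script’s secret lookup may come back empty. In that case you can create a long-lived token Secret yourself (a minimal sketch, using a hypothetical Secret name) and, if necessary, point the script at that Secret directly:
apiVersion: v1
kind: Secret
metadata:
  name: emergency-admins-sa-token   # hypothetical name
  namespace: default
  annotations:
    kubernetes.io/service-account.name: emergency-admins-sa
type: kubernetes.io/service-account-token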
There is only one last change you’ll need to make manually. Edit the newly generated kubeconfig and locate the server: line. It will read something like:
server: https://rancher.mycompany.com
and you will need to change it to
server: https://control-plane-node-ip:6443
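If you’d rather script that edit, a sed one-liner along these lines does the job (the kubeconfig filename and the node IP are placeholders):
sed -i 's#server: https://rancher.mycompany.com#server: https://control-plane-node-ip:6443#' emergency-kubeconfig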
Now all you need to do is to keep the generated config saved for emergencies only.
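Before filing it away, it’s worth a quick check that the emergency kubeconfig really does talk straight to the control plane (again assuming it was saved as emergency-kubeconfig):
# This should list the nodes without going through the Rancher proxy
kubectl --kubeconfig emergency-kubeconfig get nodes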
Conclusion
SUSE Rancher’s features are worth every penny. But we must always plan for worst-case scenarios, and losing access to the Kubernetes API is one of them.