
Introduction
A traditional application design is usually split into network tiers, isolating the most important parts of the infrastructure (for example, databases) deep inside the network and well protected, whilst leaving the least sensitive components, such as the web servers, at the outer boundary. Below is an example of a 3-tier application.

There used to be a lot of design work involved in getting the tiers well defined, the network segmented into subnets, the routing configured and, finally, the firewall rules written to control what goes in and out.
But we have forgotten all these good practices when it comes to deploying applications on a Kubernetes platform. By default there is no network-layer security in Kubernetes: pods are allowed to communicate with any other pod in the cluster, and all tiers are merged into a single network. If you are unlucky and your website is compromised, you have handed an attacker easy access to the database and the rest of the Kubernetes network.

What can you do about it? There are many good security standards you should be looking at, but I’d like to focus on network policies today.
Network Policies
Network Policies are rules applied at OSI layer 3 or 4 to control the traffic flow between pods. Kubernetes Network Policies work by:
- Controlling access from pod to pod
- Granting or denying access from or to a namespace
- Using IP blocks to restrict access
Remember that to be able to use Network Policies you must use a network plugin that supports them. We are partial to Calico.
Selectors
Traditional firewalls use only IP addresses to select the traffic you want to affect. There are of course layer 7 firewalls as well but that’s outside the scope of Kubernetes Network Policies.
Because pod IP addresses are generally dynamic, using IP blocks is often not a good idea. This is why Kubernetes uses selectors to determine the source and destination. Selectors match labels applied to pods and namespaces.
NAME          STATUS   AGE   LABELS
application   Active   11s   name=application
database      Active   15s   name=database
frontend      Active   6s    name=frontend
When configuring your cluster, be mindful of labelling the different components with meaningful labels you will remember later on. The output above shows a k3d cluster I just started up, where I created the three namespaces (frontend, application, database) I will be using in the following examples.
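If you define your namespaces declaratively, the labels can live in the manifests themselves. This is a sketch of how the three namespaces above could have been created; the label key name is simply the convention used throughout this post:
---
apiVersion: v1
kind: Namespace
metadata:
  name: frontend
  labels:
    name: frontend      # label used by the network policies later on
---
apiVersion: v1
kind: Namespace
metadata:
  name: application
  labels:
    name: application
---
apiVersion: v1
kind: Namespace
metadata:
  name: database
  labels:
    name: database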
CIDR
Before we continue I have to mention that you don’t need to use labels; you can use network addresses instead. I do not recommend them for general use, as you would need to configure pods with static IP addresses or use large subnets in the policy to make this work, but the option is there.
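For completeness, this is roughly what an ipBlock-based policy looks like; the name, the CIDR and the except entry below are made-up values purely for illustration:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-cidr        # hypothetical name
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.244.0.0/16    # allow this range...
        except:
        - 10.244.5.0/24        # ...except this subnet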
Limitations
Kubernetes Network Policies are not easy to debug when something goes wrong. One of the reasons is that logging is not supported. You would need to look at alternatives such as kube-iptables-tailer, or use the ProjectCalico CNI, to get visibility into dropped traffic. We’ll be talking about Calico in the next blog.
Another limitation is that you can only allow traffic. You cannot create a policy that denies traffic from one pod to another; you can only drop all traffic and then allow what you need.
Options
- podSelector: each NetworkPolicy includes a podSelector which selects pods based on their labels. An empty podSelector selects all pods in the namespace.
- ipBlock: the rule will match an IP block using CIDR notation (e.g. 192.168.0.0/24).
- policyTypes: may be Ingress, Egress, or both. This determines whether the policy applies to traffic entering or leaving the selected pods. If no policyTypes are specified, Ingress is always set and Egress is added only if the NetworkPolicy has any egress rules.
- ingress: not to be confused with policyTypes, this defines the rules that will match traffic entering the pod, using podSelector, namespaceSelector or ipBlock.
- egress: like ingress but for traffic leaving the pod.
Whilst you get used to the syntax you can try this handy editor provided by Cilium, or start from the annotated example below.
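Putting those fields together, a hypothetical policy for pods labelled app=api might look like this; the labels, ports and name are invented purely to show the fields in context:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy                 # hypothetical name
spec:
  podSelector:                     # pods this policy applies to
    matchLabels:
      app: api
  policyTypes:                     # restrict traffic in both directions
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:           # allow traffic from namespaces labelled name=frontend
        matchLabels:
          name: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:                 # allow traffic to pods labelled app=db in this namespace
        matchLabels:
          app: db
    ports:
    - protocol: TCP
      port: 5432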
Default Deny
As mentioned in the Limitations section, you can only allow traffic, which is why your first rule should be a default deny. In the policy below the podSelector is empty, meaning all pods in the namespace are selected, but no ingress traffic is allowed because no ingress rules are defined.
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
If you also want to block traffic leaving (egress) you can do the same:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress
Or even combine them both:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
Example
Let’s look at a simple example. We have, as described above, an application deployed to Kubernetes that has three components:
- Frontend: we have some pods serving the web application
- Application: this is the middle tier; the web servers use it to talk to the database
- Database: where we store everything

First, we can confirm that our webserver can connect to the application server on port 8080, where it is listening.
~$ kubectl exec -ti -n frontend webserver01 -- /bin/sh
/ # nc -v appserver01.application 8080
Connection to appserver01.application 8080 port [tcp/http-alt] succeeded!
We now apply the default deny rule for ingress (shown above), denying all ingress traffic, and retest:
~$ kubectl get networkpolicy -n database
NAME                   POD-SELECTOR   AGE
default-deny-ingress   <none>         2m
Remember! Policies are per namespace. You will need to apply the default deny to each namespace where you will be setting up network policies.
/ # nc -v appserver01.application 8080
nc: connect to appserver01.application port 8080 (tcp) failed: Operation timed out
/ #
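Since I want the default deny in place in all three namespaces, one option is a single manifest containing one copy of the policy per namespace, along these lines (a sketch using the namespaces from this example):
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: frontend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: application
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: database
spec:
  podSelector: {}
  policyTypes:
  - Ingress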
Now that everything is denied, we build the access rules from the ground up. The first thing we need to do is allow access to the database only from the application namespace, leaving everything else denied.
Application to Database
What we’re doing here is applying a rule to any pod labelled app=db that only allows traffic in if it comes from a namespace with the label name=application and uses port 5432. It’s important to remember that even though I am using name, I’m not referring to the namespace name but to a label called name.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: from-application-to-db
  namespace: database
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: application
    ports:
    - protocol: TCP
      port: 5432
We can confirm the policies have been applied with:
~$ kubectl get networkpolicy -n database
NAME                     POD-SELECTOR   AGE
default-deny-ingress     <none>         10m
from-application-to-db   app=db         7s
I highly recommend double-checking your selectors every time. The kubectl argument “-l” filters based on labels. Using “-l app=db” I’m asking to display only the pods that carry that label. If the pod is not displayed, something is wrong.
~$ kubectl get po -n database -l app=db
NAME   READY   STATUS    RESTARTS   AGE
db01   1/1     Running   0          16m
~$ kubectl get ns -l name=application
NAME          STATUS   AGE
application   Active   39m
We can now test access from the application namespace to the database and confirm access has been granted as specified:
~$ kubectl exec -ti -n application appserver01 -- nc -v db.database 5432
Connection to db.database 5432 port [tcp/postgresql] succeeded!
And if we try the same from the frontend to the database, it is still denied:
~$ kubectl exec -ti -n frontend webserver01 -- nc -v db.database 5432
nc: connect to db.database port 5432 (tcp) timed out: Operation in progress
Webserver to Application
The rule to use here is very similar to the one we just applied to the database, but with the port changed to 8080 and the namespace to frontend. I’m going to make some changes to make it a bit more complicated and use it to explore the syntax further.
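For reference, that straightforward version would look roughly like the sketch below; the name from-frontend-to-app is hypothetical and I have not applied it in my lab, as I’ll use the more elaborate variants that follow instead:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: from-frontend-to-app    # hypothetical, not applied in the lab
  namespace: application
spec:
  podSelector:
    matchLabels:
      app: appserver
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: frontend
    ports:
    - protocol: TCP
      port: 8080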
Instead of one pod I now have three: one has been labelled “green”, the second “blue” and the third “yellow”. I will also be using one pod’s IP address later, so the pod IPs are displayed below.
~$ kubectl get po -n frontend \
-o custom-columns=NAME:.metadata.name,LABELS:.metadata.labels,IP:.status.podIP
NAME          LABELS                           IP
webserver01   map[app:webserver type:blue]     10.244.120.69
webserver02   map[app:webserver type:green]    10.244.120.70
webserver03   map[app:webserver type:yellow]   10.244.120.71
The first rule allows only pods in the frontend namespace with the label type=green, connecting to port 8080, to succeed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: from-web-to-app
  namespace: application
spec:
  podSelector:
    matchLabels:
      app: appserver
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          type: green
      namespaceSelector:
        matchLabels:
          name: frontend
    ports:
    - protocol: TCP
      port: 8080
Result:
~$ kubectl exec -ti -n frontend webserver02 -- nc -w 3 -v appserver01.application 8080
Connection to appserver01.application 8080 port [tcp/http-alt] succeeded!
~$ kubectl exec -ti -n frontend webserver01 -- nc -w 3 -v appserver01.application 8080
nc: connect to appserver01.application port 8080 (tcp) timed out: Operation in progress
~$ kubectl exec -ti -n frontend webserver03 -- nc -w 3 -v appserver01.application 8080
nc: connect to appserver01.application port 8080 (tcp) timed out: Operation in progress
The connection from webserver02 (labelled green) succeeded, whilst the other two pods timed out.
The second rule grants access from webserver01 using its IP address (in my lab it has the IP 10.244.120.69).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: from-web-to-app-cidr
  namespace: application
spec:
  podSelector:
    matchLabels:
      app: appserver
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.244.120.69/32
    ports:
    - protocol: TCP
      port: 8080
Result: now webserver01 and webserver02 have access to the application but not webserver03.
Obviously, this is not something you would usually do; it just illustrates what is possible. You also would not need two policies: a single one containing both rules, as sketched below, would do the same.
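This is a sketch of such a combined policy, reusing the labels and the pod IP from my lab; the name from-web-to-app-combined is hypothetical:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: from-web-to-app-combined   # hypothetical name
  namespace: application
spec:
  podSelector:
    matchLabels:
      app: appserver
  policyTypes:
  - Ingress
  ingress:
  # rule 1: green pods from the frontend namespace
  - from:
    - podSelector:
        matchLabels:
          type: green
      namespaceSelector:
        matchLabels:
          name: frontend
    ports:
    - protocol: TCP
      port: 8080
  # rule 2: the pod IP used in the second policy
  - from:
    - ipBlock:
        cidr: 10.244.120.69/32
    ports:
    - protocol: TCP
      port: 8080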
Egress
Until now all examples have been for Ingress only, that is, traffic entering the pod. But it’s also possible to do the same for Egress. A common practice in secure environments is to block all external access and control what the pod can reach via a proxy server.
Egress rules can be quite challenging. You will be surprised by how many things your pods need in order to run. For instance, DNS! When blocking Egress I strongly recommend you allow DNS or you will run into problems straight away.
Before embarking on Egress policies, make sure your Ingress policies are configured and working. Otherwise it will be extremely difficult to debug when something goes wrong.
This is a sample Egress policy which denies access from any pod to the outside, except for the kube-dns service, which is still allowed. Remember policies are per namespace and you will need to apply this to every namespace where you’d like to control the traffic leaving. Also make sure all namespaces have the right labels.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
spec:
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  policyTypes:
  - Egress
You can check that outside traffic is now blocked:
~$ kubectl exec -ti -n frontend webserver02 -- nc -w 3 -v google.com 80
nc: connect to google.com port 80 (tcp) timed out: Operation in progress
nc: connect to google.com port 80 (tcp) failed: Address not available
command terminated with exit code 1
I now have a proxy server running in the “outside” namespace, listening on port 3128. This is the policy needed to allow the pods to connect to it now that everything but DNS is blocked:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-proxy
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: proxy
      namespaceSelector:
        matchLabels:
          name: outside
    ports:
    - protocol: TCP
      port: 3128
Result:
~$ kubectl exec -ti -n frontend webserver02 -- nc -w 3 -v proxy.outside 3128
Connection to proxy.outside 3128 port [tcp/*] succeeded!
Conclusion
As you can see, Kubernetes Network Policies fill the gap where you would usually have network firewalls controlling access between the frontend, application and data tiers. But they have quite a few limitations, such as not being able to explicitly deny or log traffic.
You should never forget that Kubernetes is not a secure environment out of the box. There are many good practices you should embrace, such as not running pods as root, choosing a good and secure distribution such as RKE and, of course, implementing network policies.