Kubernetes Network Policies and Networking

Francesco Vannini - Aug 27 - - Dev Community

Consider this diagram, in which a few pods have been deployed in a cluster.
The CNI is Calico. I have taken the liberty of simplifying the subnets and IP allocations.

(Diagram: pod1 and pod2 in the tinkering-pods namespace, plus pod3 backing a service; pod IPs in 192.168.0.0/24, service IPs in 10.90.0.0/24, node IPs in 172.16.0.0/24, with a NodePort on 32100.)
The two network policies below should allow all pods in the tinkering-pods namespace to egress to any IP except that of a specific website.
The website should be reachable only from pods carrying the label role: web-browser.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-4all
  namespace: tinkering-pods
spec:
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - <IP of website>
  podSelector: {}
  policyTypes:
  - Egress
```
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-website
  namespace: tinkering-pods
spec:
  egress:
  - to:
    - ipBlock:
        cidr: <IP of website>
  podSelector:
    matchLabels:
      role: web-browser
  policyTypes:
  - Egress
```
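To make the ipBlock semantics above concrete, here is a small, purely illustrative Python sketch using only the standard library (the function name is mine, not a Kubernetes API): an address matches the block if it falls inside `cidr` and outside every `except` entry.

```python
# Toy illustration (not Kubernetes code) of ipBlock matching:
# allowed if the destination is inside cidr AND outside every except entry.
import ipaddress

def ip_block_allows(dst, cidr, except_cidrs=()):
    """Return True if dst falls inside cidr but outside all except entries."""
    addr = ipaddress.ip_address(dst)
    if addr not in ipaddress.ip_network(cidr):
        return False
    return all(addr not in ipaddress.ip_network(e) for e in except_cidrs)

# allow-egress-4all with <IP of website> = 192.168.0.3 (pod3):
print(ip_block_allows("10.90.0.3", "0.0.0.0/0", ["192.168.0.3/32"]))    # True
print(ip_block_allows("192.168.0.3", "0.0.0.0/0", ["192.168.0.3/32"]))  # False
```

The real question, explored below, is *which* destination address the policy engine actually gets to compare against that block.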

There are essentially three candidate values for <IP of website> we can test:

  1. 192.168.0.3 (pod3's pod IP)
  2. 10.90.0.3 (the service's cluster IP)
  3. 172.16.0.2 (a node IP)

We can change the network policy accordingly and test each case using pod1 and pod2.

  1. kubectl exec -it pod1 -- wget 192.168.0.3 → OK
  2. kubectl exec -it pod1 -- wget 10.90.0.3 → OK
  3. kubectl exec -it pod1 -- wget 172.16.0.2:32100 → OK

  4. kubectl exec -it pod2 -- wget 192.168.0.3 → Not working!

  5. kubectl exec -it pod2 -- wget 10.90.0.3 → OK?!?

  6. kubectl exec -it pod2 -- wget 172.16.0.2:32100 → Not working!

Either of the two nodes will do; in this example we are not considering load balancers.

Why can pod2 still reach pod3's exposed service via 10.90.0.3, bypassing the network policies? And which IP should we use for <IP of website>?

The answers lie in how network policies allow pods to communicate with each other, what services really are, and how nodes forward traffic to pods.

  • Pod-to-Pod Communication: Network policies in Kubernetes are designed to control traffic between pods based on their IP addresses and labels. When you filter traffic by a specific IP address, the policies only apply to pod IPs (192.168.0.0/24), not to the service IP range (10.90.0.0/24).

  • Service IP Range: The IP addresses assigned to services (like your 10.90.0.0/24 range) are virtual IPs managed by Kubernetes. They do not directly map to the pod IP addresses. When a pod accesses a service via the service IP, Kubernetes transparently forwards the request to one of the pods backing the service. The Network Policies don't directly apply to these service IPs because they aren't real IPs that belong to a specific pod.

  • NodePort Behavior: When you expose a service using NodePort, the service becomes accessible via a port on the node's IP address (in your case, within the 172.16.0.0/24 range). Traffic hitting this NodePort is then forwarded to the appropriate pod. Since the connection from the client pod is addressed to a node IP in the 172.16.0.0/24 range, network policies that filter on that range will take effect.
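The service-IP point can be sketched with a toy model (illustrative only: this is not how Calico or kube-proxy are implemented, and the mapping table below is made up to match the diagram). The service VIP is rewritten to a backing pod IP before the egress policy is evaluated, so a policy written against the VIP never matches anything.

```python
# Toy model: the service VIP is DNATed to a backing pod IP *before*
# the egress policy is evaluated, so a policy naming the VIP is inert.
SERVICE_VIP_TO_POD = {"10.90.0.3": "192.168.0.3"}  # hypothetical: service -> pod3

def dnat(dst):
    """Rewrite a service VIP to a backing pod IP; pass other IPs through."""
    return SERVICE_VIP_TO_POD.get(dst, dst)

def egress_allowed(dst, blocked_ips):
    real_dst = dnat(dst)                  # the rewrite happens first...
    return real_dst not in blocked_ips    # ...then the policy sees the pod IP

# Blocking the VIP is useless: the rewritten pod IP sails through.
print(egress_allowed("10.90.0.3", blocked_ips={"10.90.0.3"}))    # True
# Blocking the pod IP works, even when the pod is addressed via the VIP.
print(egress_allowed("10.90.0.3", blocked_ips={"192.168.0.3"}))  # False
```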

So ultimately the only values of <IP of website> that give the policies above the desired effect are:

  1. pod3's address (192.168.0.3)
  2. a node IP (172.16.0.2)*

To verify, open a shell in pod1: k exec pod1 -it -- sh

For the first scenario we can run:
wget -O index.html 192.168.0.3

For the second scenario:
wget -O index.html 172.16.0.2:32100

Conclusions

Network policies in Kubernetes ultimately always regulate pod-to-pod communication. The example above hopefully clarifies this aspect.
Using raw IPs, however, is generally not favoured: they are impermanent and hard to remember, and with podSelector and namespaceSelector in the egress and ingress sections you can address the pod/service aggregate by label, without needing to pin down one or the other by IP address.
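As a sketch of that label-based approach (the policy name is mine, and it assumes the website pod carries the label role: website, as the kubectl run command below sets), the allow-website policy could be rewritten without any IPs:

```yaml
# Hypothetical label-based variant: selects the website pod by label
# instead of by IP, so it survives pod rescheduling and IP churn.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-website-by-label
  namespace: tinkering-pods
spec:
  podSelector:
    matchLabels:
      role: web-browser
  egress:
  - to:
    - podSelector:
        matchLabels:
          role: website
  policyTypes:
  - Egress
```

Unlike the ipBlock version, this keeps working if the website pod is rescheduled and gets a new IP.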

When running:
k run webserver --image=nginx -n tinkering-pods --expose=true --port=80 --labels=role=website

You get two resources: a pod and a service.

```yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    role: website
  name: webserver
  namespace: tinkering-pods
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    role: website
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    role: website
  name: webserver
  namespace: tinkering-pods
spec:
  containers:
  - image: nginx
    name: webserver
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

Both share the same label role: website, and when writing a network policy to allow or deny access to the website we tend to think of the pod/service aggregate carrying role: website; ultimately, though, the policy only ever applies to the pod.

Network policies can be a bit elusive and, depending on your background, not entirely intuitive. Examples like these help me get a better understanding; hopefully they will be useful for others too. Make sure to let me know!

Happy K8ing.
