Testing Setup

In this testing setup, we have a Kubernetes cluster running on Rancher K3s (version v1.29.3+k3s1), a lightweight Kubernetes distribution focused on simplicity and resource efficiency. K3s supports both ARM64 and x86_64 architectures, making it suitable for a wide range of hardware platforms.

The Container Network Interface (CNI) plugin being used in this setup is Calico (version v3.27.3), which is a popular open-source networking solution for Kubernetes clusters. Calico offers features like network policy management, IP address management, and BGP routing. By integrating Calico with K3s, we can leverage its advanced networking capabilities while maintaining the lightweight nature of K3s.
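One possible sketch of how this pairing can be wired up (an illustration under assumptions, not a verbatim record of this setup): K3s is typically launched with its bundled Flannel CNI disabled (the --flannel-backend=none and --disable-network-policy server flags), after which Calico is installed via the Tigera operator, whose Installation resource is pointed at the K3s default pod CIDR:

YAML
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    # 10.42.0.0/16 is the K3s default cluster CIDR; adjust this if the
    # cluster was started with a custom --cluster-cidr
    - cidr: 10.42.0.0/16
      encapsulation: VXLAN

With the operator running, Calico takes over pod networking and, importantly for the rest of this post, enforces NetworkPolicy resources.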

Network policy support in different CNI plugins

Support for network policies depends on the environment and the CNI plugin it runs. Here's a breakdown across self-hosted clusters, Amazon EKS, Google Cloud Platform (GCP), and other common CNI plugins:

Self-Hosted: In self-hosted clusters, network policy support depends on the specific CNI plugin you install. Check the documentation of your chosen CNI plugin to confirm that it actually enforces NetworkPolicy resources.

Amazon EKS: In Amazon EKS, the Amazon VPC CNI plugin supports network policies natively starting from version 1.14. This lets users control in-cluster communication between pods by IP address, port, and protocol without installing a separate policy engine.

Google Cloud Platform (GCP): Google Kubernetes Engine (GKE) supports network policy enforcement, but it must be enabled explicitly: clusters can use the Calico-based network policy add-on, or GKE Dataplane V2, which enforces NetworkPolicy resources by default. Note that Kubernetes itself only defines the NetworkPolicy API; enforcement always depends on the CNI in use.

Others: Support varies among other CNI plugins. Calico and Weave Net enforce NetworkPolicy resources, while Flannel on its own does not implement network policies at all; Canal pairs Flannel's networking with Calico's policy enforcement to fill that gap. Each CNI plugin may implement network policies differently, so consult the respective documentation for the specific implementation details.

Advanced Network Policy Capabilities

Network Policies (NPs) in Kubernetes provide a granular security mechanism for controlling inbound and outbound traffic to pods. While basic network policies offer essential functionality, complex deployments often require more sophisticated configurations. This blog post delves into advanced Network Policy features, enabling you to create robust and secure communication channels within your Kubernetes clusters.

Having grasped the fundamentals of NetworkPolicy resources, we can explore advanced configurations to achieve finer-grained control over network traffic. Here's a breakdown of key features:

    • Pod Selectors with Multiple Expressions: Network Policy selectors can encompass multiple expressions to precisely target specific pods or groups of pods. This allows for more granular control over which pods are subject to the policy's rules.

    YAML
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: select-db-backend
    spec:
      podSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - database
        - key: tier
          operator: In
          values:
          - backend
      ingress:
      - from:
        - podSelector: {}

    In the example above, the Network Policy applies only to pods that carry both the label app: database and the label tier: backend; multiple matchExpressions entries are ANDed together.

    • Namespace Selectors: Network Policies can be scoped to specific namespaces, restricting traffic flow within those namespaces.

    YAML
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: restrict-web-ingress
    spec:
      podSelector: {}
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: frontend

    Here, the Network Policy allows ingress traffic only from pods within the frontend namespace.

    • Ingress from Specific Ports: Network Policies can restrict traffic by specifying allowed ingress ports.

    YAML
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-http-to-webserver
    spec:
      podSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - webserver
      ingress:
      - from:
        - podSelector: {}
        ports:
        - protocol: TCP
          port: 80
    

    This policy allows only TCP traffic on port 80 (HTTP) to reach pods with the label app: webserver.

    • Egress to Specific IPs/CIDRs: Similar to ingress, Network Policies can control outbound traffic by specifying allowed egress destinations using IP addresses or CIDR blocks.

    YAML
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: restrict-db-egress
    spec:
      policyTypes:
      - Egress
      podSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - database
      egress:
      - to:
        - podSelector: {}
      - to:
        - ipBlock:
            cidr: 10.0.0.0/16

    This policy allows the database pods to send traffic to any pod within the cluster as well as to addresses in the 10.0.0.0/16 range; all other outbound traffic from those pods is denied.

    • Protocol Specificity: Network Policies can define allowed protocols (TCP, UDP, SCTP) for both ingress and egress traffic, offering granular control over the types of network communication permitted.

    YAML
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-dns-udp
    spec:
      podSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - webserver
      ingress:
      - from:
        - podSelector: {}
        ports:
        - protocol: UDP
          port: 53

    The policy above allows only UDP traffic on port 53 (DNS) to reach the webserver pods.

    • Named Ports: Network Policies can reference container ports by name instead of by number. The port field in a policy rule accepts the name of a port defined in the pod's container spec, which simplifies policy creation and avoids hardcoding port numbers.

    YAML
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-http-ingress
    spec:
      podSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - webserver
      ingress:
      - from:
        - podSelector: {}
        ports:
        - protocol: TCP
          port: http

    Here, http refers to a named containerPort in the webserver pods' spec (for example, containerPort: 80 with name: http), so the policy keeps working even if the port number changes.

    Advanced Configurations

    • Combining Ingress and Egress Rules: Network Policies allow combining ingress and egress rules within a single policy to define comprehensive communication restrictions for a set of pods.

    YAML
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: restrict-db-communication
    spec:
      podSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - database
      ingress:
      - from:
        - podSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - webserver
      egress:
      - to:
        - podSelector: {}
      - to:
        - ipBlock:
            cidr: 192.168.0.0/16

    This policy restricts the database pods to receiving traffic only from webserver pods within the cluster, while allowing them to send traffic to any pod in the cluster as well as to addresses in the 192.168.0.0/16 CIDR range.

    • NetworkPolicy API Group: The NetworkPolicy resource lives in the networking.k8s.io/v1 API group (the older extensions/v1beta1 version has been removed). For capabilities beyond the standard API, such as cluster-wide policies or layer-7 rules, CNI plugins like Calico and Cilium provide their own custom resources.

    • Co-locating NetworkPolicy Resources with Workloads: NetworkPolicy objects are standalone namespaced resources; they cannot be embedded inside a Deployment or Pod specification, but they can be shipped alongside workload manifests (for example, in the same Helm chart). Tightly coupling a policy to a single workload is generally discouraged, as it reduces the reusability and maintainability of your Network Policies.

     

    Advanced Use Cases and Best Practices

    Having explored advanced configurations, let's delve into some practical use cases for Network Policies in complex deployments:

    • Microsegmentation: Network Policies can be used to create microsegments within a cluster, isolating communication between specific services or groups of pods. This enhances security by limiting the attack surface and preventing lateral movement of threats within the cluster.
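    As an illustration (the payments namespace here is a hypothetical example), a policy like the following confines ingress to traffic originating within the same namespace:

    YAML
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: isolate-payments
      namespace: payments
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector: {}

    Because a podSelector used without a namespaceSelector only matches pods in the policy's own namespace, pods in payments can talk to each other, while ingress from every other namespace is denied.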

    • Denying All Traffic by Default: A common best practice is to implement a "deny-all-by-default" approach using Network Policies. This policy initially blocks all traffic, and subsequent Network Policies explicitly allow necessary communication channels. This ensures a more secure baseline and minimizes the risk of unintended traffic flow.
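    A minimal sketch of such a baseline, denying all ingress and egress for every pod in the namespace it is applied to:

    YAML
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-all
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
      - Egress

    With this in place, additional policies become purely additive: each one opens only the specific channels it describes.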

    • Limiting Egress Traffic: Network Policies can be used to restrict outbound traffic from pods to specific IP addresses or CIDR ranges. (Domain-name-based egress rules are not part of the standard NetworkPolicy API and require CNI-specific extensions, such as Cilium's FQDN policies.) This helps prevent data exfiltration and enforces communication with authorized external services only.
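    For example, a policy along these lines (the 203.0.113.0/24 range is a placeholder standing in for an authorized external service) locks egress down to one CIDR plus DNS:

    YAML
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: restrict-egress-to-approved
    spec:
      podSelector: {}
      policyTypes:
      - Egress
      egress:
      # allow DNS lookups so the pods can still resolve names
      - ports:
        - protocol: UDP
          port: 53
      # allow only the approved external range
      - to:
        - ipBlock:
            cidr: 203.0.113.0/24

    Keeping the DNS rule is easy to forget: without it, a pod under an egress policy typically cannot resolve any hostname, which surfaces as confusing connection failures rather than obvious policy denials.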

    Conclusion

    Network Policies offer a powerful mechanism for securing communication channels within Kubernetes clusters. By using advanced configurations and best practices, you can create robust security policies that isolate workloads, restrict unnecessary traffic, and enhance the overall security posture of your Kubernetes deployments. Remember to carefully design your Network Policies to balance security requirements with application functionality and avoid overly restrictive policies that hinder communication needed for your applications to function.