
The Power of Spreading: Kubernetes Pod Anti-Affinity Sample Use Cases and Demo



Where is Kubernetes Pod Anti-Affinity Used?

Kubernetes Pod Anti-Affinity is a powerful scheduling constraint used to ensure that certain pods do not run on the same node or in the same failure domain (like a rack or availability zone). Its primary purpose is to enhance the high availability and resilience of applications by preventing a single point of failure from taking down all instances of a service.


In essence, Pod Anti-Affinity is used in any scenario where the failure of a single node or zone must not result in the failure of the entire application. The conceptual diagram below illustrates the idea: three separate nodes, each hosting a single replica of the application, with visual cues indicating the rule that keeps them apart.


Kubernetes Pod Anti-Affinity
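
Topology keys such as kubernetes.io/hostname are ordinary node labels. A quick way to see which topology labels your nodes carry (the zone label is typically present only on cloud-provisioned clusters):

kubectl get nodes -L kubernetes.io/hostname -L topology.kubernetes.io/zone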

Real-Life Use Cases for Pod Anti-Affinity

Pod Anti-Affinity is a critical tool in a DevOps engineer's arsenal for building robust, production-grade Kubernetes deployments. The following are the most common and impactful real-life use cases:


| Use Case | Description | Anti-Affinity Type | Topology Key |
| --- | --- | --- | --- |
| High Availability (HA) & Resilience | Spreading replicas of a critical application (e.g., a web server, API gateway) across different nodes to ensure that a single node failure does not impact service availability. | requiredDuringSchedulingIgnoredDuringExecution | kubernetes.io/hostname |
| Data Redundancy (StatefulSets) | Ensuring that database replicas (e.g., MongoDB, Cassandra, PostgreSQL) are placed on separate nodes to maintain quorum and data integrity in case of a node crash. | requiredDuringSchedulingIgnoredDuringExecution | kubernetes.io/hostname |
| Fault Domain Spreading | Spreading application instances across different racks, availability zones, or regions to protect against larger-scale infrastructure failures. | preferredDuringSchedulingIgnoredDuringExecution | topology.kubernetes.io/zone or custom labels |
| Resource Contention Mitigation | Preventing resource-intensive pods from being scheduled on the same node, which could lead to resource starvation (CPU, memory, I/O) and performance degradation for both. | preferredDuringSchedulingIgnoredDuringExecution | kubernetes.io/hostname |
| Licensing/Compliance | Enforcing software licensing agreements that restrict the number of instances of a particular application that can run on a single physical host. | requiredDuringSchedulingIgnoredDuringExecution | kubernetes.io/hostname |
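
The zone key is pre-populated on most cloud clusters, while rack-level spreading relies on labels you maintain yourself: the topologyKey can be any node label. A hypothetical sketch (the topology.example.com/rack label and the node names are illustrative):

# Label each worker node with the rack it sits in, then reference
# that label as the topologyKey of the anti-affinity rule.
kubectl label node worker-1 topology.example.com/rack=rack-a
kubectl label node worker-2 topology.example.com/rack=rack-b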

Production Example: Ensuring High Availability for a Critical API Service

Consider a high-traffic e-commerce platform that relies on a critical Product Catalog API. This API must maintain 99.99% uptime. To achieve this, we must ensure that its replicas are spread across different nodes and, ideally, different availability zones.

We will use Pod Anti-Affinity with the requiredDuringSchedulingIgnoredDuringExecution rule to enforce a strict separation of pods across nodes.


The Anti-Affinity Strategy

  1. Strict Node Separation: We use the topologyKey: kubernetes.io/hostname to tell the scheduler that no two pods with the label app: product-catalog-api can be placed on the same node. If the scheduler cannot find a separate node for a new replica, the pod will remain in a Pending state.


  2. Zone Preference (Optional but Recommended): For an even higher level of resilience, we could add a preferredDuringSchedulingIgnoredDuringExecution rule with topologyKey: topology.kubernetes.io/zone to encourage the scheduler to spread the pods across different availability zones, protecting against a zone-wide outage, as sketched below.
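
For this scenario, the pod template's affinity block could combine both rules. A minimal sketch, assuming the app: product-catalog-api label from the strategy above (the weight of 100 is an illustrative value, not a requirement):

      affinity:
        podAntiAffinity:
          # Hard rule: never co-locate two API replicas on the same node.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: product-catalog-api
            topologyKey: kubernetes.io/hostname
          # Soft rule: prefer spreading replicas across availability zones.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: product-catalog-api
              topologyKey: topology.kubernetes.io/zone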


Example of Pod Anti-Affinity

In this example, we create a Pod Anti-Affinity rule for an app named frontend in a Kubernetes cluster with 3 worker nodes. The goal is to show that no two identical pods will run on the same node. Since we only have 3 worker nodes and the Deployment requests 6 replicas, exactly one replica will be provisioned on each node and the remaining pods will be stuck in a Pending state. In other words, the pod cannot be launched as two or more copies on the same worker node, which is exactly what pod anti-affinity achieves.


Step 1: Create a Deployment

kubectl create deployment frontend --image=nginx --replicas=6 -o yaml --dry-run=client > frontend.yaml

The --dry-run=client flag renders the manifest locally and writes it to frontend.yaml without creating anything on the cluster.

Step 2: Edit the Deployment to Include Pod Anti-Affinity

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: frontend
  name: frontend
  namespace: non-prod
spec:
  replicas: 6
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: only schedule this pod onto a node that is not
          # already running a pod labelled app=frontend.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - frontend
            # Treat each node (hostname) as its own topology domain.
            topologyKey: "kubernetes.io/hostname"
      containers:
      - image: nginx
        name: nginx

Step 3: Apply / Create the Deployment
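
The manifest pins the Deployment to the non-prod namespace, so create that namespace first if it does not already exist:

kubectl create namespace non-prod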

kubectl apply -f frontend.yaml

Step 4: Demonstrate Pod Anti-Affinity in Action

As you can see, some pods are in a Pending state because we set up the Deployment to ensure that no two pods run on the same node. The key parameters used were:

      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - frontend
            topologyKey: "kubernetes.io/hostname"
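
You can confirm the spread by listing the pods together with the nodes they landed on; with 3 worker nodes, three pods should be Running (one per node) and three Pending:

kubectl get pods -n non-prod -o wide

Running kubectl describe on one of the Pending pods shows a FailedScheduling event explaining that no node satisfied the anti-affinity rule.
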
Kubernetes Pod Anti-Affinity in Action

Conclusion

Pod Anti-Affinity is a fundamental concept for designing highly available and fault-tolerant applications on Kubernetes. By intelligently spreading application replicas across different failure domains, it moves the responsibility of resilience from the application layer to the infrastructure layer, allowing developers to focus on business logic while the scheduler handles the complex task of optimal and safe pod placement.
