In Kubernetes, a pod can disappear at any time and for any reason. It can get evicted from its node.

In fact, even the entire node can disappear from the cluster because of something as trivial as a network issue.

And yet, it is vital that our applications continue running.

For unmanaged pods, Kubernetes provides the liveness probe, which ensures the kubelet restarts the container if your application goes down.

But as we discussed in our post on the Kubernetes Liveness Probe, it offers no protection against node failure. The probe is executed by the kubelet on the node itself, so if the node dies, there is nothing left to restart the pod.

For a stronger guarantee of application availability, we need to utilize Kubernetes Replication Controller.

1 – What is a Kubernetes Replication Controller?

The ReplicationController is a Kubernetes resource that ensures its pods are always kept running.

If a pod disappears, the replication controller notices the missing pod and creates a replacement pod.

The replication controller even guards against node-level failures.

As long as there is a node running in the cluster, the replication controller will continue trying to keep our pods running. Like a silent guardian!

Check out the below illustration:

Illustration: The Role of a Replication Controller

Pod A was unmanaged and didn’t survive when the Node hosting it went down. On the other hand, Pod B was managed by a replication controller. And hence, it found a new life on another Node.

Once a replication controller is created within a Kubernetes cluster, it monitors the list of running pods. When creating the controller, we specify the desired number of pods of a certain type.

The replication controller makes sure the actual number of pods of the given type always matches the desired number.

  • If too few pods of a given type are running, the controller creates new replicas according to the pod template.
  • If too many pods of a given type are running, the controller removes the excess replicas.

The replication controller strives to maintain this balance at all times.
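
Once we create a controller (as we will in section 3), this desired-versus-actual reconciliation is visible directly in the output of kubectl get rc. A hypothetical example:

$ kubectl get rc
NAME       DESIRED   CURRENT   READY   AGE
basic-rc   3         3         3       2m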

2 – The Components of a Kubernetes Replication Controller

A replication controller has three essential parts, illustrated in the YAML sketch after this list:

  • First, there is a label selector that determines what pods are in the scope of the replication controller.
  • Second, there is a replica count that specifies the desired number of pods that should be running at any given point in time.
  • Third, there is the pod template that defines the type of pod replicas the controller should create.
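
Here is a minimal, hypothetical sketch of how these three parts map onto the resource's YAML (the names and image below are placeholders; a full working example follows in section 3):

apiVersion: v1
kind: ReplicationController
metadata:
  name: example-rc
spec:
  replicas: 3                 # 2. replica count: desired number of pods
  selector:                   # 1. label selector: which pods are in scope
    app: example
  template:                   # 3. pod template: blueprint for new replicas
    metadata:
      labels:
        app: example          # must match the selector above
    spec:
      containers:
      - name: example
        image: nginx          # placeholder image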

Though all three properties are important, only changes to the replica count affect the existing pods.

If we modify the label selector, the existing pods fall out of the replication controller’s scope. The controller stops caring about these pods that have suddenly gone rogue. Instead, the controller will spawn new pods to fill the gap.

If we modify the pod template (the container image, environment variables and other things), again there will be no impact on existing pods.

Replication controllers don’t care about the actual contents of the pods after they have been created. The updated pod template is used only when the controller creates a new pod.

Basically, the replication controller is like a cookie cutter for cutting out new pods. Even if you change the shape of the cutter, the cookies already cut will remain the same. The new ones will be different.
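
We can see this cookie-cutter behavior in action once the controller from the next section is running. Below is a sketch, assuming a hypothetical :v2 tag of the image. kubectl patch (a strategic merge patch, where containers are merged by name) updates the template, but the existing pods keep running the old image:

$ kubectl patch rc basic-rc -p '{"spec":{"template":{"spec":{"containers":[{"name":"hello-service","image":"progressivecoder/nodejs-demo:v2"}]}}}}'

# Existing pods are untouched; only replacement pods use the new template
$ kubectl delete pod basic-rc-jz9n7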


3 – Creating a Kubernetes Replication Controller

Enough theory. Time for some hands-on practice.

Let us first create a YAML file (basic-pod-rc.yaml) for the Kubernetes replication controller.

apiVersion: v1
kind: ReplicationController
metadata:
  name: basic-rc
spec:
  replicas: 3
  selector:
    app: hello-service
  template:
    metadata:
      labels:
        app: hello-service
    spec:
      containers:
      - name: hello-service
        image: progressivecoder/nodejs-demo
        imagePullPolicy: Never
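        # Note: imagePullPolicy: Never assumes the image already exists on the node
        # (typical for a local setup such as minikube); drop it if pulling from a registry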

The YAML template is quite similar to the one we created for a pod in an earlier post.

  • To begin with, we have the apiVersion and kind properties.
  • Next, we have a metadata section where we specify the name of the replication controller.
  • Following this, we have the important spec section where we specify the replicas (desired number of pods) and the label selector for the pods (hello-service).
  • Next, we have the template section for the pod template. Here, we have the metadata and labels section. To align with the label selector for the controller, we provide the label app: hello-service in the pod template. Basically, this will ensure all pods get created with this label so that they fall under the scope of our controller.
  • At the end, we have the specification for the containers such as the name and corresponding image.

We can apply the above file to our cluster using the below command:

$ kubectl apply -f basic-pod-rc.yaml

After applying, if we run kubectl get pods, we should see output similar to the below:

NAME                     READY   STATUS    RESTARTS        AGE
basic-rc-gjt9n           1/1     Running   0               4s
basic-rc-jz9n7           1/1     Running   0               4s
basic-rc-q8d7g           1/1     Running   0               4s

Basically, the replication controller has successfully created 3 pods (based on the specified replicas).

We can extract more details by using the kubectl describe rc basic-rc command.

Name:         basic-rc
Namespace:    default
Selector:     app=hello-service
Labels:       app=hello-service
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=hello-service
  Containers:
   hello-service:
    Image:        progressivecoder/nodejs-demo
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  21s   replication-controller  Created pod: basic-rc-jz9n7
  Normal  SuccessfulCreate  21s   replication-controller  Created pod: basic-rc-q8d7g
  Normal  SuccessfulCreate  21s   replication-controller  Created pod: basic-rc-gjt9n

If we delete one of the pods (say, basic-rc-gjt9n), the replication controller springs into action and spawns a new pod to bring the replica count back to 3.
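
Here is the command in action:

$ kubectl delete pod basic-rc-gjt9n

Running kubectl get pods again confirms the replacement: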

NAME                     READY   STATUS    RESTARTS        AGE
basic-rc-jz9n7           1/1     Running   0               2m9s
basic-rc-mb2wg           1/1     Running   0               10s
basic-rc-q8d7g           1/1     Running   0               2m9s

Notice the AGE of the middle pod. It is the latest pod that was created.

If we describe the replication controller once again, we should see more details:

Name:         basic-rc
Namespace:    default
Selector:     app=hello-service
Labels:       app=hello-service
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=hello-service
  Containers:
   hello-service:
    Image:        progressivecoder/nodejs-demo
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age    From                    Message
  ----    ------            ----   ----                    -------
  Normal  SuccessfulCreate  4m17s  replication-controller  Created pod: basic-rc-jz9n7
  Normal  SuccessfulCreate  4m17s  replication-controller  Created pod: basic-rc-q8d7g
  Normal  SuccessfulCreate  4m17s  replication-controller  Created pod: basic-rc-gjt9n
  Normal  SuccessfulCreate  2m18s  replication-controller  Created pod: basic-rc-mb2wg

The Events section displays details about all 4 pods created so far, even though only 3 are currently running.

Check out the below illustration that describes how a replication controller handles the deletion of a running pod.

Illustration: How a Replication Controller Handles the Deletion of a Pod

Replication Controllers are Scalable

We can also scale our replication controller by issuing the below command:

$ kubectl scale rc basic-rc --replicas=10

This will update the replicas property to 10. The controller will automatically ensure that 10 pods are running.
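
Scaling down works exactly the same way, and we can even hand the decision over to an autoscaler. A sketch, assuming the cluster has a metrics source for CPU-based autoscaling:

$ kubectl scale rc basic-rc --replicas=3

$ kubectl autoscale rc basic-rc --min=3 --max=10 --cpu-percent=80

The first command shrinks the controller back to 3 pods (the controller deletes the 7 excess replicas). The second creates a HorizontalPodAutoscaler that adjusts the replica count between 3 and 10 based on CPU usage.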

Deleting Replication Controllers

We can delete the replication controller using the below command:

$ kubectl delete rc basic-rc

When we delete the replication controller, its pods are also deleted by default. However, we can prevent that from happening by setting the --cascade flag.

$ kubectl delete rc basic-rc --cascade=false

Note that newer kubectl versions (1.20 and above) deprecate this boolean form in favor of the equivalent --cascade=orphan.

Changing the Labels of a Replication Controller Pod

We can also change the label of a pod managed by a replication controller.

To do so, we can issue the below command:

$ kubectl label pod basic-rc-jz9n7 app=foo --overwrite

The --overwrite flag is necessary; otherwise, kubectl will only print a warning message and won’t actually update the label. It is like a fail-safe that prevents accidentally modifying an existing label’s value when you are only attempting to add a new label.

Anyway, the above command makes the pod no longer match the replication controller’s label selector. If three pods were running prior to the command, only two will now match the selector.

The replication controller won’t tolerate this breach of contract and will immediately start a new pod to bring the number of matching pods back to 3.

If we execute kubectl get pods, we should now see 4 pods.

NAME                     READY   STATUS    RESTARTS        AGE
basic-rc-jz9n7           1/1     Running   0               18m
basic-rc-mb2wg           1/1     Running   0               16m
basic-rc-q8d7g           1/1     Running   0               18m
basic-rc-xckgr           1/1     Running   0               3s

There’s a new pod that is just 3 seconds old. However, the pod basic-rc-jz9n7 is still running.

Remember – we just changed the label of the pod. It is now out of the tribe. But it is still running on its own as an unmanaged pod.

You might have a question at this point. Why even bother removing a pod from the replication controller?

This comes in handy when you want to perform certain actions on a specific pod. For example, suppose a bug causes your pod to start misbehaving after a specific amount of time or a particular event. If you know a pod is malfunctioning, you can take it out of the replication controller’s scope, let the controller replace it with a new one, and then debug or play around with the rogue pod in any way you want. Once you are done, you can delete the pod.
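
Putting that workflow together as a shell sketch (the pod name is just an example from our earlier listing):

# 1. Quarantine: move the pod out of the controller's scope
$ kubectl label pod basic-rc-jz9n7 app=debug --overwrite

# 2. The controller spawns a healthy replacement; meanwhile, inspect the rogue pod
$ kubectl logs basic-rc-jz9n7
$ kubectl exec -it basic-rc-jz9n7 -- sh

# 3. Clean up once done
$ kubectl delete pod basic-rc-jz9n7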

4 – Benefits of Kubernetes Replication Controller

Kubernetes Replication Controller is simple but quite powerful. It provides several tangible benefits:

  • It makes sure the desired number of pod replicas is always running. This makes it extremely easy to ensure the availability of your application at all times.
  • When a cluster node fails, the controller creates replacement replicas for all of its pods that crashed along with the failed node. This is a huge pain point in traditional deployment approaches, where the ops team has to manually shift workloads to new machines in case of failure.
  • The replication controller enables easy horizontal scaling of pods – both manually and automatically.

Conclusion

Kubernetes replication controllers are extremely useful while being simple enough to understand. Once you get a grasp of their main role, you can use them to control the scalability of your applications efficiently. The next step is to use a Kubernetes service to control traffic to multiple pods.

Of course, the journey does not stop here. As Kubernetes has evolved, newer approaches to replication have emerged. ReplicaSets are meant to replace replication controllers, and Deployments build on top of them to handle replication and a lot more.
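
As a small preview, here is a sketch of how our earlier controller would look as a ReplicaSet. Note the apps/v1 API group and the more expressive matchLabels selector:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: basic-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-service
  template:
    metadata:
      labels:
        app: hello-service
    spec:
      containers:
      - name: hello-service
        image: progressivecoder/nodejs-demo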

More on those in later posts.


