
How to restart Kubernetes Pods with kubectl

A query that keeps popping up, so we decided to write a simple reckoner on how to restart a Kubernetes Pod with kubectl.

Aug 24th, ‘22 | 6 min read


In Kubernetes, Pods are the smallest deployable units of computing that you can create and manage. A Pod represents a single instance of a running process in a cluster and contains one or more containers that share the same storage and network resources. Like containers, Pods are relatively ephemeral (temporary) rather than durable components, and each has a defined lifecycle.

The lifecycle of a Pod starts in the Pending phase, moves to Running if at least one of its primary containers starts successfully, and then ends in either the Succeeded or Failed phase, depending on whether any container in the Pod terminated in failure. While a Pod goes through this lifecycle, there are several scenarios where you might want to restart it to bring your cluster back to its desired state.
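For example, you can check which phase a Pod is currently in with a JSONPath query; substitute your own Pod’s name:

$ kubectl get pod <pod_name> -o jsonpath='{.status.phase}'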

This article will discuss 5 scenarios where you might want to restart a Kubernetes Pod and walk you through methods to restart Pods with kubectl.

5 scenarios where you might want to restart a Pod

There are several reasons why you might want to restart a Pod. The following are 5 of them:

  1. Container Out of Memory (OOM) error: an Out of Memory error is one of the most common reasons for restarting a Pod. It happens when a container’s resource limits are misconfigured or the application behaves unpredictably. For example, suppose you set a memory limit of 600Mi for a container and it tries to allocate more memory than that. In that case, Kubernetes will kill the container with an Out of Memory (OOMKilled) error. When this happens, you must restart your Pod after rightsizing its resource limits (see the example snippet after this list).
  2. The Pod is stuck in a terminating state: a Pod is stuck in a terminating state when all of its containers have terminated, but the Pod itself still appears to be running. This usually happens when the Pod is on a Node that’s unexpectedly taken out of service, and the control plane cannot clean up the Pods on that Node.
  3. To easily upgrade a Pod with a newly pushed container image, if you previously set the PodSpec imagePullPolicy to Always.
  4. To update configurations and secrets.
  5. To clear a corrupted internal state in the application running in the Pod.
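For the OOM scenario in item 1, here is roughly what a 600Mi memory limit looks like in a container spec; the container name and image are purely illustrative:

containers:
  - name: app            # illustrative container name
    image: my-app:1.0    # illustrative image
    resources:
      requests:
        memory: "300Mi"
      limits:
        memory: "600Mi"  # allocating beyond this gets the container OOMKilled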

Now you’ve seen some scenarios where you might want to restart a Pod. Next, you will learn how to restart Pods with kubectl.

Restarting Kubernetes pods with kubectl

kubectl, by design, doesn’t have a direct command for restarting Pods the way Docker does for containers (docker restart <container_id>). Because of this, to restart Pods with kubectl, you have to use one of the following methods:

  • Restarting Kubernetes Pods by changing the number of replicas with the kubectl scale command
  • Downtimeless restarts with the kubectl rollout restart command
  • Automatic restarts by updating the Pod’s environment variable
  • Restarting Pods by deleting them

Prerequisites

Before you learn how to use each of the above methods, ensure you have the following prerequisites:

  • A Kubernetes cluster. The demo in this article was done using minikube, a single-node Kubernetes cluster.
  • The kubectl command-line tool configured to communicate with the cluster.


For demo purposes, in any desired directory, create an nginx-deployment.yaml file with replicas set to 2, using a YAML configuration like the one below (the label and image choices are illustrative; any standard Nginx Deployment named nginx-deployment with two replicas will work):
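# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80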

In your terminal, change to the directory where you saved the deployment file, and run:

$ kubectl apply -f nginx-deployment.yaml

The above command will create the Nginx deployment with two Pods. To verify the number of Pods, run the $ kubectl get pods command.
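The output should look something like this (the generated suffixes in the Pod names will differ on your cluster):

NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-66b6c48dd5-5f9qp   1/1     Running   0          30s
nginx-deployment-66b6c48dd5-hqx7d   1/1     Running   0          30s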

[Screenshot: Creating and verifying an Nginx deployment with kubectl]

Now you have the Pods of the Nginx deployment running. Next, you will use each of the methods outlined earlier to restart the Pods.

Restarting Kubernetes Pods by changing the number of replicas

In this method of restarting Kubernetes Pods, you scale the number of the deployment replicas down to zero, which stops and terminates all the Pods. Then you scale them back up to the desired state to initialize new pods.
Note: when you set the number of replicas to zero, the Pods stop running, so there will be some application downtime until you scale back up.
To scale down the Nginx deployment replicas you created, run the following kubectl scale command:

$ kubectl scale deployment nginx-deployment --replicas=0

The above command will show output indicating that the Pods have been scaled.
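The confirmation message should look like this:

deployment.apps/nginx-deployment scaled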

[Screenshot: Scaling Pods down]

To confirm that the Pods were stopped and terminated, run $ kubectl get pods, and you should get a “No resources found in default namespace” message.

[Screenshot: Showing Pods]

To scale the replicas back up, run the same kubectl scale command, but this time with --replicas=2.
$ kubectl scale deployment nginx-deployment --replicas=2

After running the above command, to verify the number of pods running, run:
$ kubectl get pods

And you should see each Pod back up and running after restarting, as in the image below.

[Screenshot: Scaling Pods up]

Downtimeless restarts with kubectl rollout restart

In the previous method, you scaled down the number of replicas to zero to restart the Pods; doing so caused an outage and downtime of the application. To restart without any outage and downtime, use the kubectl rollout restart command, which restarts the Pods one by one without impacting the deployment.
To use rollout restart on your Nginx deployment, run:

$ kubectl rollout restart deployment nginx-deployment

Now to view the Pods restarting, run:
$ kubectl get pods

Notice in the image below that Kubernetes creates a new Pod first and terminates the old one only once the new Pod reaches Running status. Because of this approach, there is no downtime with this restart method.

[Screenshot: Using kubectl rollout restart]
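If you’d rather follow the restart from the terminal than poll kubectl get pods, the rollout status subcommand waits until the rollout finishes:

$ kubectl rollout status deployment nginx-deployment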

Automatic restarts by updating the Pod’s environment variable

So far, you’ve learned two ways of restarting Pods in Kubernetes: one by changing the number of replicas and the other with a rollout restart. Both methods work, but in each case you explicitly restarted the Pods.

In this method, you update an environment variable on the Pods. Because this changes the Deployment’s Pod template, the change automatically triggers a rolling restart of the Pods.

To update the environment variables of the Pods in your Nginx deployment, run:
$ kubectl set env deployment nginx-deployment DATE=$()

After running the above command, which adds a DATE environment variable to the Pods with an empty value (=$()), run $ kubectl get pods and watch the Pods restart, similar to the rollout restart method.
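Note that a restart is only triggered when the variable’s value actually changes; re-running the same command with the same empty value does nothing. If you want to reuse this trick for repeated restarts, one option is to set the variable to the current timestamp so each run produces a new value:

$ kubectl set env deployment nginx-deployment DATE="$(date)"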

[Screenshot: Adding an environment variable to Pods]

You can verify that each Pod’s DATE environment variable is empty with the kubectl describe command.

$ kubectl describe pod <pod_name>

After running the above command, you will see that the DATE variable is empty, as in the image below.

[Screenshot: Verifying environment variable addition]

Restarting Pods by deleting them

Because the Kubernetes API is declarative, deleting a Pod that is part of a ReplicaSet or Deployment automatically triggers a replacement: the ReplicaSet notices that the number of running Pods has dropped below the desired replica count and creates a new Pod to restore it.

To delete a Pod, use the following command:
$ kubectl delete pod <pod_name>

Though this method works quickly, it is not recommended unless you have a failed or misbehaving Pod or set of Pods. For regular restarts, such as after updating configurations, it is better to use the kubectl scale or kubectl rollout commands, which are designed for that use case.
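If you do want to restart every Pod in the Deployment this way in one go, you can also delete by label. Assuming the app: nginx label from the manifest earlier in this article:

$ kubectl delete pods -l app=nginx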

To delete all failed Pods for this restart technique, use this command:
$ kubectl delete pods --field-selector=status.phase=Failed

Cleaning up

Clean up the entire setup by deleting the deployment with the command below:
$ kubectl delete deployment nginx-deployment

Conclusion

This article discussed 5 scenarios where you might want to restart Kubernetes Pods and walked you through 4 methods of doing so with kubectl. There is much more to learn about kubectl; check out the kubectl commands reference.

Want to know more about Last9? Check out last9.io; we’re building reliability tools to make running systems at scale fun and embarrassingly easy. ✌️
