kubectl is really slow and sometimes times out with i/o timeout

0

Everything with kubectl takes forever. That was not the case before. What could be the problem?

Getting pods with verbose logging; as you can see, it takes more than 20 seconds:

kubectl get pods --v=8

I0626 11:05:30.744121   86247 loader.go:374] Config loaded from file:  /Users/[REMOVED]/.kube/config
I0626 11:05:30.747674   86247 round_trippers.go:463] GET https://[REMOVED].gr7.eu-central-1.eks.amazonaws.com/api/v1/namespaces/default/pods?limit=500
I0626 11:05:30.747690   86247 round_trippers.go:469] Request Headers:
I0626 11:05:30.747698   86247 round_trippers.go:473]     Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json
I0626 11:05:30.747702   86247 round_trippers.go:473]     User-Agent: kubectl/v1.25.4 (darwin/arm64) kubernetes/872a965
I0626 11:05:51.346710   86247 round_trippers.go:574] Response Status: 200 OK in 20599 milliseconds
I0626 11:05:51.346891   86247 round_trippers.go:577] Response Headers:
I0626 11:05:51.346917   86247 round_trippers.go:580]     Audit-Id: 6cad6d86-1cbc-4df7-8979-ac95dccbb6c3
I0626 11:05:51.346941   86247 round_trippers.go:580]     Cache-Control: no-cache, private
I0626 11:05:51.346959   86247 round_trippers.go:580]     Content-Type: application/json
I0626 11:05:51.346976   86247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f65d7b50-0090-499b-8be0-6fcf34055c25
I0626 11:05:51.346995   86247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ddeeb77c-23bd-4fb0-bb3e-d67e223ba442
I0626 11:05:51.347013   86247 round_trippers.go:580]     Date: Wed, 26 Jun 2024 09:05:51 GMT
I0626 11:05:51.359686   86247 request.go:1154] Response Body: [REMOVED]
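
To separate kubectl client overhead from server and network latency, the same request can also be timed as a raw API call (a minimal check; the resource path simply mirrors the pods request in the log above):

time kubectl get --raw "/api/v1/namespaces/default/pods?limit=500" > /dev/null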

Checking the memory and CPU pressure, but it looks fine:

kubectl describe nodes:

Node 1:
  Resource           Requests       Limits
  --------           --------       ------
  cpu                880m (22%)     0 (0%)
  memory             10290Mi (33%)  10598Mi (34%)
  ephemeral-storage  0 (0%)         0 (0%)
  hugepages-1Gi      0 (0%)         0 (0%)
  hugepages-2Mi      0 (0%)         0 (0%)
Node 2:
  Resource           Requests      Limits
  --------           --------      ------
  cpu                1080m (55%)   0 (0%)
  memory             2029Mi (28%)  4897Mi (69%)
  ephemeral-storage  0 (0%)        0 (0%)
  hugepages-1Gi      0 (0%)        0 (0%)
  hugepages-2Mi      0 (0%)        0 (0%)
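
The pressure flags themselves appear under Conditions in the same kubectl describe nodes output; a quick way to pull just those lines (plain grep, nothing cluster-specific):

kubectl describe nodes | grep -E "MemoryPressure|DiskPressure|PIDPressure"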

Checking with kubectl top for each node:

kubectl top node []

Node 1:
NAME                                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
[REMOVED]   144m         7%     3876Mi          54%   
Node 2:
NAME                                          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
[REMOVED]   64m          1%     5950Mi          19%
asked a year ago · 1.1K views
5 Answers
2
Accepted Answer

Hello,

Slow kubectl? Here are some steps to resolve the issue (a quick-check sketch follows the list):

  • API Server Issues: Check Kubernetes cluster logs for API server errors https://kubernetes.io/docs/concepts/cluster-administration/logging/.
  • Network Woes: Verify network connectivity with ping or traceroute.
  • High Cluster Load: Monitor resource usage with kubectl top pods & kubectl top nodes.
  • Outdated kubectl: Upgrade to the latest stable version using your package manager.
  • Large Resource Sets: Filter kubectl get commands with specific selectors (e.g., kubectl get pods -l app=myapp).
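
A consolidated quick-check sketch for the points above (plain kubectl; app=myapp is just an example label):

# Client/server skew beyond one minor version is outside the supported policy
kubectl version
# Overall node and pod resource usage
kubectl top nodes
kubectl top pods -A
# Narrow large listings with a selector instead of fetching everything
kubectl get pods -l app=myapp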

For detailed troubleshooting steps, refer to the Kubernetes documentation: https://kubernetes.io/docs/concepts/overview/kubernetes-api/

answered a year ago
1

Hi

Check these steps to resolve the issue:

API Server Logs:

Check the API server logs on the master node (often located at /var/log/kubernetes/kube-apiserver.log). Look for errors or warnings that might indicate the server is overloaded.
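
If you cannot reach the control plane host directly (for example on a managed cluster such as EKS), the API server's own health endpoints are still reachable through kubectl (these are standard endpoints, not specific to any distribution):

kubectl get --raw '/readyz?verbose'
kubectl get --raw '/livez?verbose'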

Network Connectivity:

Use ping from your kubectl client machine to reach the API server's IP address. High latency or packet loss can signify network issues. Verify there are no firewalls blocking traffic between the client and the API server.
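
Note that ping relies on ICMP, which many managed API endpoints do not answer, so a failed ping is not conclusive; an HTTPS probe gives a better latency picture (replace the placeholder with the cluster endpoint from your kubeconfig):

curl -sk -o /dev/null -w "dns: %{time_namelookup}s  connect: %{time_connect}s  tls: %{time_appconnect}s  total: %{time_total}s\n" https://<cluster-endpoint>/healthz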

API Server Resource Usage (Optional):

Use tools like kubectl top pods -n kube-system to check resource usage of pods in the kube-system namespace (where the API server typically resides). If the API server pod shows high CPU or memory consumption, it could indicate a problem.
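
If the listing is long, sorting by usage makes heavy consumers easier to spot (the supported sort keys are cpu and memory):

kubectl top pods -n kube-system --sort-by=memory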

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/

https://platform9.com/blog/the-six-most-popular-kubernetes-networking-troubleshooting-issues/

https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/kubectl_top_pod/

answered a year ago
1

Hello,

If kubectl is slow or times out, consider these steps:

Check Network Latency:

Verify your network connection to the Kubernetes API server using tools like ping and traceroute.

Monitor Cluster Load:

Ensure your cluster is not overloaded. Use kubectl top nodes and kubectl top pods to check resource usage.

Inspect API Server Logs:

Check the API server logs for errors or performance issues.

Verify DNS Resolution:

Ensure DNS resolution is functioning correctly by checking the status of your kube-dns or CoreDNS pods.
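
A quick way to check that, assuming the upstream default k8s-app=kube-dns label and using a throwaway busybox pod for an in-cluster lookup (image and pod name are just examples):

kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default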

For more detailed troubleshooting, refer to:

Monitoring the Kubernetes Cluster: https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-usage-monitoring/

answered a year ago
1

It turned out my local kubectl version was 1.25, while the cluster was running 1.30 (or 1.28 as of this morning). With an up-to-date version of kubectl it's back to normal speed. Thanks so much for your answers; I would never have guessed that was the problem.

answered a year ago
0

The API server logs look fine to me, except that the CRDs are updated every 5 minutes. Every second there are about 4 events of this type: {"kind":"Event","apiVersion":"audit.k8s.io/v1", ...}

Network connectivity is odd: my personal connection is really fast, but ping [REMOVED].gr7.eu-central-1.eks.amazonaws.com immediately results in request timeouts and 100% packet loss.

kube-system looks fine to me as well:

kubectl top pods -n kube-system
NAME                       CPU(cores)   MEMORY(bytes)
aws-node-mx7fv             3m           56Mi
aws-node-wgf4h             3m           55Mi
coredns-695677774b-9ksgx   2m           14Mi
coredns-695677774b-j9ss2   2m           18Mi
kube-proxy-4whv2           1m           13Mi
kube-proxy-mnbgh           1m           14Mi

answered a year ago
