I had three worker nodes in my EKS cluster; one of them had been started by the AWS autoscaler. However, on that node the kubelet was started with the parameter
```
...
"clusterDNS": [
    "10.100.0.10"
],
...
```
when it should have been started with the parameter
```
...
"clusterDNS": [
    "172.20.0.10"
],
...
```
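As a quick cross-check (my addition, not part of the original answer): the value the kubelet should use is the ClusterIP of the kube-dns service in the cluster, which you can read directly:

```sh
# Print the ClusterIP of the kube-dns service; the kubelet's
# "clusterDNS" entry on every node should match this address.
kubectl get svc kube-dns -n kube-system \
  -o jsonpath='{.spec.clusterIP}'
```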
The worker node with the incorrect parameter was also the node with the fewest Pods scheduled on it, so every new Pod ended up being scheduled there.
To resolve the issue, I shut down the node with the incorrect parameter. The AWS autoscaler then started a new worker node, which came up with the correct parameter, and the problem I observed was gone.
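If you want to take the node out of service more gracefully than a plain shutdown, a common approach (my addition, not part of the original steps) is to cordon and drain it first. The node name below is a placeholder:

```sh
# Stop new Pods from being scheduled on the misconfigured node
kubectl cordon ip-10-0-1-23.eu-west-1.compute.internal

# Evict the existing Pods (DaemonSet Pods are skipped;
# emptyDir data on the node is discarded)
kubectl drain ip-10-0-1-23.eu-west-1.compute.internal \
  --ignore-daemonsets --delete-emptydir-data
```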
To check the kubelet parameters of a running node, I used this method: https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/#generate-the-configuration-file
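In essence, that method reads the node's live kubelet configuration through the API server's node proxy (the `configz` endpoint). A minimal sketch, assuming the node name is a placeholder and `jq` is installed:

```sh
# Fetch the running kubelet configuration of a node via the
# API server proxy and extract its clusterDNS setting
NODE_NAME=ip-10-0-1-23.eu-west-1.compute.internal
kubectl get --raw "/api/v1/nodes/${NODE_NAME}/proxy/configz" \
  | jq '.kubeletconfig.clusterDNS'
```

Running this against each node makes it easy to spot the one whose `clusterDNS` differs from the kube-dns service IP.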