I had three worker nodes in my EKS cluster; one of them had been started by the AWS auto scaler. However, on that node the kubelet was started with the parameter
...
"clusterDNS": [
    "10.100.0.10"
],
...
when it should have been started with the parameter
...
"clusterDNS": [
    "172.20.0.10"
],
...
The worker node with the wrong parameter was also the node with the fewest pods scheduled on it, so every new Pod ended up being scheduled there.
To resolve the issue, I shut down the node with the incorrect parameter. The AWS auto scaler then started a new worker node, which came up with the correct parameter, and the problem I had observed was gone.
To check the kubelet parameters, I used this method: https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/#generate-the-configuration-file
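As a minimal sketch of that check: with `kubectl proxy` running, the kubelet's live configuration is available at the node's `configz` endpoint, and the `clusterDNS` value can be compared against the one the cluster actually uses. The URL, node name placeholder, and the "expected" DNS IP below are assumptions for illustration, not part of the original answer:

```python
import json
import urllib.request

def cluster_dns_from_configz(configz_json: str) -> list:
    """Extract the clusterDNS list from a kubelet /configz response."""
    return json.loads(configz_json)["kubeletconfig"]["clusterDNS"]

# With `kubectl proxy --port=8001` running, a node's kubelet config
# (node name is a placeholder) could be fetched like this:
#   url = "http://localhost:8001/api/v1/nodes/<node-name>/proxy/configz"
#   body = urllib.request.urlopen(url).read().decode()
#   print(cluster_dns_from_configz(body))

# Offline example using the shape of a configz response from the
# misconfigured node described above:
sample = '{"kubeletconfig": {"clusterDNS": ["10.100.0.10"]}}'
expected = ["172.20.0.10"]  # assumed kube-dns ClusterIP for this cluster
actual = cluster_dns_from_configz(sample)
if actual != expected:
    print(f"mismatch: kubelet reports {actual}, expected {expected}")
```

Running this against each node would have flagged the one node whose kubelet pointed at the wrong DNS service IP.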