This error indicates a problem creating new nodes in your EKS cluster after the upgrade to version 1.30. NodeCreationFailure means the EC2 instances were launched but were unable to register with (join) the cluster. There are several things you can check to troubleshoot this issue:
- IAM Role Permissions: Ensure that the IAM role associated with your node group has the necessary permissions. The role should have policies that allow it to interact with EKS and other required AWS services.
- VPC and Subnet Configuration: Verify that your VPC and subnet settings are correct and that the nodes have proper network access. The nodes need to be able to communicate with the EKS control plane.
- Security Group Settings: Check if the security group associated with your nodes allows the required inbound and outbound traffic for EKS.
- Node Instance Type: Confirm that the instance type you're using for your nodes is compatible with EKS 1.30 and has sufficient resources.
- AMI Version: Make sure you're using an up-to-date Amazon EKS-optimized AMI that's compatible with EKS 1.30.
- CloudWatch Logs: If you have enabled CloudWatch logging for your EKS cluster, check the logs for any specific error messages that might provide more details about the failure.
- Outbound Internet Access: Ensure that your nodes have outbound internet access or access to the required VPC endpoints if you're running in a private subnet.
- Kubernetes Configuration: Verify that the Kubernetes configuration (kubeconfig) is correctly set up and that you can access the cluster using kubectl.
- AWS CLI and Terraform Version: Make sure you're using the latest versions of the AWS CLI and Terraform, as older versions might have compatibility issues with newer EKS versions.
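As a first pass on the IAM and health checks above, EKS records the specific reason for a node group failure, and the node role's attached policies can be listed directly. A rough sketch with the AWS CLI, where `my-cluster`, `my-nodegroup`, and `my-node-role` are placeholders for your own names:

```shell
# Show the specific health issues EKS recorded for the failing node group
# (often names the exact cause: IAM, subnets, AMI, etc.)
aws eks describe-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --query 'nodegroup.health.issues' \
  --output table

# List the managed policies attached to the node IAM role; a typical managed
# node group needs AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly,
# and AmazonEKS_CNI_Policy (the last one unless the CNI uses IRSA/Pod Identity)
aws iam list-attached-role-policies \
  --role-name my-node-role \
  --output table
```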
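For the AMI and kubeconfig checks, AWS publishes the recommended EKS-optimized AMI for each Kubernetes version as a public SSM parameter, and cluster access can be confirmed with kubectl. Again a sketch with placeholder names and region:

```shell
# Look up the recommended EKS-optimized Amazon Linux 2 AMI for Kubernetes 1.30
# via the public SSM parameter, to compare against what your nodes use
aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.30/amazon-linux-2/recommended/image_id \
  --query 'Parameter.Value' \
  --output text

# Refresh the kubeconfig for the cluster and confirm API access;
# only nodes that joined successfully will appear in the output
aws eks update-kubeconfig --name my-cluster --region us-east-1
kubectl get nodes -o wide
```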
If you've checked these areas and are still encountering issues, review your Terraform configuration files, particularly the eks.tf file mentioned in the error message, to ensure all settings are correct for EKS 1.30. You can also try creating a new node group manually, outside of Terraform, to see whether the failure is specific to your Terraform configuration.
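Creating a throwaway node group outside of Terraform can be done from the CLI as well as the console. A minimal sketch, assuming placeholder subnet IDs, account ID, and role name that you would replace with your own:

```shell
# Create a small test node group outside Terraform to isolate the issue;
# the role ARN and subnet IDs below are placeholders
aws eks create-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name test-nodegroup \
  --node-role arn:aws:iam::111122223333:role/my-node-role \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --instance-types t3.medium \
  --scaling-config minSize=1,maxSize=1,desiredSize=1

# Poll until the status becomes ACTIVE, or CREATE_FAILED with health issues
aws eks describe-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name test-nodegroup \
  --query 'nodegroup.[status,health.issues]'

# Clean up the test node group afterwards
aws eks delete-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name test-nodegroup
```

If the manual node group joins successfully, the problem is likely in the Terraform-managed settings (launch template, AMI, or IAM wiring) rather than in the cluster itself.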
Remember to always test changes in a non-production environment before applying them to your production cluster to minimize downtime of your applications.
Sources
AWS EKS configuration issues | AWS re:Post
Troubleshoot problems with Amazon EKS clusters and nodes - Amazon EKS