1 Answer
Hello.
- Please check that your NAT gateway has a public IP and is configured in a public subnet.
- Ensure that DNS resolution and DNS hostnames are enabled for your VPC.
- Check if your security groups are allowing traffic between the EKS control plane and worker nodes.
Regards, Andrii
Thanks, Andrii. All three are checked. However, the problem still persists.
- Yes
```
$ aws ec2 describe-nat-gateways --nat-gateway-ids <my-nat-gateway-id> \
    --query 'NatGateways[].NatGatewayAddresses[].PublicIp'
[
    "<correct-public-ip>"
]
```
- Yes
```
$ aws ec2 describe-vpc-attribute --vpc-id <my-vpc-id> --attribute enableDnsSupport
{
    "VpcId": "<my-vpc-id>",
    "EnableDnsSupport": {
        "Value": true
    }
}
$ aws ec2 describe-vpc-attribute --vpc-id <my-vpc-id> --attribute enableDnsHostnames
{
    "VpcId": "<my-vpc-id>",
    "EnableDnsHostnames": {
        "Value": true
    }
}
```
- Yes, the security group of the Auto Scaling group associated with the node group allows all traffic (all protocols, all port ranges, all types, 0.0.0.0/0) both inbound and outbound.
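For reference, that security group's rules can also be listed directly with the AWS CLI, in the same style as the other checks above (the group ID below is a placeholder, not taken from this thread):

```shell
# <my-security-group-id> is a hypothetical placeholder; substitute the ID of
# the security group attached to the node group's Auto Scaling group.
# Prints both ingress and egress rules so the "allow all" claim can be
# double-checked at a glance.
aws ec2 describe-security-groups \
  --group-ids <my-security-group-id> \
  --query 'SecurityGroups[].{Inbound:IpPermissions,Outbound:IpPermissionsEgress}'
```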
Did you manage to figure this out? I see similar issues when launching a node group in a private subnet.