I think I found the issue in the worker node security groups.
The AWS EKS kube-dns endpoints and pods were on the private subnet.
I have two CloudFormation stacks: one for autoscaling nodes in the private subnets and one for autoscaling nodes in the public subnets.
They didn't have a common security group, so the pods running on the public nodes weren't able to reach the kube-dns pods running on the private nodes.
Once I updated the worker node security groups to allow cross-communication, DNS started working.
Pls post if anyone sees any unintended consequences. Thx!
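The fix described above can be sketched as a CloudFormation fragment. This is only an illustration of the idea, not the poster's actual template: `PrivateNodeSecurityGroup` and `PublicNodeSecurityGroup` are hypothetical logical IDs standing in for the security groups created by the two stacks (in a real two-stack setup you would reference them across stacks, e.g. via exported outputs).

```yaml
# Hypothetical fragment: open all traffic between the two worker-node
# security groups, so pods on public-subnet nodes can reach the kube-dns
# pods on private-subnet nodes, and vice versa.
PrivateFromPublicIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    Description: Allow public-subnet worker nodes to reach private-subnet worker nodes
    GroupId: !Ref PrivateNodeSecurityGroup        # hypothetical logical ID
    SourceSecurityGroupId: !Ref PublicNodeSecurityGroup
    IpProtocol: "-1"                              # all protocols, all ports

PublicFromPrivateIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    Description: Allow private-subnet worker nodes to reach public-subnet worker nodes
    GroupId: !Ref PublicNodeSecurityGroup
    SourceSecurityGroupId: !Ref PrivateNodeSecurityGroup
    IpProtocol: "-1"
```

Using separate `AWS::EC2::SecurityGroupIngress` resources (rather than inline rules) avoids the circular dependency you would otherwise get between two security groups that each reference the other.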
ahec wrote:
I think I found the issue in the worker node security groups. The AWS EKS kube-dns endpoints and pods were on the private subnet.
I have two CloudFormation stacks: one for autoscaling nodes in the private subnets and one for autoscaling nodes in the public subnets.
They didn't have a common security group, so the pods running on the public nodes weren't able to reach the kube-dns pods running on the private nodes.
Once I updated the worker node security groups to allow cross-communication, DNS started working.
Pls post if anyone sees any unintended consequences. Thx!
Thanks for this. I was having DNS issues with S3 endpoints and I have a similar setup to yours. I have two ASGs, one in each AZ, per the cluster autoscaler documentation. The CF templates I used were the AWS ones, so they did not automatically add the cross-AZ security group rules (the default template adds a self-referencing rule to the SG it creates for worker nodes). Adding a rule allowing all traffic for cross-AZ node communication fixed our DNS issues immediately.
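For anyone wanting to apply the same fix by hand, the cross-node rules can also be added with the AWS CLI. A minimal sketch, assuming `SG_A` and `SG_B` are placeholders for the two node groups' security group IDs (substitute your own):

```shell
# Placeholders: security group IDs of the two worker-node groups.
SG_A="sg-aaaa"   # nodes in the first ASG/AZ
SG_B="sg-bbbb"   # nodes in the second ASG/AZ

# Allow all traffic from SG_B into SG_A, and the reverse,
# so nodes in either group can reach kube-dns pods in the other.
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_A" --protocol -1 --source-group "$SG_B"
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_B" --protocol -1 --source-group "$SG_A"
```

`--protocol -1` opens all protocols and ports between the groups; if you want something tighter, DNS itself only needs TCP and UDP on port 53 to the nodes hosting the kube-dns pods.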
Edited by: rrasco on Aug 27, 2019 1:55 PM