1 Answer
Hi,
Some things you can check are:
- Check that the pod is in the RUNNING state and not PENDING
- Verify that the subnets configured for the ECS service are not private subnets. Even with a public IP enabled, you cannot access a pod in a private subnet.
--Syd
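The two checks above can be sketched in a few lines of Python. This is only an illustration, not part of the original answer: the response shape loosely follows what ECS DescribeTasks returns, but the field names, sample values, and the `check_task` helper are assumptions for the sketch (in practice you would fetch the real task description with the AWS CLI or boto3).

```python
# Sketch of the two checks: task state and subnet placement.
# The data shape loosely mirrors an ECS DescribeTasks response;
# field names and sample values here are illustrative assumptions.

def check_task(task, public_subnet_ids):
    """Return a list of problems found for one task description."""
    problems = []
    if task["lastStatus"] != "RUNNING":
        problems.append(f"task is {task['lastStatus']}, not RUNNING")
    # Each network attachment lists details such as its subnet ID.
    for att in task.get("attachments", []):
        for d in att.get("details", []):
            if d["name"] == "subnetId" and d["value"] not in public_subnet_ids:
                problems.append(f"task ENI is in private subnet {d['value']}")
    return problems

# Hypothetical sample: a task stuck in PENDING inside a private subnet.
sample_task = {
    "lastStatus": "PENDING",
    "attachments": [
        {"type": "ElasticNetworkInterface",
         "details": [{"name": "subnetId", "value": "subnet-private1"}]},
    ],
}

print(check_task(sample_task, public_subnet_ids={"subnet-public1"}))
```

For this sample, both checks fail, which matches the failure modes the answer describes.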
Thanks for your comment. About no. 1: since I am new to ECS, I am not sure what the ECS counterpart of a k8s pod is, but the ECS service, task, and container all seem to be running properly. About no. 2: I just checked, and all the subnets used for ECS are default subnets, which are public. I also ensured that public IP is enabled. I still cannot get it working.
Sorry for the terminology mix-up. You can equate an ECS task to a pod :) You stated that the tasks are running and in public subnets, so I cannot think of any other reason that would cause the issue you mentioned. I was able to follow the documentation and get it running in ap-northeast-1. By any chance, do you have a firewall on your side that might block outbound traffic? You can check with ping, traceroute, etc. after adding the equivalent rule to your security groups, or by allowing all traffic (for testing only). I assume you have also tried deleting everything and starting over, to rule out a missed step.
I finally figured out that the culprit was the security group (sg). I had misunderstood the inbound rule of the default sg. This doc (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/default-custom-security-groups.html) says the default sg's inbound rule only allows traffic from network interfaces and EC2 instances that are associated with the same default sg, so access from outside the VPC is not allowed. Thanks for your help, Syd!
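To make that behavior concrete, here is a minimal sketch (not AWS's actual rule evaluation, and every name and structure in it is an illustrative assumption): the default security group's one inbound rule references the group itself, so only sources that are members of that same group get in, and a client outside the VPC has no group membership at all.

```python
# Sketch of security-group inbound evaluation. A rule either
# references a source security group or a source CIDR; the default
# SG's single rule references itself. All names are illustrative.

def inbound_allowed(source_groups, rules):
    """Is a source (identified by its attached SGs) allowed in?"""
    for rule in rules:
        # Rule whose source is a security group: the caller must
        # be a member of that group (the default SG's only rule).
        if rule.get("source_sg") in source_groups:
            return True
        # Rule open to the whole internet.
        if rule.get("source_cidr") == "0.0.0.0/0":
            return True
    return False

# Default SG: a single self-referencing inbound rule.
default_sg_rules = [{"source_sg": "sg-default"}]

# An EC2 instance in the same default SG is allowed in...
print(inbound_allowed({"sg-default"}, default_sg_rules))  # True
# ...but a client outside the VPC (no SG membership) is not.
print(inbound_allowed(set(), default_sg_rules))           # False
# Adding a rule open to 0.0.0.0/0 is what resolves the issue above.
open_rules = default_sg_rules + [{"source_cidr": "0.0.0.0/0"}]
print(inbound_allowed(set(), open_rules))                 # True
```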
Thanks for the info!
Thanks for confirming this tutorial is working. It was my mistake.