Hi, you may want to follow the guidance of this KC article to fix your problem: https://repost.aws/knowledge-center/elb-fix-failing-health-checks-alb
Best,
Didier
Hello,
- Increase the health check grace period or interval if the application takes some time to fully start up on the container. This will prevent false failures during startup (see the sketch after the links below).
- Check for any network ACLs that may be blocking port 8000 traffic.
For further info, check these articles: https://repost.aws/knowledge-center/fargate-alb-health-checks and https://repost.aws/knowledge-center/fargate-nlb-health-checks
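If you prefer to adjust the grace period from code rather than the console, a minimal sketch with boto3 could look like this; the cluster and service names are placeholders for your own setup:

```python
import boto3

ecs = boto3.client("ecs")

# Allow the container extra time to start before load balancer health
# check failures count against the task. 120 seconds is just an example;
# pick a value longer than your application's startup time.
ecs.update_service(
    cluster="my-cluster",    # hypothetical cluster name
    service="my-service",    # hypothetical service name
    healthCheckGracePeriodSeconds=120,
)
```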
Thanks
Hi, thanks for your reply. Actually my network ACL does not block port 8000, and I increased the health check grace period, but it's not working. Somehow the port in the ECS service is not opening, because when I try to reach the private IP address on that port, I get "connection refused" or "no route to host".
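A quick way to tell the two errors apart is a plain TCP connection test from another host in the same VPC; here is a minimal sketch (the private IP below is a placeholder):

```python
import socket

target = ("10.0.0.10", 8000)  # placeholder: the task's private IP and port

try:
    # Attempt a plain TCP connection, the same thing the NLB health check does.
    with socket.create_connection(target, timeout=3):
        print("port is open")
except ConnectionRefusedError:
    # The host answered, but nothing is listening on that port.
    print("connection refused")
except OSError as exc:
    # e.g. "no route to host": traffic is blocked or misrouted before it
    # reaches the task (security group, NACL, routing).
    print(f"unreachable: {exc}")
```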
It's possible the application is only listening on localhost (127.0.0.1) by default instead of being bound to 0.0.0.0. Binding to localhost is common in development environments for testing locally, but production deployments should listen on all available network interfaces. I would kindly suggest double-checking the host configuration and ensuring the application is bound to 0.0.0.0 to allow external access.
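For example, a minimal Python server that the NLB health checker can reach would bind like this (a sketch, assuming the app serves plain HTTP on port 8000):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond 200 to any GET so the target group health check passes.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

# Binding to "0.0.0.0" accepts connections on every interface in the
# task's network namespace; binding to "127.0.0.1" would only accept
# connections from inside the container itself.
HTTPServer(("0.0.0.0", 8000), HealthHandler).serve_forever()
```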
This actually is not my problem, because I deploy the same image behind a public NLB with AWS Copilot and it works.
Hi Didier, thanks for your comment. However, my load balancer is a Network Load Balancer, and I also followed those steps, but I still haven't figured out why. Actually, I forgot to mention that I can ping the host from an EC2 instance, but I cannot access the port. It seems like the port mapping is wrong.
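If the port mapping is the suspect, it may help to compare the task definition against a minimal one. This boto3 sketch (all names and the image URI are placeholders, and other settings such as the execution role are omitted for brevity) registers a Fargate task whose container port matches both the port the app listens on and the port the NLB target group forwards to:

```python
import boto3

ecs = boto3.client("ecs")

# With awsvpc networking the container port is exposed directly on the
# task ENI, so containerPort here, the port the application binds to,
# and the NLB target group port should all be 8000.
ecs.register_task_definition(
    family="my-app",                      # hypothetical task family
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    containerDefinitions=[
        {
            "name": "web",                # hypothetical container name
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",  # placeholder image
            "essential": True,
            "portMappings": [{"containerPort": 8000, "protocol": "tcp"}],
        }
    ],
)
```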