For more information: my service uses service discovery and Cloud Map with an internal domain. I never get "Connect timeout on endpoint URL" when I create an ECS service with an ALB, but the service that uses service discovery and Cloud Map sometimes gets "Connect timeout on endpoint URL".
Hi Nam, thank you for taking the time to answer. From what I understand, service discovery connects ECS services with DNS names. I am, however, trying to connect directly to the container's IP address. So unless there is something I do not understand from your answer, I would still expect, given that the instance is "running", that the dashboard SSH connection should work? Best, Alex
I think this is related to the size of my instance. I restarted it and then went into the logs to see what happened during the crash. Using `sudo journalctl -b -1 --no-pager` I got into the log files, where I found `kernel: systemd invoked oom-killer`, which is related to memory. I then started my website server and used `free -m` to find out that only about 10 MB was free, which is close to crashing. I then added a static IP (did not know this was possible) and made a snapshot, which I used to create a bigger instance (2 GB of memory instead of a measly 512 MB) for $12 instead of $5. Then I detached the static IP from the failing instance, ran the website server from the new instance, attached the static IP, and once it worked I deleted the low-memory instance. Now it's running again ... fingers crossed it does not crash again. I find it strange that I needed to restart and check the instance logs, but if that works I have a way to find out what went wrong.
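The checks described above can be scripted so low memory is caught before the next crash. A minimal sketch; the 100 MB warning threshold is an assumption, not from the original post:

```shell
#!/bin/sh
# Read available memory (in kB) from /proc/meminfo and convert to MB.
avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
avail_mb=$((avail_kb / 1024))
echo "Available memory: ${avail_mb} MB"

# Warn when we are close to the territory where the kernel invokes the oom-killer.
# The 100 MB threshold is an assumption; tune it for your workload.
if [ "$avail_mb" -lt 100 ]; then
  echo "WARNING: low memory - the kernel may soon invoke the oom-killer"
fi
```

Running this from cron would give an early warning instead of having to dig through `journalctl` after the fact.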
I think you answered the wrong topic. My topic relates to AppConfig.
The issue you're experiencing with the connect timeout on the endpoint URL when trying to retrieve configurations from AWS AppConfig in your ECS tasks could be due to several factors. Here are some potential reasons and solutions:
- **Network connectivity:** Ensure that your ECS tasks have proper network connectivity to reach the AWS AppConfig service. If you're using a VPC, make sure the necessary routes and security group rules are in place.
- **Request timeout:** The AWS AppConfig Agent has a default request timeout of 3000 ms (3 seconds). If your network conditions are causing delays, you might need to increase this timeout by setting the `REQUEST_TIMEOUT` environment variable on your agent container, for example to `5000` ms or `5s`, to give it more time to connect.
- **Proxy configuration:** If your ECS tasks are behind a proxy, ensure that the proxy is correctly configured for the AWS AppConfig Agent. You can use the `PROXY_URL` environment variable to specify the proxy URL.
- **VPC endpoints:** Consider using AWS PrivateLink to create a private connection between your VPC and AWS AppConfig. This can help reduce network-related issues and timeouts. You can create an interface endpoint for AWS AppConfig using the service name `com.amazonaws.[region].appconfig`.
- **Retry logic:** Implement retry logic in your `entrypoint.sh` script to handle temporary network issues or service unavailability.
- **AWS CLI version:** Ensure you're using the latest version of the AWS CLI, as older versions might have issues with certain services or timeouts.
- **Resource constraints:** If your ECS tasks are resource-constrained, they might struggle to establish connections quickly. Consider increasing the resources allocated to your tasks.
- **AppConfig service issues:** Although rare, there could be temporary issues with the AWS AppConfig service in your region. Check the AWS Service Health Dashboard for any reported problems.
By addressing these potential issues, you should be able to resolve the connect timeout problem and successfully retrieve your configurations from AWS AppConfig in your ECS tasks.
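The `REQUEST_TIMEOUT` and `PROXY_URL` variables mentioned above are set on the agent's container definition in your ECS task definition. A minimal sketch of that fragment; the image tag, timeout value, and proxy URL are assumptions:

```json
{
  "name": "appconfig-agent",
  "image": "public.ecr.aws/aws-appconfig/aws-appconfig-agent:2.x",
  "environment": [
    { "name": "REQUEST_TIMEOUT", "value": "5000" },
    { "name": "PROXY_URL", "value": "http://proxy.example.com:8080" }
  ]
}
```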
Sources
(Optional) Using environment variables to configure AWS AppConfig Agent for Amazon ECS and Amazon EKS - AWS AppConfig
Security in AWS AppConfig - AWS AppConfig
