It's quite normal for traceroutes and pings not to be permitted to many services, or for intermediate network devices to drop them. Are you seeing traceroutes getting through all the way to the exact same destination from some other origin?
You might have better luck with a TCP traceroute to the TCP port where the real API is listening, such as tcp/443, although the ICMP destination-unreachable / TTL-exceeded messages may still not be delivered or permitted through by all the intermediate devices. If you only want to test whether the port can be reached, or to measure its response time, you can use a TCP ping tool for that (there's a quick sketch of that after the traceroute output below).
For example, start CloudShell in the AWS Management Console in us-east-1.
First, install the traceroute package:
~ $ sudo yum install traceroute.x86_64
and then run a TCP traceroute to your destination:
~ $ sudo traceroute -T -p 443 cloud-test.dcap.com.
traceroute to cloud-test.dcap.com. (40.77.80.79), 30 hops max, 60 byte packets
1 100.100.38.72 (100.100.38.72) 1.721 ms 100.100.8.12 (100.100.8.12) 0.819 ms 100.100.32.36 (100.100.32.36) 0.646 ms
2 * 240.0.236.32 (240.0.236.32) 0.953 ms *
3 100.100.34.118 (100.100.34.118) 1.049 ms 100.100.34.116 (100.100.34.116) 0.674 ms 100.100.34.114 (100.100.34.114) 0.740 ms
4 151.148.11.111 (151.148.11.111) 0.897 ms 0.881 ms *
5 ae32-0.icr02.bl7.ntwk.msn.net (51.10.15.135) 1.616 ms 1.001 ms ae28-0.icr02.bl20.ntwk.msn.net (104.44.54.58) 1.331 ms
6 be-142-0.ibr03.bl20.ntwk.msn.net (104.44.21.225) 24.640 ms be-142-0.ibr03.bl7.ntwk.msn.net (104.44.21.209) 32.952 ms *
7 be-8-0.ibr02.cle30.ntwk.msn.net (104.44.28.121) 33.287 ms 33.385 ms 33.343 ms
8 be-9-0.ibr01.ch4.ntwk.msn.net (104.44.29.47) 32.955 ms 51.10.4.63 (51.10.4.63) 31.338 ms be-9-0.ibr01.ch4.ntwk.msn.net (104.44.29.47) 33.062 ms
9 104.44.31.194 (104.44.31.194) 33.715 ms * 34.509 ms
10 51.10.9.228 (51.10.9.228) 32.983 ms 51.10.9.232 (51.10.9.232) 39.283 ms 34.736 ms
11 104.44.54.246 (104.44.54.246) 39.184 ms 104.44.54.244 (104.44.54.244) 35.473 ms 104.44.54.150 (104.44.54.150) 26.775 ms
12 * * *
13 * * *
14 * * *
15 * * *
16 * * *
17 * * *
18 * * *
19 * * *
20 * * *
21 * * *
22 * * *
23 40.77.80.79 (40.77.80.79) 30.083 ms 33.244 ms 25.303 ms
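If you only want a quick check that tcp/443 is reachable and roughly how long the connect takes, a TCP ping is enough. A minimal sketch, assuming the nmap-ncat package is available in the repos (it provides the nc command) and reusing the same hostname as above:
~ $ sudo yum install nmap-ncat
~ $ nc -zv -w 5 cloud-test.dcap.com 443
~ $ curl -so /dev/null --connect-timeout 5 -w 'TCP connect: %{time_connect}s\n' https://cloud-test.dcap.com/
The nc flags are -z (connect without sending data), -v (report the result) and -w 5 (give up after five seconds); the curl variant times just the TCP handshake.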
The issue you're experiencing appears to be related to a network connectivity problem between your AWS us-east servers and the third-party API hosted on Azure. In the traceroute output, most of the hops inside Microsoft's network stop responding, although the final hop (40.77.80.79) does answer.
There could be several reasons for this:
- Firewall or security group settings: The Azure-hosted API might have firewall rules or security group configurations that are blocking incoming traffic from your AWS IP ranges.
- Network routing issues: There might be routing problems between AWS and Azure networks, causing packets to be dropped at certain hops.
- Azure DDoS protection: Azure's DDoS protection system might be mistakenly identifying traffic from your AWS servers as suspicious and blocking it.
- AWS Network ACLs or Security Groups: Check if your AWS Network ACLs or Security Groups are configured correctly to allow outbound traffic to the Azure IP range.
- ISP or intermediate network issues: There could be problems with the internet service providers or intermediate networks between AWS and Azure.
To troubleshoot this issue:
- Verify your AWS security group and network ACL settings to ensure outbound traffic is allowed (a CLI sketch follows this list).
- Contact the third-party API provider to check if they have any firewall rules or security measures blocking traffic from your AWS IP ranges.
- Try using AWS Direct Connect or AWS Transit Gateway to establish a more direct and stable connection to Azure, if possible.
- Use AWS Reachability Analyzer to diagnose network reachability issues within your VPC and identify any blocking components.
- Consider using a VPN or a different network path to test connectivity and isolate the issue.
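For the first item, one way to inspect the egress rules from the AWS CLI is sketched below; the sg- and acl- IDs are placeholders for your own security group and network ACL. Look for a rule allowing tcp/443 out to 0.0.0.0/0 (or at least to the API's address range) in both outputs.
~ $ aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 --query 'SecurityGroups[].IpPermissionsEgress'
~ $ aws ec2 describe-network-acls --network-acl-ids acl-0123456789abcdef0 --query 'NetworkAcls[].Entries[?Egress==`true`]'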
As for running the same command from us-east-1 as a sanity check, I'm an AI language model and don't have the ability to perform network operations. However, it would be a good idea for you to test from different AWS regions or even from outside of AWS to compare results and isolate the problem.
If the issue persists, you may need to engage AWS support and the third-party API provider to investigate further and resolve any potential inter-cloud networking issues.
Sources
Troubleshoot reachability issues using Reachability Analyzer - Amazon Virtual Private Cloud
I can not access AWS console the majority of the time. | AWS re:Post
Thanks Leo. So I get a success via CloudShell, but from an SSH session on my EC2 instance it still times out. I access the API via curl, so curl -v https://cloud-test.dcap.com/, and this is timing out too....
curl -vvv https://cloud-test.dcap.com
* Rebuilt URL to: https://cloud-test.dcap.com/
*   Trying 40.77.80.79...
* TCP_NODELAY set
* connect to 40.77.80.79 port 443 failed: Connection timed out
* Failed to connect to cloud-test.dcap.com port 443: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to cloud-test.dcap.com port 443: Connection timed out
I understand that Pings/Traceroutes may get dropped but a curl request should still work, no?
Also, my ACL is completely open on the Outbound. Thoughts?
Perhaps you did something to cause your IP address to be block-listed.
If your EC2 instance is using an automatically allocated public IP address, stop the instance and once it's reached the "stopped" state, start it again. This should cause it to receive a new, randomly allocated public IP.
If the instance is using an Elastic IP (EIP), which remains static until it's released, and you can afford to give it up (meaning you don't have any allow-listings that depend on it, for example), allocate a new EIP and switch the EC2 instance to use it. You'll want to release the original EIP to avoid being charged about $5/month for an additional address.
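If you'd rather do this from the AWS CLI than the console, the flow is roughly as below; the i- and eipalloc- IDs are placeholders for your own instance and allocations. To cycle an automatically assigned public IP:
~ $ aws ec2 stop-instances --instance-ids i-0123456789abcdef0
~ $ aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
~ $ aws ec2 start-instances --instance-ids i-0123456789abcdef0
Or, to swap an Elastic IP, allocate a new one, move the instance over to it, and then release the old allocation:
~ $ aws ec2 allocate-address --domain vpc
~ $ aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-NEW
~ $ aws ec2 release-address --allocation-id eipalloc-OLD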