Third-party API blocked


This is a weird scenario. We're seeing that any pings/traceroutes originating from our servers in us-east are getting blocked/dropped midway.

The third-party API is hosted on Azure. If we run a sample traceroute, we see it gets dropped as soon as it hits the Microsoft servers (sample dump here):

sudo traceroute cloud-test.dcap.com -I -w 9
traceroute to cloud-test.dcap.com (40.77.80.79), 30 hops max, 60 byte packets
 1  100.100.38.88 (100.100.38.88)  0.786 ms  0.776 ms  0.814 ms
 2  240.0.236.15 (240.0.236.15)  1607.934 ms  1608.181 ms  1608.449 ms
 3  100.100.34.118 (100.100.34.118)  0.760 ms  0.758 ms  0.755 ms
 4  151.148.11.111 (151.148.11.111)  0.854 ms  0.881 ms  0.878 ms
 5  ae28-0.icr02.bl20.ntwk.msn.net (104.44.54.58)  1.078 ms  1.089 ms  1.086 ms
 6  be-162-0.ibr04.bl20.ntwk.msn.net (104.44.21.235)  74.768 ms * *
 7  be-6-0.ibr02.cle02.ntwk.msn.net (104.44.30.4)  29.879 ms  29.876 ms  30.039 ms
 8  51.10.4.63 (51.10.4.63)  30.586 ms  30.599 ms  30.628 ms
 9  * * *
10  51.10.9.232 (51.10.9.232)  29.392 ms  30.394 ms  29.989 ms
11  104.44.55.0 (104.44.55.0)  29.074 ms  28.974 ms  28.900 ms
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *

What could be causing this? Also, as a sanity check, if you're on us-east-1, can you run the same command and share your results?

4 Answers

It's quite normal for traceroutes and pings not to be permitted to many services, or for intermediate network devices to drop them. Are you seeing traceroutes getting through all the way to the exact same destination from some other origin?

You might have better luck with a TCP traceroute to the TCP port the real API is listening on, such as tcp/443, although the ICMP destination-unreachable / TTL-exceeded messages may still not be delivered or permitted through by all of the intermediate devices. If you only want to test whether the port can be reached, or to measure its response time, you can use a TCP ping tool for that.
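As an aside (a sketch only, not part of the original answer), here are two minimal TCP "ping" checks against the hostname from the question, assuming bash with /dev/tcp support and curl are available on the source host:

# bash-only connect test against tcp/443; succeeds only if the TCP handshake completes
~ $ timeout 5 bash -c 'exec 3<>/dev/tcp/cloud-test.dcap.com/443' && echo reachable || echo unreachable

# the same idea with curl, reporting just the TCP connect time
~ $ curl -so /dev/null --connect-timeout 5 -w 'TCP connect: %{time_connect}s\n' https://cloud-test.dcap.com/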

For example, start CloudShell in the AWS Management Console in us-east-1.

First, install the traceroute package:

~ $ sudo yum install traceroute.x86_64

and then run a TCP traceroute to your destination:

~ $ sudo traceroute -T -p 443 cloud-test.dcap.com.
traceroute to cloud-test.dcap.com. (40.77.80.79), 30 hops max, 60 byte packets
 1  100.100.38.72 (100.100.38.72)  1.721 ms 100.100.8.12 (100.100.8.12)  0.819 ms 100.100.32.36 (100.100.32.36)  0.646 ms
 2  * 240.0.236.32 (240.0.236.32)  0.953 ms *
 3  100.100.34.118 (100.100.34.118)  1.049 ms 100.100.34.116 (100.100.34.116)  0.674 ms 100.100.34.114 (100.100.34.114)  0.740 ms
 4  151.148.11.111 (151.148.11.111)  0.897 ms  0.881 ms *
 5  ae32-0.icr02.bl7.ntwk.msn.net (51.10.15.135)  1.616 ms  1.001 ms ae28-0.icr02.bl20.ntwk.msn.net (104.44.54.58)  1.331 ms
 6  be-142-0.ibr03.bl20.ntwk.msn.net (104.44.21.225)  24.640 ms be-142-0.ibr03.bl7.ntwk.msn.net (104.44.21.209)  32.952 ms *
 7  be-8-0.ibr02.cle30.ntwk.msn.net (104.44.28.121)  33.287 ms  33.385 ms  33.343 ms
 8  be-9-0.ibr01.ch4.ntwk.msn.net (104.44.29.47)  32.955 ms 51.10.4.63 (51.10.4.63)  31.338 ms be-9-0.ibr01.ch4.ntwk.msn.net (104.44.29.47)  33.062 ms
 9  104.44.31.194 (104.44.31.194)  33.715 ms *  34.509 ms
10  51.10.9.228 (51.10.9.228)  32.983 ms 51.10.9.232 (51.10.9.232)  39.283 ms  34.736 ms
11  104.44.54.246 (104.44.54.246)  39.184 ms 104.44.54.244 (104.44.54.244)  35.473 ms 104.44.54.150 (104.44.54.150)  26.775 ms
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  40.77.80.79 (40.77.80.79)  30.083 ms  33.244 ms  25.303 ms
EXPERT
answered a month ago
EXPERT
reviewed 9 days ago

The traceroute output shows your packets being dropped after they reach Microsoft's network infrastructure, which points to a connectivity problem somewhere on the path between your AWS us-east servers and the third-party API hosted on Azure.

There could be several reasons for this:

  1. Firewall or security group settings: The Azure-hosted API might have firewall rules or security group configurations that are blocking incoming traffic from your AWS IP ranges.

  2. Network routing issues: There might be routing problems between AWS and Azure networks, causing packets to be dropped at certain hops.

  3. Azure DDoS protection: Azure's DDoS protection system might be mistakenly identifying traffic from your AWS servers as suspicious and blocking it.

  4. AWS Network ACLs or Security Groups: Check if your AWS Network ACLs or Security Groups are configured correctly to allow outbound traffic to the Azure IP range.

  5. ISP or intermediate network issues: There could be problems with the internet service providers or intermediate networks between AWS and Azure.

To troubleshoot this issue:

  1. Verify your AWS security group and network ACL settings to ensure outbound traffic to the Azure endpoint is allowed (and, because network ACLs are stateless, that return traffic on ephemeral ports is allowed back in); see the CLI sketch after this list.

  2. Contact the third-party API provider to check if they have any firewall rules or security measures blocking traffic from your AWS IP ranges.

  3. Try using AWS Direct Connect or AWS Transit Gateway to establish a more direct and stable connection to Azure, if possible.

  4. Use AWS Reachability Analyzer to diagnose network reachability issues within your VPC and identify any blocking components.

  5. Consider using a VPN or a different network path to test connectivity and isolate the issue.
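
As an illustration of step 1 (a sketch only, not part of the original answer; the instance, security group, and subnet IDs are placeholders, and the commands assume a configured AWS CLI with EC2 describe permissions):

# find the subnet and security groups attached to the instance
~ $ aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
      --query 'Reservations[].Instances[].[SubnetId,SecurityGroups[].GroupId]'

# dump the outbound (egress) rules of one of those security groups
~ $ aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
      --query 'SecurityGroups[].IpPermissionsEgress'

# dump the network ACL entries for the subnet (check both directions; NACLs are stateless)
~ $ aws ec2 describe-network-acls \
      --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0 \
      --query 'NetworkAcls[].Entries'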

As for running the same command from us-east-1 as a sanity check, I'm an AI language model and don't have the ability to perform network operations. However, it would be a good idea for you to test from different AWS regions or even from outside of AWS to compare results and isolate the problem.

If the issue persists, you may need to engage AWS support and the third-party API provider to investigate further and resolve any potential inter-cloud networking issues.
Sources
Troubleshoot reachability issues using Reachability Analyzer - Amazon Virtual Private Cloud
I can not access AWS console the majority of the time. | AWS re:Post

answered a month ago

Thanks Leo. So I get a success via CloudShell, but from an SSH console on my EC2 instance it still times out. I access the API via curl, so curl -v https://cloud-test.dcap.com/, and this is timing out too:

curl -vvv https://cloud-test.dcap.com
* Rebuilt URL to: https://cloud-test.dcap.com/
*   Trying 40.77.80.79...
* TCP_NODELAY set
* connect to 40.77.80.79 port 443 failed: Connection timed out
* Failed to connect to cloud-test.dcap.com port 443: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to cloud-test.dcap.com port 443: Connection timed out

I understand that pings/traceroutes may get dropped, but a curl request should still work, no?

Also, my ACL is completely open on the Outbound. Thoughts?

answered a month ago

Perhaps you did something to cause your IP address to be block-listed.

If your EC2 instance is using an automatically allocated public IP address, stop the instance and once it's reached the "stopped" state, start it again. This should cause it to receive a new, randomly allocated public IP.

If the instance is using an Elastic IP (EIP), which remains static until it's released, and you can afford to give it up (meaning you don't have any allow-listings that depend on it, for example), allocate a new EIP and switch the EC2 instance over to it. You'll then want to release the original EIP to avoid getting charged about $5/mo for an additional address. A rough CLI sketch of both approaches follows.
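
For illustration only (a sketch, not part of the original answer; the instance ID and allocation IDs are placeholders, and the commands assume a configured AWS CLI):

# first, confirm which public IP the instance is actually egressing from
~ $ curl -s https://checkip.amazonaws.com

# option 1: stop/start to pick up a new auto-assigned public IP (a reboot is not enough)
~ $ aws ec2 stop-instances --instance-ids i-0123456789abcdef0
~ $ aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
~ $ aws ec2 start-instances --instance-ids i-0123456789abcdef0

# option 2: allocate a fresh EIP, move the instance to it, then release the old one
~ $ aws ec2 allocate-address --domain vpc
~ $ aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-NEW
~ $ aws ec2 release-address --allocation-id eipalloc-OLD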

EXPERT
answered a month ago
