Assuming everything you listed is accurate for all the Local Zones (in particular, that the route tables associated with your subnets specify the appropriate VPC-local routes and don't force cross-subnet traffic through an intermediate firewall or other middlebox), NACLs (network access control lists) are the only VPC mechanism that could block cross-subnet connectivity within a single VPC. NACLs can restrict traffic both when it leaves a subnet and when it arrives in a subnet, even within the same VPC. Also note that NACLs evaluate every network packet separately, without any notion of connections, so traffic has to be permitted explicitly in both directions.
NACLs are associated with subnets, and there's always exactly one NACL associated with each subnet. I suggest you check first if the NACL(s) associated with each of the three local zones' subnets contain a rule that explicitly allows connectivity either to/from 0.0.0.0/0 or for your VPC CIDR, and that the rule has a lower rule number (=gets evaluated before higher-numbered rules) than other rules that would deny the traffic.
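If you prefer checking this from the command line, something like the following AWS CLI call lists the entries of the NACL associated with a given subnet (the subnet ID below is a placeholder):

```shell
# List the entries of the network ACL associated with a specific subnet.
# The subnet ID is a placeholder; the output shows rule numbers, actions, and CIDR blocks.
aws ec2 describe-network-acls \
  --filters "Name=association.subnet-id,Values=subnet-0123456789abcdef0" \
  --query "NetworkAcls[].Entries" \
  --output json
```

Run it once per Local Zone subnet and compare the rule numbers of the allow and deny entries in each direction.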
I’ve tested connectivity between two EC2 instances, one in us-east-1-bos-1a and the other in us-east-1-phl-1a, within the same VPC. These are the key observations from debugging:
- Security Group: A security group was used that allows all inbound and outbound traffic (0.0.0.0/0) on all ports, both directions.
- NACLs: Verified to allow all inbound and outbound traffic (0.0.0.0/0) with rule numbers that ensure they are evaluated before the default deny.
- Route Tables: The subnets in both zones are associated with route tables that have local routes for intra-VPC traffic and a route to the internet via the Internet Gateway.
- Public IP Communication Works: The instances can reach each other over their public IPs using tools like curl and ping.
- Private IP Communication Fails: Attempting to connect via private IPs (e.g. curl, ping, or nc) fails silently. ping shows no replies, and nc hangs.
- VPC Flow Logs:
  - Logs show that ICMP packets (protocol 1) are being accepted when sent from BOS to PHL.
  - However, there are no corresponding packets arriving on the destination instance based on Flow Logs and tcpdump output.
  - This suggests the packet was dropped somewhere after the VPC accepted it, before it reached the destination instance.
- tcpdump Output:
  - I ran sudo tcpdump -i any icmp on both instances.
  - BOS showed ICMP echo requests being sent.
  - PHL did not see incoming ICMP packets.
  - Similarly, packets from PHL to BOS did not arrive either.
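For reference, the test on each side was along these lines (the private IP is a placeholder):

```shell
# On the BOS instance: send echo requests to the PHL instance's private IP (placeholder address).
ping -c 5 10.0.2.15

# On the PHL instance: watch all interfaces for the incoming echo requests.
sudo tcpdump -ni any icmp
```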
So I feel like this is an AWS internal bug? Can someone confirm this?
This appears to be a networking issue specific to the Philadelphia Local Zone (us-east-1-phl-1a) within your VPC. Since you've verified that the PHL instance has internet access but can't communicate with other instances in the same VPC, there are several potential causes to investigate:
- Route Table Configuration: While you mentioned that the route table shows correct routes in the AWS Console, double-check that the specific subnet for us-east-1-phl-1a has the proper route table association. Sometimes the route table might look correct but not be properly associated with the subnet.
- Network ACLs: Check if there are any Network ACLs (NACLs) applied to the Philadelphia subnet that might be blocking traffic. Unlike security groups, NACLs are stateless and require explicit allow rules for both inbound and outbound traffic, including ephemeral ports for return traffic.
- VPC Reachability Analyzer: Use the AWS VPC Reachability Analyzer to run a trace between your PHL instance and one of the other instances. This can help identify where in the network path the connection is failing.
- Local Zone Connectivity: There could be a specific issue with the Philadelphia Local Zone's connectivity to the parent region. Local Zones have their own networking infrastructure that connects back to the parent region, and there might be a configuration issue or service limitation affecting this particular zone.
- Instance-level Firewall: Check if there's a local firewall (like iptables on Linux) enabled on the PHL instance that might be blocking outbound connections to private IPs.
- VPC Block Public Access: Verify that VPC Block Public Access settings aren't enabled, which could potentially interfere with traffic patterns.
- MTU Settings: Verify that the MTU (Maximum Transmission Unit) settings are consistent across all instances and appropriate for your network configuration.
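As a sketch of the Reachability Analyzer suggestion above, the trace can also be run from the CLI; all instance, path, and analysis IDs below are placeholders:

```shell
# Define a path from the PHL instance to the BOS instance (instance IDs are placeholders).
aws ec2 create-network-insights-path \
  --source i-0aaaabbbbccccdddd \
  --destination i-0eeeeffff00001111 \
  --protocol tcp \
  --destination-port 22

# Start the analysis for that path, then inspect the result (IDs are placeholders).
aws ec2 start-network-insights-analysis \
  --network-insights-path-id nip-0123456789abcdef0
aws ec2 describe-network-insights-analyses \
  --network-insights-analysis-ids nia-0123456789abcdef0
```

The analysis output names the specific component (route table, NACL, security group, etc.) where the path fails, if the failure is in a VPC-level configuration.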
If none of these solutions work, you might want to try creating a new instance in the Philadelphia Local Zone to see if the issue persists, which would help determine if it's an instance-specific problem or a broader issue with that Local Zone.
Hey Leo, thanks a lot for your quick response! :)
Here are the NACL entries currently associated with the subnets:

```json
"Entries": [
  { "Egress": true,  "CidrBlock": "0.0.0.0/0", "RuleAction": "allow", "RuleNumber": 100 },
  { "Egress": true,  "CidrBlock": "0.0.0.0/0", "RuleAction": "deny",  "RuleNumber": 32767 },
  { "Egress": false, "CidrBlock": "0.0.0.0/0", "RuleAction": "allow", "RuleNumber": 100 },
  { "Egress": false, "CidrBlock": "0.0.0.0/0", "RuleAction": "deny",  "RuleNumber": 32767 }
]
```
From what I can tell, these rules look correct and shouldn’t be blocking any traffic.
I also double-checked the VPC settings, route table associations, and security groups. They are identical to the working configuration. Just to be sure, I swapped out the PHL Local Zone for Chicago while keeping the exact same setup otherwise and everything worked perfectly there.
So at this point, it really looks like the issue is isolated to the PHL Local Zone.
Thanks again for your help, and let me know if there’s anything else I should check!
Yes, those NACL entries allow all IPv4 traffic and aren't causing your issue, @EEDev. Another typical possibility would be a local mechanism on the instances, such as an iptables firewall or a local route table entry, dropping the traffic, but that's also very unlikely if you're deploying new instances from the same AMI in the different subnets.
One thing you could do is enable VPC flow logs for all traffic in your entire VPC. They would show conclusively whether the packets are leaving the source instance and arriving in the expected destination subnet, and they indicate explicitly whether the VPC mechanisms are allowing them through. If packets weren't routed to the correct destination subnet, for example, the VPC flow log would only show the packet on its way out of the source subnet but not arriving at the expected destination. With those logs, it'd be easy to distinguish between a host-local issue and a VPC configuration or functional issue. https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html
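A minimal sketch of enabling such flow logs via the CLI, assuming a CloudWatch Logs destination and an existing IAM role (the VPC ID, log group name, and role ARN are placeholders):

```shell
# Capture ALL traffic (accepted and rejected) for the whole VPC into CloudWatch Logs.
# The VPC ID, log group name, and role ARN are placeholders; adjust to your account.
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name vpc-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role
```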
Yes, I'm sorry for the confusion. I meant that Philadelphia <-> Chicago was working. There seems to be an issue specifically between Philadelphia and Boston. I will edit the comment for future reference.
Hey, thanks again!
Just to share a minimal reproducible setup:
I created a new security group allowing inbound traffic from the VPC CIDR. Then, I freshly launched three instances:
- Boston: t3.medium
- Chicago: c6i.large
- Philadelphia: t3.medium

All three were assigned the new security group. After SSHing into the instances, here are the results:
- Ping Chicago <-> Boston works
- Ping Philadelphia <-> Chicago works
- Ping Boston <-> Philadelphia doesn't work
So the issue appears to be mutual communication between Boston and Philadelphia, despite both being in the same VPC with identical security group and subnet settings.
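The pairwise test above can be scripted; here's a rough sketch to run on each instance in turn (the private IPs are placeholders for the three instances' addresses):

```shell
# Run on each instance; replace the placeholder private IPs with the real ones.
for target in 10.0.1.10 10.0.2.10 10.0.3.10; do
  if ping -c 3 -W 2 "$target" > /dev/null 2>&1; then
    echo "$target reachable"
  else
    echo "$target UNREACHABLE"
  fi
done
```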
As you suggested, I’ll try enabling VPC Flow Logs to confirm whether packets from Boston are reaching Philadelphia or being dropped mid-VPC.
If there are any other suggestions in the meantime, let me know :)
Let me know if there’s anything else I should check!