Transit Gateway with Public IPs for VPN


I'm seeking clarification on public IP routing behavior within an AWS Transit Gateway configuration.

Scenario: My environment consists of a central Network Account VPC connected to multiple production VPCs via Transit Gateway. Each production VPC hosts a Network Load Balancer (NLB) with both public and private IP addresses. My goal is to allow clients connected to the Network Account VPN to access the NLBs using their public IP addresses.

Questions:

  1. Expected Behavior: Can public IP addresses of NLBs within production VPCs be routed through the Transit Gateway to the appropriate VPC attachment based on static routes configured in the Transit Gateway route table?
  2. Troubleshooting: If the answer to question 1 is yes, why wouldn't traffic destined for the public IPs of the NLBs reach their intended targets when routed through the Transit Gateway?

Additional Information:

  • Network Account VPC CIDR block (e.g., 10.0.0.0/16).
  • Example production VPC CIDR block (e.g., 10.1.0.0/16).
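For reference, route selection in both VPC and Transit Gateway route tables is longest-prefix match. A minimal sketch of that selection logic (hypothetical targets and attachment ID, modelled on the CIDRs above):

```python
import ipaddress

# Hypothetical route table modelled on the CIDRs in the question.
ROUTES = {
    "10.0.0.0/16": "local",                # Network Account VPC
    "10.1.0.0/16": "tgw-attach-prod-vpc",  # production VPC via TGW (placeholder ID)
}

def lookup(dst):
    """Return the target of the most specific route matching dst, else None."""
    addr = ipaddress.ip_address(dst)
    best = None
    for cidr, target in ROUTES.items():
        net = ipaddress.ip_network(cidr)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None

print(lookup("10.1.2.3"))  # tgw-attach-prod-vpc
print(lookup("8.8.8.8"))   # None: no matching route
```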

Expected Outcome: I aim to understand the expected behavior of public IP routing within a Transit Gateway setup and any potential limitations. This information will assist in determining the most suitable solution for client access to the NLBs in my environment.

I checked the documentation sections on "How transit gateways work" and "Transit Gateway route tables" but would like further clarification on public IP routing specifically.

2 Answers
Accepted Answer

The answer to question (1) is no. Public/Elastic IP addresses can only be associated with resources that are in the VPC where the Internet Gateway is attached.

What you can do is have an NLB or ALB in an externally-facing VPC and then send traffic to other workload VPCs across Transit Gateway. Think carefully about doing this, as it will increase the cost of your solution (because the traffic has to be processed by Transit Gateway). Instead, I'd recommend keeping the targets for the load balancer within the same VPC as the load balancer - this also makes using auto-scaling groups much simpler.

To answer a side question which is kind of implied above: You can absolutely have networks with public IP addresses that are connected via Transit Gateway - because Transit Gateway doesn't really care what IP addressing scheme you use. But you can't do what you suggest and bring traffic in from the internet this way, because the Internet Gateway does a 1:1 translation of a Public/Elastic IP (which is advertised on the internet as being available in the AWS network) to a private IP address within the VPC.
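As an illustration of the externally-facing-VPC pattern above, the static routes could be created roughly like this (a sketch only, with placeholder route-table, attachment, and gateway IDs, assuming the AWS CLI):

```shell
# Placeholder IDs throughout; substitute your own.
# In the TGW route table: send the workload VPC's CIDR to its attachment.
aws ec2 create-transit-gateway-route \
    --transit-gateway-route-table-id tgw-rtb-0example \
    --destination-cidr-block 10.1.0.0/16 \
    --transit-gateway-attachment-id tgw-attach-0workload

# In the externally-facing VPC's subnet route table: forward traffic
# for the workload CIDR to the Transit Gateway.
aws ec2 create-route \
    --route-table-id rtb-0example \
    --destination-cidr-block 10.1.0.0/16 \
    --transit-gateway-id tgw-0example
```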

AWS EXPERT, answered a month ago
  • Thanks for the answer @Gary - the approach of having an NLB routing to a private IPv4 address in an account (and its VPC) worked. So now I have VPN -> NLB (public IP) -> Transit Gateway (using private IP) -> the other accounts' NLBs. Just for clarification: when I mentioned routing public IPs, I meant using them to let my clients access (or route to) the other connected accounts through the VPN via my main network account, without sending the traffic over the internet (yes - I know that is weird). The clients asked to route a public IP instead of a private IP to simplify IP allocation.


Hello.

To communicate with a public IPv4 address via a Transit Gateway, I think the traffic needs to pass through a NAT Gateway or similar.
The configuration described in the following document should be helpful:
https://docs.aws.amazon.com/vpc/latest/tgw/transit-gateway-nat-igw.html

To reach the NLB's public IP address, the traffic has to go out to the public network once.
This means the private IPv4 source address must be NAT-translated to a public IPv4 address.
Therefore, when going through the Transit Gateway, traffic destined for public IPv4 addresses must be routed to the subnet where the NAT Gateway is located.
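Concretely, the hairpin through the NAT Gateway could look roughly like this (a sketch only, with placeholder IDs, assuming an egress VPC holds the NAT Gateway and the AWS CLI is used):

```shell
# Placeholder IDs throughout; substitute your own.
# TGW route table: default route toward the egress VPC attachment.
aws ec2 create-transit-gateway-route \
    --transit-gateway-route-table-id tgw-rtb-0example \
    --destination-cidr-block 0.0.0.0/0 \
    --transit-gateway-attachment-id tgw-attach-0egress

# Route table of the private subnet in the egress VPC:
# forward internet-bound (public IPv4) traffic to the NAT Gateway.
aws ec2 create-route \
    --route-table-id rtb-0private \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0example
```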

EXPERT, answered a month ago
  • Thanks for the answer @Riku

    I checked the documentation, but in my scenario there is an additional complexity: each environment will have a different public IPv4 address, so each attachment needs a route-table entry for its public IPv4.

    On the documentation:

    "Route table for VPC A

    Destination | Target
    VPC A CIDR | local
    0.0.0.0/0 | transit-gateway-id"

    In my case (which I tested and which didn't work - I don't know exactly why):

    "Destination | Target
    VPC A CIDR | local
    55.X.X.X/32 (Public IP for Environment PROD) | transit-gateway-id
    56.X.X.X/32 (Public IP for Environment PROD2) | transit-gateway-id
    57.X.X.X/32 (Public IP for Environment NONPROD) | transit-gateway-id"

  • Please check not only the VPC route table but also the Transit Gateway route table. You need to configure the Transit Gateway route table so that the traffic is routed to the VPC where the NAT Gateway is located.
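One way to inspect what the Transit Gateway route table actually contains (placeholder ID, assuming the AWS CLI):

```shell
# List the static routes in the TGW route table, with their state
# (e.g. active vs blackhole).
aws ec2 search-transit-gateway-routes \
    --transit-gateway-route-table-id tgw-rtb-0example \
    --filters "Name=type,Values=static" \
    --query "Routes[].{cidr:DestinationCidrBlock,state:State}"
```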
