
Unable to connect to Private ApiGateway via Private Link across VPCs

0

I have a Lambda function integrated with a private API Gateway provisioned in VPC-A that is designed to call an API fronted by another private API Gateway provisioned in VPC-B. In both VPCs I have a VPC endpoint to allow connections to the gateway in the respective VPC. Based on the documentation, I should be able to use the VPC endpoint in VPC-B to accept an invoke request coming from VPC-A.

When the Lambda in VPC-A makes the request to the gateway in VPC-B, the request times out, which suggests an issue with DNS resolution and/or the network path through the endpoints between the two VPCs. I have been unable to figure out the issue in the configuration:

  • I have tried all the documented ways to invoke the private API's invoke URL from the Lambda in VPC-A. All result in a connection timeout
  • I have verified that the security group for the Lambda in VPC-A making the request allows outbound traffic on port 443
  • I have verified the security group on the Gateway in VPC-B allows incoming traffic on port 443
  • I have verified the Resource Policy on the Gateway in VPC-B allows connections coming from my VPC-Endpoint provisioned in VPC-B
  • I have the API-Gateway in VPC-B associated with the vpce provisioned in VPC-B
  • I have verified that the Gateway in VPC-B works properly inside VPC-B by invoking it via a python script on an EC2 instance. This behaves as expected
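For reference, the invoke-URL variants for a private REST API can be sketched as follows (all IDs below are placeholders, not my real resources). The regular execute-api hostname only resolves when the VPC endpoint has private DNS enabled; the endpoint-specific hostname works either way:

```python
# Sketch of the two common invoke-URL shapes for a *private* REST API.
# All IDs are placeholders; substitute your own API, VPCE, region, and stage.

def private_api_urls(api_id: str, vpce_id: str, region: str, stage: str) -> dict:
    """Build the invoke-URL variants for a private API Gateway REST API."""
    return {
        # Only resolves from inside the VPC when the endpoint has
        # private DNS enabled.
        "private_dns": f"https://{api_id}.execute-api.{region}.amazonaws.com/{stage}",
        # Endpoint-specific DNS name; works whether or not private DNS
        # is enabled on the VPC endpoint.
        "vpce_dns": f"https://{api_id}-{vpce_id}.execute-api.{region}.amazonaws.com/{stage}",
    }

urls = private_api_urls("abcdef1234", "vpce-0a1b2c3d", "us-east-1", "prod")
print(urls["vpce_dns"])
# https://abcdef1234-vpce-0a1b2c3d.execute-api.us-east-1.amazonaws.com/prod
```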

UPDATE

The following is a solution to this problem, but it is a different approach from the one laid out above. If others come across this issue later, I at least wanted to provide the way that I got it working. This is NOT making use of a PrivateLink connection between the two VPCs. I couldn't get the networking to work for whatever reason.

VPC-B

  • Has a EDGE (public) type API Gateway
  • The Method Resource on this Gateway that does the PROXY request to the Lambda in this VPC uses IAM_AUTH
  • The invoke-url for this Gateway's resources is: https://${api-gw-id}.execute-api.${aws-region}.amazonaws.com/${stage}
  • No VPC Endpoint is necessary in this VPC

VPC-A

  • Has a PRIVATE type API Gateway that triggers the Lambda, which in turn sends the HTTP request to VPC-B's API Gateway to trigger the Lambda in VPC-B
  • The VPC Endpoint defined to allow the PRIVATE gateway to communicate with other AWS services has private DNS DISABLED
  • The API Gateway in this VPC is directly associated with the VPC Endpoint in this VPC under "API Settings"
  • The URL to invoke the Lambdas in this VPC is https://${api-gw-id}-${vpc-endpoint-id}.execute-api.${aws-region}.amazonaws.com/${stage}
  • The Lambdas in this VPC have an IAM Role that allows execute-api actions on VPC-B gateway
  • The Lambdas in this VPC invoke VPC-B API Gateway with the public invoke-url of the gateway https://${api-gw-id}.execute-api.${aws-region}.amazonaws.com/${stage}
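As a rough illustration of the IAM piece above, the execution role on the Lambdas in VPC-A needs a statement along these lines (the account ID, API ID, region, and stage below are placeholders, not values from my setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:111122223333:abcdef1234/prod/*/*"
    }
  ]
}
```

Since the VPC-B gateway's method uses IAM_AUTH, the Lambda also has to SigV4-sign its requests to the public invoke URL; an unsigned request will get a 403.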
3 Answers
2

I have banged my head on this issue specifically. It is possible, as I've recently done it.

I have a Lambda in Account A within VPC-A that makes calls to a private REST endpoint through a VPCE in Account B. That private REST endpoint has a resource policy allowing traffic from that VPCE only. If you're getting timeouts, it is likely your security group configuration. You mentioned

  • I have verified the security group on the Gateway in VPC-B allows incoming traffic on port 443

which may be the issue. You referenced the gateway, but you'll want to make sure your VPCE allows that traffic. Ensure that the security group associated with the VPCE allows inbound traffic from the subnet or security group of the Lambda in VPC-A. My team uses a rule of thumb that if it's timeouts, it's likely SGs. Good luck!
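That rule of thumb can be sketched as a quick check (this is hypothetical helper code, not an AWS API; the rule dicts mimic a simplified subset of the IpPermissions shape returned by EC2's DescribeSecurityGroups):

```python
import ipaddress

def allows_https_from(ingress_rules: list, source_ip: str) -> bool:
    """Return True if any ingress rule permits TCP/443 from source_ip.

    ingress_rules mimics a simplified subset of the IpPermissions
    entries returned by EC2 DescribeSecurityGroups.
    """
    ip = ipaddress.ip_address(source_ip)
    for rule in ingress_rules:
        proto = rule.get("IpProtocol")
        if proto not in ("tcp", "-1"):            # "-1" means all traffic
            continue
        if proto == "tcp" and not (
            rule.get("FromPort", 0) <= 443 <= rule.get("ToPort", 65535)
        ):
            continue
        for ip_range in rule.get("IpRanges", []):
            if ip in ipaddress.ip_network(ip_range["CidrIp"]):
                return True
    return False

# The VPCE's SG must admit the Lambda's source range, e.g. VPC-A's CIDR:
rules = [{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
          "IpRanges": [{"CidrIp": "10.1.0.0/16"}]}]
print(allows_https_from(rules, "10.1.2.3"))   # True  -> reachable
print(allows_https_from(rules, "10.2.2.3"))   # False -> timeout symptoms
```

If the Lambda's traffic arrives by security-group reference rather than CIDR, check the VPCE SG's `UserIdGroupPairs` instead; the timeout symptom is the same either way.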

answered 9 months ago
  • Hi, thanks for the response.

    My VPCE had the same SG as the gateway, which allowed inbound 443 traffic without subnet restrictions. I updated my original post with how I ended up getting it working, but it doesn't use the PrivateLink approach.

0

I have double-checked, and according to the documentation it is possible.

Using resource policies, you can allow or deny access to your API from selected VPCs and VPC endpoints, including across AWS accounts. Each endpoint can be used to access multiple private APIs. You can also use AWS Direct Connect to establish a connection from an on-premises network to Amazon VPC and access your private API over that connection.

https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-private-apis.html

Have you configured the resource policy to allow access from VPC-A?
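For reference, a resource policy that restricts a private API to a single VPC endpoint typically follows the documented Allow-then-Deny pattern below (the vpce ID is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "execute-api:/*"
    },
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "execute-api:/*",
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-0a1b2c3d4e5f6a7b8"
        }
      }
    }
  ]
}
```

A request blocked by this policy returns a 403; a plain timeout usually points at DNS or security groups instead.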

EXPERT
answered 9 months ago
  • Thanks for confirming I'm at least on the right track. The Resource Policy on the API Gateway in VPC-B allows traffic coming from the VPC Endpoint defined in VPC-B. I wasn't getting any 403 response code saying the traffic was being blocked; the request was just not resolving and timing out.

0

Hello @fluted_whale423,

I am afraid what you are trying to achieve is not possible. As per the documentation, private APIs are accessible only from within your VPCs. You can access one using a VPC Endpoint, but that endpoint is only reachable within the same VPC.

Generally speaking, you cannot use these VPC connection techniques to send traffic across VPCs. Some time back I set up VPC peering and tried to use a NAT Gateway across that peering, and it didn't work; that limitation is well documented. I think you will have to access the API via Route 53. I have not tried it, but I think it should be possible.

answered 9 months ago
