
Using AWS PrivateLink for application integration


We can use AWS PrivateLink for application integration by exposing an existing application to users without needing direct L3 connectivity.

Overview

AWS PrivateLink is a highly available and scalable technology that enables private connectivity to services owned by AWS or by other AWS customers. For example, you can access Amazon S3 via an interface endpoint from within a VPC without traversing the internet, and you can do the same for many other AWS services that integrate with AWS PrivateLink. This capability helps you build fully isolated VPCs (e.g. with no internet gateway or NAT service).

You can use AWS PrivateLink to share your own private application or internal service with other AWS customers without setting up L3 connectivity (VPC routing) with them. Imagine the following scenarios: a) you have an internal web service running in a private EKS cluster and need to share it with another team that uses a separate AWS account, or b) you have a legacy private on-premises application that must be accessed by your modern applications running on Amazon EC2. In those scenarios, a typical implementation looks like this:

  1. You create VPC connectivity, such as VPC peering, between your EKS VPC and the other team's VPCs. This must be repeated every time a new team needs to access your internal web service, and you end up with multiple VPC peering connections.
  2. You create a Site-to-Site VPN or Direct Connect connection to the on-premises network and enable routes for each VPC that needs to access the on-premises application. Then you configure the on-premises firewall to allow access from resources in the VPC subnets where your modern applications run.

There are also other use cases where AWS PrivateLink is a good fit, such as:

  1. You need to share a particular application with third-party AWS accounts outside your organization, but don't want to establish VPC connectivity such as VPC peering or a site-to-site VPN,
  2. Your VPC CIDR range overlaps with other VPCs that need to connect to your private application,
  3. You want to commercialize your private application and become a SaaS provider. In this case the L3 connectivity (VPC routing) option is not feasible, since the number of participants could be large and setting up VPC connectivity one by one would not be practical.

Implementation architecture

Looking at the sample use cases above, we can see that AWS PrivateLink can be used for application integration, i.e. by giving clients access to an existing application. The following diagram depicts the architecture for sharing an internal application in a private VPC, as well as an on-premises application, using AWS PrivateLink.

AWS Private Link to share the internal application

The diagram shows two applications, one running on AWS and the other on an on-premises server. Both applications can be consumed by Account B even though there is no L3 connectivity between Account B's VPCs and the provider side.

To understand the concept easily, we can think of the model using the well-known client-server approach. Let's examine what must be configured on each side.

Server side (the "provider")

  1. This is where the application you want to share resides. It may run on AWS or in an on-premises environment.
  2. You need a Network Load Balancer (NLB) in front of the application. The NLB target can be an existing Application Load Balancer, EC2 instances, or IP addresses. You must use IP addresses as the target when dealing with an on-premises application; in that case you must also provide the underlying connectivity, such as AWS Site-to-Site VPN or AWS Direct Connect.
  3. Once you have the Network Load Balancer, create a VPC Endpoint Service. The VPC Endpoint Service provides a service name that can be shared with multiple AWS customers. With the service name, any eligible AWS customer (i.e. any account that has been granted access, including the service owner itself) can submit a request to consume the service by creating a VPC endpoint. The service name uses the following format: com.amazonaws.vpce.<REGION>.<ENDPOINT_SERVICE_ID>.
  4. You have other options, such as:
    1. Manual or automatic acceptance of incoming connection requests.
    2. A private DNS name, so that users of the endpoint service can use a DNS name you specify. This option requires domain ownership verification.
  5. Keep in mind that the VPC Endpoint Service is available only in the Region where you created it. You can access an endpoint service from another Region using VPC peering.
  6. Once the endpoint service is ready, you control permissions by setting the allowed principals that can consume the service: IAM users, IAM roles, or AWS accounts. You can also use a wildcard to allow all principals.
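As a rough sketch, the provider-side steps above can be done with the AWS CLI. The NLB ARN, service ID, and account IDs below are hypothetical placeholders; substitute your own values.

```shell
# Create the endpoint service in front of an existing Network Load
# Balancer, requiring manual acceptance of connection requests
# (the ARN is a hypothetical placeholder).
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/my-internal-app/1234567890abcdef \
  --acceptance-required

# Grant a consumer account permission to request a connection
# (replace the service ID and principal ARN with your own).
aws ec2 modify-vpc-endpoint-service-permissions \
  --service-id vpce-svc-0e123abc123198abc \
  --add-allowed-principals arn:aws:iam::444455556666:root
```

Using `--no-acceptance-required` instead would let endpoints connect automatically, which corresponds to the automatic acceptance option described above.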

Client side (the "consumer/user")

  1. This is where other AWS customers "consume" or access the VPC Endpoint Service, via an interface endpoint.
  2. A VPC endpoint can be created for at least four kinds of targets: a) AWS services, b) AWS Marketplace services, c) PrivateLink Ready partner services, and d) other services created using VPC Endpoint Service.
  3. To consume your service, the client creates a VPC endpoint of the last type (an endpoint service created by another AWS customer). A few mandatory items must be provided during VPC endpoint creation:
    1. The VPC endpoint name,
    2. The service name (for example: com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc),
    3. The VPC and the subnets where the endpoint will be created. Under the hood, the VPC endpoint creates an Elastic Network Interface (ENI) in each selected subnet, which is why it is also known as an "interface endpoint",
    4. The security group that controls who can access the ENIs, in other words, which users can access the target application.
  4. If acceptance is required, the service owner must accept the VPC endpoint connection request before the endpoint becomes available. Otherwise the endpoint is created automatically.
  5. The VPC endpoint provides the client with a set of DNS records:
    1. A Regional DNS name, with the following format: <ENDPOINT_ID>.<ENDPOINT_SERVICE_ID>.<REGION>.vpce.amazonaws.com
    2. A zonal DNS name for each Availability Zone corresponding to the selected subnets, with the following format: <ENDPOINT_ID>-<ZONE>.<ENDPOINT_SERVICE_ID>.<REGION>.vpce.amazonaws.com
  6. Finally, the user can access the application through these DNS names (either the Regional or a zonal name).
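The consumer-side steps can be sketched with the AWS CLI as well. All resource IDs below are hypothetical placeholders; the service name matches the example format above.

```shell
# Request an interface endpoint to the provider's endpoint service
# (VPC, subnet, and security group IDs are hypothetical placeholders).
aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id vpc-0abc1234 \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc \
  --subnet-ids subnet-0aaa1111 subnet-0bbb2222 \
  --security-group-ids sg-0ccc3333

# After the provider accepts the request (if acceptance is required),
# list the Regional and zonal DNS names the endpoint exposes.
aws ec2 describe-vpc-endpoints \
  --vpc-endpoint-ids vpce-0123456789abcdef0 \
  --query 'VpcEndpoints[0].DnsEntries[].DnsName'
```

The application is then reachable from inside the consumer VPC at any of the returned DNS names.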

There is a cost associated with VPC endpoints. The client is charged an hourly rate for each VPC endpoint provisioned in each Availability Zone, plus a data processing charge (per GB). Visit the AWS PrivateLink pricing page for more details.
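To make the pricing model concrete, here is a back-of-the-envelope estimate. The per-hour and per-GB rates below are illustrative assumptions, not current prices; check the AWS PrivateLink pricing page for your Region.

```shell
# Illustrative estimate: one interface endpoint spanning 2 AZs for a
# 730-hour month, processing 500 GB. Both rates are assumed values.
HOURLY_RATE=0.01   # USD per endpoint ENI per hour (assumed)
DATA_RATE=0.01     # USD per GB processed (assumed)
AZ_COUNT=2
HOURS=730
DATA_GB=500

# Total = (AZs x hours x hourly rate) + (GB x per-GB rate)
awk -v h="$HOURLY_RATE" -v d="$DATA_RATE" -v a="$AZ_COUNT" \
    -v t="$HOURS" -v g="$DATA_GB" \
    'BEGIN { printf "Estimated monthly cost: USD %.2f\n", a*t*h + g*d }'
# prints: Estimated monthly cost: USD 19.60
```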

Benefits

There are several benefits to using AWS PrivateLink to share an application:

  1. You don't need to establish direct L3 connectivity (VPC routing) between the application VPC and the client VPCs; the provider and consumer VPCs are decoupled.
  2. You have granular control over which AWS accounts can consume the service, using the allowed principals setting.
  3. It simplifies a multi-VPC/multi-account landing zone setup. When exposing an on-premises application, you can place the VPC Endpoint Service in a shared VPC only, so the on-premises firewall only needs to allow a limited number of IP addresses (instead of being opened for each and every client VPC).
  4. It helps work around overlapping CIDR ranges.
  5. Client VPCs don't need internet access in any form (e.g. internet gateway, NAT gateway) to reach the service, which helps in building fully isolated production VPCs.

Conclusion

VPC Endpoint Service (powered by AWS PrivateLink) provides a way to do application integration without establishing direct L3 connectivity (VPC routing) to each and every client. Visit the AWS PrivateLink documentation for more details.
