
Multi NSX Edge clusters in Amazon Elastic VMware Service

18 minute read
Content level: Expert

This article guides you through the process of deploying multiple NSX Edge clusters within your Amazon Elastic VMware Service environment.

Background

Amazon Elastic VMware Service (Amazon EVS) deploys VMware Cloud Foundation (VCF) with a default VMware NSX networking architecture that includes three NSX Managers and two NSX Edge Node (NSX Edge) appliances configured in an active-standby relationship for high availability. In this standard deployment, all north-south traffic traverses the active Edge appliance, which resides on a single ESXi host. This default deployment provides a resilient networking foundation for most customer workloads, with the NSX Edge Tier-0 serving as the primary network gateway between the software-defined overlay networks and the AWS VPC infrastructure.

In the standard Amazon EVS deployment, NSX Manager creates and manages software-defined overlay networks using GENEVE encapsulation for virtual machine connectivity. The NSX Edge appliances bridge these overlay networks with the underlay Amazon VPC subnets, providing north-south connectivity to on-premises networks via Transit Gateway, public internet via Internet Gateway or NAT Gateway, and other AWS services.

The default active–standby Edge configuration is well‑suited for many workloads. However, customers with I/O‑intensive applications or those looking to maximize network throughput can benefit from a multi‑edge architecture, with Edge appliances distributed across multiple ESXi hosts. By directing traffic to multiple active Edge appliances across hosts, this design increases available bandwidth and provides the flexibility to steer traffic toward the Edge appliances best suited for specific throughput demands.

Amazon EVS now supports deploying multiple NSX Edge clusters, each with its own Tier‑0 Gateway, to further scale north–south capacity. Each Edge cluster still operates in an active–standby model, with only one active NSX Edge per cluster. This allows customers to scale out capacity by adding more clusters and T0s, while maintaining the supported active–standby behavior within each cluster. This is not an active/active T0 design; rather, it is a scale‑out approach that preserves predictable failover and routing behavior.

This blog post demonstrates how to implement a new multi-edge deployment in Amazon EVS, enabling you to unlock additional network capacity and optimize traffic distribution for demanding workloads.

Prerequisites

  • Identify two free IP addresses from your Amazon EVS VM Management VLAN Subnet for NSX Edge Node management interfaces
  • Identify two free IP addresses from your Amazon EVS NSX Uplink VLAN Subnet for NSX Edge Node uplink interfaces
  • Identify one unique private ASN to be used by new multi-edge Tier-0 Gateway

Example table documenting required inputs for deployment.

New NSX Edge DNS Name | Management IP (10.10.1.0/24 VM Management VLAN Subnet) | Uplink IP (10.10.6.0/24 NSX Uplink VLAN Subnet) | Private ASN
vcf-edge03.amazon.evs | 10.10.1.43 | 10.10.6.252 | 65002
vcf-edge04.amazon.evs | 10.10.1.44 | 10.10.6.253 | 65002
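Before touching the console, it can be worth sanity-checking the planned addressing. The following is a minimal sketch, using Python's standard `ipaddress` module and the example subnets and hostnames from the table above (replace them with your own environment's values):

```python
import ipaddress

# Example values matching the table above -- substitute your own subnets/IPs.
VM_MGMT_SUBNET = ipaddress.ip_network("10.10.1.0/24")    # VM Management VLAN Subnet
NSX_UPLINK_SUBNET = ipaddress.ip_network("10.10.6.0/24") # NSX Uplink VLAN Subnet

edges = {
    "vcf-edge03.amazon.evs": {"mgmt": "10.10.1.43", "uplink": "10.10.6.252"},
    "vcf-edge04.amazon.evs": {"mgmt": "10.10.1.44", "uplink": "10.10.6.253"},
}

def validate_edge_plan(edges):
    """Check each planned IP falls inside the expected subnet and is unique."""
    seen = set()
    for name, ips in edges.items():
        mgmt = ipaddress.ip_address(ips["mgmt"])
        uplink = ipaddress.ip_address(ips["uplink"])
        assert mgmt in VM_MGMT_SUBNET, f"{name}: {mgmt} not in VM Management subnet"
        assert uplink in NSX_UPLINK_SUBNET, f"{name}: {uplink} not in NSX Uplink subnet"
        for ip in (mgmt, uplink):
            assert ip not in seen, f"{name}: {ip} assigned twice"
            seen.add(ip)
    return True
```

This catches the common planning mistakes (an IP outside its VLAN subnet, or the same address reused on two interfaces) before any records or appliances are created.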

Considerations

  • To prevent routing conflicts and unexpected path selection, ensure that the same CIDR block is not advertised simultaneously from multiple Tier-0 Gateways in a multi-edge deployment. Each prefix should have a single authoritative advertising Tier-0 Gateway to maintain predictable routing behavior and avoid EVS VPC route table instability.
  • Logical segments created on separate Tier‑0 Gateways are not routable between each other unless BGP peering is established between the Tier‑0s.
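The first consideration above can be checked mechanically. As a sketch (not an NSX API call), given a mapping of Tier-0 Gateway names to the CIDRs each is planned to advertise, the `ipaddress` module can flag prefixes that overlap across different gateways:

```python
import ipaddress

def find_conflicting_advertisements(t0_advertisements):
    """Return (gateway_a, prefix_a, gateway_b, prefix_b) tuples for CIDRs
    that overlap across *different* Tier-0 Gateways."""
    flat = [
        (gw, ipaddress.ip_network(cidr))
        for gw, cidrs in t0_advertisements.items()
        for cidr in cidrs
    ]
    conflicts = []
    for i, (gw_a, net_a) in enumerate(flat):
        for gw_b, net_b in flat[i + 1:]:
            if gw_a != gw_b and net_a.overlaps(net_b):
                conflicts.append((gw_a, str(net_a), gw_b, str(net_b)))
    return conflicts
```

For example, advertising 10.60.0.0/16 from the original T0 while the new multi-edge T0 advertises 10.60.1.0/24 would be reported as a conflict, since the VPC route table would receive overlapping routes from two peers.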

Tasks

  1. Create new DNS A and PTR records for the planned NSX Edge multi-edge deployment.
  2. Create two new VPC Route Server Peers for the planned NSX Edge multi-edge deployment.
  3. Deploy two new NSX Edge nodes in NSX Manager for a new Edge Cluster.
  4. Create a new NSX Edge Cluster and add the new NSX Edge nodes.
  5. Create a Tier-0 (T0) Gateway on the new NSX Edge Cluster.
  6. Create a Tier-1 (T1) Gateway on the new NSX Edge Cluster.

Actions

Task 1: Create new DNS A and PTR records for planned NSX Edge multi-edge deployment

You must first create DNS records for the new NSX Edge appliances in the DNS service used by your Amazon EVS environment.

This guide assumes that Amazon Route 53 is managing your DNS. If you are using a different DNS service, follow the appropriate instructions for creating the required A and PTR records.

  1. Using the AWS console, navigate to Route 53.  Under Route 53 on the left-hand pane, select Hosted zones.
  2. Locate the hosted private zone used for forward DNS lookups for your Amazon EVS environment, e.g. amazon.evs and select it to open its configuration.
  3. Select Create record.
  4. Under Record name, provide a suitable DNS hostname for your first NSX Edge in your multi-edge deployment, e.g. vcf-edge03.
  5. Under Record type, select A – Routes traffic to an IPv4 address and some AWS resources.
  6. For Value, enter the IP address of the new NSX Edge management interface. This IP address must be a free and unique address from your VM Management VLAN Subnet, in IPv4 format.
  7. Leave TTL and Routing policy as default.
  8. Select Add another record and create an A record for the second NSX Edge in your multi-edge deployment, e.g. vcf-edge04, following the same format as the previous steps.
  9. Select Create records to create the forward DNS entries for your two new NSX Edges.

Next, create reverse pointer records for your two new NSX Edges.

  1. Under Route 53 on the left-hand pane, select Hosted zones.
  2. Locate the hosted private zone used for reverse DNS lookups for your Amazon EVS environment, e.g. 255.10.in-addr.arpa and select it to open its configuration.
  3. Select Create record.
  4. Under Record name, enter the last two octets of the NSX Edge management interface IP address in reverse order, e.g. 50.1 for an IP address of 10.255.1.50.
  5. Under Record type, select PTR – Maps an IP address to a domain name.
  6. For Value, enter the FQDN of your NSX Edge, e.g. vcf-edge03.amazon.evs.
  7. Leave TTL and Routing policy as default.
  8. Select Add another record and create a PTR record for the second NSX Edge in your multi-edge deployment, following the same format as the previous steps.
  9. Select Create records to create the reverse DNS entries for your two new NSX Edges.
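Deriving the PTR record name by hand (last octets of the address, reversed, relative to the zone) is easy to get wrong. A small sketch using the standard library's `reverse_pointer` attribute computes the value to type into the Record name field for any reverse zone:

```python
import ipaddress

def ptr_record_name(ip, zone):
    """Return the Route 53 record name for `ip` within the given reverse zone.
    E.g. zone '255.10.in-addr.arpa' covers 10.255.0.0/16."""
    full = ipaddress.ip_address(ip).reverse_pointer  # '50.1.255.10.in-addr.arpa'
    suffix = "." + zone
    assert full.endswith(suffix), f"{ip} is not covered by zone {zone}"
    return full[: -len(suffix)]
```

Using the article's example, `ptr_record_name("10.255.1.50", "255.10.in-addr.arpa")` yields `"50.1"`, matching step 4 above.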

Task 2: Create two new VPC Route Server Peers for planned NSX Edge multi-edge deployment

Next, you must create new VPC route server peers on each of the two original VPC route server endpoints that were provisioned during your initial Amazon EVS environment deployment.

The new peers will establish BGP sessions between the existing Route Server endpoints and the new NSX Edge Cluster Tier-0 Gateway, enabling route exchange. 

Because the same Route Server endpoints are used, any NSX segment updates made on the new multi-edge Tier-0 Gateway will propagate to the same Amazon EVS VPC route tables as the original Tier-0 Gateway.

  1. Using the AWS console, navigate to VPC.  Under Virtual private cloud on the left-hand pane, select Route Servers.
  2. Choose Route server peers at the top to display existing Amazon EVS route server peers and select the Create route server peer button. 
  3. Provide an appropriate name for your route server peer.
  4. Select the first Route server endpoint ID from your Amazon EVS environment deployment.
  5. Provide a Peer address. This IP address must be a free and unique IPv4 address from your NSX Uplink VLAN Subnet.
  6. Provide a Peer ASN. This must be a private ASN and must differ from the original Amazon EVS environment Tier-0 Gateway BGP ASN.

Note: The Peer address and Peer ASN specified here will need to be assigned to the new Tier-0 Gateway configuration when deployed in a later step.

  7. For Peer liveness detection, leave as the default BGP keepalive.
  8. Once completed, select Create route server peer.

You must now repeat this process for the second Route Server peer on the second Route Server endpoint. You can use the same AS number for both peers.
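Whether a chosen ASN is actually private can be checked against the RFC 6996 ranges. This sketch validates a candidate Peer ASN against those ranges and against the ASNs already in use (the existing-ASN set here is illustrative):

```python
def is_private_asn(asn):
    """True if `asn` is in an RFC 6996 private range: 64512-65534 for
    2-byte ASNs, or 4200000000-4294967294 for 4-byte ASNs."""
    return 64512 <= asn <= 65534 or 4_200_000_000 <= asn <= 4_294_967_294

def validate_peer_asn(new_asn, existing_t0_asns):
    """The multi-edge T0 ASN must be private and not reused from another T0."""
    return is_private_asn(new_asn) and new_asn not in existing_t0_asns
```

For the example table earlier, `validate_peer_asn(65002, {65001})` passes, while reusing the original T0's ASN or picking a public ASN would fail.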

Figure 1 – Example route server endpoint peer configuration

Figure 2 – Example route server endpoint peers for original Amazon EVS NSX Edges (01 and 02) and new multi-edge NSX Edges (03 and 04)

Note: The new BGP peers will show as down until the new multi-edge Tier‑0 Gateway is deployed and configured in task 5. The sessions will move to a connected state once deployment is complete.

Task 3: Deploy two new NSX Edge nodes in NSX Manager for a new NSX Edge Cluster

The next step is to deploy two new NSX Edge appliances and add them to a new Edge cluster using NSX Manager.

  1. Log into your Amazon EVS NSX Manager cluster IP address.
  2. Select System, and from the left-hand pane, expand Fabric and select Nodes.
  3. Choose Add Edge Node.
  4. First, provide a Name and Host name/FQDN as defined in your DNS service, e.g. vcf-edge03.amazon.evs. Enter a Description as required and select your desired Form Factor. Select Next.
  5. Define credentials and security policy for admin and root accounts, and select Next.
  6. Under Configure Deployment, select the Amazon EVS vCenter Server, Cluster, Resource Pool and Datastore, and select Next.

Figure 3 – Example Configure Deployment inputs

  7. Under Configure Node Settings, change Management IP Assignment Type to Static and enter the desired IP address with CIDR e.g. 10.10.1.43/24 for the NSX Edge Node management interface.

Note: This IP address must come from the VM Management VLAN Subnet and must match the value defined in the DNS A record for this NSX Edge.

  8. Specify the Default Gateway of your VM Management VLAN Subnet and choose Select Interface to select the VM Management VLAN Subnet distributed port group pg-vm-mgmt.
  9. Specify your Search Domain Name FQDN, DNS Servers and NTP Server details, and select Next.

Figure 4 – Example Configure Node Settings inputs

Note: For the following steps env-<name> will match the name of your Amazon EVS environment e.g. env-yedy37yhgr.

  10. Under Configure NSX, leave the Edge Switch Name as default. For Transport Zone, select the down arrow and choose both env-<name>-edge-vlan-zone and env-<name>-tz-overlay01.
  11. For Uplink Profile, select the down arrow and choose env-<name>-edge-uplink-profile.
  12. Leave IP Address Type (TEP) as IPv4 and from the IPv4 Assignment (TEP) select the down arrow and choose Use IP Pool. Select env-<name>-edge-tep01 as the IP Pool.
  13. For Teaming Policy Uplink Mapping, choose Select Interface and select the pg-nsx-edge-uplink Distributed Port Group and choose Save. Once completed, choose Finish.

Figure 5 – Example Configure NSX inputs

Once completed, NSX Manager deploys a new NSX Edge using the inputs provided. This process can take several minutes. After the deployment finishes, repeat steps 3–13 to add the second NSX Edge for the multi-edge deployment.

Under Edge Transport Nodes you will find the status of the new Edge deployments. Once the new NSX Edges show as Configuration State Success, you can move on to the next step.

Figure 6 – Example Edge Transport Nodes screen showing 4 successfully deployed NSX Edges.

Task 4: Create a new NSX Edge Cluster and add the new NSX Edge nodes

Now that the two new NSX Edges are available, they need to be added to a new NSX Edge Cluster.

  1. From the NSX Manager cluster IP address, select System, and from the left-hand pane, expand Fabric and select Nodes.
  2. Select Edge Clusters and choose Add Edge Cluster.
  3. First, provide a Name for the new Edge Cluster. Under Edge Cluster Profile, select nsx-default-edge-high-availability-profile.
  4. Under Transport Nodes, from the list of available nodes, select the two NSX Edge Nodes created in the previous task and select the > arrow to move across to Selected. Select Add.

Figure 7 – Example Add Edge Cluster settings.

Once the cluster has been created, you can proceed with creating the Tier‑0 and Tier‑1 routers that will be associated with the new Edge Cluster and NSX Edge nodes.

Task 5: Create Tier-0 (T0) Gateway on the new NSX Edge Cluster

First, you will need to create a Tier‑0 (T0) Gateway to establish the north–south routing boundary for your new NSX Edge Cluster.

  1. From the NSX Manager cluster IP, navigate to Networking > Tier-0 Gateways.
  2. Select Add Gateway > Tier-0.
  3. Specify a Name for your T0. Change HA Mode to Active Standby.
  4. Select the Edge Cluster created in Task 4 from the down arrow.
  5. Change Fail Over to Preemptive.
  6. Set Preferred Edge to the first NSX Edge in your new NSX Edge Cluster.

Figure 8 – Example T0 base configuration settings

  7. Choose Save. When prompted whether you wish to continue configuring, select Yes.
  8. From the new T0 overview, expand Interfaces and GRE Tunnels and under External and Service interfaces, select Set. Choose Add Interface.

Provide the details for the uplink interface for the first NSX Edge in your new Edge Cluster. First, provide a Name for the uplink. Set Type as External. Provide an IP Address / Mask for the uplink e.g. 10.10.6.252/24.

Note: The IP address should match the route server peer address that you configured in Task 2, Step 5 from the NSX Uplink VLAN Subnet.

For Connected To, from the drop down, select the nsx-edge-uplink-vlan segment. Under Edge Node, from the drop down, select the first NSX Edge in your new NSX Edge Cluster. Set MTU to 1500. You can leave the rest of the settings as default. Choose Save.

Figure 9 – Example new multi-edge T0 Interface settings for the first NSX Edge

  9. Choose Add Interface and repeat Step 8 for the second NSX Edge in your new Edge Cluster.

Figure 10 – Example new multi-edge T0 Interface settings for the second NSX Edge

Figure 11 – Example new multi-edge T0 interfaces for both NSX Edges

  10. Next, from the new T0 overview, expand Routing.

First, you will need to define an IP Prefix List of CIDRs allowed to be advertised into your Amazon EVS VPC route table. By default, this should be set to all RFC1918 address spaces.

Select the number in blue next to IP Prefix Lists. From the Set IP Prefix List menu, choose Add IP Prefix List. Enter rfc-1918-allow for the name, then select Set. Add four prefixes in the order shown below to match the screenshot and, once completed, choose Save.

Figure 12 – Required IP Prefixes to be set within rfc-1918-allow prefix list.
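To reason about what such a prefix list will pass, the three RFC 1918 private blocks can be encoded directly. This sketch (an offline check, not an NSX API call) tests whether a candidate segment CIDR would match an rfc-1918-allow style out-filter:

```python
import ipaddress

# The three RFC 1918 private address blocks the prefix list is built around.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def would_be_advertised(cidr):
    """True if `cidr` falls inside an RFC 1918 block and would therefore
    pass an rfc-1918-allow style out-filter toward the VPC route servers."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(block) for block in RFC1918)
```

A workload segment such as 10.60.1.0/24 would be advertised, while a non-private prefix like 198.51.100.0/24 would be filtered out before reaching the VPC route table.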

Next, define the required static routes on the new T0 router. Specify the next‑hop IP address for the default route, as well as the next hops for your two VPC Route Server Endpoints, as those addresses are on a different network to the NSX Edge uplink interfaces.

Each static route should point to the gateway IP address of your NSX Uplink VLAN subnet, ensuring that traffic is forwarded back into your VPC, where it can be processed by the associated VPC route table.

Select the number in blue next to Static Routes. From the Set Static Routes menu, choose Add Static Route. Create three Static Routes, using the table below as a guide and once completed, choose Save.

Note: The values below are for illustration only – replace them with values that match your Amazon EVS environment.

Name | Network | Next Hop – IP Address | Admin Distance | Scope
0-0-0-0-0-10-10-6-1 | 0.0.0.0/0 (default route) | 10.10.6.1 (NSX uplink VLAN subnet gateway) | 1 | None
10-10-101-139-32-10-10-6-1 | 10.10.101.139/32 (VPC route server endpoint 1) | 10.10.6.1 (NSX uplink VLAN subnet gateway) | 1 | None
10-10-101-251-32-10-10-6-1 | 10.10.101.251/32 (VPC route server endpoint 2) | 10.10.6.1 (NSX uplink VLAN subnet gateway) | 1 | None

Figure 13 – Required static routes for default route and two VPC route server endpoints
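The key invariant in the table above is that every static route's next hop lives on the NSX Uplink VLAN subnet. A small sketch, using the example addressing from this article, checks that before the routes are entered in NSX Manager:

```python
import ipaddress

UPLINK_SUBNET = ipaddress.ip_network("10.10.6.0/24")  # example NSX Uplink VLAN Subnet

# Example static routes from the table above: the default route plus a /32
# host route for each VPC route server endpoint, all via the uplink gateway.
static_routes = [
    {"network": "0.0.0.0/0",        "next_hop": "10.10.6.1"},
    {"network": "10.10.101.139/32", "next_hop": "10.10.6.1"},
    {"network": "10.10.101.251/32", "next_hop": "10.10.6.1"},
]

def validate_static_routes(routes, uplink_subnet):
    """Every next hop must sit on the uplink subnet so traffic is handed
    back to the VPC, where the associated VPC route table processes it."""
    for r in routes:
        ipaddress.ip_network(r["network"])  # raises ValueError if malformed
        assert ipaddress.ip_address(r["next_hop"]) in uplink_subnet, (
            f"next hop {r['next_hop']} is not on {uplink_subnet}"
        )
    return True
```

A next hop outside 10.10.6.0/24 here would indicate a typo that would leave the route server endpoints (or the default route) unreachable from the new T0.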

  11. Next, from the new T0 overview, select the three-dot menu to the left of the new T0 name, and choose Edit. From the Edit context, expand BGP.

Here you will need to configure BGP to peer the new T0 gateway to the VPC route server endpoints, to enable routing updates within the Amazon EVS VPC route table.

First, you will need to update the Local AS to a unique private AS number.

Note: This AS should match what was specified in the VPC Route Server Peer in task 2, step 6.

Next, select the number in blue next to BGP Neighbors and choose Add BGP Neighbor. Enter the IP address of the first VPC route server endpoint. Provide the Remote AS number of the Amazon side.

Under Route Filter, choose Set. Choose Add Route Filter, and for Out Filter, select Configure and find the rfc-1918-allow prefix list from the available filter options. Leave the other route filter settings as default. Choose Add and then Apply.

From Source Addresses, choose the IP address of the first NSX Edge in the new NSX Edge Cluster. Update the Max Hop Limit to 2.

Expand Timers & Password and update Hold Down Time to 15 seconds and Keep Alive Time to 5 seconds. You can leave the rest of the settings as default. Select Save.

Choose Add BGP Neighbor again, and configure a new neighbor for the second VPC route server endpoint, repeating the steps above using the source address of the second NSX Edge in the new NSX Edge Cluster.

Once applied, after a few moments, the BGP status for both neighbors should change to Success.

Figure 14 – Example BGP Neighbor configuration
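The neighbor settings above can be summarized as a simple record and sanity-checked before clicking Save. In this sketch, the neighbor and remote ASN values are hypothetical placeholders (the remote ASN must match your route server's Amazon-side ASN), while the timers and hop limit reflect the values given in this task:

```python
# Hypothetical neighbor record mirroring the fields set in NSX Manager.
neighbor = {
    "neighbor_address": "10.10.101.139",  # first VPC route server endpoint (example)
    "remote_as": 65100,                   # placeholder Amazon-side ASN
    "source_address": "10.10.6.252",      # uplink IP of the first NSX Edge (example)
    "max_hop_limit": 2,
    "hold_down_time": 15,
    "keep_alive_time": 5,
    "out_filter": "rfc-1918-allow",
}

def validate_neighbor(cfg):
    """Basic consistency checks on a planned BGP neighbor configuration."""
    assert cfg["hold_down_time"] > cfg["keep_alive_time"], "hold must exceed keepalive"
    assert cfg["hold_down_time"] == 3 * cfg["keep_alive_time"], "article uses 3:1 timers"
    assert cfg["max_hop_limit"] >= 2, "endpoint is one routed hop from the uplink"
    assert cfg["out_filter"] == "rfc-1918-allow", "out-filter must apply the prefix list"
    return True
```

The 15s/5s pair follows the common 3:1 hold-to-keepalive convention, and a hop limit below 2 would prevent the session from forming because the route server endpoint sits one routed hop beyond the uplink gateway.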

  12. Finally, from the new T0 overview, you will need to expand Route Re-Distribution.

Route re-distribution allows the T0 Gateway to control which route sources—both from the T0 itself and from connected T1 routers—are advertised to the Amazon EVS VPC route table and which sources are excluded.

Select the number in blue next to Route re-distribution, and choose Add Route Re-Distribution.

Provide a Name, and ensure Destination Protocol is set to BGP. Select the number in blue next to Route re-distribution and ensure that the sources are set to what is defined within the screenshot below. Once updated, select Apply and Apply again.

Figure 15 – Required sources to be enabled for route re-distribution.

The Tier‑0 configuration is now complete, and BGP sessions should be established with your VPC Route Server Endpoints. You can now proceed with configuring the Tier‑1 Gateway.

Task 6: Create Tier-1 (T1) Gateway on the new NSX Edge Cluster

Finally, create a Tier‑1 (T1) Gateway to provide the east–west routing tier for your workloads and connect them to the new T0.

  1. From the NSX Manager cluster IP, navigate to Networking > Tier-1 Gateways.
  2. Select Add Tier-1 Gateway.
  3. Specify a Name for your T1. Change HA Mode to Active Standby.
  4. Select the new T0 you created in Task 5 from the Linked Tier-0 Gateway drop-down arrow.
  5. Select the Edge Cluster created in Task 4 from the down arrow.
  6. Finally, from the new T1 overview, expand Route Advertisement.
  7. Enable all 7 toggle switches, matching the screenshot below:

Figure 16 – Required route advertisement switches to be enabled.

  8. Select Save and then Close Editing.

The Tier-1 configuration is now complete, and ready for you to create the required network segments for the new edge cluster.

Note: For more detailed deployment instructions, refer to the official Broadcom documentation.

Conclusion

Implementing a multi-edge architecture in Amazon EVS provides a powerful solution for customers with demanding network throughput requirements. By distributing NSX Edge appliances across multiple ESXi hosts, organizations can effectively segregate traffic types, increase available bandwidth, and eliminate network resource contention between critical workloads.

The multi-edge deployment model unlocks the full potential of Amazon EVS for I/O-intensive workloads, providing the scalability and performance isolation that enterprise customers need. Whether you're managing frequent large-scale backups, supporting high-throughput applications, or simply seeking to optimize network resource utilization, the multi-edge architecture offers a proven path forward.

Ready to implement multi-edge in your Amazon EVS environment? Review the implementation steps outlined in this post, and consider how traffic segregation could benefit your specific workload requirements. For additional guidance and support, consult the Amazon EVS documentation or reach out to your AWS account team.