[How to] Enable Multicast in VMware Cloud on AWS - NSX environment

3 minute read
Content level: Intermediate

Some applications require multicast traffic for their clustering functionality to work. In the on-premises VMware environment, the full version of NSX-T supports this. However, for customers who are in the process of migrating workloads to VMware Cloud on AWS (VMC), this feature has limitations, and there is no straightforward document that explains the required steps. This article should help you set up and enable multicast in a VMC environment.

I had a customer who was running a few application VMs in their on-premises datacenter that used multicast as the main mechanism for forming cluster blocks. They were in the process of migrating workloads from on-premises to VMware Cloud on AWS SDDCs. The application architect wanted to ensure that VMC supports multicast so that they could migrate the VMs (lift and shift) without major downtime or configuration changes to their application clusters.

Let's see how things work in the VMC world.

In a VMC SDDC, the multicast feature is enabled by default. In SDDC networks, layer 2 multicast traffic is treated as broadcast traffic on the network segment where the traffic originates; it is not routed beyond that segment.

**VMC limitations:**

  • Optimisation features such as IGMP snooping are not supported.
  • Layer 3 multicast (such as Protocol Independent Multicast) is not supported in VMware Cloud on AWS.
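As a quick way to observe the layer 2 behavior described above, you can push a test datagram to a multicast group between two VMs. This is a minimal sketch, assuming socat is installed on both VMs and eth0 is the segment-facing interface (adjust names to your environment):

# On the receiving VM: join group 239.192.197.125 and print a datagram arriving on UDP 9106
socat -u UDP4-RECVFROM:9106,ip-add-membership=239.192.197.125:eth0 -

# On the sending VM: send a single datagram to the group
echo "multicast-test" | socat -u - UDP4-DATAGRAM:239.192.197.125:9106

If the datagram arrives when both VMs share a segment but not when they sit on different segments, that matches the flood-within-segment behavior above.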

In the example case above, the customer has L2 multicast, so let's check whether things work by using the omping command.

Example: run the omping command from the source VM to the destination VM. (Make sure the VMs are in the same cluster within the SDDC and use NSX segments within the VMC address range.)

omping -m 239.192.197.125 -p 9106 172.11.78.18 
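Note that omping must be running on every participating node at the same time, and each invocation should be given the full list of node addresses (omping skips its own local address). A minimal sketch, assuming a hypothetical source VM at 192.17.41.10 and the destination VM at 172.11.78.18; substitute your own addresses:

# Run the same command on BOTH VMs, listing all participating node addresses
omping -m 239.192.197.125 -p 9106 192.17.41.10 172.11.78.18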

Check whether the source VM is able to receive the multicast response from the destination.

If you get an error such as "omping: Given address 172.11.78.18 is not valid multicast address", or the multicast lines never show a response, things are not working as expected and you need to drill down into the firewalls.
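For reference, a healthy run prints paired unicast and multicast response lines similar to the illustrative output below (not captured from the actual case). When only the unicast lines appear, the multicast traffic is being dropped somewhere along the path:

172.11.78.18 :   unicast, seq=1, size=69 bytes, dist=0, time=0.337ms
172.11.78.18 : multicast, seq=1, size=69 bytes, dist=0, time=0.416ms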

In my case, the customer had the VMware NSX Distributed Firewall (DFW) in place, so we had to allow the multicast address range in the DFW to make things work.

**Below are the rules that were created in the DFW:**

Configuration change in the NSX DFW (needed only if multicast does not work by default):

Allow the source and destination ports explicitly.

For example, if the source VM IP range is 192.17.41.0/24 and the destination VM range is 172.11.78.0/24, enable UDP communication between those networks and the multicast range 224.0.0.0/4.
In the DFW, create a new rule for this traffic: source IP 172.11.78.0/24 (add 192.17.41.0/24 as well if the cluster members sit on different segments), destination 224.0.0.0/4, port ANY, protocol UDP. If you manage the SDDC programmatically, the same rule is sketched via the NSX Policy API below.
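The sketch below is hypothetical: the policy path (cgw domain, application policy), the rule ID allow-multicast-udp, and the NSX_URL/TOKEN placeholders are all assumptions to adapt to your own SDDC, and the request schema should be verified against your NSX version:

# Hypothetical sketch: create/replace a DFW rule allowing UDP from the VM subnets to the multicast range.
# NSX_URL is your SDDC's NSX reverse-proxy URL and TOKEN a valid CSP auth token (both placeholders).
curl -X PUT "$NSX_URL/policy/api/v1/infra/domains/cgw/security-policies/application/rules/allow-multicast-udp" \
  -H "csp-auth-token: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "display_name": "allow-multicast-udp",
    "action": "ALLOW",
    "direction": "IN_OUT",
    "source_groups": ["172.11.78.0/24", "192.17.41.0/24"],
    "destination_groups": ["224.0.0.0/4"],
    "service_entries": [
      {"resource_type": "L4PortSetServiceEntry", "display_name": "any-udp", "l4_protocol": "UDP"}
    ],
    "scope": ["ANY"]
  }'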

Use omping to confirm multicast connectivity after the network changes.
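For example, re-run the same test on both VMs (same hypothetical addresses as before) and stop it with Ctrl-C to see the summary:

# After the DFW change, the multicast summary line should report close to 0% loss
omping -m 239.192.197.125 -p 9106 192.17.41.10 172.11.78.18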
