This guest post is by David Espejo, who blogs at vcloudopia.wordpress.com, where you can find his back catalogue of posts. David writes in Spanish, so if you don’t speak Spanish be sure to use a browser that will translate for you. Find out more about the guest blogger program here.
It’s been a long time since my last post, and as always, I appreciate the opportunity that the vBrownBag community offers me to publish here. I’ve been on the road with a growing list of project deadlines due to a new consulting role that I accepted, but I’m learning a lot and I hope to share all that new knowledge here.
On this occasion I will talk about what has often been cited as the main barrier to VXLAN adoption: the requirement for multicast in the physical network, and the operational burden of deploying and managing that requirement.
This issue is well known to VMware and Cisco, two of the main backers of VXLAN, and both have upgraded their product lines to offer VXLAN tunneling in different flavors, which I have classified using their level of dependence on multicast as the criterion. I describe the three variants identified so far below:
1. vCloud Networking & Security: cheap like a puppy
If an organization has access to vCloud Suite Standard Edition, it already has vCNS licensing in its pocket. With this product you can create a complete VXLAN implementation whose networks can even be consumed as organization networks by tenants in vCloud Director. However, the presence of multicast on the physical network is absolutely necessary here, an aspect to take into account when evaluating the solution and its operational cost.
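To make that dependence concrete, here is a minimal Python sketch (not vCNS code; the VNI, multicast group, and dummy frame are made up for illustration) of what a VTEP does with BUM traffic in this flood-and-learn mode: it encapsulates the frame and sends it once to the multicast group mapped to the segment, which is exactly why the underlay must forward multicast (IGMP snooping, PIM, and so on).

```python
import socket
import struct

VXLAN_PORT = 4789          # IANA-assigned VXLAN UDP port
VNI = 5001                 # hypothetical VXLAN Network Identifier
MCAST_GROUP = "239.1.1.1"  # hypothetical multicast group mapped to this VNI

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte (I bit set) + 24-bit VNI."""
    return struct.pack("!II", 0x08 << 24, vni << 8)

def flood_bum_frame(inner_frame: bytes) -> None:
    """Flood-and-learn: a broadcast/unknown-unicast/multicast (BUM) frame is
    encapsulated and sent once to the segment's multicast group; every VTEP
    joined to that group receives it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)
    sock.sendto(vxlan_header(VNI) + inner_frame, (MCAST_GROUP, VXLAN_PORT))

# One send reaches all VTEPs in the segment, but only if the physical
# network delivers multicast end to end.
flood_bum_frame(b"\xff\xff\xff\xff\xff\xff" + b"\x00" * 54)  # dummy broadcast frame
```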
2. Cisco Nexus 1000V: living on the edge of the RFC
In early 2013, Cisco announced at least two new mechanisms in the Cisco Nexus 1000V that make it possible to implement VXLAN without the multicast requirement:
a. The Nexus 1000V can create multiple replicas of each packet and send one replica to each VTEP on the network using unicast only (see the sketch after this list). This option is clearly far from scalable, because in a large environment it would produce something similar to a unicast "storm".
b. Use the VSM (Virtual Supervisor Module) that is part of the Nexus 1000V implementation to effectively act as a control plane, distributing the MAC addresses of virtual machines to the VEMs (Virtual Ethernet Modules) in the network. This, though effective, contradicts the VXLAN RFC, which says the protocol should not depend on a control-plane entity or any centralized administration. Cisco knows that but, well, the customer is always right 😉
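Here is a minimal Python sketch of mechanism (a), head-end replication (the VTEP list and addresses are hypothetical); it makes the scaling problem obvious, since a single ingress frame becomes N unicast copies:

```python
import socket

VXLAN_PORT = 4789
# Hypothetical list of remote VTEPs in the same VXLAN segment.
REMOTE_VTEPS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def head_end_replicate(encapsulated_frame: bytes) -> None:
    """Replicate one BUM frame to every remote VTEP using plain unicast.
    The cost grows linearly with the number of VTEPs: with hundreds of
    hosts per segment, each broadcast becomes hundreds of unicast copies,
    which is the "storm" mentioned above."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for vtep in REMOTE_VTEPS:
        sock.sendto(encapsulated_frame, (vtep, VXLAN_PORT))
```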
3. VMware NSX: the rise of the distributed control plane
During VMworld US 2014, Dr. Bruce Davie shared advanced concepts and the future of NSX in the NET1674 breakout session.
There he detailed the use of VXLAN as the tunneling protocol for NSX and, as far as our topic is concerned, how NSX uses OVSDB (Open vSwitch Database) as the protocol through which the NSX controller distributes the MAC address table of virtual machines to the physical switches that support the protocol and are part of the infrastructure over which NSX is deployed.
OVSDB is a client-server management protocol that aims to give programmatic access to the OVSDB database, which contains information on how the Open vSwitch switching mechanism operates, for example the MAC address table of the VMs in the environment.
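Since OVSDB (RFC 7047) is just JSON-RPC over TCP, a controller can program any OVSDB-capable switch over an ordinary IP network. As a minimal illustration (the host address is hypothetical; 6640 is the IANA-registered OVSDB port, and list_dbs is a standard method), this Python sketch asks an OVSDB server which databases it hosts:

```python
import json
import socket

OVSDB_HOST, OVSDB_PORT = "192.168.1.50", 6640  # hypothetical OVSDB endpoint

def ovsdb_call(method: str, params: list) -> dict:
    """Send one JSON-RPC request to the OVSDB server and read the reply."""
    request = {"method": method, "params": params, "id": 0}
    with socket.create_connection((OVSDB_HOST, OVSDB_PORT)) as sock:
        sock.sendall(json.dumps(request).encode())
        reply = sock.recv(65536)  # naive single read; real clients stream JSON
    return json.loads(reply)

# "list_dbs" enumerates the databases the server hosts (for example, the
# hardware VTEP schema on a physical switch).
print(ovsdb_call("list_dbs", []))
```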
In this case, the main mechanism by which NSX creates virtual networks (call them segments, virtual wires, tunnels, etc.) is VXLAN, and since the task of distributing and updating the MAC address tables is taken over by the OVSDB protocol, the solution only requires the presence of IP as a transport or underlay, thus eliminating the requirement for multicast on the physical network.
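A toy sketch of the resulting data path (the MAC-to-VTEP bindings and addresses are invented): once the controller has pushed the bindings, finding the egress VTEP for a frame is a simple table lookup, and the encapsulated packet travels as ordinary unicast IP:

```python
import socket

VXLAN_PORT = 4789
# Hypothetical MAC-to-VTEP table: in this model the controller distributes
# these bindings (e.g. over OVSDB) instead of the VTEPs learning by flooding.
MAC_TO_VTEP = {
    "00:50:56:aa:00:01": "10.0.0.11",
    "00:50:56:aa:00:02": "10.0.0.12",
}

def forward(dest_mac: str, encapsulated_frame: bytes) -> None:
    """Look up the destination VTEP and send a single unicast packet.
    No flooding and no multicast group: the underlay only needs to route IP."""
    vtep = MAC_TO_VTEP.get(dest_mac)
    if vtep is None:
        return  # unknown MAC: query the controller rather than flooding
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(encapsulated_frame, (vtep, VXLAN_PORT))
```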
However, not all organizations will be able to embrace this approach, as a new requirement arises (one perhaps more ethereal than multicast): physical switches that support OVSDB. There are currently OVSDB-capable devices from manufacturers like Juniper and Arista, and apparently OVS is here to stay, judging by its adoption not only by large manufacturers but also by OpenDaylight, the Linux Foundation initiative that seeks to establish a common framework for the advancement of SDN as an open and interoperable program (the world is beginning to react against the proliferation of vendor lock-in, or so I hope).
Regards!