This guest post is by David Espejo, who blogs at vcloudopia.wordpress.com, where you can find his back catalogue of posts. David writes in Spanish, so if you don’t speak Spanish be sure to use a browser that will translate for you. Find out more about the guest blogger program here.
VXLAN seems increasingly to be on the lips of everyone who follows the network virtualization market these days. On one hand, roughly three weeks ago the Internet Engineering Task Force (IETF) promoted the VXLAN specification from draft status to a published RFC, which gives the technology a more solid foundation for consolidation.
On the other hand, the growing adoption of VMware's SDN suite, NSX, has led partners such as Juniper Networks and Arista Networks, among others, to offer devices capable of integrating with NSX using VXLAN as the overlay protocol, thanks to its capabilities and its flexibility in meeting infrastructure requirements.
That is the motivation for this series of two posts which, fingers crossed, will grow into more entries on the progress of NSX as an SDN platform.
Let’s get down to it.
VXLAN: what is it?
It is an overlay network protocol, defined in RFC 7348 (which the IETF adopted a few weeks ago), that uses IP as the underlying transport on which it implements LAN segments similar to VLANs, but with a different encapsulation scheme that greatly increases the number of networks that can be carried, while leveraging the ubiquity and geographical reach of IP to add flexibility and extensibility to Layer 2 network deployments.
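To make that encapsulation scheme concrete, here is a minimal sketch in Python (my own illustration, not code from the RFC) of the 8-byte VXLAN header that RFC 7348 prepends to the original frame: an 8-bit flags field (0x08 marks the VNI as valid), 24 reserved bits, the 24-bit VNI itself, and one final reserved byte.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348.

    Layout: flags (8 bits; 0x08 = "VNI is valid"), 24 reserved bits,
    the 24-bit VNI, and one final reserved byte.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("the VNI must fit in 24 bits")
    flags = 0x08
    # First 32-bit word: flags in the top byte, reserved bits zeroed.
    # Second 32-bit word: VNI in the top 24 bits, final reserved byte zeroed.
    return struct.pack("!II", flags << 24, vni << 8)

assert vxlan_header(5001) == b"\x08\x00\x00\x00\x00\x13\x89\x00"
```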
Why VXLAN?
One of the challenges that network architects currently face in the modern data center has to do with the limitations of the ubiquitous Spanning Tree Protocol (STP), practically the de facto mechanism that organizations implement to prevent Layer 2 loops. STP, however, inevitably leaves many links unused, so at the massive scale of cloud data centers this translates into a huge investment in network ports that ultimately cannot be used.
Furthermore, unlike Layer 3 protocols such as IP, Layer 2 networks have limited mobility outside the data center. Extending or activating Layer 2 domains at an alternate site carries a significant operational load, which reduces the options for effectively implementing requirements such as Disaster Recovery of virtualized workloads to an alternate data center while keeping all the connectivity properties of the main site in an automated fashion.
With VLANs, the traditional Layer 2 isolation mechanism, the VLAN ID is defined as a 12-bit numeric attribute, limiting the number of possible networks to 4094. For many years the possibility of hitting this limit seemed very distant, but in today’s data center, with multiple tenants consuming connectivity services while creating their own networks and services, this limit has quickly become a visible obstacle to data center scalability.
Figure 1: VXLAN adds a 24-bit ID before sending the packet over the IP network. The destination performs decapsulation.
VXLAN therefore arises as a joint proposal from vendors such as VMware, Cisco, Red Hat, and Arista, among others, to become the mechanism that effectively extends the functionality of VLANs in both scale and scope. Scale, because its network identifier, known as the VNI (VXLAN Network Identifier, also called the Segment ID), is 24 bits long, allowing a maximum of roughly 16 million networks to coexist in the same VXLAN environment.
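The difference in address space is easy to quantify with a quick back-of-the-envelope comparison:

```python
usable_vlans = 2 ** 12 - 2   # VLAN IDs 0 and 4095 are reserved -> 4094 usable
vxlan_segments = 2 ** 24     # 24-bit VNI -> 16,777,216 possible segments

print(f"VLAN IDs  : {usable_vlans:,}")     # 4,094
print(f"VXLAN VNIs: {vxlan_segments:,}")   # 16,777,216 (~16 million)
```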
VXLAN extends the scope of VLANs by leveraging the IP protocol, which is so widely implemented and offers several mobility mechanisms, as the transport for the encapsulated L2 networks. That is why VXLAN is also included in the category of tunneling protocols.
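As a rough illustration of the tunneling idea (a hypothetical sketch building on the vxlan_header helper above, not how a real VTEP is implemented): the original Ethernet frame travels untouched as the payload of an ordinary UDP datagram, prefixed by the VXLAN header, with the outer IP/UDP headers supplied by the host's networking stack.

```python
import socket

def send_encapsulated(inner_frame: bytes, vni: int, remote_vtep_ip: str) -> None:
    """Illustrative only: wrap an L2 frame in VXLAN and ship it over UDP.

    A real VTEP does this in the hypervisor kernel; here the OS socket
    layer adds the outer IP/UDP headers for us.
    """
    # vxlan_header and VXLAN_UDP_PORT come from the earlier sketch.
    payload = vxlan_header(vni) + inner_frame
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (remote_vtep_ip, VXLAN_UDP_PORT))
```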
Components
The following is a list of the components required for an implementation of VXLAN in a VMware environment:
A. VTEP (VXLAN Tunnel Endpoint): the entity that originates and/or terminates VXLAN tunnels. It is implemented on each ESXi host and consists of the following modules:
1. vmkernel module: part of the vSphere Distributed Switch (dVS), installed as a VIB in the hypervisor. It acts as the data plane, maintaining forwarding tables and handling packet encapsulation and decapsulation.
2. vmknic adapter: one is installed on each host when the cluster is prepared for VXLAN. It is responsible for transporting VXLAN control traffic, acting in part as a control plane: it learns which endpoints are reachable over the IP network and feeds the table maintained by the data plane in the VXLAN vmkernel module (a rough sketch of that table follows this list).
3. VXLAN port group: carries the traditional settings of vSphere dVS port groups. A new port group is automatically created each time a VXLAN network or segment is deployed.
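As a loose analogy for what the control plane and data plane exchange (purely illustrative names and entries, not VMware's actual vmkernel implementation): the control plane populates a table mapping inner MAC addresses to remote VTEP IPs, and the data plane consults it for every outgoing frame before encapsulating.

```python
# Illustrative only: the MAC-to-VTEP table the control plane feeds
# and the data plane consults. The entry below is hypothetical.
mac_to_vtep: dict[tuple[int, str], str] = {
    # (VNI, destination MAC of the inner frame) -> remote VTEP IP
    (5001, "00:50:56:aa:bb:cc"): "10.20.30.40",
}

def lookup_remote_vtep(vni: int, dst_mac: str) -> str | None:
    """Data-plane lookup: which VTEP should receive this encapsulated frame?"""
    return mac_to_vtep.get((vni, dst_mac))
```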
B. vCNS Manager: this is the actual management plane, as it provides centralized management of VXLAN segments and of the vmknic interfaces of VXLAN-prepared clusters. It is not part of the data traffic path, but it keeps the current state of the VXLAN implementation within a vCenter domain.
C. vCNS Gateway: an appliance that provides advanced connectivity and security services, deployed and managed by vCNS Manager. This is a truly special component within VMware’s VXLAN architecture because it can connect VMs on VXLAN segments to the "outside world" with features such as NAT, perimeter firewall, DHCP, DNS relay, and VPN, among others, and it is an alternative answer to the eternal question of how to terminate VXLAN tunnels on physical devices. That will be the subject of another post, but for now think of the vCNS Gateway as an effective bridge between VXLAN networks and external connectivity services.
Figure 2: Simplified architecture for a VMware VXLAN implementation
In the next post we will look at the mechanisms by which VXLAN operates and the practical steps to implement the protocol in a VMware environment.
Regards!