
VXLAN: CONCEPTS, OPERATION AND IMPLEMENTATION (2-2) by @davidmirror

This guest post is by David Espejo, who blogs at vcloudopia.wordpress.com, where you can find his back catalogue of posts. David writes in Spanish, so if you don’t speak Spanish be sure to use a browser that will translate for you. Find out more about the guest blogger program here.

It has been a while since I published the first part of this series, so it’s time to continue.

The more I learn about VXLAN, the harder it is to compress everything into a single blog post. On this occasion I will cover the implementation of VXLAN in a VMware vSphere environment; the following topics will be covered in subsequent posts:

A. VXLAN and the multicast requirement: three different levels of (in)dependence on multicast

B. vCNS Edge as an efficient and cost-effective VXLAN gateway

C. A day in the life of a VXLAN frame

Without further ado, let’s get down to business.

To prepare a cluster for VXLAN, the following prerequisites must be met:

1. A VLAN provisioned on the physical network. This will be used for communication between VTEPs.

2. All hosts in the cluster must be attached to the same vSphere Distributed Switch (dVS).

3. The physical network must support the transport of frames of at least 1600 bytes without fragmentation, to accommodate the VXLAN encapsulation overhead.
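The 1600-byte figure follows directly from the headers a VTEP wraps around each frame. A minimal sketch of the arithmetic, using the standard VXLAN-over-IPv4 header sizes from RFC 7348:

```python
# Why the transport network needs an MTU of at least 1600 bytes:
# a VTEP prepends outer Ethernet, IPv4, UDP and VXLAN headers to the
# original VM frame, adding 50 bytes in total.

INNER_PAYLOAD = 1500      # default VM (inner) MTU
INNER_ETHERNET = 14       # inner Ethernet header, no 802.1Q tag
VXLAN_HEADER = 8          # flags + reserved + 24-bit VNI
OUTER_UDP = 8
OUTER_IPV4 = 20           # no IP options
OUTER_ETHERNET = 14       # outer header added by the VTEP

overhead = VXLAN_HEADER + OUTER_UDP + OUTER_IPV4 + OUTER_ETHERNET
encapsulated = INNER_PAYLOAD + INNER_ETHERNET + overhead

print(f"VXLAN overhead: {overhead} bytes")          # 50 bytes
print(f"Full-size frame on the wire: {encapsulated} bytes")  # 1564 bytes
```

A full-size encapsulated frame comes to 1564 bytes (1568 if the inner frame carries a VLAN tag), so the 1600-byte minimum leaves a little headroom.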

The procedure is performed from the vCNS Manager web interface with the following steps:

1. Go to Datacenters > Network Virtualization > Preparation

2. Click on Edit…


3. Select a cluster and associate it with the respective dVS and VLAN ID previously configured:


4. On the next screen, select the NIC teaming policy for the uplinks of the dVS to be prepared. Etherchannel, Link Aggregation Group (LAG) and Static Failover are currently supported. I’ll be using Static Failover, as I don’t have LAG or Etherchannel deployed in my lab’s physical network.


In my case, the outcome of the previous steps looked worse than it should have: the hosts showed a “Not ready” status.

This “Not ready” status was caused by the lack of a DHCP server on the VLAN used for inter-VTEP communication. A simple workaround is to manually assign IP addresses to the VTEPs, which correspond to vmknic ports on each host of the selected cluster.
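When assigning those addresses by hand, it helps to lay out the plan before touching each host. A small sketch of such a plan, where the subnet, offset and host names are purely illustrative:

```python
# Hypothetical helper: plan static IPs for VTEP vmknics when there is no
# DHCP server on the inter-VTEP VLAN. The subnet and host names below are
# made-up examples, not vCNS defaults.
import ipaddress

def vtep_ip_plan(subnet: str, hosts: list, first_host_offset: int = 10):
    """Map each ESXi host to one static IP for its VTEP vmknic."""
    addrs = list(ipaddress.ip_network(subnet).hosts())
    return {h: str(addrs[first_host_offset + i]) for i, h in enumerate(hosts)}

plan = vtep_ip_plan("192.168.50.0/24", ["esxi-01", "esxi-02", "esxi-03"])
for host, ip in plan.items():
    print(f"{host}: assign {ip} to its vmknic on the VXLAN transport VLAN")
```

With an offset of 10, the three hosts get 192.168.50.11 through .13, leaving the bottom of the range free for gateways or other infrastructure.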

Now we have to configure the multicast address ranges and the VXLAN Network ID (VNI) ranges for this particular deployment. To do that, follow these steps:

5. Click on “Segment ID” > “Edit…”


It is necessary to configure the same number of VNIs (“Segment IDs”) as multicast IP addresses, because each VXLAN network takes one multicast address from the range.
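That one-to-one pairing can be sketched as a simple mapping from the Segment ID range to the multicast range. The ranges below are illustrative examples, not vCNS defaults:

```python
# Sketch of the pairing between Segment IDs (VNIs) and multicast groups:
# each VXLAN network (VNI) is assigned one multicast address, so the two
# ranges must be the same size.
import ipaddress

def map_vni_to_multicast(vni_start: int, vni_end: int, mcast_start: str):
    """Pair each VNI in [vni_start, vni_end] with one multicast group."""
    base = int(ipaddress.ip_address(mcast_start))
    return {vni: str(ipaddress.ip_address(base + i))
            for i, vni in enumerate(range(vni_start, vni_end + 1))}

mapping = map_vni_to_multicast(5000, 5004, "239.0.0.1")
print(mapping)  # five VNIs consume exactly five multicast addresses
```

Five Segment IDs (5000–5004) consume the five multicast addresses 239.0.0.1 through 239.0.0.5, which is why the two ranges must match in size.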

The next steps apply only if the environment is not managed by vCloud Director (in that case, the Provider vDC would now be ready to deploy VXLAN segments to tenants).

6. Go to “Network Scopes” and create a scope that includes the previously prepared cluster.

7. At this point you can start to create virtual wires (also called segments or VXLAN networks), which behave like traditional dVS port groups to which you connect virtual machines. Create the networks using the “Networks” > Add link.


8. Finally, under the “Edges” link, a new vCNS Edge instance can be created to act as a gateway, giving VMs connected to a virtual wire access to the “external world” so they can communicate with Internet services or physical equipment on traditional VLANs.

I will elaborate more on this in a further post.

Regards!