Pure Storage Build Day Wrap Up

Build Day number two is in the can; we had a great week in Silicon Valley with Pure Storage. We initially thought we wouldn’t have enough content for a Build Day with Pure. The array deployment is a 30-minute task, and from there it is simple to operate. Happily, we realized that we could show more than day one: we could show a lot of the day two operations. We got to see some radical simplicity in the management of the FlashArray and did a whole lot of lifecycle operations on the array. You can see the live part of the Build Day in this video, and all of the Pure Storage Build Day videos in this playlist.

This Build Day had a few issues that real customers wouldn’t see, mostly because customers get factory-fresh hardware and we did not.

What is FlashArray?

Pure’s first product was FlashArray; I expect it is still the bulk of Pure’s business. FlashArray is an All-Flash Array, with no spinning disk anywhere to be seen. The FlashArray//M range of products uses a custom 3U rack enclosure that houses two controllers, two or four NVRAM modules, and up to twenty data modules. Each data module has a pair of SSDs; the modules are supplied in a set of ten (or twelve) called a Data Pack. Each controller has 1Gb RJ45 connectors for the management network, 10Gb Ethernet or 16Gb Fibre Channel for the storage network, and additional Ethernet for replication. Additional storage shelves can be added for more capacity. The effective capacity ranges from 30TB to 1.5PB in the FlashArray//M range. Pure doesn’t spend all their time on the tech specifications; they have some interesting programs that are more aimed at the operational and financial challenges of enterprise IT. One aspect is a program where maintenance also covers hardware upgrades, and another is that the upgrades are non-disruptive. Simplicity in ownership and operation is fundamental to Pure.

There is a new generation of FlashArray, the FlashArray//X, which uses NVMe flash for maximum performance. The FlashArray//M series was designed to be a comparable price to a disk-based array, with better performance; the design decisions on the //M series favored containing cost over maximizing performance. The FlashArray//X70 is designed to accommodate Tier-1 apps, so performance is critical along with availability and advanced data services. Pure also has a scale-out, flash-based object store, FlashBlade, which is not aimed at virtualization but is excellent for analytics workloads like Hadoop.

What did we do on the live stream?

We started, as usual, with an existing vSphere deployment that was in need of an upgrade. In this case, we had VMs on a NAS that was performing very poorly. We deployed a FlashArray//M50 and attached it to the existing vSphere hosts and vCenter. The storage configuration was completed with a vCenter plugin, inside the vSphere Web Client. Once the new storage was available, I kicked off a PowerShell script to migrate a bunch of VMs off the slow NAS onto the FlashArray. The migration of VMs onto the FlashArray was slow as the NAS had only 1Gb Ethernet; that is something to fix before the next Build Day. While the VMs were in flight, we intentionally failed the controller that was serving IO; the second controller took over as you would expect, and the VM migrations continued. As with any dual-controller array, we wanted the failed controller back in service ASAP as we had no further redundancy.
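
For reference, a migration like this needs only a few lines of PowerCLI. The sketch below is not the exact script from the stream, just a minimal example with hypothetical vCenter and datastore names: it finds every VM on the slow NAS datastore and Storage vMotions it onto the FlashArray datastore.

```powershell
# Minimal PowerCLI sketch (hypothetical names, not the script used on the day):
# Storage vMotion every VM from the slow NAS datastore to the FlashArray datastore.
Connect-VIServer -Server vcenter.lab.local

$source      = Get-Datastore -Name 'slow-nas-ds01'
$destination = Get-Datastore -Name 'flasharray-m50-ds01'

# Move each VM that currently lives on the NAS onto the FlashArray.
Get-VM -Datastore $source | ForEach-Object {
    Move-VM -VM $_ -Datastore $destination -Confirm:$false
}
```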

As we already had a controller out of the FlashArray, we decided to do an in-place upgrade from //M50 to //M70. The //M70 controller slid into place and was configured as a replacement for the removed //M50 controller. We then swapped the second controller from //M50 to //M70. The //M70 is higher performance, so we needed to install two additional NVRAM modules. We had replaced the entire brain (both controllers) of the FlashArray without any VM downtime; the entire process took under an hour.

Next, we upgraded the Purity operating system on the array, again one controller at a time and with no VM downtime or loss of IO. Finally, we added another ten SSD data modules to the array, expanding its capacity. Again, no downtime, no loss of IO, and no performance degradation.

As these upgrades progressed, we struck issues with pre-existing configurations on the hardware. The cause was that these were not the factory-fresh components that customers would receive. We had arrays that had been sent out to partners for testing, so all kinds of configurations were present. The //M50 had been in a VMware data center, and the //M70 had been in Zurich, where it had had a couple of extra shelves of SSDs attached. There was a lot of work to remove these configurations from the arrays, something that customers don’t ever need to do. One nice thing was that the Pure Storage support team saw that we were struggling and offered to help, just as they would with a real customer. When we gave the support team access, they resolved issues rapidly, and we were able to keep going.

What else did we do?

In a pre-recorded session, Vaughn Stewart shows how SCSI unmap allows empty space to be reclaimed from VMs. Without unmap support, any deleted file inside a VM continues to consume capacity on the underlying storage forever. When the guest OS issues unmap, and the hypervisor passes that unmap through, the storage array can release the capacity. What we saw was specific to block storage (not NFS) but not specific to vSphere, as Hyper-V and KVM have had unmap support for some time.
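
As a rough illustration, reclaiming space from a Windows guest can be as simple as the one-liner below. This is a generic sketch, not something from the session; it assumes a thin-provisioned virtual disk and a hypervisor configured to pass the unmap through to the array.

```powershell
# Inside a Windows guest: ask NTFS to send TRIM/unmap for all free space on C:,
# so the hypervisor can pass the unmap down and the array can release the blocks.
# Assumes a thin-provisioned disk and unmap pass-through enabled on the host.
Optimize-Volume -DriveLetter C -ReTrim -Verbose
```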

We also looked at VMware’s VVols and vRealize Automation with Cody Hosterman. Automation is a crucial part of simplification at scale; through the week I heard a lot about customers with multiple arrays under management and the need to use automation to deliver on-demand services. In the live stream we deployed vRealize Orchestrator, but since it was late in the day, we didn’t do much with it.
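
The sort of on-demand provisioning those customers automate looks something like the PowerCLI sketch below. It is only an illustration with hypothetical host and datastore names: rescan a host, find a LUN presented by the Pure array, and format it as a VMFS datastore; in practice you would wrap steps like this in a vRealize Orchestrator workflow.

```powershell
# Illustrative sketch (hypothetical names): turn a freshly presented Pure LUN
# into a VMFS datastore, the kind of step an orchestration workflow would run.
Connect-VIServer -Server vcenter.lab.local

$esx = Get-VMHost -Name 'esx01.lab.local'

# Rescan so the host sees the volume the array just presented.
Get-VMHostStorage -VMHost $esx -RescanAllHba | Out-Null

# Pick a LUN by the Pure vendor string and build a VMFS datastore on it.
$lun = Get-ScsiLun -VmHost $esx -LunType disk |
    Where-Object { $_.Vendor -eq 'PURE' } |
    Select-Object -First 1
New-Datastore -VMHost $esx -Name 'pure-ondemand-ds01' -Path $lun.CanonicalName -Vmfs -FileSystemVersion 6
```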

In the dry run, we also needed to get the network configured. The vBrownBag mobile lab network did not have a lot of spare ports and used mostly RJ45 connections, with only a single SFP+ port. The FlashArray uses SFP+ modules for the storage network, so we ended up with an Arista 10Gb switch for the storage network in addition to the 10Gb Netgear XS708E switch. That Arista switch was very noisy; some of the videos have strange audio as Jeffery had to do quite a bit of filtering to get rid of the fan noise.

What impressed me?

Many things impressed me. The best surprise was Stacie Brown, the Systems Engineer who was on camera with me. Stacie’s product and company knowledge were excellent, and her experience with customers made for interesting discussions. Stacie was also calm and unflappable when the live build went wrong, just the person you want in a real deployment, even if nothing goes wrong.

I was impressed with just how much we got done in the live stream:

  1. Deploy an All-Flash Array
  2. Migrate existing VMs to the new array
  3. Upgrade the array controllers from //M50 to //M70 for higher performance
  4. Upgrade the Purity operating system
  5. Expand the array capacity

I think that is about a year’s worth of support tasks completed on the array in half a day. This is the kind of operational simplicity that we need.

What did we learn?

Simplicity is an important part of any product that we deploy in an enterprise data center. Simple to deploy, simple to operate, and simple to update are all important characteristics. The FlashArray//M fulfills all of these and adds a simple business model with its maintenance and upgrade guarantees. Using block storage does still mean that there is LUN management, rather than everything being VM-centered; luckily, VVols allows block storage with VM-centric management.

Resources we used

The main resource we used was Stacie Brown, although she herself was well prepared with access to Pure’s support documents. Some support documents are only available to customers and partners, but there is plenty of documentation on the support site.

One such document is the VMware vSphere Best Practices Guide for the Pure Storage® FlashArray.

Another great resource is Cody Hosterman’s blog, although you will need to set aside some time to read the posts, as Cody writes very technical content, full of useful information.