ZeroVM is a Cloud-specific, or even 'process'-specific, hypervisor. That is to say, it was designed from the ground up to be fast, disposable, and uniquely secure; we'll get into each of those in a bit. ZeroVM was designed to solve some problems that are uniquely Cloud in nature, where compute resources and object storage resources may or may not be local to one another. 'Local' in this case can mean the same datacenter, the same geography, and so on; the point is that they are not on the same box. While this works for lots of use cases, when handling middling to large amounts of data (or even the infamous 'Big Data'), you start to see processing latency: data needs to be copied up out of storage into the compute instance, operated on, and then pushed back down. If storage and compute are not on the same host, this can be an expensive operation.
Figure 1 – Traditional Compute / Object Storage flow:
Now, if you’ll forgive my horrible PowerPoint diagramming, you can get an idea of how data flows through the system. Note the DC Networking in the middle; this was left ambiguous because, depending on your (or your service provider’s) implementation, you may traverse more than one network to access your data, and that access may or may not be over ‘public’ connections. All in all, it’s a relatively inefficient flow.
Enter ZeroVM. From the ZeroVM website:
Bulky VM instances cannot be integrated into a storage cloud. They require a dedicated cloud, resulting in excessive back and forth shipping of data. This problem is exacerbated by data–intensive workloads such as crunching logs and processing videos. VM embeddability and lightweightness solve this problem.
With ZeroVM, which has an OpenStack project, “Zwift”, the ZeroVM bits can be integrated into your Swift storage layer. This lets you build applications that take advantage of compute capacity that normally sits idle on the storage nodes, allowing your compute requests to stay local to the data for execution:
Figure 2 – The ZeroVM approach
In this go-around, the diagram has been greatly simplified. Each ZeroVM process now operates on the storage node where the data it needs resides in Swift, rather than shipping that data across the network.
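To make that concrete, here is a minimal sketch of my own (not part of ZeroVM itself) of the kind of job you would push down to the storage node: a small Python filter that reads log lines on stdin and emits only the errors, so only the reduced result ever needs to cross the network. The `ERROR` marker is an assumption for illustration.

```python
import sys

def filter_errors(lines):
    """Yield only the lines that look like errors.

    The 'ERROR' marker is an illustrative assumption; a real job
    would match whatever your log format actually uses.
    """
    for line in lines:
        if "ERROR" in line:
            yield line.rstrip("\n")

if __name__ == "__main__":
    # Inside ZeroVM this would run right next to the object in Swift,
    # reading the data locally instead of copying it to a compute node.
    for line in filter_errors(sys.stdin):
        print(line)
```

Run over a multi-gigabyte log, a filter like this returns kilobytes instead of moving the whole object.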
Some Important Points
As I wrap up, I wanted to call out some very important points:
- ZeroVM is still under active development. Things are changing, rapidly. Things that work today, may not tomorrow, etc.
- ZeroVM is not a traditional hypervisor, nor is it containers or a JVM. Rather, in ZeroVM each request is instantiated as its own fully functional, isolated, 75 KB executable in which a few executions are carried out, and then the instance is destroyed.
- Applications won’t really work ‘out of the box’. Rather, ZeroVM provides a GCC toolchain so you can cross-compile them to function within the context of ZeroVM.
This post so far has been designed to get you familiar with the idea of ZeroVM and how it might benefit you.
ZeroVM Demo Time
On a parting note (or if you’ve skipped ahead past the TL;DR bits), there is a demo. For this, I assume you have Vagrant and one of the VMware plugins (at this time VirtualBox doesn’t support all of the SSE instruction sets required, and I’ve not tested it).
To get started with ZeroVM, do the following:
git clone https://github.com/bunchc/vagrant-zerovm
cd vagrant-zerovm
vagrant up --provider=rackspace
vagrant ssh
From here you will be able to follow the ZeroVM examples, like this:
$ wget https://zvm.rackspace.com/v1/repo/ubuntu/samples/python.tar
--- snip ---
2014-01-22 04:45:07 (1.90 MB/s) - `python.tar' saved [72519680/72519680]

vagrant@ZeroVM:~$ echo 'print "Hello"' > hello.py
vagrant@ZeroVM:~$ zvsh --zvm-image python.tar python @hello.py
Hello
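If you want something slightly more interesting than hello-world, any self-contained script should run the same way against the same `python.tar` image. Here is a hypothetical word-count script of my own as a sketch; the `zvsh` invocation in the comment mirrors the one above, and everything else is plain Python.

```python
import collections
import sys

def word_counts(text):
    """Return a Counter mapping each lowercase word to its frequency."""
    return collections.Counter(text.lower().split())

if __name__ == "__main__":
    # Sketch: with the demo image above you would run something like
    #   zvsh --zvm-image python.tar python @wordcount.py
    # The script reads stdin, so it needs no host filesystem access.
    counts = word_counts(sys.stdin.read())
    for word, n in counts.most_common():
        print("%s %d" % (word, n))
```

Keeping the script self-contained (stdlib only, stdin in, stdout out) is the easiest way to stay within ZeroVM's isolated execution model.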