VMDirectPath? Paravirtual SCSI? – vSphere VM Options and You!

This post comes because I am just as confused as the rest of you when it comes to the options available in some of these new vSphere interfaces. I figure it best to take a look at some of these options and work out when it makes sense to move away from the defaults and start tuning things just right. Today those options are VMDirectPath and Paravirtual SCSI.

Note: This is what I get for complaining about writer’s (blogger’s?) block on Twitter. This one is another post that originated from the great Mr. @rogerlund. (The picture is by webhamster.) More notes! I asked to be called on it, and was. There is some discussion in the comments about what exactly VMDirectPath will do for us, and we found some ‘oddities’ in the VMware docs. Read on!

Specifically, Mr. Lund asked me to answer the following three questions: “Which applications would be good for Paravirtual SCSI? Which for VMDirectPath? And how do we choose?” Because it is hard to decide when you don’t know what does what, let us start with definitions! Yes, they are boring, and no, there will not be a test.


Paravirtual SCSI Adapters (PVSCSI)

The ‘vSphere Basic System Administration’ guide says this (p116):

“Paravirtual SCSI (PVSCSI) adapters are high-performance storage adapters that can result in greater throughput and lower CPU utilization. Paravirtual SCSI adapters are best suited for high performance storage environments. Paravirtual SCSI adapters are not suited for DAS environments. VMware recommends that you create a primary adapter (LSI Logic by default) for use with a disk that will host the system software (boot disk) and a separate PVSCSI adapter for the disk that will store user data, such as a database.”


We had to jump to page 116 to get this one for VMDirectPath:

“VMDirectPath I/O allows a guest operating system on a virtual machine to directly access physical PCI and PCIe devices connected to a host. Each virtual machine can be connected to up to two PCI devices. PCI devices connected to a host can be marked as available for passthrough from the Hardware Advanced Settings in the Configuration tab for the host.”

However, this passage in the Configuration Examples doc for VMDirectPath seems to conflict a bit:

“VMDirectPath allows guest operating systems to directly access an I/O device, bypassing the virtualization layer. This direct path, or passthrough, can improve performance for VMware ESX™ systems that utilize high-speed I/O devices, such as 10 Gigabit Ethernet.”

What then is VMDirectPath? It seems that it will let us attach a PCI I/O device directly to a VM, bypassing the virtualization layer for better I/O performance.

So now that we know what these are… we can start to draw some differences in what they do, and when they might be required.

When & Why?

I figure one of these questions is closely related to the other. Thus it makes sense that we will discuss them together. At least, it makes sense to me. Let’s go! (This pic was borrowed from Pete Reed.)

Paravirtual SCSI (PVSCSI)

Part (most) of the when & why for PVSCSI is provided in the quote above. You use PVSCSI when you need a high-performance virtual storage adapter. What does this mean? It means you will not use this for your AD server or print server. You also generally will not use it on local storage, or DAS (Direct Attached Storage).

When would you use it? Well, I’m glad you asked. Remember that database you were not virtualizing because of its high I/O requirements? That graphics rendering app for marketing that let the magic blue smoke out of your last SAN array? These are good candidates for PVSCSI.
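If you are curious what the change actually looks like, the adapter type lives in the VM’s .vmx file. Here is a minimal sketch (Python, operating on made-up file content) that adds a second, paravirtual controller for a data disk; the `scsiN.present` and `scsiN.virtualDev` keys are standard VMX entries, while the helper function and sample content are just for illustration:

```python
# Sketch: add a second SCSI controller of type "pvscsi" to VMX text.
# In practice you would do this from the vSphere client; this just shows
# the entries involved.

def add_pvscsi_controller(vmx_text: str, bus: int = 1) -> str:
    """Append a PVSCSI controller entry if one is not already defined."""
    key = f"scsi{bus}.virtualDev"
    if key in vmx_text:
        return vmx_text  # controller already present, nothing to do
    lines = vmx_text.rstrip("\n").split("\n")
    lines.append(f'scsi{bus}.present = "TRUE"')
    lines.append(f'{key} = "pvscsi"')
    return "\n".join(lines) + "\n"

# Boot disk stays on the default LSI Logic adapter, per the doc quote above.
vmx = 'scsi0.present = "TRUE"\nscsi0.virtualDev = "lsilogic"\n'
print(add_pvscsi_controller(vmx))
```

Note that this matches VMware’s recommendation from the quote earlier: keep the boot disk on the default LSI Logic adapter and put the data disk on the PVSCSI one.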


VMDirectPath

This here is where we answer the when and why for VMDirectPath. VMDirectPath is quite a bit different from PVSCSI, but no less cool. It allows VMs to directly access PCI(e) I/O devices, up to two per VM. Now why would you want to do that? Wasn’t hardware abstraction one of the beauties of virtualizing your environment? Direct access to an HBA or 10Gb NICs? Those go here. Why? Because bypassing the virtualization layer reduces the overhead of virtualizing those operations.

That actually covered a bit of the when as well as the why.
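For the curious, when a device is marked for passthrough and attached to a VM, vSphere records it as `pciPassthru` entries in the VM’s .vmx file. The sketch below (Python, illustrative only; the PCI address and IDs are placeholders, not a real card) shows the shape of those entries:

```python
# Sketch: generate the VMX entries used for a passed-through PCI device.
# All ID values below are placeholders for illustration.

def passthru_entries(slot: int, pci_id: str, device_id: str,
                     vendor_id: str, system_id: str) -> dict:
    """Return the VMX key/value pairs for one passthrough device slot."""
    prefix = f"pciPassthru{slot}"
    return {
        f"{prefix}.present": "TRUE",
        f"{prefix}.id": pci_id,           # host PCI address, e.g. "04:00.0"
        f"{prefix}.deviceId": device_id,
        f"{prefix}.vendorId": vendor_id,
        f"{prefix}.systemId": system_id,  # ties the entry to a specific host
    }

# Remember the limit from the doc quote: at most two such devices per VM.
for key, value in passthru_entries(0, "04:00.0", "0x10fb",
                                   "0x8086", "host-uuid-placeholder").items():
    print(f'{key} = "{value}"')
```

The `systemId` entry hints at why VMotion goes away (see the caveats below): the configuration is pinned to one physical host’s hardware.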


Requirements & Caveats

You knew these were coming, didn’t you? I know you did. Each of these wonderful technologies comes with some requirements and some caveats (Who is cav, and what is he eating?).

Paravirtual SCSI (PVSCSI)

This requires:

  • vSphere (really… it does)
  • One Happy Administrator (it does not work when angry, trust me)
  • OS: Win2k3/2k8 or RHEL5
  • Hardware version 7

Caveats (there is that cav guy… eating again, always eating):

  • No Boot disks
  • No record/replay
  • No FT
  • No MSCS

That about covers it, now onto VMDirectPath!


VMDirectPath

This one needs… well:

  • vSphere (well… c’mon)
  • Intel Virtualization Technology for Directed I/O (VT-d), or
  • AMD I/O Virtualization Technology (IOMMU)
  • Devices must be connected to the host and marked available for pass through
  • VMs require hardware version 7

When using VMDirectPath you lose the following features:

  • VMotion
  • Storage VMotion
  • FT
  • Device hot add
  • Suspend and resume
  • Record and replay

I think this about covers it, but I am quite sure I left something out. Feel free to call me on it in the comments or via Twitter. I am not entirely sold on it being strictly for I/O devices. I’ve poked a few contacts at VMware for some clarification.

24 thoughts on “VMDirectPath? Paravirtual SCSI? – vSphere VM Options and You!”

  • Good notes on PVSCSI, but way off base on VMDirectPath.

    VMDirectPath is not PCI passthrough, so the funky entropy card for security is not supported. VMDirectPath is there to remove the (usually very small) I/O latency added by the virtualised hardware and to remove the sharing of I/O devices that causes performance variations. It passes through very specific I/O devices; the list below is from the slides in the vSphere What's New course:

    Full network I/O support with:
    * Intel 82598 10 Gigabit Ethernet adapter
    * Broadcom 57710 10 Gigabit Ethernet adapter

    Experimental storage I/O support with:
    * QLogic QLA25xx 8Gb Fibre Channel
    * LSI 3442e-R and 3801e (1068 chip–based) 3Gb SAS adapters

  • You left out all the caveats of using VMDirectPath.

    When using VMDirectPath you lose following features:

    * VMotion
    * Storage VMotion
    * FT
    * Device hot add
    * Suspend and resume
    * Record and replay

    For me VMDirectPath is totally unusable in its current incarnation. Will have to wait for the next gen before even thinking of using it.

  • I'm not sure what's up with Disqus. I've replied to both your comments via e-mail and they've not shown up here. That said:

    @DemitasseNZ – There seems to be a conflict in the docs, or rather the sysad guide is misleading. This has been updated in the post. Thanks for the heads up.

    @Tomi – Also, thanks for catching that. I could have sworn… but… nonetheless, they've been updated too. I've borrowed yours.

    Thanks again, folks, for keeping me honest, and let's keep the comments rolling.

  • Thanks for that! I was working with the following excerpt from the Basic
    Sysadmin guide for vSphere 4:

    VMDirectPath I/O allows a guest operating system on a virtual machine to
    directly access physical PCI and
    PCIe devices connected to a host. Each virtual machine can be connected to
    up to two PCI devices.

    It seems from some additional googling around:

    Virtual Geek:
    15 seconds in, IO Devices (so maybe not the entropy card)

    The config example doc from VMware:
    “VMDirectPath allows guest operating systems to directly access an I/O
    device, bypassing the virtualization
    layer. This direct path, or passthrough can improve performance for VMware
    ESX™ systems that utilize
    high‐speed I/O devices, such as 10 Gigabit Ethernet.”

    So, High I/O it is, I'll get this updated shortly. Someone, however, should
    update the basic sysad doc. 🙂

  • Cody, nobody knows everything, not even Mike Laverick (http://www.rtfm-ed.co.uk/?p=1340); also, documents don't always convey the right message to every reader, so we always need a second set of eyes and brains to check our thoughts.

    I have your site on the links list I give to every student I teach VMware courses, keep the great site going.


  • Thanks for bringing this up. I have heard VMDirectPath is targeted at guests running MSFT SQL Server to improve performance. Any thoughts on that particular use case? Any idea if there is a limit on the number of guests per host using VMDirectPath?

  • Does VMDirectPath I/O require VSphere to have the correct driver for the PCI and PCIe device?

  • Thanks for the post. I am curious as to why Device Hot Add is a lost feature when using VMDirectPath. Anyone have any insight?

  • Technically VMDirectPath allows passing any PCI device to a VM (it is the same as PCI passthrough, so what DemitasseNZ says in comment #1 is not true). Of course there are some caveats with respect to how drivers/devices behave when a device is passed through to a VM (since ESX maps the virtual PCI address to the physical PCI address). But for most common cases, things should work.

    vSphere 4 only announced support for a few storage & network controllers because that is what *people* thought it was going to be used for (and what was tested internally), but it turned out that a few customers have tried GPUs, ISDN cards, SSL encryption cards, etc., and they worked fine even though VMware does not officially support those devices yet. In theory they can be added to the list of supported devices as soon as there is good business justification, since it involves testing and supporting them.

  • BTW, someone was asking about drivers in vSphere for VMDirectPath.

    Answer: VMDirectPath needs a driver in whatever guest OS you have passed the device through to. For example, if you assigned an Emulex HBA to a Windows 2008 VM, you need the Emulex Windows 2008 driver to talk to the device, not a driver running in ESX.

    Also, anyone trying GPUs: make sure nVidia supports your model for PCI passthrough/VMDirectPath, and you need their new driver as well.

    BTW, I am not sure if any of you saw this, but ATI/AMD demoed their GPU passthrough at VMworld as well.

  • That's the best explanation I've ever seen. Thank you so much! It answered all my questions so far.

    In fact, we are looking to virtualise SAP servers, and as discussed, databases require high I/O. Is there an option to use passthrough and VMotion at the same time, or if not, a similar solution?

  • I’m actually looking at this functionality for a completely different reason.  I used to be in IT, so I kind of shudder when I run into someone who has no clue. 

    I operate very specialized analytical equipment that is integrated heavily into the PC environment.  Unfortunately, the hardware is much more complicated than, say, a scanner (or even SAN arrays, for that matter), and it is not mainstream, so driver development is virtually non-existent.  As a result, I have one PC that has two PCI cards in it.  One is a high-end data capture board, the other is a multi-channel analyzer. 

    Both of these boards were supported perfectly under Windows XP, and they kind of work in Windows 7 32 bit, but one of them (The multichannel analyzer) takes up a HUGE amount of memory, so both do not function well together on Windows 7, and don’t even get me started on my efforts to get them running under 64 bit…

    The XP driver had all sorts of nice features which made it so that the card only used up additional memory when it was actually being used.  This feature allowed me to use the full amount of RAM when the card was dormant. That feature no longer works in Windows 7.

    Back to the IT department.  They decided to revoke all support for XP machines, and try as I might, they don’t see any reason to keep this machine running XP.  It must be upgraded.  So…

    My thought is to install VMware and run two virtual machines – the CPU will support it, and if I can make 32-bit VMs on a 64-bit Windows base, then I can allocate memory between the two.  (Machine currently has 8GB, I can make it 16 with no problems…)

    The only issue is the pesky PCI cards.  Will the DirectPath I/O allow me to link one card to one VM and run it in an XP 32-bit environment under Windows 7 64 bit?

  • From the sound of it you’re wanting to do this under VMware workstation or
    similar hosted virtualization product. At this time I believe DirectPathIO
    and Paravirt SCSI only work on the vSphere products (ESX Classic and ESXi),
    and even then only for boards that are on the HCL. If your hardware supports
    ESXi I’d say set that up with two XP VMs running on top as a test. That
    said, you’d no longer have a GUI to manage the box directly and would need
    to remote manage from another workstation.

  • I know this is quite an old article, but I was wondering: why not use PVSCSI all the time on all the VMs? If it’s better, then is there a reason not to?

  • I have actually used VMDirectPath with TV Tuner Cards, and it works reasonably well. ESXi 4.1.


Comments are closed.