The Infamous DISKLOCK error

You all know what I’m talking about. Haven’t you ever attempted some operation on a vSphere VM – vMotion, disk consolidation, etc. – and received a vCenter error stating the operation failed because of a lock on the VM’s vmdk?

After receiving an email from my backup software (Veeam) that one of my VMs needed disk consolidation, I went into vCenter and attempted the consolidation. vCenter returned an error:

An error occurred while consolidating disks: msg.snapshot.error-DISKLOCKED

Or something like this variation:

[screenshot: vCenter DISKLOCKED error message]

I then went into Veeam to see whether the backup job containing this VM had failed. It had, at least for this one problem VM within the job. I looked at the error Veeam logged for the VM:

“Error: Failed to open VDDK disk [[datastore-name] VM-directory/VM-name-000014.vmdk] ( is read-only mode – [true] ) Failed to open virtual disk Logon attempt with parameters [VC/ESX: [vcenter-name];Port: 443;Login: [domain\user];VMX Spec: [moref=vm-101];Snapshot mor: [snapshot-17944];Transports: [nbd];Read Only: [true]] failed because of the following errors: Failed to open disk for read. Failed to upload disk. Agent failed to process method {DataTransfer.SyncDisk}.”

This is an error I’m familiar with. It typically occurs when a VM’s vmdk is attached to a Veeam Proxy VM during backup and, upon job completion, the vmdk doesn’t get removed from the Proxy VM. This is most common when using Veeam’s “hotadd” backup mode. In checking all my Proxy VMs, however, there was no stray vmdk attached to any of them, so I was a bit puzzled.
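One rough way to double-check for a stray hotadd attachment from the shell is to grep the Datastore’s .vmx files for references to the problem VM’s disk; any hit outside the VM’s own directory is suspect. A minimal sketch, where the mock directory below stands in for /vmfs/volumes/&lt;datastore&gt; and the VM/proxy names are purely illustrative:

```shell
# Mock datastore layout standing in for /vmfs/volumes/<datastore>.
DS=$(mktemp -d)
mkdir -p "$DS/problem-vm" "$DS/veeam-proxy"
echo 'scsi0:0.fileName = "problem-vm.vmdk"' > "$DS/problem-vm/problem-vm.vmx"
echo 'scsi0:1.fileName = "/vmfs/volumes/ds/problem-vm/problem-vm.vmdk"' > "$DS/veeam-proxy/veeam-proxy.vmx"

# Any .vmx referencing the disk outside the VM's own directory is a
# stray hotadd attachment.
STRAY=$(grep -l 'problem-vm\.vmdk' "$DS"/*/*.vmx | grep -v '/problem-vm/')
echo "$STRAY"
```

In this mock layout the grep flags the proxy’s .vmx; on a real host you would point the same grep at the live datastore paths.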

I next went into the Datastore this problem VM was on and looked at the VM’s directory. To my surprise, there were 16 delta files! (Hence the notification that this VM needed consolidation.) I couldn’t consolidate, as noted above, due to the file lock, so to try to remove the lock I attempted a vMotion of the problem VM. vCenter again returned an error:

[screenshot: vCenter error naming the specific vmdk that is locked]

This was better information because it told me the specific disk that was locked (the parent vmdk, not any of its subsequent deltas). After a bit more digging through Google and some VMware KBs (specifically, these 2 posts – and ), I was able to narrow down which object/device had a lock on the VM. To find the lock, open an SSH session to the ESXi Host the problem VM currently resides on, then run this command:

vmkfstools -D /vmfs/volumes/48fbd8e5-c04f6d90-1edb-001cc46b7a18/VM-directory/problem-vm.vmdk
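If you only know the Datastore’s friendly name, note that /vmfs/volumes presents that name as a symlink pointing at the UUID directory, so readlink resolves it. A sketch against a mock volumes directory (standing in for /vmfs/volumes; the UUID and the vmfs01 name are from the example here):

```shell
# Mock /vmfs/volumes: the friendly name is a symlink to the UUID dir.
VOLS=$(mktemp -d)
mkdir "$VOLS/48fbd8e5-c04f6d90-1edb-001cc46b7a18"
ln -s "$VOLS/48fbd8e5-c04f6d90-1edb-001cc46b7a18" "$VOLS/vmfs01"

# Resolve friendly name -> UUID directory.
UUID=$(basename "$(readlink "$VOLS/vmfs01")")
echo "$UUID"   # -> 48fbd8e5-c04f6d90-1edb-001cc46b7a18
```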

When running the command, make sure to use the UUID of the Datastore as shown above and not its friendly name (i.e. vmfs01). When you run the command, you’ll see output like the sample below:

Lock [type 10c00001 offset 48226304 v 386, hb offset 3558400
gen 37, mode 0, owner 4a84acc3-786ebaf4-aaf9-001a6465b7b0 mtime 221688]
Addr <4, 96, 156>, gen 336, links 1, type reg, flags 0x0, uid 0, gid 0, mode 100600
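The value you need is buried in the owner field. A small sed sketch to pull it out and format it in colon notation, using the sample owner line from the output above; on a live host you would feed the real vmkfstools -D output in instead:

```shell
# Sample "owner" line from vmkfstools -D output (see above).
LOCKLINE='gen 37, mode 0, owner 4a84acc3-786ebaf4-aaf9-001a6465b7b0 mtime 221688'

# The last 12 hex digits of the owner UUID are the locking host's MAC.
MAC=$(echo "$LOCKLINE" \
      | sed -n 's/.*owner [0-9a-f-]*-\([0-9a-f]\{12\}\).*/\1/p' \
      | sed 's/../&:/g; s/:$//')
echo "$MAC"   # -> 00:1a:64:65:b7:b0
```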

Take note of the text after the word “owner”, specifically the last 12 hex digits of that UUID (001a6465b7b0 in the sample above). Those 12 digits represent a MAC Address, and that is the key: it is the MAC of a network adapter on the ESXi Host the lock originates from. To find that MAC, simply go into vCenter > Configuration tab > Network Adapters (in the ‘Hardware’ box) and look at the MAC for each Adapter on each Host. Once you find the match, there is your culprit… i.e. the Host holding the lock on the problem VM’s vmdk. Once found, you’ll need to either restart the management agents on that Host, or migrate VMs off the Host and perform a reboot. The lock should then be released from the problem VM’s disk, and tasks can be completed as normal (migration, consolidation, etc.).
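You can also check each candidate Host from the shell rather than clicking through vCenter: esxcli network nic list prints one line per physical NIC, including its MAC. A sketch grepping a mocked-up stand-in for that output (columns abbreviated; on a real ESXi Host, run the command itself instead of the heredoc):

```shell
MAC="00:1a:64:65:b7:b0"   # owner MAC taken from the vmkfstools -D output

# Stand-in for `esxcli network nic list` output; values are illustrative.
NICS=$(cat <<'EOF'
vmnic0  Up  1000  Full  00:1c:c4:6b:7a:18
vmnic1  Up  1000  Full  00:1a:64:65:b7:b0
EOF
)

if echo "$NICS" | grep -iq "$MAC"; then
  echo "this host holds the lock"
  # From here: restart the management agents (e.g. services.sh restart
  # from the ESXi shell) or evacuate the VMs and reboot the host.
fi
```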
