Well, it started at midnight, a standard hardware upgrade really. Power down the VMs, shut down the host, and call the DC team to do their thing. Twenty minutes later, I got a ring from the DC that the upgrade had been completed.
I popped open the VC, and well, there was no local storage. It was gone, just… gone. OK, well, it wasn't really gone; it was just missing from the VI Client and from /vmfs/volumes, but looking at fdisk -ul:
# fdisk -ul
Disk /dev/sda: 299.4 GB, 299439751168 bytes
255 heads, 63 sectors/track, 36404 cylinders, total 584843264 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot         Start        End     Blocks   Id  System
/dev/sda1   *          63     514079     257008+  83  Linux
/dev/sda2          514080   11004524    5245222+  83  Linux
/dev/sda3        11004525   15197489    2096482+  83  Linux
/dev/sda4        15197490  584830259  284816385    f  Win95 Ext'd (LBA)
/dev/sda5        15197553   17302004    1052226   82  Linux swap
/dev/sda6        17302068  584605349  283651641   fb  Unknown
/dev/sda7       584605413  584830259     112423+  fc  Unknown
Disk /dev/sdb: 299.4 GB, 299439751168 bytes
255 heads, 63 sectors/track, 36404 cylinders, total 584843264 sectors
Units = sectors of 1 * 512 = 512 bytes
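A quick note on reading that output: partition type fb is what fdisk shows for a VMFS partition (fc is the vmkcore diagnostic partition), so the data was clearly still sitting on /dev/sda6; the datastore just wasn't mounted. If you want to confirm that distinction from the service console, something along these lines should do it on classic ESX 3.x (it won't apply verbatim to ESXi):
# ls /vmfs/volumes           # the storage1 symlink was missing here
# esxcfg-vmhbadevs -m        # lists mounted VMFS volumes and their devices; nothing shows for the missing datastore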
It was still there… what to do what to do. With a little help from Toren L. @ vmfaq, I managed to get this sorted:
In most cases this situation can be resolved with the following steps (rough command-line equivalents are sketched after the list):
1. Choose the ESX server in the VI Client and click Configuration / Advanced Settings.
2. (Optional, depending on environment*) Click LVM and set LVM.DisallowSnapshotLUN to 0 and LVM.EnableResignature to 1.
3. Then click OK to apply settings.
4. Click on "storage adapters" and click the "rescan" button (upper right). That should let you see the volume again.
5. Right click the vmhba (under the controller adapter for your machine) and click "rescan".
6. Then the volume should be visible.
7. Then go to the Summary tab, right-click, and click "Refresh"; you should see your storage1 volume (or whatever you renamed it to).
8. Choose the ESX server in the VI Client and click Configuration / Advanced Settings.
9. Click LVM and set LVM.DisallowSnapshotLUN back to 1 and LVM.EnableResignature back to 0.
10. Click OK to apply settings.
11. The visible LUN is now named something like snap-5bcf31f1-something. You can rename it back to what it was called originally. If several ESX hosts access the LUN, you might need to mask the other servers from this LUN in order to rename it.
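For anyone who would rather (or can only) do this from the service console, the same flow looks roughly like this on classic ESX 3.x. Treat it as a sketch: vmhba1 is just an example adapter name, and the exact option paths can vary slightly between builds, so check Advanced Settings for the spelling on your host.
# esxcfg-advcfg -s 0 /LVM/DisallowSnapshotLUN    # optional, per step 2
# esxcfg-advcfg -s 1 /LVM/EnableResignature
# esxcfg-rescan vmhba1                           # rescan the adapter, per steps 4-5
# vmkfstools -V                                  # re-read the VMFS volumes
# esxcfg-advcfg -s 1 /LVM/DisallowSnapshotLUN    # restore the defaults, per steps 8-10
# esxcfg-advcfg -s 0 /LVM/EnableResignature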
Anyone else had similar? Care to comment?
* Step 2 has been corrected. Changing these settings can cause problems when done on a hypervisor that is part of a cluster, or that otherwise has VMotion enabled. It can, however, be ignored in most of those cases.
Yep, a known issue; it normally shows up when you change storage or firmware on the array.
See http://rogerlunditblog.blogspot.com/2008/11/dat…
Glad you got it working.
Roger L
And it seems to be universal between ESXi and ESX. The odd part is, I did 6x firmware upgrades last night with no problem.
Bah!
We added a disk to the chassis. For some reason that caused the original, separate array to get resignatured, which of course caused this to happen. :/
Yeah. At 2am, it’s not something you want to see.
I had the same issue after an upgrade to our EMC Celerra. I was a bit nervous at first, but after working with the VMware support staff, they pointed me in the direction of what you just outlined.
Hi Cody,
Any idea what they upgraded? This will happen if the LUN ID changes. …just curious
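One way to spot that, at least on classic ESX, is to compare the path listing before and after the change:
# esxcfg-mpath -l            # lists each storage path with its vmhbaC:T:L numbering, so a changed LUN ID stands out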
Now, this is what I call useful information: more technical, with commands. Practical work always makes it easier to observe the scenario and find solutions.
Glad to hear you like it. Thanks!
Ran through this same issue last night, and had quite a scare until I logged into the console via SSH and verified that my data was still intact. This was the second ESXi server I updated yesterday, and thanks to google and people with greater vmware knowledge, I got it fixed quickly.
Awesome (that you fixed your problem). Glad that we could help!