As I’m planning to deploy ESXi 6.0 in our environment, I’m researching all the issues and behavioral changes coming with vSphere 6.0. I have discovered one serious behavioral change in ESXi 6.0 which, as far as I can tell, was not announced anywhere and can cause an outage in your environment.
When you extend an eager zeroed disk, your VM will be stunned (paused) for the whole time of the disk allocation.
This is a serious change and you should be aware of it, since eager zeroed disks are typically used for critical, IO-demanding VMs, where any downtime can cause problems.
It is actually not a bug, but a feature 🙂 Before ESXi 6.0, extending an eager zeroed disk was done using a lazy zeroed extend, so the VM stayed online.
If you want to work around this and keep your VM online in an ESXi 6.0 environment, you can convert your VMDKs to the lazy zeroed format using Storage vMotion, extend them, and then Storage vMotion them back to the eager zeroed format.
You usually have eager zeroed disks for performance reasons, though, and such operations will have a negative effect on performance.
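If you have PowerCLI at hand, that round trip can be scripted. This is a minimal sketch, not a tested procedure: the vCenter address, VM name, disk label, target size, and the two datastores (a format-converting Storage vMotion needs a target datastore to move to) are all hypothetical placeholders for your own environment.

```powershell
# Hedged PowerCLI sketch -- all names below are placeholders, adjust for your setup.
Connect-VIServer -Server vcenter.example.com

$vm = Get-VM -Name "app01"

# 1. Storage vMotion to a second datastore, converting to lazy zeroed (thick)
Move-VM -VM $vm -Datastore (Get-Datastore "DS02") -DiskStorageFormat Thick

# 2. Extend the disk online -- as a lazy zeroed disk, no long stun is expected
Get-HardDisk -VM $vm -Name "Hard disk 1" |
    Set-HardDisk -CapacityGB 200 -Confirm:$false

# 3. Storage vMotion back, converting to eager zeroed thick again
Move-VM -VM $vm -Datastore (Get-Datastore "DS01") -DiskStorageFormat EagerZeroedThick
```

Keep in mind that step 3 re-zeroes the whole disk on the target datastore, so plan it outside peak hours if the extra storage load matters.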
What I would love to see from VMware is an option to choose between the two extend modes, so the administrator can decide whether to extend using the old or the new method. If a business critical application is running out of space during peak hours, sometimes there is no time to wait for a maintenance window for such a simple task as a disk extension.
You know, if your management was used to this being done without downtime, it will be hard to explain that you have lost this functionality after an update!
The only other workaround is to keep a spare ESXi 5.5 host in your environment, which is a luxury not everybody can afford.
You can find information about the pre-ESXi 6.0 behavior in Cormac’s blog post.
The new behavior is described in VMware KB 2135380.
Information about VM stun and when it is used can be found in another of Cormac’s articles 😉
I hope you haven’t found out about this the “hard way” by experiencing it yourself. You can at least help the others by sharing 😉
Update January 2016: This behavior has been fixed in ESXi 6.0 Update 1b, whose release notes state:
Expansion of eager zeroed VMDK causes the VM to be inaccessible
In ESXi 6.0, VMDKs of eager zeroed type are expanded in the eager zeroed format, which takes a long time and might result in the VM being inaccessible.
This issue is resolved in this release.