

Extending an EagerZeroedThick Disk

Here is something that came my way via an observation from one of our customers (thanks Guido). When you extend a VMDK that is EagerZeroedThick, the extended part is only LazyZeroed. It's probably easier to explain using the following example.

 

Step 1 - Create an EagerZeroedThick VMDK:

~ # vmkfstools -c 2g -d eagerzeroedthick /vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk
Creating disk '/vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk' and zeroing it out...
Create: 100% done. 

 

Step 2 - Look at its details. The "VMFS --" before the LVID indicates that it is eager zeroed thick:

~ # vmkfstools -t0 /vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk
Mapping for file /vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk (2147483648 bytes in size):
[           0:  2147483648] --> [VMFS -- LVID:4e5cda72-26b067db-5bc1-d8d3855ff8b4/4e5cda72-14e9dc64-690f-d8d3855ff8b4/1:( 581291737088 -->  583439220736)] 

 

Step 3 - Extend it to 4GB (note that -X takes the new total size of the disk, not the amount to grow by):

~ # vmkfstools -X 4g  /vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk                  
Grow: 100% done. 
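Since -X expects the size to extend the disk to, growing by a given amount means adding that amount to the current size yourself and passing the total. A minimal sketch of the arithmetic, with hypothetical sizes (the vmkfstools line is shown only as a comment, since it exists only on an ESXi host):

```shell
# -X expects the size to extend TO, not the amount to extend BY.
CUR_GB=10       # current VMDK size in GB (hypothetical)
GROW_BY_GB=4    # extra space wanted (hypothetical)
NEW_TOTAL="$((CUR_GB + GROW_BY_GB))g"
echo "$NEW_TOTAL"   # the value to pass to -X, here 14g
# On an ESXi host this would become (path hypothetical):
#   vmkfstools -X "$NEW_TOTAL" /vmfs/volumes/<datastore>/<disk>.vmdk
```

Passing the grow-by amount instead of the total is dangerous: if the value is smaller than the current size, the disk shrinks and data may be corrupted.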

 

Step 4 - Look at its details again. The "VMFS Z-" indicates that it is lazy zeroed. Interesting, eh? An eagerzeroedthick VMDK that has been extended with a lazy-zeroed chunk!

~ # vmkfstools -t0 /vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk
Mapping for file /vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk (4294967296 bytes in size):
[           0:  2147483648] --> [VMFS -- LVID:4e5cda72-26b067db-5bc1-d8d3855ff8b4/4e5cda72-14e9dc64-690f-d8d3855ff8b4/1:( 581291737088 -->  583439220736)]
[  2147483648:  2147483648] --> [VMFS Z- LVID:4e5cda72-26b067db-5bc1-d8d3855ff8b4/4e5cda72-14e9dc64-690f-d8d3855ff8b4/1:( 583439220736 -->  585586704384)]

 

Step 5 - If we use the correct options, we can of course grow the VMDK with an eagerzeroedthick chunk (here to 6GB total), as shown below:

~ # vmkfstools -X 6G -d eagerzeroedthick  /vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk                                              
Grow: 100% done. All data on '/vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk' will be overwritten with zeros from sector <8388608> onwards.
Zeroing: 100% done. 
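As a sanity check, the sector number quoted in the Grow output lines up with where the old disk ended, i.e. only the newly added space gets eager-zeroed. A quick shell check of the arithmetic (512-byte sectors assumed):

```shell
# The Grow output says zeroing starts "from sector <8388608> onwards".
# With 512-byte sectors, that offset is exactly the old 4 GiB size,
# so only the newly added 2 GiB chunk is zeroed.
ZERO_START_SECTOR=8388608
SECTOR_BYTES=512
OLD_SIZE_BYTES=$((ZERO_START_SECTOR * SECTOR_BYTES))
echo "$OLD_SIZE_BYTES"   # 4294967296 bytes = 4 GiB, the pre-grow size
```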

 

Step 6 - Look at it again & we see an initial eagerzeroedthick section, then a lazy-zeroed section, and finally another eagerzeroedthick section:

~ # vmkfstools -t0 /vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk
Mapping for file /vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk (6442450944 bytes in size):
[           0:  2147483648] --> [VMFS -- LVID:4e5cda72-26b067db-5bc1-d8d3855ff8b4/4e5cda72-14e9dc64-690f-d8d3855ff8b4/1:( 581291737088 -->  583439220736)]
[  2147483648:  2147483648] --> [VMFS Z- LVID:4e5cda72-26b067db-5bc1-d8d3855ff8b4/4e5cda72-14e9dc64-690f-d8d3855ff8b4/1:( 583439220736 -->  585586704384)]
[  4294967296:  2147483648] --> [VMFS -- LVID:4e5cda72-26b067db-5bc1-d8d3855ff8b4/4e5cda72-14e9dc64-690f-d8d3855ff8b4/1:( 589881671680 -->  592029155328)]

 

If you need to grow your VMDK and you require it to remain eagerzeroedthick, be sure to use the parameters outlined in step 5 and do it via the CLI. If you do it via the UI, you have no control over the grow options and it will automatically use lazy zeroing for the extension, even if the initial VMDK is eagerzeroedthick.
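If you want to check whether an existing disk has already picked up lazy-zeroed extents, you can look for the "Z-" flag in the -t0 mapping output. A minimal sketch, using a hard-coded sample of the mapping shown above (on a real ESXi host you would capture the output of vmkfstools -t0 instead):

```shell
# Hypothetical sample of "vmkfstools -t0" output; on an ESXi host you would use:
#   MAP=$(vmkfstools -t0 /vmfs/volumes/<datastore>/<disk>.vmdk)
MAP='[           0:  2147483648] --> [VMFS -- LVID:...]
[  2147483648:  2147483648] --> [VMFS Z- LVID:...]'

# "VMFS --" marks an eager-zeroed extent, "VMFS Z-" a lazy-zeroed one.
if printf '%s\n' "$MAP" | grep -q 'VMFS Z-'; then
  RESULT="disk contains lazy-zeroed extents"
else
  RESULT="disk is fully eager-zeroed"
fi
echo "$RESULT"
```

This is handy for auditing disks that may have been grown through the UI at some point.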

Get notification of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

16 thoughts on “Extending an EagerZeroedThick Disk”

  1. Satyam Vaghani

    Good point, but may I point out this is by design. The default disk type that vmkfstools assumes when the '-d' option is not explicitly used is zeroedthick. This is true of extend as it is true of create.
    One might argue that at the time of extension, vmkfstools should figure out the disk type based on the file that is being extended, i.e. if it is eagerzeroedthick, extend it as eagerzeroedthick. However, such guesses are not possible since a fully written thin or zeroedthick file might look like an eagerzeroedthick file too.

  2. Chogan

    Thank you Satyam. Always great to hear from you.
    We think this may be a concern when one tries to grow an FT or MSCS VM, both of which need to use eagerzeroedthick.
    We're having some conversations internally to see if there is a way to address those use cases.

  3. Nate Klaphake

    I am interested specifically in the MSCS piece of this conversation. I have tried and failed to expand an eagerzeroedthick VMDK while the cluster is running via the CLI, due to file locks. Now, I could shut down both nodes of the cluster and expand the VMDK no problem, but that defeats the purpose of a cluster, where the only downtime you take is the time it takes to fail over.
    Could you possibly introduce a -L lunreset into your eagerzeroedthick expand command in step 5 and get the expansion running without downtime? My guess is that even if that worked (which I doubt), you would be stuck waiting for the 0's to be written before the clustered servers could do anything with the volume.
    Would love to hear your thoughts.
    Nate

  4. Chogan

    Hi Nate,
    Thanks for the comment/question. As you observed, any VM using a VMDK must be powered off before that VMDK can be grown (kb.vmware.com/kb/1007266).
    This is the reason for the file lock; it isn't specifically related to MSCS. I don't believe there is any way around it (and clearing the lock while the cluster is still active isn't recommended). Sorry.
    Cormac

  5. Steve

    Hey Cormac - any update on this with the later releases of vSphere/vCenter? I just noticed this exact behavior in our vCenter 5.1 U1 environment when expanding an eagerzeroedthick VMDK using the thick client.

  6. SP

    Hi, it's a good article and we are facing the exact issue in our environment, but when I tried "vmkfstools -X 6G -d eagerzeroedthick /vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk" for the VM I would like to extend, it gives the following error and is unable to extend. Please let me know how I can get this working.

    Failed to extend disk : One of the parameters supplied is invalid (1).

    Also, is this applicable to all vSphere versions? We are on ESXi 5.1 build 1065491.

  7. palong

    Cormac - forgive me for responding to this old post, but I recently ran across this issue and wanted to point out something incorrect (or at least ambiguous) about the description and command you list in Step 3 above. You wrote:

    Step 3 – Extend it by 4GB.

    ~ # vmkfstools -X 4g /vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk
    Grow: 100% done.

    Actually, the -X option calls for the new total size of the disk, NOT the amount by which you want to extend it, correct? So if your original cormac.vmdk file is 10 GB, your command above will not result in a 14 GB cormac.vmdk, but rather will possibly corrupt your current disk, per KB 994: "You must specify the size you want like to Extend To and not how much you want like to Extend By. Otherwise, the disk shrinks to the new smaller size and data inside the VMDK file might get corrupted."

    The same information is also unclear in KB 2054563.

  8. SP

    Hi Cormac & palong,
    Thanks for creating and responding to this article; I was in a similar situation to the one mentioned here.
    I used the above command to extend the disk, and it was successful, with a message saying 100% done at the end of the extend process. I can see the extended .vmdk file size in the datastore, but that's not reflected in the VMware GUI for the VM's vmdk file, which still shows the old size.

    Also, can this be run on powered-on VMs as well? If so, why is the command #vmkfstools -t0 /vmfs/volumes/DATASTORE/VM/VM.vmdk not working on powered-on VMs? I am getting the below error:

    "Failed to open virtual disk: Failed to lock the file 16392"

    The above command only works on powered-off VMs.

    Please check and reply.

    Thanks,
    SP

  9. Ralf

    I ran into this today and I cannot believe that this is still an issue. So at the moment there is no way to extend an eager disk in eager format while the VM is running? I just checked the disks in one of our clusters, and all of them that were extended in the past are now lazy. This is a real bummer.

    1. Ralf

      Ondrej, where did you get the info that this is fixed in 6.0? I cannot find anything related in the changelog and cannot test it at the moment.

      1. John

        I opened a support case with VMware and they confirmed this "behavior" (they refused to call it a bug and instead said it was by design) is "fixed" in 6.0. Due to the large amount of code changes required, it will not be back-ported to 5.1 or 5.5.

        1. Ralf

          Thanks. I also opened a case a while ago and received the answer that this is the intended behavior, but I did not receive an update that this is now fixed in 6.0.

