VMware Thin and Thick Provisioning: A Brief Overview

Thick and thin provisioning are not as different as you might first assume: both determine how a virtual machine's disk is allocated on the datastore, and the virtual machine reads and writes its disk the same way in either case. By going through the differences here, we hope to help you see which one might benefit your company's environment more.

Thin Provisioning

Thin provisioning is based on the concept of saving disk space on your datastores. It allows you to over-allocate disk space to your virtual machines, because thinly provisioned disks don't reserve their full size on the file system (VMFS) up front; space is consumed only as data is actually written.
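
A quick way to see this effect is to compare how much space a VM has actually consumed on its datastore with how much it has been promised. The following is a minimal, illustrative pyVmomi sketch, not something from this article; the function name and the already-connected vm object are assumptions:

from pyVmomi import vim  # VMware vSphere API bindings (pip install pyvmomi)

def report_thin_savings(vm: vim.VirtualMachine) -> None:
    """Print how much datastore space a VM really uses versus what it has been promised."""
    storage = vm.summary.storage
    committed_gb = storage.committed / 1024**3      # space actually written on the datastore
    uncommitted_gb = storage.uncommitted / 1024**3  # provisioned but not yet consumed
    print(f"{vm.name}: committed {committed_gb:.1f} GB, "
          f"provisioned but unused {uncommitted_gb:.1f} GB")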

Thin Provisioning Pros

Virtual machine disk usage is minimal.
Cuts down on storage costs.
Allows an organization to maximize the use of space in a storage array.
Reduces the threat of data loss.

Thin Provisioning Cons

The possibility that you can run out of space on an over-allocated datastore.
Requires closer storage oversight (see the monitoring sketch after this list).
Rules out some of vSphere's advanced features, such as Fault Tolerance.
May carry a performance penalty: as a thinly provisioned disk grows, vSphere must allocate the new space and zero it out before the guest's writes complete. If top performance is paramount in your environment, don't use thin provisioning.
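
To give an idea of what that closer oversight can look like in practice, here is a minimal pyVmomi sketch that compares the space promised to virtual machines against a datastore's real capacity. The ds object, the function name and the usage shown in the comment are illustrative assumptions, not anything prescribed by vSphere:

from pyVmomi import vim  # VMware vSphere API bindings (pip install pyvmomi)

def overcommit_ratio(ds: vim.Datastore) -> float:
    """Return the provisioned-to-capacity ratio; values above 1.0 mean over-allocation."""
    s = ds.summary
    used = s.capacity - s.freeSpace            # blocks already written on the datastore
    provisioned = used + (s.uncommitted or 0)  # plus thin space promised but not yet used
    return provisioned / s.capacity

# Example: flag datastores that promise more space than they physically have.
# if overcommit_ratio(ds) > 1.0:
#     print(f"{ds.name} is over-committed, watch its free space closely")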

Thick Provisioning

Thick provisioning is based on the concept of allocating the entire virtual machine disk up front, reserving all of the necessary space on the datastore at the time of creation.

Thick Provisioning Pros

Prevents over-provisioning your datastores, so you avoid the downtime that comes with a datastore unexpectedly running out of space.
Gives the best performance with the eager zeroed variant, since all of the blocks are pre-zeroed at creation and don't need to be zeroed during normal operations.

Thick Provisioning Cons

Thick provisioning consumes your available storage space much faster.
There's the very real possibility of wasting disk space on allocated blocks that never hold any data.

Thick Options

Lazy Zeroed Thick is a provisioning format in which the virtual machine reserves all of its space on the VMFS at creation, but each disk block is zeroed out on the back-end datastore only when the virtual machine first writes to it.

Eager Zeroed Thick is a provisioning format in which the virtual machine reserves all of its space on the VMFS and zeros out the disk blocks at the time of creation. Creating a virtual machine with this type of provisioning takes a little longer, but its performance is optimal from deployment, because there is no overhead from zeroing out disk blocks on demand and therefore no additional zeroing work on the datastore during normal operation.
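
For readers who create virtual disks programmatically, the three formats discussed here map onto two flags of the flat VMDK backing. The sketch below is a minimal pyVmomi illustration of that mapping; the helper function itself is made up for this example:

from pyVmomi import vim  # VMware vSphere API bindings (pip install pyvmomi)

def disk_backing(provisioning: str) -> vim.vm.device.VirtualDisk.FlatVer2BackingInfo:
    """Build a flat VMDK backing configured as thin, lazy zeroed thick or eager zeroed thick."""
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    if provisioning == "thin":
        backing.thinProvisioned = True   # allocate and zero on first write
    elif provisioning == "lazy":
        backing.thinProvisioned = False  # allocate in advance...
        backing.eagerlyScrub = False     # ...but zero each block on first write
    elif provisioning == "eager":
        backing.thinProvisioned = False  # allocate in advance...
        backing.eagerlyScrub = True      # ...and zero everything at creation
    else:
        raise ValueError(f"unknown provisioning type: {provisioning!r}")
    return backing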

[Image: eager-vs-lazy-1]

VMware Eager Zeroed Thick vs. Lazy Zeroed Thick Disk Write Performance

What is the potential write performance difference between the VMware virtual disk types: Thick Lazy Zeroed, Thick Eager Zeroed and Thin provisioned? This has been discussed for many years and there are many opinions, both about test versus real-life write behavior and about test methods. There are also other important factors, such as storage efficiency and migration times, but in this article I will try to make the potential "first write" impact easier to evaluate.

Before the virtual machine's guest operating system can actually use a virtual disk, some preparation has to be done by the ESXi host. For each writable part of a virtual disk, two main tasks have to be completed: the space has to be allocated on the datastore, and the specific disk sectors on the storage array have to be safely cleared ("zeroed") of any previous content.

In short, this is done in the following way:

Thin: Allocate and zero on first write
Thick Lazy: Allocate in advance and zero on first write
Thick Eager: Allocate and zero in advance

There are some published performance tests of these three disk types, often using the standard tool IOmeter. There is however a potential flaw in these tests: before IOmeter starts the actual test it creates a file (iobw.tst) and writes data to every part of that file, which at the same time causes ESXi to zero out those blocks on the storage array. This makes it impossible to use IOmeter output to spot any write performance difference between the three VMware virtual disk types, since any potential difference in first-write performance has already been nullified by the time the IOmeter test actually begins.

Since the difference only appears in the very first write the virtual machine makes to each virtual disk sector, one way to simulate it is to force a massive amount of writes over the whole disk area and note the time differences. This is of course not how most applications work: it is uncommon to do all writes in one continuous stream, and the "first-writes" with ESXi zeroing are instead likely to be spread over a longer period of time. Sooner or later, however, each sector used by the guest operating system has to be zeroed.

 

[Image: eager-vs-lazy-2]

One way to generate a large amount of writes is the standard Windows format tool which, despite some popular belief, actually writes to the whole disk area when a "full" (non-quick) format is selected. In real life there is little interest in how fast a partition format completes in itself; in this test the format tool is used purely to create a massive amount of "first-writes".
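
The article relies on the Windows full format as its write generator. As a rough, hypothetical alternative, a small script can time a sequential write sweep across a fresh disk itself; the target path and sizes below are placeholders for a lab VM, not values used in this test:

import os
import time

TARGET = r"E:\first_write_sweep.bin"  # placeholder: a file on the new, empty virtual disk
BLOCK = 1024 * 1024                   # write in 1 MiB chunks
TOTAL = 40 * 1024**3                  # roughly sweep a 40 GB disk

def first_write_sweep() -> float:
    """Write data across the disk so every touched block receives its 'first write'."""
    buf = os.urandom(BLOCK)
    start = time.time()
    with open(TARGET, "wb", buffering=0) as f:
        written = 0
        while written < TOTAL:
            f.write(buf)
            written += BLOCK
        os.fsync(f.fileno())  # make sure the data is not just sitting in guest caches
    return time.time() - start

if __name__ == "__main__":
    print(f"first-write sweep took {first_write_sweep():.0f} seconds")

Random data is written rather than zeros to avoid the risk that some layer of the storage stack optimizes away all-zero writes.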

 

[Image: eager-vs-lazy-3]

This test case uses a VM running Windows Server 2012 R2 that was given three new virtual hard disks of 40 GB each: one Eager, one Lazy and one Thin. Each disk was then formatted with NTFS, default allocation unit size, no compression, and with the non-quick (full) option.

 

[Screenshot: eager-vs-lazy-4, ESXTOP during a full format of an Eager Zeroed Thick disk]

One early observation in ESXTOP concerns the ratio between the writes that the virtual machine actually commits and the number of writes being sent from ESXi to the LUN.

Above we can see ESXTOP during a full format of an Eager Zeroed Thick disk. The important point is that the two numbers are very close: the writes hitting the LUN are only the writes the VM itself makes, i.e. ESXi introduces no extra writes, since the zeroing was already done in advance.

 

[Screenshot: eager-vs-lazy-5, ESXTOP during a full format of a Lazy Zeroed Thick disk]

Above, a Lazy Zeroed Thick disk is being full-formatted from inside the VM.

Notice that the number of write IOs sent from ESXi to the LUN is much higher than the number of write IOs coming from the virtual machine. This is the actual zeroing taking place in real time, and it makes the VM's write performance lower than the Eager version's while new areas of the virtual disk are accessed for the first time.

The actual time results for a full format of a 40 GB virtual disk were:

Eager Zeroed Thick Disk: 537 seconds
Lazy Zeroed Thick Disk: 667 seconds
Thin Disk: 784 seconds

The Eager Zeroed Thick Disk was almost 25 % faster in first-write performance compared to the Lazy Zeroed.

The Eager Zeroed Thick Disk was about 46 % faster in first-write performance compared to the Thin Disk.
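
Assuming "faster" here refers to the ratio of full-format completion times, the percentages can be reproduced directly from the figures above; this small calculation is purely for illustration:

eager, lazy, thin = 537, 667, 784  # full-format times in seconds for the 40 GB disks

print(f"eager vs lazy: {lazy / eager - 1:.1%} faster")  # -> 24.2%, i.e. almost 25 %
print(f"eager vs thin: {thin / eager - 1:.1%} faster")  # -> 46.0%, i.e. about 46 %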

This is obvious when doing a full format, which forces the VM to write to every sector. In a real environment the "first-writes" will naturally be spread over a longer period of time, but sooner or later the zeroing hit will take place for each part of the disk and might or might not be noticeable to the user. For a typical virtual machine that does the majority of its "first-writes" at OS installation this is likely to be of lesser interest, but for VMs with databases, log files or other write-intensive applications the impact can be higher.

 
