

PVSCSI and Large IOs

Here’s a behavior that a few people have asked me about recently:

Why is PVSCSI splitting my large guest operating system IOs into smaller blocks?

By default, ESXi passes IOs from the guest operating system as large as 32MB when the LSI vSCSI adapter is used (as long as the guest operating system itself doesn’t have a smaller default transfer size: http://kb.vmware.com/kb/9645697).

However, the PVSCSI vSCSI adapter was designed to pass IOs of 512KB or smaller, so anything larger is intentionally split. This behavior is not configurable. During performance testing, the splitting made little difference, because the resulting smaller IOs are issued sequentially.
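
If you want to verify what IO sizes are actually reaching the vSCSI layer, the vscsiStats utility on the ESXi host can print an IO length histogram per virtual disk. Here’s a minimal sketch from the ESXi shell (the world group ID below is just a placeholder; substitute the IDs that vscsiStats -l reports for your VM):

    # List running VMs and their virtual disk handle IDs
    vscsiStats -l

    # Start collecting statistics for the VM (substitute its worldGroupID)
    vscsiStats -s -w 1234567

    # Print the IO length histogram; guest IOs larger than 512KB will
    # appear here as 512KB-or-smaller entries after PVSCSI splits them
    vscsiStats -p ioLength -w 1234567

    # Stop collection when finished
    vscsiStats -x -w 1234567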

There is one last condition in which the physical adapter’s device driver can request that ESXi split IOs even further. If, for example, the driver is limited to 64KB IOs, it will tell ESXi to split IOs into blocks of at most 64KB instead of the 512KB default.

It’s important to note, though, that large IOs can have a negative effect on performance.

Examples:

If the ESXi storage stack is splitting guest IOs, it must wait for the array to acknowledge every resulting piece before it can report the final latency, which drives up average device latencies. More info here: http://kb.vmware.com/kb/2036863

Some storage arrays also do not handle large IOs well, and as a result they become a latency-inducing bottleneck.
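
If you want to watch the latency effect itself, esxtop’s disk device view reports the average device latency per command. A quick sketch, assuming you run it from the ESXi shell (the sample count and interval below are arbitrary):

    # Interactive: run esxtop, press 'u' for the disk device view, and
    # watch DAVG/cmd; ESXi reports latency only after every split piece
    # is acknowledged, so splitting shows up as elevated DAVG/cmd
    esxtop

    # Batch mode for offline review: 10 samples at 5-second intervals
    esxtop -b -d 5 -n 10 > /tmp/esxtop-stats.csv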

So large IOs are not necessarily the promised land either; as in most situations, it depends on your application and infrastructure.

There is an ESXi host-wide setting that lets you define the maximum IO size passed to the array. It’s known as “Disk.DiskMaxIOSize”. More info here: http://kb.vmware.com/kb/1003469
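
For reference, here is how that setting can be inspected and changed from the ESXi shell with esxcli; the 4096KB value below is purely illustrative, not a recommendation:

    # Show the current value (the default is 32767KB, i.e. 32MB)
    esxcli system settings advanced list -o /Disk/DiskMaxIOSize

    # Cap IOs passed to the array at 4MB (the value is in KB)
    esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 4096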

So if you are seeing IO sizes out of ESXi that differ from what you expect, check a few layers: the guest operating system transfer size, the vSCSI adapter, ESXi’s Disk.DiskMaxIOSize, and the physical adapter driver.

This entry was posted in ESXi, Performance, vSphere by Mark Achtemichuk.

About Mark Achtemichuk

Mark Achtemichuk currently works as a Staff Engineer within VMware’s Central Engineering Performance team, focusing on education, benchmarking, collateral and performance architectures. He has also held various performance-focused field, specialist and technical marketing positions within VMware over the last 6 years. Mark is recognized as an industry expert and holds a VMware Certified Design Expert (VCDX#50) certification, one of fewer than 250 worldwide. He has worked on engagements with Fortune 50 companies, served as technical editor for many books and publications, and is a sought-after speaker at numerous industry events. Mark is a blogger and has been recognized as a VMware vExpert from 2013 to 2016. He is active on Twitter at @vmMarkA, where he shares his knowledge of performance with the virtualization community. His experience and expertise, from infrastructure to application, help customers ensure that performance is no longer a barrier, perceived or real, to virtualizing and operating an organization’s software-defined assets.

One thought on “PVSCSI and Large IOs”

  1. Lonny Niederstadt

    Hello! Has the maximum read and write size from a Windows guest to the ESXi host been increased from 512KB in vSphere 6.0?
    I’m testing SQL Server 2016 and was surprised to see an average read size over 1MB to a Windows PhysicalDrive backed by a ‘local’ vmdk on a PVSCSI adapter. I was expecting all reads at 512KB or smaller.

