
PVSCSI and Large IO’s

Here’s a behavior that a few people have questioned me about recently:

Why is PVSCSI splitting my large guest operating system IOs into smaller blocks?

By default, ESXi passes IOs from the guest operating system as large as 32MB when using the LSI vSCSI adapter (as long as the guest operating system itself doesn’t impose a smaller default transfer size: http://kb.vmware.com/kb/9645697).

However, the PVSCSI vSCSI adapter was designed to pass IOs of 512KB or smaller, so anything larger is intentionally split. This behavior is not configurable. In performance testing this made little difference, since the split IOs are issued sequentially.
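If you want to observe this splitting yourself, ESXi ships with the vscsiStats utility, which can histogram IO lengths per virtual disk. A minimal sketch follows; the world group ID is a placeholder you would take from the `-l` listing, and the exact flags may vary by ESXi release:

```shell
# List running VMs, their world group IDs and virtual disk handles
vscsiStats -l

# Start collecting statistics for a given VM's world group
vscsiStats -s -w <worldGroupID>

# Print the IO length histogram; behind PVSCSI you should see
# no bucket larger than 512KB, regardless of the guest's IO size
vscsiStats -p ioLength -w <worldGroupID>

# Stop collection when finished
vscsiStats -x
```

These commands run in the ESXi shell on the host itself, so they are shown as a fragment rather than a runnable script.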

There is one further condition in which the physical adapter’s device driver can ask ESXi to split IOs even more. If, for example, the driver is limited to 64KB IOs, it will tell ESXi to split IOs into blocks of at most 64KB instead of the 512KB default.
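To make the splitting concrete: the number of child IOs is simply the guest IO size divided by the effective maximum transfer size, rounded up. A quick sketch of the arithmetic, using the 512KB and 64KB limits discussed above:

```shell
# Number of child IOs = ceil(guest IO size / max transfer size)
split_count() {
  local io_size=$1 max_size=$2
  echo $(( (io_size + max_size - 1) / max_size ))
}

# A 2MB guest IO through PVSCSI's 512KB limit becomes 4 child IOs
split_count $((2*1024*1024)) $((512*1024))

# The same 2MB IO through a driver limited to 64KB becomes 32 child IOs
split_count $((2*1024*1024)) $((64*1024))
```

The host must wait for all of those child IOs to complete before the guest’s original IO can be acknowledged, which is why splitting shows up as added latency.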

It’s important to note, though, that large IOs can themselves have a negative effect on performance.


If the ESXi storage stack splits a guest IO, it must wait for the array to acknowledge all of the resulting IOs before it can report the final latency, which drives up average device latencies. More info here: http://kb.vmware.com/kb/2036863

As well, some storage arrays do not handle large IOs well, and they can become a latency-inducing bottleneck.

So large IOs are not necessarily the promised land either; like most things, it depends on your application and infrastructure.

There is an ESXi host-wide setting that lets you define the maximum IO size passed to the array. It’s known as “Disk.DiskMaxIOSize”. More info here: http://kb.vmware.com/kb/1003469
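This setting can be inspected or changed from the ESXi shell with esxcli. A sketch, assuming a recent ESXi release; the value is expressed in KB, and the 4096 (4MB) figure below is purely an illustrative example, not a recommendation:

```shell
# Show the current value, default and valid range of Disk.DiskMaxIOSize
esxcli system settings advanced list -o /Disk/DiskMaxIOSize

# Lower the maximum IO size passed to the array (value in KB; 4096 = 4MB)
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 4096
```

As with any advanced setting, test the change against your own workload before applying it broadly; the right value depends on what your array handles well.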

So if you are seeing different IO sizes out of ESXi than you expect, check a few different layers: the guest operating system transfer size, the vSCSI driver, ESXi’s Disk.DiskMaxIOSize and the physical adapter driver.

This entry was posted in ESXi, Performance, vSphere by Mark Achtemichuk.

About Mark Achtemichuk

Mark Achtemichuk is a Senior Technical Marketing Architect specializing in Performance within the Cloud Infrastructure Marketing group at VMware. Certified as VCDX #50, @vmMarkA has a strong background in datacenter infrastructures, cloud architectures, experience implementing enterprise application environments and a passion for solving problems. He has driven virtualization adoption and project success by methodically bridging business with technology. His current challenge is ensuring that performance is no longer a barrier, perceived or real, to virtualizing an organization's most critical applications, on their journey to the cloud.

One thought on “PVSCSI and Large IO’s”

  1. Lonny Niederstadt

    Hello! Has the maximum read and write size from a Windows guest to the ESXi host been increased from 512KB in vSphere 6.0?
    I’m testing SQL Server 2016 and was surprised to see an average read size over 1MB to a Windows PhysicalDrive backed by a ‘local’ vmdk on a pvscsi adapter. I was expecting all reads at 512KB or smaller.

