
I was having a discussion today about something that had not come up in a long, long time. It was about how controller numbers, target numbers and device numbers are assigned on an ESXi host.

The scenario is a Microsoft Exchange implementation where the requirement is to have a separate LUN for each database and log. The net result is a requirement for 170 LUNs. Our configuration maximums guide states that an ESXi host will support a maximum of 1024 paths which, with 4 paths per LUN, allows for 256 LUNs (256 is also the maximum number of LUNs per ESXi host). There was some confusion with our Virtual Machine Storage Maximums, especially around the number of targets, so I put together this scenario to explain the host maximums.
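A quick back-of-the-envelope check of those maximums (the figures below come from the scenario above, not queried from a live host):

```python
# Sanity-check of the ESXi host maximums used in this scenario.
MAX_PATHS_PER_HOST = 1024   # ESXi maximum number of paths per host
PATHS_PER_LUN = 4           # 2 HBAs x 2 storage processors
MAX_LUNS_PER_HOST = 256     # ESXi maximum number of LUNs per host

luns_required = 170         # one LUN per Exchange database and log

# The path limit caps the number of LUNs at 1024 / 4 = 256.
max_luns_by_path_limit = MAX_PATHS_PER_HOST // PATHS_PER_LUN
print(max_luns_by_path_limit)                     # 256
print(luns_required <= max_luns_by_path_limit)    # True
print(luns_required <= MAX_LUNS_PER_HOST)         # True
```

So 170 LUNs fits comfortably inside both limits.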

Let's say that there are two HBAs in the ESXi host and all 170 LUNs were presented from a single array via two storage processors. You would theoretically see paths like this (I'll come back to why I use the word theoretically shortly):

– from the first HBA to first storage processor: c0t0d0 … c0t0d169
– from the first HBA to second storage processor: c0t1d0 … c0t1d169
– from the second HBA to first storage processor: c1t0d0 … c1t0d169
– from the second HBA to second storage processor: c1t1d0 … c1t1d169

Here c0 & c1 represent the HBAs (controllers), and t0 & t1 represent the storage processors. I should point out that in later versions of ESXi, the controller number was changed to include the actual HBA name (e.g. vmhba1) and the 'c' reference now relates to channel, but for the purposes of this discussion we'll stick with the older nomenclature. In the list above, d0 through d169 represent the devices/LUNs. These are the same set of LUNs, but discovered on different paths/HBAs/targets.
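The naming scheme is easy to generate programmatically. Here is a small illustrative sketch (plain Python, not an ESXi API) that enumerates the cXtYdZ path names for two HBAs, two storage processors and 170 LUNs:

```python
# Enumerate every path name for 2 controllers x 2 targets x 170 devices.
num_hbas = 2     # controllers c0, c1
num_targets = 2  # storage processors t0, t1 (seen by both HBAs)
num_luns = 170   # devices d0 .. d169

paths = [f"c{c}t{t}d{d}"
         for c in range(num_hbas)
         for t in range(num_targets)
         for d in range(num_luns)]

print(len(paths))   # 680 paths in total: 170 LUNs x 4 paths each
print(paths[0])     # c0t0d0
print(paths[-1])    # c1t1d169
```

The 680 total is exactly the 170 LUNs × 4 paths per LUN figure from the maximums discussion above.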

If you had two arrays, with 85 LUNs presented from each, again via two storage processors, you may see something like this:

– from the first HBA to first storage processor on array 1: c0t0d0 … c0t0d84
– from the first HBA to second storage processor on array 1: c0t1d0 … c0t1d84
– from the first HBA to first storage processor on array 2: c0t2d0 … c0t2d84
– from the first HBA to second storage processor on array 2: c0t3d0 … c0t3d84

– from the second HBA to first storage processor on array 1: c1t0d0 … c1t0d84
– from the second HBA to second storage processor on array 1: c1t1d0 … c1t1d84
– from the second HBA to first storage processor on array 2: c1t2d0 … c1t2d84
– from the second HBA to second storage processor on array 2: c1t3d0 … c1t3d84

Now, t0, t1, t2 & t3 represent the storage processors, two from the first array and two from the second array. Each target number will represent a discovered storage processor from the HBA/controller.

In this case, LUN 0 on the first array would have 4 paths:

c0t0d0
c0t1d0
c1t0d0
c1t1d0

And LUN 0 on the second array would also have 4 paths:

c0t2d0
c0t3d0
c1t2d0
c1t3d0
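That grouping can be sketched as follows. The mapping of target numbers to arrays (t0/t1 on array 1, t2/t3 on array 2, the same on both controllers) is the one assumed in this scenario:

```python
# Which array each target number represents, on both controllers.
target_to_array = {0: 1, 1: 1, 2: 2, 3: 2}

def paths_for_lun(array, lun, num_hbas=2):
    """Return every cXtYdZ path for a given LUN on a given array."""
    return [f"c{c}t{t}d{lun}"
            for c in range(num_hbas)
            for t, a in target_to_array.items()
            if a == array]

print(paths_for_lun(1, 0))  # ['c0t0d0', 'c0t1d0', 'c1t0d0', 'c1t1d0']
print(paths_for_lun(2, 0))  # ['c0t2d0', 'c0t3d0', 'c1t2d0', 'c1t3d0']
```

Each LUN still resolves to four paths; only the target numbers differ between the two arrays.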

You can have different storage processors represented by the same target number, but they would be on different controllers/HBAs. For example, if HBA c0 was only mapped to the first array and HBA c1 was only mapped to the second array, you may see paths like this:

– from the first HBA to first storage processor on array 1: c0t0d0 … c0t0d84
– from the first HBA to second storage processor on array 1: c0t1d0 … c0t1d84

– from the second HBA to first storage processor on array 2: c1t0d0 … c1t0d84
– from the second HBA to second storage processor on array 2: c1t1d0 … c1t1d84

Now we have two instances of t0 and two instances of t1, but the target numbers represent completely different storage processors. This is allowable since they are on unique controller IDs, c0 & c1.
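A sketch of that asymmetric zoning, using hypothetical storage processor names, might look like this. The key point is that target numbering is per controller, so the same t0 can point at different storage processors:

```python
# Target numbers are scoped to their controller, so t0 on c0 and t0 on c1
# can be different storage processors. SP names here are illustrative only.
targets_by_controller = {
    "c0": {0: "array1-SPA", 1: "array1-SPB"},  # HBA c0 zoned to array 1 only
    "c1": {0: "array2-SPA", 1: "array2-SPB"},  # HBA c1 zoned to array 2 only
}

print(targets_by_controller["c0"][0])  # array1-SPA
print(targets_by_controller["c1"][0])  # array2-SPA (same t0, different SP)
```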

Lastly, I said theoretically above because there is no way of guaranteeing which storage processor gets assigned which target number. My understanding is that it is based on the order of discovery, and if changes are made to the fabric/network, the discovery order, and thus the target numbers, could change on the next reboot. This doesn't matter since we no longer rely on target numbers for accessing LUNs (as we did in the old days). I'm guessing this is why it doesn't come up in conversation too much these days.

Get notified of these blog postings and more VMware Storage information by following me on Twitter: Twitter VMwareStorage

About the Author

Cormac Hogan

Cormac Hogan is a Senior Staff Engineer in the Office of the CTO in the Storage and Availability Business Unit (SABU) at VMware. He has been with VMware since April 2005 and has previously held roles in VMware’s Technical Marketing and Technical Support organizations. He has written a number of storage-related white papers and has given numerous presentations on storage best practices and vSphere storage features. He is also the co-author of the “Essential Virtual SAN” book published by VMware Press.