It was quite a busy week again on Planet V12n. The number of blog posts published every single week is amazing, and what amazes me even more is their quality, which makes picking a top 5 harder every week. This is what I ended up with…
- Chad Sakac / Vaughn Stewart – A “Multivendor Post” to help our mutual NFS customers using VMware (1, 2)
The first core difference is that block protocols (iSCSI/FC/FCoE) use an initiator-to-target multipathing model based on MPIO. The domain of the path choice is from the initiator to the target. For NAS, the domain of link selection is from one Ethernet MAC to another Ethernet MAC, or one link hop. This is configured host-to-switch, switch-to-host, NFS server-to-switch, and switch-to-NFS-server, and the comparison is shown below (note that I called it “link aggregation”, but more accurately this is either static NIC teaming or dynamic LACP).
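To make the one-link-hop point concrete, here is a minimal Python sketch of my own (not from the multivendor post) of how an IP-hash style teaming policy pins each flow to a single uplink; the uplink names and IP addresses are made up.

```python
import hashlib

# My own illustration, not from the post: an IP-hash style teaming policy
# maps each source/destination address pair onto exactly one uplink, so the
# "multipathing" domain is a single Ethernet hop and one NFS datastore
# session never uses more than one link's worth of bandwidth.

UPLINKS = ["vmnic0", "vmnic1"]  # hypothetical two-NIC team

def select_uplink(src_ip: str, dst_ip: str) -> str:
    """Pick the uplink for a flow: same address pair, same link, always."""
    digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
    return UPLINKS[digest[0] % len(UPLINKS)]

# One VMkernel port talking to one NFS server IP always lands on the same
# uplink; spreading load requires additional datastores on additional IPs.
print(select_uplink("10.0.0.5", "10.0.0.50"))  # datastore A
print(select_uplink("10.0.0.5", "10.0.0.51"))  # datastore B may hash to the other NIC
```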
- Rodney Haywood – Nehalem Memory with Catalina
In order to increase the number of memory sockets without sacrificing memory bus clock speed, the ASIC adds a small amount of latency to the first word of data fetched. Subsequent data words arrive at the full memory bus speed with no additional delay. The first word delay is on the order of 10%, but I have heard from some spies that testing shows this is looking like a non-issue. It's especially a non-issue compared to the constant 10% latency hit and 28% drop in bandwidth you would get if you populated the channels in the normal Nehalem way.
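Some back-of-the-envelope arithmetic (my own, not from Rodney's post) shows why a first-word-only delay largely amortises away; I assume an eight-beat cache-line fill with equal nominal beat times.

```python
# Rough arithmetic, not from the post: compare a first-word-only delay,
# amortised over a full cache-line fill, with a penalty on every access.

WORDS_PER_LINE = 8          # assumed beats per 64-byte cache-line fill
FIRST_WORD_PENALTY = 0.10   # ~10% extra latency, first word only (per the post)

# Catalina: only the first of eight beats is slower, so per line the
# extra cost is FIRST_WORD_PENALTY spread across all beats.
amortised = FIRST_WORD_PENALTY / WORDS_PER_LINE
print(f"amortised first-word penalty per line: {amortised:.1%}")  # ~1.3%

# Fully populating the channels the "normal Nehalem way" instead costs
# ~10% latency on every access plus the quoted 28% drop in bandwidth.
print("fully populated channels: ~10% latency on every access, ~28% less bandwidth")
```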
- Brian Noris – Securing ESX Service Console
I've been doing a fair bit of virtualization security lately and I thought I'd share a few tidbits on what I've done and why. If you find this useful then check back every couple of days, as I'll be adding additional steps and verifying whether these apply to VI3, vSphere or both. Most of you who are familiar with ESX will know the default “out of the box” behaviour prevents the root user from logging in directly via SSH, which generally means you must either authenticate as a standard user and then su to root, or log in as root directly from the console.
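As a minimal sketch of my own (not one of Brian's documented steps), this checks whether a host's sshd_config still carries that out-of-the-box root-SSH restriction; the path is the standard OpenSSH one.

```python
# Minimal audit sketch (my assumption of a useful check, not Brian's
# procedure): confirm sshd_config still restricts direct root SSH logins.
# In OpenSSH the first occurrence of a directive wins.

def root_ssh_permitted(path: str = "/etc/ssh/sshd_config") -> bool:
    """Return True if sshd_config would allow direct root logins over SSH."""
    with open(path) as config:
        for raw in config:
            parts = raw.strip().split()
            if parts and not parts[0].startswith("#") and \
                    parts[0].lower() == "permitrootlogin":
                return len(parts) > 1 and parts[1].lower() == "yes"
    # OpenSSH's own default is "yes"; ESX ships the directive set to "no".
    return True

if __name__ == "__main__":
    print("direct root SSH permitted:", root_ssh_permitted())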
- Eric Sloof – Diskless Boot of ESX4 and ESX4i with PXE and iSCSI
Since EDA and UDA are still in their beta phase and there aren't many alternatives available for installing a VMware ESX4 or ESX4i server unattended, I thought “let's build one myself”. I'm not a Linux guy, so I had to create a Windows distribution server. In my search I've discovered a great little piece of software called CCBoot. This Windows application enables a diskless boot of ESX4i with iSCSI. Diskless boot makes it possible for an ESX server to be operated without a local disk. The 'diskless' server is connected to a VMDK file over a network and boots up the hypervisor from the remotely located VMDK file. CCBoot is the convergence of the rapidly emerging iSCSI protocol with gPXE diskless boot technology. Remote boot over iSCSI, or CCBoot, pushes iSCSI technology even further, opening the door to the exciting possibility of the diskless computer.
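For a feel of the gPXE mechanism Eric is building on, here is a hypothetical helper of my own (not part of CCBoot) that emits a gPXE script booting a host from an iSCSI target; the target IP and IQN are made-up placeholders.

```python
# Hypothetical sketch, not part of CCBoot: render a per-host gPXE script
# that boots from an iSCSI SAN target, the mechanism the post describes.

GPXE_TEMPLATE = """#!gpxe
dhcp net0
sanboot iscsi:{target_ip}::::{iqn}
"""

def gpxe_script(target_ip: str, iqn: str) -> str:
    """Render a per-host gPXE boot script for an iSCSI SAN target."""
    return GPXE_TEMPLATE.format(target_ip=target_ip, iqn=iqn)

print(gpxe_script("192.168.1.10", "iqn.2009-06.local.lab:esx4i-host01"))
```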
- Dominic Rivera – DRS and anti-affinity rules
An anti-affinity DRS rule is used when you want to keep 2 virtual machines on separate hosts, usually because they provide a redundant service and locating them on the same host would eliminate that redundancy. Unfortunately an anti-affinity DRS rule can only be created for exactly 2 VMs. As you can see from the table below, once you get to creating anti-affinity rules for sets of VMs larger than 4, the creation of the rules becomes daunting.
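The combinatorics behind that table are easy to see in a short Python sketch of my own (the VM names are hypothetical): pairwise-only rules mean C(N, 2) rules to keep N VMs apart.

```python
from itertools import combinations
from math import comb  # Python 3.8+

# My own sketch of the combinatorics in the post: because each DRS
# anti-affinity rule covers exactly 2 VMs, keeping N VMs apart needs
# one rule per pair, i.e. C(N, 2) rules.

def pairwise_rules(vms):
    """Enumerate every 2-VM anti-affinity rule needed to separate all VMs."""
    return list(combinations(vms, 2))

web_farm = [f"web{i:02d}" for i in range(1, 6)]  # five hypothetical VMs
for pair in pairwise_rules(web_farm):
    print("anti-affinity rule:", pair)

print("rules needed:", comb(len(web_farm), 2))   # already 10 rules for 5 VMs
```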