Top 5 Planet V12n blog posts week 11

I had a lot of catching up to do this week. Due to the VCDX Defenses in Munich I did not have a lot of time to read blog articles during the week. Normally I take at least 30 minutes every day to catch up and read the interesting articles my favourite bloggers wrote. This week I had to prep the next day's sessions instead. We had eleven candidates, and each of them handed in at least 100 pages of documentation (design, test plans, operational procedures, etc.). But I did manage to catch up yesterday and today, and after doing so I came up with the following top 5:

  • Scott Lowe – Understanding Network Interface Virtualization
    As the proliferation of virtualization continues, this trend toward increased complexity also continues unabated. How, then, are we supposed to address this? NIV is intended to help address this problem. NIV seeks to remove the complexity from the edge—the NICs and vNICs—and drive that complexity toward the bridges. That is a key underlying principle behind NIV. Look back at the definitions: one characteristic of an IV-capable bridge is that the IV-capable bridge and all of its associated IVs appear to the outside world as a single bridge.
  • Vaughn Stewart – Transparent Storage Cache Sharing – Part 1: An Introduction
    The enabling components of TSCS are the ability within Data ONTAP to deduplicate storage objects (files, LUNs, volumes) and to create zero-cost clones of storage objects (files, LUNs, volumes). These storage savings technologies are often 'parroted' by some of the vendors offering traditional storage array platforms. For the sake of this discussion I’d like to defer any comparisons around storage savings technologies to a future post where we can spend the appropriate attention required to discuss these technologies in greater detail.
  • Alan Renouf – Dell ESXi Management
    To help reduce the system footprint and to simplify deployment, the ESXi software does not have a traditional service console management interface where Dell OpenManage agents are installed. Instead, to provide the required hardware manageability, VMware has incorporated the standard Common Information Model (CIM) management profiles into the ESXi software. The CIM framework is an open standard framework that defines how managed hardware elements in a system are represented. The CIM framework consists of CIM providers developed by hardware vendors to enable monitoring and managing of the hardware device. By representing the hardware elements using standard CIM, ESXi provides any management tool (that implements the same open standards) the ability to manage the system.
  • Nicholas Weaver – FCoE Multi-hop: Why Wait?
    Let me visualize it for you. I want you to picture a FedEx Express truck. It has a simple job. It is given packages (frames) and it delivers them to addresses (FC addresses). Now, the FedEx Company prides itself on reliable delivery. It has all kinds of processes and methods (flow control, classes of service) for ensuring that the truck reaches the address and delivers the packages on time. These methods have been finely tuned specifically for this job.
  • Duncan Epping – Scale Up
    Now it’s not only the cost associated with the impact of a host failure, it is also, for instance, the ability of DRS to load balance the environment. The fewer hosts you have, the smaller the chance that DRS will be able to balance the load. Keep in mind that DRS uses a deviation to calculate the imbalance and simulates a move to see if it results in a balanced cluster. Another thing to keep in mind is HA. When you design for N+1 redundancy and need to buy an extra host, the cost associated with redundancy is high in a scale-up scenario. Not only are the costs high, the load when the fail-over occurs will also increase immensely. If you only have 4 hosts and 1 host fails, the added load on the remaining 3 hosts will have a higher impact than it would have on, for instance, 9 remaining hosts in a scale-out scenario.
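Duncan's failover point boils down to simple arithmetic: if N equally loaded hosts lose one member, each survivor has to absorb 1/(N-1) of a host's worth of extra load. A minimal sketch of that calculation (the host counts below are illustrative, matching the 4-host vs. 10-host comparison in the excerpt):

```python
def failover_load_factor(n_hosts: int, n_failures: int = 1) -> float:
    """Relative load increase on each surviving host after failures,
    assuming the cluster was evenly balanced beforehand."""
    survivors = n_hosts - n_failures
    if survivors <= 0:
        raise ValueError("no hosts left to absorb the load")
    return n_hosts / survivors

# Scale-up: 4 big hosts, one fails -> each survivor carries ~33% more load.
print(f"{failover_load_factor(4):.2f}")   # 1.33
# Scale-out: 10 smaller hosts, one fails -> each survivor carries ~11% more.
print(f"{failover_load_factor(10):.2f}")  # 1.11
```

The same ratio also shows why N+1 is relatively expensive in a scale-up design: the spare capacity you must reserve per host grows as the host count shrinks.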