
Top 5 Planet V12n blog posts week 40

Week 40 already; before you know it, it will be Christmas again. This week we had some excellent posts, but probably the most exciting thing that happened was the VMTN Community Podcast with Vaughn Stewart, Chad Sakac, Andy Banta, Eric Schott and Adam Carter: the vGods of iSCSI. If you didn't join last Wednesday, you can find it via Vaughn's article here. Here's this week's top 5:

  • John Arrasjid – VCDX Tips from VCDX 001 John Arrasjid
    Practice what you preach and learn from others. Architects listen first. Don’t assume the answer before the discussion starts! Scenarios for VCDX defenses test the journey to a solution, not necessarily the final answer. Whiteboard, talk and ask questions. Troubleshooting scenarios – think of the architecture and the implementation approach to resolution. Logs, design, SC commands.
  • Eric Gray – PowerShell Prevents Datastore Emergencies
    When a datastore is about to run out of space, the fastest resolution may be to simply migrate virtual disks to another datastore. VMware Storage VMotion provides that capability with zero downtime for VMs and no disruption to end users. Fortunately, PowerCLI can perform this feat with ease, thanks to the Move-VM cmdlet (see the short PowerCLI sketch after this list).
  • Chad Sakac – HOWTO: Use Site Recovery Manager and Linked Clones together
    VMware and EMC recently collaborated on a project with a customer, and that project included documenting the details of why this occurs, as well as the workaround.
    If you’re interested – read on!
    The key is that the ADAM and View SQL databases actually store the vCenter instance name (in the form of a Moref ID, also known as the MOID), which changes after SRM failover and breaks the replica/linked clone relationship. Furthermore, the parent location is explicitly referenced in the vmdk descriptor.
    Without doing anything fancy, you can deploy new desktop pools, but you can’t access existing linked clones, or recompose or refresh them.
  • Massimo Re Ferre' – Ad Hoc Designed Infrastructures: do they still make sense?
    Simply put, IT consists of two major building blocks: Functional Requirements and Non-Functional Requirements. This is how Wikipedia defines them:
    Functional Requirements: "A functional requirement defines a function of a software system or its component. A function is described as a set of inputs, the behavior, and outputs (see also software)"
    Non-Functional Requirements: "A non-functional requirement is a requirement that specifies criteria that can be used to judge the operation of a system, rather than specific behaviors. This should be contrasted with functional requirements that define specific behavior or functions".
    So the question I have been thinking about for the last few years is simple: in a virtualization context, do I really need, during a customer engagement, to go through a deep-level analysis of the applications currently deployed or soon to be deployed? In addition, when defining the new virtualized infrastructure to support those applications, do I need to analyze them one by one (from a Non-Functional Requirements perspective) or can I treat them as a whole? You can deduce the answer from the following two slides, which are included in a set of charts I created back in 2007.
  • Duncan Epping – What's that ALUA exactly?
    This “problem” has been solved with vSphere. VMware vSphere is aware of what the optimal path to the LUN is. In other words, VMware knows which processor owns which LUNs and preferably sends traffic directly to the owner. If the optimized path to a LUN is dead, an unoptimized path will be selected, and within the array the I/O will be directed via an interconnect to the owner again. The pathing policy MRU also takes optimized/unoptimized paths into account. Whenever there’s no optimized path available, MRU will use an unoptimized path; when an optimized path returns, MRU will switch back to it. Cool huh!?!
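
For those who want to try the Move-VM approach from Eric's post, here is a minimal PowerCLI sketch; the vCenter server, VM and datastore names below are made up for illustration:

Connect-VIServer -Server vcenter.lab.local

# Storage VMotion: relocate the VM's disks to another datastore while the VM keeps running
Get-VM -Name web01 | Move-VM -Datastore (Get-Datastore -Name datastore02)

Disconnect-VIServer -Confirm:$false

Wrap the Move-VM line in a loop over Get-VM output and you have a quick way to evacuate a datastore that is filling up.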
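
And for Duncan's ALUA post, this sketch shows how you could check and change the multipathing policy per LUN with PowerCLI; the host name and the LUN's canonical name are placeholders, and it assumes an existing Connect-VIServer session:

$esx = Get-VMHost -Name esx01.lab.local

# List every disk LUN on the host together with its current multipathing policy
Get-ScsiLun -VmHost $esx -LunType disk | Select-Object CanonicalName, MultipathPolicy

# Set a single LUN to Most Recently Used; with ALUA, MRU prefers the optimized path
# and only falls back to an unoptimized path when no optimized path is available
Get-ScsiLun -VmHost $esx -CanonicalName "naa.60060160deadbeef01" | Set-ScsiLun -MultipathPolicy "MostRecentlyUsed"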