
Monthly Archives: June 2010

Top 5 Planet V12n blog posts week 25

The World Cup tournament has just entered the knockout stage. Today England plays Germany, and it is probably needless to say that I will be supporting England, or should I say "Engerland"? (Hey, no one loves their neighbours.) All of this has of course nothing to do with the reason I am writing this article, which is about the top 5 Planet V12n articles of week 25. This week we've got a "newcomer"; I guess this is my way of saying: welcome, William. Here we go:

  • Kendrick Coleman – Why vSphere Needs NFSv4

    If you are familiar with my blog, you'll know that I'm a huge advocate of the NFS protocol with VMware. I firmly believe that over the next few years, Ethernet storage will be the front-runner of VMware deployments. Most of the people I talk to who have a Fibre Channel (FC) based environment are in large enterprises that made the switch to VMware but kept their existing FC environment. Which is great, but now everyone is starting to virtualize their whole environment, and money talks when it comes to scalability. I won't go into Ethernet vs FC because there are boatloads of information out there already, but let's talk about NFS. NFS is that guy sitting in the corner who doesn't get much attention, but NFS is making headway into the marketplace.
  • Vaughn Stewart – Data Compression, Deduplication, & Single Instance Storage
    Storage savings technologies are all the rage of the storage and backup industries. While every vendor has their own set of capabilities, it is in the best interest for any architect, administrator, or manager of data center operations to have a clear understanding of which technology will provide benefits to which data sets before enabling these technologies. Saving storage while impeding the performance of a production environment is a sure-fire means to updating one's resume.
  • Daniel Eason – VMware DPM usage – My view
    DPM technology is excellent and, to be honest, plain common sense. It has moved from experimental to fully production-supported in the later versions of ESX, and is now a de facto proven feature within vSphere. The main benefit of DPM is simple: it will dynamically turn off hosts that are not needed at non-peak times, which is great, as it avoids the cost that would have been incurred by running vSphere hosts in an under-utilised state. So I'll get to the point: do I think DPM is a capability that anyone can use to obtain those savings? Well, not really, to be honest. I am in the non-enthusiastic camp when it comes to DPM, and my reasons are as follows…
  • William Lam – ESXi syslog caveat
    Append the above entries between the tags. Once you have updated the vpxa.cfg file, you will want to run the following command on the BusyBox console to ensure the changes are saved and backed up to the local bootbank for ESXi. There is an automated cron job that runs every hour which calls /sbin/auto-backup.sh
  • Scott Drummonds – Private Clouds, People Consolidation, and Chargeback
    The beauty of virtualization is that not only can the physical resources be shared, as any VMware demonstration will prove, but the people that support the infrastructure can be shared, too. This concept is already understood by VMware’s more mature customers, who have been telling VMware for years that virtualization can save more money in operational expenses than capital expenses. These savings are coming after thinning the ranks of dedicated infrastructure specialists and refocusing them on higher value opportunities.
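William Lam's syslog tip above boils down to a common ESXi pattern: append entries inside an existing XML config file (vpxa.cfg), then make sure the change is backed up before the next reboot. Here is a minimal Python sketch of that edit; the element names (`log`, `outputToSyslog`) and the sample config are illustrative assumptions, not the exact entries from the original post.

```python
# Sketch of a vpxa.cfg-style edit using xml.etree instead of hand-editing
# on the BusyBox console. Tag names below are hypothetical placeholders.
import xml.etree.ElementTree as ET

SAMPLE_CFG = "<config><log><level>info</level></log></config>"

def add_entries(cfg_xml, entries):
    """Append (tag, text) pairs inside the <log> section of the config."""
    root = ET.fromstring(cfg_xml)
    log = root.find("log")
    for tag, text in entries:
        child = ET.SubElement(log, tag)
        child.text = text
    return ET.tostring(root, encoding="unicode")

updated = add_entries(SAMPLE_CFG, [("outputToSyslog", "true")])
# On a real ESXi host you would then run /sbin/auto-backup.sh yourself
# (or wait for the hourly cron job) so the change survives a reboot.
```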

Top 5 Planet V12n blog posts week 24

As it was Father's Day yesterday and I also had to fly out to London, I totally forgot to hit the "publish" button. I did, however, create a Top 5:

  • David Davis – VIDEO: Mike DePetrillo speaking on VMware vCloud
    One of the most controversial parts of Mike’s presentation is when he says that vCloud is really sold to the CIO and the message to the IT group is that you will have to change in order to keep your job. In other words, “the cloud” will assimilate the infrastructure as we know it and IT people will have to adapt to that, improving their skill set, in order to move to different roles in the IT organization where they can accomplish the more important IT projects with real ROI (not just maintaining the SAN LUNs, or whatever they do). Watch the video to hear the vCloud message for yourself… Note: Mike doesn’t show a “Project Redwood” demo – sorry.
  • Eric Sloof – StarWind iSCSI multi pathing with Round Robin and esxcli
    After you have created a StarWind iSCSI target, it’s ready to service connections. You can establish a connection to an iSCSI target and it appears as a new datastore on your ESX server. I’ll show the operations you need to complete to create and format the datastore so that your ESX server can create virtual machines on it.
    I’m also going to show how the esxcli command can be used for PSA (pluggable storage architecture) management, and explain how to use the vSphere Client to manage the PSA and the associated native multipathing plug-in (NMP).
  • Todd Muirhead – Scale-Out Performance of Exchange 2010 Mailbox Server VMs on vSphere 4
    The performance in the 4000 user tests shows a rise of only 30ms in the 95th percentile SendMail response time between a single 4-vCPU VM and four 1-vCPU VMs. The 8000 user tests show an increase of approximately 140ms in the same metric when comparing the single 8-vCPU VM with four 2-vCPU VMs. Even though this is a significant percent increase, the absolute increase is still relatively small in comparison to the 1 second threshold which is where users will begin to perceive a difference in performance.
  • Martin Klaus – Operations Management in the Virtualized Environment – What’s different?
    As the foundation for the Private Cloud, virtualization enables server, storage and networking resources to be shared very efficiently across applications. Virtualization also allows you to standardize your service offerings. Templates for your corporate Windows or Linux images can be provisioned as virtual machines in minutes. Even higher-level server configurations with complete web, application and database server stacks can become building blocks for your Enterprise Java environments or Sharepoint instances, further simplifying the provisioning process and lessening the need for one-off admin tasks. Automated backup, patch and update processes are additional benefits that are easy to realize with virtualized infrastructure.
  • Scott Lowe – The vMotion Reality
    In his article, Benik states that the ability to dynamically move workloads around inside a single data center or between two data centers is, in his words, “far from an operational reality today”. While I’ll grant you that inter-data center vMotion isn’t the norm, vMotion within a data center is very much an operational reality of today. I believe that Benik’s article is based on some incorrect information and incomplete viewpoints, and I’d like to clear things up a bit.
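The Exchange scale-out numbers in Todd Muirhead's post hinge on one metric: the 95th-percentile SendMail response time, and the absolute (not relative) increase between configurations. A small sketch of that calculation, using a simple nearest-rank percentile and made-up latency samples:

```python
# Hedged sketch of the 95th-percentile latency comparison; the millisecond
# samples below are invented for illustration, not from the actual tests.
def percentile(samples, p):
    """Nearest-rank percentile (p in 0..100) of a list of numbers."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

single_vm = [120, 150, 180, 200, 260, 300, 320, 350, 400, 480]  # ms
four_vms  = [130, 160, 190, 230, 280, 330, 360, 390, 450, 510]  # ms

delta = percentile(four_vms, 95) - percentile(single_vm, 95)
# A small absolute delta matters little while both configurations stay
# well under the ~1 second threshold where users begin to notice.
```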

Top 5 Planet V12n blog posts week 23

As I was watching one of the World Cup games yesterday evening I totally forgot to click "publish". Thanks Jason for pointing this out. Here's this week's top 5:

  • Aaron Delp – Comparing Vblocks
    I believe one of the most interesting concepts to come along in our industry recently has been Cisco/EMC/VMware's Vblock. My best definition for Vblock is a reference architecture that you can purchase. Think about that for a second. Many vendors publish reference architectures that are guidelines for you to build to their specifications. Vblock is different because it is a reference architecture you can purchase. This concept is a fundamental shift in our market to simplify the complexity of solutions as we consolidate Data Center technologies. We are no longer purchasing pieces and parts, we are purchasing solutions.
  • Scott Drummonds – VMDirectPath
    The only reason why anyone is considering VMDirectPath for production deployments is the possibility of increased performance. But the only workload for which VMware has ever claimed substantial gains from this feature is the SPECweb work I quoted above. That workload sustained 30 Gb/s of network traffic. I doubt any of VMware’s customers are using even a fraction of this network throughput on a single server in their production environments.
  • Jason Boche – NFS and Name Resolution
    A few weeks ago I had decided to recarve the EMC Celerra fibre channel SAN storage. The VMs which were running on the EMC fibre channel block storage were all moved to NFS on the NetApp filer. Then last week, the Gb switch which supports all the infrastructure died. Yes it was a single point of failure – it’s a lab. The timing for that to happen couldn’t have been worse since all lab workloads were running on NFS storage. All VMs had lost their virtual storage and the NFS connections on the ESX(i) hosts eventually timed out.
  • Frank Denneman – Memory Reclamation, When and How?
    Back to the VMkernel: in the High and Soft states, ballooning is favored over swapping. If the ESX server cannot reclaim memory by ballooning in time, before it reaches the Hard state, ESX turns to swapping. Swapping has proven to be a sure thing within a limited amount of time. Unlike the balloon driver, which tries to understand the needs of the virtual machine and lets the guest decide whether and what to swap, the swap mechanism just brutally picks pages at random from the virtual machine. This impacts the performance of the virtual machine, but will help the VMkernel survive.
  • Duncan Epping – Is this VM actively swapping?
    At one point the host has most likely been overcommitted. However, currently there is no memory pressure (state = high (>6% free memory)), as there is 1393MB of memory available. The metric “swcur” seems to indicate that swapping has occurred; however, currently the host is not actively reading from swap or actively writing to swap (0.00 r/s and 0.00 w/s).
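The distinction in Duncan's esxtop excerpt is worth spelling out: "swcur" > 0 only proves the host swapped at some point in the past, while *active* swapping shows up as non-zero swap read/write rates. A minimal sketch of that reasoning (the field names mirror esxtop's, but the function is an illustration, not an esxtop feature):

```python
# Interpreting esxtop-style swap metrics: swap currently in use (swcur)
# versus active swap traffic (r/s and w/s).
def swap_state(swcur_mb, reads_per_s, writes_per_s):
    if reads_per_s > 0 or writes_per_s > 0:
        return "actively swapping"
    if swcur_mb > 0:
        return "swapped in the past, not active now"
    return "never swapped"

# The situation from the excerpt: swap space in use, 0.00 r/s and 0.00 w/s.
state = swap_state(swcur_mb=512, reads_per_s=0.0, writes_per_s=0.0)
```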

VMware vExpert 2010

[Updated Monday 7 June]

The invitations to the VMware vExpert 2010 program have been sent out. Emails were sent out Friday and Monday; the timing had no bearing on the merit of your application! (If you were expecting an invitation, please check your junk mail filters. Although I tried not to use any words like Congratulations! or You Win a Million Dollars! or Free Herbal Prescriptions! I've gotten reports that spam filters did catch a few of the outbound emails.)

We had a great selection of candidates this year, and I'm looking forward to working with all of you. All of the judges were very impressed with the applicants, and we made some very hard decisions about who to accept in the program. 

If you applied but did not get selected, I would be happy to work with you on planning for 2011 and how you might work toward a vExpert designation. The vExpert award looks backward on what you did the year before, and in the seven short months until Jan 2011 you could make quite an impact. Things move very quickly in the social media world, and people who rock it hard can get noticed quickly. 

There were a number of common cases in applications that weren't accepted:

  • You didn't demonstrate enough activity. If your claim to vExpert fame is a blog, then you should blog like you mean it. If you are active on the community, then you should be very active. Although we tried to value quality over quantity, blogging or answering questions on the community is an endurance sport, and the way to grow in knowledge and grow an audience is to be consistent over time. Take the time to blog (or speak, or whatever you do) every day. This is hard work. Work hard, but take it one step at a time. After a year, you'll be shocked at how much you accomplished. (Now, life may have intervened — babies, work, health, and happiness are all part of living and should take precedence over virtualization evangelism. We'll catch you next year when you come up for air. No worries.)

  • You participated but did not create. You came to events, podcasts, and more. You supported and commented and tweeted. You probably learned a lot, and you now know more people, but you didn't do a lot of sharing of your expertise. Creating is hard work, and we looked for people who sat their butts in their chairs and typed or powerpointed or otherwise instantiated their knowledge so that others could benefit. You have something to say to the world. Say it. What problem did you solve at work today? What are you passionate about? Give back to the world. 

  • You didn't differentiate yourself. There are two related parts to this problem: one you can't do much about, but one that's the key to success. If you are in the English-speaking virtualization world, the bar for evangelism is very high. We're a bunch of smart people, and you're competing for people's attention against both geniuses and overachievers. (Oh, yes, I'm talking about the Dutch.) You can't do much about where you live, but you can figure out how to make yourself stand out. Don't just blog product and press releases. Go beyond. Blog your passion and tell people about what's important to you. Make a picture or a comic or a presentation or a video. Become "that guy that does that amazing thing." Dare to be memorable.

  • You didn't demonstrate enough "above and beyond" activity outside your normal job. If your day job is to sell virtualization products, you had to pass a high bar to receive a vExpert award. The judges have a soft spot in our hearts for people who could be lounging on the couch at home or even at a hotel, but instead push it harder. Invest a slice of your time in yourself. Having fun doing something cool is the best way to stand out in your career. It's much better than not having fun and not standing out.

  • You need to go deeper. Virtualization is a deep topic. vSphere is a deep product that cuts across all IT disciplines. We all start somewhere in our journey and from the perspective of where we've been. Be humble enough to realize that you might not understand the whole landscape yet. Do your homework. Listen, learn, break out of your silo. 

  • You didn't demonstrate enough reach outside your company. The vExpert award is at some level about evangelism. Sharing your expertise internal to your own company is wonderful, but the judges were also looking for people who had created a platform where they could influence beyond the boundaries of a single company — thus the emphasis on a blog or speaking engagements. Go out and conquer! If you're introverted, write. If you're extroverted, speak. If you're brilliant, teach. If you're not brilliant, hook up with people that are and help organize! Make waves.

  • Your application note wasn't detailed enough. Often, the judges couldn't determine exactly what impact you had in your activities, or exactly what you did. If someone else nominated you, they may not have adequately described exactly how awesome you were in 2009. I think we're moving to an application model (vs a nomination model) for next year. Get ready to apply for 2011 – now is not the time to be modest. Allow yourself to excel and then just let us know what you've been up to. 

  • Your activity was mostly in 2010. The award was given out for things you did in 2009. The next vExpert selection will be in seven short months, so if you just got going in 2010, you have a great runway to join the program next year. Keep it up!

I hope something in that advice resonates with you. I'd love to work with you throughout this year and next – give me a buzz. For those that did get selected as vExperts, I've got more to say about why and what's coming up, but that can come later. I hope you're as excited as I am. Doors will be opening to our vExpert community site tomorrow!

John Troyer

Top 5 Planet V12n blog posts week 22

Week 22 already. Almost halfway through 2010. Next week the FIFA World Cup starts. For those of you, probably Americans, who don't have a clue what it is about: World Cup Soccer. And yes, this is the most widely viewed sports event there is, and the cool thing about it is that you get to watch sports for 45 minutes in a row before you have a commercial break! Anyway, there's one thing left to say before I list this week's Top 5: GO HOLLAND!

  • Cody Bunch – The Math Behind the DRS Stars
    In our particular case there is not much to look at as, well, she is seemingly a well-balanced cluster. However, let’s work through the formula with the assumption that we have a 2-node cluster and a standard deviation of 0.282 (the “target” from above): 6 – ceil(0.282 / 0.1 * sqrt(2)).
  • Eric Sloof – Caveat when using – Percentage of cluster resources reserved as failover spare capacity
    I think everyone knows the three admission control policies which can be enforced on a VMware HA cluster. If you are using the default “Host Failures Allowed” policy, you must keep in mind that the largest virtual machine reservation will decide how big your cluster slot size is going to be. In most cases, when you are using reservations that differ, I would prefer to use “Percentage of cluster resources reserved as failover spare capacity”. But be careful: I’ve pulled two quotes which warn us about scattered resources and the need to set restart priority on large virtual machines.
  • Simon Long – VMware ESXi 4 Log Files
    This is the ESXi Host Agent log. It contains information on the agent that manages the ESXi host and its VMs. I don't tend to use this log as much as I used to with ESX, purely because it has been amalgamated into the message log. If you are troubleshooting a host issue and don't want vmkernel logs getting in the way, this is the log for you. The log entries are time-stamped (using the UTC timezone), which is pretty handy when looking back to see what happened when an error occurred or something failed.
  • Arnim van Lieshout – PowerCLI: Reset CPU and Memory Limits
    Today I noticed a memory limit on a VM. After investigating my environment using the vEcoShell and the Community PowerPack, I found more VMs with memory limits set. It turned out that there was a template which had the limit set. I could easily have reset all limits using the GUI, but I thought I'd rather do it with PowerCLI. Alan Renouf did a post back in July 2009 on a one-liner to reset all CPU and memory limits. After trying that code I found it rather slow. If you want to speed things up in PowerCLI, you need to use the Get-View cmdlet. After some digging in the vSphere API Reference, I came up with a different piece of code that is much faster.
  • Duncan Epping – esxtop -l
    As most of you know, esxtop takes snapshots from VSI nodes (similar to proc nodes) to capture the running entities and their states. The rate at which these snapshots are taken can be changed with the “s” option. The default setting is 5 seconds, and the minimum, which most people probably use, is 2 seconds. This means that every entity (worlds, for instance a virtual machine) and the associated info is queried again every two seconds. As many of the metrics shown in esxtop are calculated from the difference between two successive snapshots, e.g. %USED (CPU), esxtop just rereads all the info (all entities and all values) and calculates the values of the metrics.
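Cody Bunch's DRS-stars example above can be checked with a few lines of Python. Evaluating the expression exactly as written in the quote, with normal left-to-right precedence for "/" and "*", gives a priority of 2 for his numbers; whether the divisor should instead be 0.1 * sqrt(hosts) is an assumption the original post would settle.

```python
# Sketch of the quoted formula: priority = 6 - ceil(stddev / 0.1 * sqrt(hosts)),
# taken literally with Python's left-to-right operator precedence.
import math

def drs_priority(std_dev, hosts, target=0.1):
    return 6 - math.ceil(std_dev / target * math.sqrt(hosts))

result = drs_priority(0.282, 2)  # the 2-node cluster from the example
```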