In a blog post last month titled “vSphere loves 10GigE,” I mentioned that a deployment paper was in the works.
The paper, titled “Deploying 10 Gigabit Ethernet on VMware vSphere 4.0 with Cisco Nexus 1000V and VMware vNetwork Distributed Switches,” is now posted on the Cisco website. We will follow suit in the coming week and post it at the usual place, under Resources at vmware.com/go/networking.
I wanted to remind everyone of something I have already seen floating around the internet, but which is still important enough to repeat: our next release of SRM is going to require a 64-bit OS. The same is true of our next release of VC, which will also require a 64-bit host OS. This change is required to support the increased capabilities of our products. As we scale our products to match our customers' needs, generally 1 to 2 years in advance of where they will need all the capabilities of a given product, we have had to move to a 64-bit OS. The benefit will show up as increased limits, such as more simultaneous vSphere Client connections.
I have looked at the upgrade instructions for moving from a 32-bit platform to a 64-bit platform for both VC and SRM. In fact, I am testing them now as I take a break to write this. Our release notes (and the VC upgrade guide) will cover what needs to be done, and for SRM I will provide additional help in this blog. Look forward to this change: a 64-bit OS is what we need to start delivering great new capabilities!
My previous post was on why vSphere loves 10GigE. You can converge all those 1GigE links into a pair of 10GigE links to not only improve your network performance but also reduce the complexity of your infrastructure.
FCoE takes that reduction of complexity one step further by eliminating the HBAs, Fibre Channel links, and adjacent Fibre Channel switches/directors, carrying that traffic over lossless Ethernet instead.
In conjunction with Cisco and Emulex, we’ve been running a “SAN Virtuosity” series of webcasts with accompanying co-authored papers.
The next SAN Virtuosity webcast is on the topic of FCoE on Wednesday, June 23, 2010 at 9:00am PDT. This session will cover how you can converge your SAN and LAN with vSphere using FCoE. You can register here.
We just posted VMware Data Recovery 1.2! While it is only a dot-one increase in terms of version number (don't ask, long story), the minor changes we implemented really do add up to a major release. Here are some highlights:
- a file-level restore client for Linux virtual machines
- the ability to run up to 10 VDR appliances per vCenter Server instance
- the ability to fast-switch between deployed appliances via the vSphere Client plug-in
We also included fixes for customer-reported problems, so VDR 1.2 is both a feature and a maintenance release. Instead of listing out every fix, feature, and enhancement, I thought a three-minute video walk-through would be a good use of everyone's time. Additional documentation can be found here. Enjoy!
In our last post, we started to dive deeper into writing CIM client code and looked at some of the IPMI-based data. In this post, we'll explore some of the log data that comes from IPMI in a little more detail and touch on a few new CIM concepts.
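To make that concrete, here is a minimal sketch of what such a CIM client might look like in Python, using the open-source pywbem library. The class name CIM_LogEntry, the port, and the property accessed are assumptions for illustration; the actual classes and properties the posts examine may differ, so check your host's CIM provider documentation:

```python
# Sketch: enumerate IPMI-sourced log entries through a host's CIM interface.
# CIM_LogEntry is a standard DMTF class, but the exact class and property
# names exposed for the IPMI System Event Log vary by provider, so treat
# the names below as illustrative assumptions.

def prop(inst, name):
    """Return a property value from a CIM instance (mapping-style access),
    or None if the instance does not carry that property."""
    try:
        return inst[name]
    except KeyError:
        return None

def fetch_ipmi_log_entries(host, user, password, classname="CIM_LogEntry"):
    """Connect to the host's CIM server (commonly port 5989 over HTTPS)
    and return the description of each log entry instance found."""
    import pywbem  # third-party library: pip install pywbem

    conn = pywbem.WBEMConnection(
        "https://%s:5989" % host,
        (user, password),
        default_namespace="root/cimv2",
    )
    return [prop(inst, "Description")
            for inst in conn.EnumerateInstances(classname)]
```

Instances returned by pywbem support dictionary-style property access, which is why the small `prop` helper works on them; it simply tolerates properties a given provider chooses not to populate.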
Great news for customers who want to deploy Linux in VMware environments!
Thanks to an OEM agreement with Novell that was announced earlier today (read the press release), VMware will be able to provide everyone who purchases new vSphere licenses or upgrades with SUSE Linux Enterprise Server, plus a subscription to patches and updates, at zero additional cost. Zero is a big number when you consider that qualified customers will be getting an industry-proven Linux OS with extremely broad application support (more than 5,000 apps), along with patches and updates, for no additional cost.
And it doesn’t end there…
In addition to the cost savings, customers will also enjoy a streamlined support experience for SLES running on vSphere, because VMware will provide direct technical support for SLES. Customers will be able to purchase technical support for SLES directly from VMware or from our extensive reseller network. Technical support for SLES is not included with vSphere SnS and is not mandatory.
When is SLES for VMware going to be available?
SLES for VMware will be available in 3Q 2010 – more announcements to come. At GA we will provide more details around the available options for support and pricing. Regardless of the GA date, everyone who purchases new qualifying vSphere licenses on or after June 9th will be entitled to SLES for VMware and a subscription to patches and updates. We will follow up with customers in the next few weeks on how to download SLES for VMware and activate their subscription.
Why is VMware doing this?
VMware’s mission is to reduce IT complexity by providing a stable and realistic path to the cloud. SLES for VMware provides a cost-effective path to accelerate the evolution of fully virtualized datacenters, simplifying the portability of applications between on-premise and off-premise private cloud environments.
For more details about terms and conditions and qualifying vSphere SKUs, check out the SLES for VMware home page.
Simon Long has published a great blog on log files in ESXi 4.0. It goes over the primary log files in ESXi 4.0, and the various ways to obtain and view them. If you're used to logging into the COS on classic ESX to view these files, then please do read this entry to see various alternatives for doing this in ESXi. One of my favorites is with a web browser, because you can scroll through an entire log file pretty easily, and use the web browser's built-in search capability to look for a particular string.
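The browser method can also be scripted. Below is a minimal Python sketch, assuming the host serves its log files under https://&lt;host&gt;/host/&lt;logname&gt; (the same URL a browser would use on ESXi 4.0) with HTTP basic authentication and a self-signed certificate; adjust for your environment:

```python
# Sketch: fetch an ESXi log file over HTTPS, mirroring what a web browser
# does. Assumes logs are served under https://<host>/host/<logname> and the
# host presents a self-signed certificate (so verification is disabled).
import base64
import ssl
import urllib.request

def host_log_url(host, logname):
    """Build the URL the browser method would use for a given log file."""
    return "https://%s/host/%s" % (host, logname)

def fetch_log(host, logname, user, password):
    req = urllib.request.Request(host_log_url(host, logname))
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    # Accept the host's self-signed certificate.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.read().decode("utf-8", errors="replace")
```

Once fetched, you can grep the text just as you would use the browser's built-in search, e.g. `[l for l in fetch_log("esx1", "messages", "root", pw).splitlines() if "error" in l]` (host name and log name here are placeholders).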
One question Simon raises is how to obtain the HA logs (aka AAM logs) on ESXi, since they are not exposed through the means described above. The answer is to use the "Export System Logs…" option in the vSphere Client. This downloads the usual diagnostics bundle, which contains all available log files, including the AAM logs; you can then view the contents locally by opening the archive file.
Thanks to Simon for putting this together.