Since the release of VMware Virtual SAN, I’ve been involved in numerous customer and field conversations around VMware Virtual SAN’s ability to take advantage of data locality.
I have addressed the question in several of the Virtual SAN presentations I have delivered, but I realized this was an ongoing topic of discussion, one for which we needed to provide more detail in order to satisfy everyone who has been wondering about it. I figured it was time to put together some form of OFFICIAL collateral providing in-depth details around this topic.
So, like any storage system, VMware Virtual SAN makes use of data locality. Virtual SAN uses a combination of algorithms that take advantage of both temporal and spatial locality of reference to populate the flash-based read caches across a cluster and provide high performance from available flash resources.
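To make the two kinds of locality concrete, here is a minimal toy read cache in Python: recently read blocks stay resident (temporal locality), and a cache miss also prefetches a few adjacent blocks (spatial locality). This is purely illustrative and is not Virtual SAN’s actual caching algorithm; the `ReadCache` class, its parameters, and the `backing` block map are all invented for the sketch.

```python
from collections import OrderedDict

class ReadCache:
    """Toy read cache illustrating temporal and spatial locality.

    Temporal locality: recently read blocks stay cached (LRU eviction).
    Spatial locality: a miss also prefetches the next few adjacent blocks.
    Illustrative only -- not Virtual SAN's actual caching algorithm.
    """

    def __init__(self, capacity, prefetch=2):
        self.capacity = capacity
        self.prefetch = prefetch
        self.cache = OrderedDict()  # block id -> data, oldest first

    def read(self, block, backing):
        hit = block in self.cache
        if hit:
            self.cache.move_to_end(block)  # refresh recency (temporal)
        else:
            # Miss: fetch the block plus its neighbors (spatial)
            for b in range(block, block + 1 + self.prefetch):
                if b in backing and b not in self.cache:
                    self.cache[b] = backing[b]
                    if len(self.cache) > self.capacity:
                        self.cache.popitem(last=False)  # evict LRU
        return hit

backing = {i: f"data-{i}" for i in range(100)}
cache = ReadCache(capacity=8, prefetch=2)
cache.read(10, backing)          # miss: caches blocks 10-12
assert cache.read(10, backing)   # temporal hit (re-read)
assert cache.read(11, backing)   # spatial hit (prefetched neighbor)
```

The same intuition applies at cluster scale: a workload that re-reads hot blocks or scans sequentially gets served largely from flash, whichever host the cache lives on.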
For more details on this topic, download the new Understanding Data Locality in VMware Virtual SAN white paper from the link below:
Understanding Data Locality in VMware Virtual SAN
For future updates on Virtual SAN (VSAN), Virtual Volumes (VVols), and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds
Today we are excited to announce the launch of the vSphere Beta Program. The vSphere Beta is open for everyone to sign up and allows participants to help define the direction of the world’s most widely adopted, trusted, and robust virtualization platform. Future releases of vSphere aim to build on vSphere 5.5 with new features and capabilities that improve IT’s efficiency, flexibility, and agility to accelerate your journey to the Software-Defined Enterprise. Your participation will help us continue to drive toward this goal.
This vSphere Beta Program leverages a private Beta community to download software and share information. We will provide discussion forums, webinars, and service requests to enable you to share your feedback with us.
You can expect to download, install, and test vSphere Beta software in your environment. All testing is free-form and we encourage you to use our software in ways that interest you. This will provide us with valuable insight into how you use vSphere in real-world conditions and with real-world test cases, enabling us to better align our product with your business needs.
The vSphere Beta Program has no established end date and you can provide comments throughout the program. But we strongly encourage your participation and feedback in the first 4-6 weeks of the program.
Some of the many reasons to participate in this vSphere Beta Program include:
- Receive early access to the vSphere Beta products
- Gain early knowledge of and visibility into product roadmap
- Interact with the vSphere Beta team consisting of Product Managers, Engineers, Technical Support, and Technical Writers
- Provide direct input on product functionality, configurability, usability, and performance
- Provide feedback influencing future products, training, documentation, and services
- Collaborate with other participants, learn about their use cases, and share advice and learnings
Sign up and join the vSphere Beta Program today at: https://communities.vmware.com/community/vmtn/vsphere-beta
What’s vSphere Big Data Extensions?
VMware vSphere Big Data Extensions (BDE) is a feature within vSphere that supports Big Data and Hadoop workloads. BDE provides an integrated set of management tools to help enterprises deploy, run, and manage Hadoop on the vSphere platform. Through the vCenter user interface, enterprises are able to manage and scale Hadoop seamlessly. Combined with vCloud Automation Center, BDE can also provide an on-premises Hadoop-as-a-Service solution for Hadoop users.
What’s new in BDE 2.0?
- Support for the latest distributions of Apache Hadoop 2.0 software. In addition to the previously supported Hadoop distributions, Big Data Extensions users may now also deploy and manage Apache Bigtop 0.7.0, Cloudera CDH5, Hortonworks HDP 2.1, MapR 3.1, and Pivotal PHD 2.0.
- CentOS 6.4 operating system for the Hadoop Template virtual machine. The Hadoop Template virtual machine now uses CentOS 6.4 as its default operating system. This provides an increase in performance, as well as native support for all Hadoop distributions for use with Big Data Extensions.
- IPv6 support for the Serengeti Management Server network. Users can use IP version 6 (IPv6) for network addressing within the Serengeti Management Server network.
- Support for Internationalization (I18N) Level 1. Users can specify vCenter Server resources using any character set supported by the vCenter Server system on which Big Data Extensions is deployed.
- Serengeti Management Server Administration Portal. The Serengeti Management Server Administration Portal helps users view, manage, and troubleshoot Serengeti services in a central web UI.
- Improved error handling. Big Data Extensions provides improved error handling and reporting to help users more easily identify, understand, and recover from error conditions.
Thin vs Lazy Thick vs Eager Thick disk performance profiling on all flash arrays:
Which type of disk is better?
There has always been an ongoing debate on whether thin or thick disks are best suited for high performance IO workloads. While Eager Zeroed Thick disks provide the best performance, they occupy the entire space including unused space in the overlaid file system and are hence not efficient with space utilization. Thin disks consume only the space used by the overlaid operating system or application but have underlying performance concerns for high IO workloads.
Requirements such as eager zeroed thick disks for high IO workloads have made the job of both virtual and storage admins difficult. Extra effort and coordination are required to deviate from the norm of using thin disks for traditional virtual machines and to separate out LUNs with special formatting. These special requirements have been operationally hard to implement and maintain.
Flash storage has improved performance many times over, and all flash storage arrays hold the promise of greatly improved performance. Can the use of all flash arrays obviate the need for separate LUNs and special formatting for high IO workloads?
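The trade-off the debate hinges on can be sketched in a few lines of Python. The toy model below captures only the bookkeeping difference between the three formats: when backing space is allocated and when blocks are zeroed. It is an invented illustration, not ESXi’s actual allocation code, and the class and cost units are assumptions made for the sketch.

```python
class VirtualDisk:
    """Toy model of VMDK provisioning types (illustrative, not ESXi's code).

    - thin: space allocated on first write to a block
    - lazy zeroed thick: space reserved upfront, blocks zeroed on first write
    - eager zeroed thick: space reserved and zeroed upfront
    """

    def __init__(self, size_blocks, kind):
        self.kind = kind
        self.size = size_blocks
        # Eager zeroed disks come pre-zeroed; the others zero lazily.
        self.zeroed = set(range(size_blocks)) if kind == "eagerzeroedthick" else set()
        # Thick disks reserve all space at creation; thin reserves none.
        self.allocated = size_blocks if kind != "thin" else 0

    def write(self, block):
        extra_ops = 0
        if self.kind == "thin" and block not in self.zeroed:
            self.allocated += 1   # allocate backing space on first write
            extra_ops += 1
        if block not in self.zeroed:
            extra_ops += 1        # zero the block before writing it
            self.zeroed.add(block)
        return extra_ops          # extra work beyond the write itself

thin = VirtualDisk(100, "thin")
lazy = VirtualDisk(100, "lazyzeroedthick")
eager = VirtualDisk(100, "eagerzeroedthick")
assert (thin.allocated, lazy.allocated, eager.allocated) == (0, 100, 100)
assert thin.write(5) == 2    # allocate + zero on first write
assert lazy.write(5) == 1    # zero on first write
assert eager.write(5) == 0   # no extra work: already zeroed
```

The model shows why eager zeroed thick historically won on first-write latency while thin won on space, which is exactly the trade-off an all-flash array might render moot.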
Today we’ll take a look at running a recovery plan for SRM programmatically, from the API via PowerCLI.
In almost all scenarios, failing over in an automated fashion is a poor idea. There is a lot of risk associated with it and a lot of potential liability for failing over due to incorrect reasoning. Failing over automatically in *test mode*, however, makes an awful lot of sense!
The best way to verify backup data integrity is to routinely perform restores from this backup data. For a variety of reasons, the majority of administrators do not verify backups using this method very often (if ever). Wouldn’t it be great if a backup and recovery solution provided the option to do this automatically on a regular schedule? Wouldn’t it be even better if the solution reported the results of the verification exercise? I am happy to report that is one of the new features of vSphere Data Protection (VDP) Advanced 5.5! Keep reading for more information and to download a short white paper on the topic that includes best practices…
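The verification idea itself is simple to sketch: restore the data and compare a checksum recorded at backup time. The Python below is a minimal model of that integrity check, not VDP’s implementation; the function names (`backup`, `verify_by_restore`) and the in-memory “restore” are invented for illustration.

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def backup(source: dict) -> dict:
    """Take a 'backup' and record a checksum per file at backup time."""
    return {path: {"data": data, "sum": checksum(data)}
            for path, data in source.items()}

def verify_by_restore(backup_set: dict) -> list:
    """Restore each file and compare its checksum to the recorded one.

    Returns the list of paths that failed verification. A real product
    restores to an isolated location and checks the restored VM; this
    sketch only models the integrity comparison itself.
    """
    failures = []
    for path, entry in backup_set.items():
        restored = entry["data"]          # stand-in for an actual restore
        if checksum(restored) != entry["sum"]:
            failures.append(path)
    return failures

backup_set = backup({"/etc/hosts": b"127.0.0.1 localhost\n"})
assert verify_by_restore(backup_set) == []
backup_set["/etc/hosts"]["data"] = b"corrupted"   # simulate bit rot
assert verify_by_restore(backup_set) == ["/etc/hosts"]
```

Running such a check on a schedule, and reporting the result, is exactly the chore VDP Advanced 5.5 takes off the administrator’s plate.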
I will be presenting at a couple of upcoming VMware User Group (VMUG) meetings this week in Florida. I’ll be delivering a presentation on VMware’s Software Defined Storage portfolio with a focus on Virtual SAN (VSAN) recommended practices, use cases, and vCloud Suite interoperability capabilities.
For those interested in pursuing the VCDX Certification, I will also be participating in the delivery of a VCDX Boot Camp along with Florida’s own local VCDX, Chris McCain (VCDX#79).
On Tuesday, December 3rd, I’ll be presenting at the Tampa VMUG Meeting at the USF Marshall Student Center. Go to the Tampa VMUG Meeting site for more information about the Tampa VMUG Meeting and registration.
On Wednesday, December 4th, I’ll be presenting at the Orlando VMUG Meeting at the PUP Corporate Office. Go to the Orlando VMUG Meeting site for more information about the Orlando VMUG Meeting and registration.
Oh yeah! Last but not least, on Thursday, December 5th, I’ll be presenting at the Miami VMUG Workspace at the JW Marriott Marquis Miami. Go to the Miami VMUG Workspace meeting site for more information about the Miami VMUG Workspace and registration.
I hope to see and meet many of you at the events, and hopefully answer your questions with regards to VMware’s Software Defined Storage portfolio. Have your questions ready.
For future updates, be sure to follow me on Twitter: @PunchingClouds
One of the new 5.5 features in vSphere Replication is the ability to retain historical replications as point-in-time snapshots on the recovered virtual machines.
This feature is quite handy for recovering systems that have corrupted data or viruses, or even for auditing system changes and the like. While VMs protected with vSphere Replication can be recovered manually, one by one, full automation of recovery is of course offered by Site Recovery Manager.
In this post I’ll look at how we configure these multiple points in time (MPIT) during replication, and how we interact with them after failover by SRM.
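As a rough mental model of a retention policy of the form “keep N instances per day for the last D days”, here is a Python sketch that selects which replication instances to keep as points in time. This is an invented illustration, not vSphere Replication’s actual retention logic; the function name and parameters are assumptions made for the example.

```python
from datetime import datetime, timedelta

def retained_points(instants, per_day, days, now):
    """Pick which replication instances to keep as points in time.

    Illustrative model of a retention policy ("keep `per_day` instances
    per day for the last `days` days"): within each calendar day inside
    the window, keep the most recent instances. Not VR's exact logic.
    """
    cutoff = now - timedelta(days=days)
    kept = []
    by_day = {}
    for t in sorted(instants, reverse=True):   # newest first
        if t < cutoff:
            continue                           # outside retention window
        day = t.date()
        if by_day.get(day, 0) < per_day:       # cap per calendar day
            by_day[day] = by_day.get(day, 0) + 1
            kept.append(t)
    return sorted(kept)

now = datetime(2013, 11, 1, 12, 0)
# Hourly replication instances over the last two days
instants = [now - timedelta(hours=h) for h in range(48)]
kept = retained_points(instants, per_day=3, days=2, now=now)
```

After an SRM failover, each retained point surfaces on the recovered VM as a snapshot you can revert to, which is where the auditing and corruption-recovery use cases come in.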
Today Pivotal announced the availability of Pivotal CF. Jointly developed with VMware, the Pivotal CF product includes a packaged and supported version of the Cloud Foundry open PaaS for VMware vSphere.
In April 2011 VMware first launched Cloud Foundry, an Apache-licensed open source Platform as a Service (PaaS) and an associated vSphere-based public cloud service. A year later, in April 2012, we announced a DevOps toolchain called BOSH, used to deploy and manage Cloud Foundry at scale on virtualized infrastructure. In April 2013 VMware and EMC formed Pivotal, a spinout company using technology from both companies including Cloud Foundry.
VMware’s vision for Cloud Foundry has always been to deliver maximum agility to application developers across both public and private cloud environments. In working with Pivotal to deliver Pivotal CF we have fulfilled that vision, bringing the incredible productivity of Cloud Foundry to vSphere customers.
This is part three of a set of blogs about the fantastic performance improvements found in vSphere Replication 5.5. Take a look at the prior two posts to understand what has changed and why it is such a change:
But now I do have a few warnings about all this improved vSphere Replication performance. Why is this not unequivocally a great thing? Because you may run the risk of overloading certain systems with this.
Let’s look at the technical issues at hand.