A new version of the VMFS Technical Overview and Best Practices white paper is now available. It has been updated for vSphere 5.x and VMFS5, and you can download it from the VMware Technical Resources web site here. The new paper covers VMFS5 limits and includes discussions of interoperability with newer vSphere features. It also has updated guidance on when and where to use Raw Device Mappings (RDMs), a feature whose requirements have changed over the years.
The paper has been maintained by a number of technical marketing personnel over the years. We hope you find this latest version useful.
Get notification of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage
In this post I’ll show six VMs being protected with vSphere Replication: two VMs each residing on Fibre Channel datastores (EMC CX4), iSCSI datastores (FalconStor NSS Gateway), and an NFS datastore (EMC VNX5500). I’ll replicate them onto different datastores, fail them over, reprotect them, and fail them back.
In previous releases of ESXi, only SNMP v1 and v2c were supported on the host. With the latest release, ESXi 5.1 adds support for SNMPv3, which provides additional security when collecting data from the host. You also have the ability to specify where to source hardware alerts, using either IPMI sensors (as in previous releases of ESXi) or CIM indications, and you can filter out specific traps you do not wish to send to your SNMP management server.
In addition to SNMPv3 support, there is now an ESXCLI equivalent of the old vicfg-snmp command. This means that you no longer have to use multiple tools to configure your ESXi hosts and can standardize on ESXCLI for all your host-level configuration.
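As a quick sketch of the new workflow, configuring SNMPv3 with ESXCLI looks roughly like the commands below. The hostnames, user names, and passphrases are placeholders, and the exact option names may differ slightly between builds, so confirm them with `esxcli system snmp set --help` before use.

```shell
# Sketch of SNMPv3 configuration with ESXCLI on ESXi 5.1.
# Hostnames, users, and secrets below are placeholders.

# Choose the authentication and privacy protocols:
esxcli system snmp set --authentication SHA1 --privacy AES128

# Generate localized hashes from raw passphrases:
esxcli system snmp hash --auth-hash myauthpass --priv-hash myprivpass --raw-secret

# Create a v3 user from the resulting hashes
# (format: user/auth-hash/priv-hash/security-level):
esxcli system snmp set --users ops/<auth-hash>/<priv-hash>/priv

# Send v3 traps to a management station and enable the agent:
esxcli system snmp set --v3targets mgmt.example.com@162/ops/priv/trap
esxcli system snmp set --enable true

# Review the running configuration:
esxcli system snmp get
```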
VMware vCloud Networking and Security Edge is part of the vCloud Networking and Security solution and provides network edge security and gateway services such as DHCP, VPN, NAT, Firewall and Load Balancing. Edge provides load balancing for TCP, HTTP, and HTTPS traffic. Edge maps an IP address to a set of backend servers for load balancing. In this blog, I am going to show step-by-step configuration illustrating how easy it is to deploy and configure load balancing using Edge.
Each Edge virtual appliance can have a total of ten uplink and internal network interfaces. In the three-tier application below, the Web, App, and DB tiers are on three different internal interfaces of the Edge. The uplink interface is connected to the 10.20.181.0/24 network, with access to the corporate network. In this example, we are going to load balance HTTP and HTTPS traffic to two internal web servers (192.168.1.2 and 192.168.1.3) using an external virtual address (10.20.181.170).
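Besides the UI walkthrough, Edge can also be driven programmatically through the vShield Manager REST API. The sketch below is purely illustrative: the resource path, edge ID, credentials, and XML element names are assumptions, so consult the vShield API Programming Guide for the exact schema and endpoints in your release.

```shell
# Hypothetical sketch: push an Edge load-balancer configuration through the
# vShield Manager REST API. The URL path, edge ID (edge-1), credentials, and
# XML element names are placeholders, not verified against the published schema.
curl -k -u 'admin:default' \
  -H 'Content-Type: application/xml' \
  -X PUT \
  -d '<LoadBalancerConfig>
        <virtualServer>
          <ipAddress>10.20.181.170</ipAddress>
          <protocol>HTTP</protocol>
          <port>80</port>
        </virtualServer>
        <pool>
          <member><ipAddress>192.168.1.2</ipAddress></member>
          <member><ipAddress>192.168.1.3</ipAddress></member>
        </pool>
      </LoadBalancerConfig>' \
  'https://vsm.example.com/api/3.0/edges/edge-1/loadbalancer/config'
```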
Last week I wrote an article about resxtop failing to connect to an ESXi 5.1 host because of the SSL certificate validation now implemented in resxtop, and I provided a few workarounds you can use until a fix is released. As promised at the end of that article, I will show you how to automate the creation of proper certificates for environments using CA self-signed SSL certificates, so you can continue using resxtop with ESXi 5.1 until a fix is released.
With the vSphere Web Client being the current and future client user interface for vCenter Server managed objects and resources, I thought it would be a good idea to show folks how to enable the vSphere Storage Appliance (VSA) 5.1 Manager plug-in in the new vSphere Web Client. In most cases, VSA customers tend to manage their VSA-based infrastructures from the C#-based vSphere Client (thick client). With the new capabilities of the vSphere Web Client, using the C# client to manage VSA-based infrastructures is no longer a requirement.
The vSphere Web Client becomes very appealing in scenarios where remote or local offices can’t provide accessible Windows-based systems on which the vSphere C# Client can be installed, whether for security or budgetary reasons. The vSphere Web Client is a compelling and convenient solution for ROBO scenarios as long as there is connectivity to the management environment where the vCenter Server and VSA Manager reside.
Having the capability to administer and manage any VSA based infrastructures from any operating system (Windows, Linux, Apple) with a browser that supports Flash can be powerful and cost effective.
The process for enabling the VSA Manager plug-in in the vSphere Web Client is not as simple as for some of the other vCloud Suite components, but this is something that could be addressed in a future release. This solution is currently supported only with Windows-based vCenter Server deployments, not with the vCenter Server Appliance. Follow the steps and examples below to successfully enable the VSA 5.1 plug-in for the vSphere Web Client.
If you have recently installed the latest vCLI 5.1 release and are using the remote resxtop utility to connect to a vSphere 5.1 host, you might have noticed one of the following error messages:

Login failed, reason: HTTPS_CA_FILE or HTTPS_CA_DIR not set

SSL Exception: Verification parameters
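Until a fix ships, one commonly used workaround is to point the Perl HTTPS stack that resxtop relies on at a certificate it can validate via the HTTPS_CA_FILE environment variable. The hostname and file paths below are assumptions for illustration:

```shell
# Illustrative workaround: make the host's certificate available for validation.
# Hostname and paths are placeholders.

# Grab the ESXi host's SSL certificate (for a self-signed certificate, the
# host certificate itself can serve as the CA file):
echo | openssl s_client -connect esxi51.example.com:443 2>/dev/null \
  | openssl x509 > /tmp/esxi51-cacert.pem

# Point resxtop's underlying Perl SSL library at it:
export HTTPS_CA_FILE=/tmp/esxi51-cacert.pem

# Connect as usual:
resxtop --server esxi51.example.com
```

For hosts carrying certificates signed by an internal CA, set HTTPS_CA_FILE to the CA certificate instead of the host certificate.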
As you might know by now, with the release of vSphere 5.1, VMware has enhanced the vSphere Distributed Switch (VDS) both operationally and functionally. I talked about the new features briefly in the what’s new paper and also posted evaluation videos on some of the key features. Over the next couple of weeks, I am planning to post more technical details about some of these new features. Since there have been lots of questions about the new BPDU filter feature, I thought I would address it in this post.
First of all, I will provide some background on BPDU frames for those who are new to this particular networking term. BPDU stands for Bridge Protocol Data Unit. These are the packets exchanged between physical switches as part of the Spanning Tree Protocol (STP). STP is used to prevent loops in the network and is enabled on the physical switches. When a link on a physical switch port comes up, STP starts its calculation and BPDU exchange to determine whether the port should be in the forwarding or blocking state. These BPDU exchanges across the physical switch ports help identify the root bridge and form a tree topology. VMware’s virtual switch doesn’t support STP and thus doesn’t participate in BPDU exchanges. If a BPDU packet is received on an uplink, the VDS drops it; the VDS also doesn’t generate BPDU packets.
The STP process of identifying the root bridge and determining whether switch ports should forward or block takes somewhere around 30 to 50 seconds, during which no data can pass through those ports. If a server connected to a port can’t communicate for that long, the applications running on it will time out. To avoid these timeouts, the best practice is to enable the Port Fast configuration on switch ports where server NICs are connected. Port Fast puts the switch port immediately into the STP forwarding state. The port still participates in STP, and if a loop is detected the port can enter the blocking state.
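On the ESXi side, the new BPDU filter introduced in vSphere 5.1 is toggled through a host advanced setting (Net.BlockGuestBPDU). A minimal sketch, assuming a local or remote ESXCLI session against the host:

```shell
# Enable the guest BPDU filter on an ESXi 5.1 host.
# 1 = drop BPDU frames generated by virtual machines; 0 (the default) = disabled.
esxcli system settings advanced set -o /Net/BlockGuestBPDU -i 1

# Confirm the current value:
esxcli system settings advanced list -o /Net/BlockGuestBPDU
```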
The third installment of this blog series covers the vSphere Storage Appliance (VSA) 5.1 and its support for the remote office/branch office (ROBO) use case. With the release of VSA 5.1, VMware introduced the ability to centrally manage VSA implementations across remote office/branch office (ROBO) sites. This is a compelling solution for customers required to manage, operate, and maintain ROBO environments. Some of the core benefits provided by the VSA address the most essential requirements of any business:
Reduce management efforts
Provide, maintain or increase application and infrastructure availability
The new features and capabilities introduced with the vSphere Storage Appliance (VSA) 5.1 greatly enhance its usability and the scenarios in which it can be deployed, including scenarios where each remote location has dedicated hardware and personnel to manage and support its infrastructure individually, such as the scenario illustrated below.
VSA Scenario – Multiple Remote Locations with Decentralized Management