Configure DHCP and TFTP for Auto Deploy

In the previous post, we covered Enabling Auto Deploy on vCenter Server Appliance 6.

There are several more steps that need to be taken to get Auto Deploy configured correctly.

In this post we discuss the next step in our journey to running Auto Deploy in your environment: configuring DHCP and TFTP.
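To make the destination concrete before diving in, recall the pieces involved: the DHCP server must point PXE-booting hosts at a TFTP server (DHCP option 66) and name the boot file to fetch (option 67), with the boot file itself coming from the TFTP Boot Zip downloaded from vCenter. The snippet below renders a candidate ISC DHCP stanza; it is an illustrative sketch, not an official VMware sample, and every address in it is a placeholder assumption for a lab.

```python
# Illustrative sketch only (not an official VMware sample): render the DHCP
# stanza a PXE-booting ESXi host typically needs to reach Auto Deploy.
# Every IP address below is a placeholder assumption for a lab network.
DHCP_STANZA = """\
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    option routers 192.168.1.1;
    next-server 192.168.1.10;                # TFTP server (DHCP option 66)
    filename "undionly.kpxe.vmw-hardwired";  # boot file from the TFTP Boot Zip (option 67)
}
"""

# Write the stanza to a scratch file for review before merging it into the
# DHCP server's configuration.
with open("dhcpd-autodeploy.conf", "w") as handle:
    handle.write(DHCP_STANZA)
print(DHCP_STANZA)
```

Continue reading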

Elite Database Workshop Program - SQL Server Event #2

The 3rd VMware Elite Database Workshop is set to start at 7 AM on Tuesday, April 21. The program is an invitation-only event for some of the world's top SQL Server experts, in which VMware selects groups of database specialists across various disciplines, currently Oracle and SQL Server and soon to include Big Data and other data-focused areas. The experience begins with executive welcomes from both VMware and the storage partner delivering the particular event; in this case the partner is Tintri, which is providing an 880 high-end flash array for the customized labs that will challenge both the creativity and technical expertise of the workshop attendees. The bulk of the three days will be filled with presentations and open discussion with VMware engineers and product specialists. Well-known industry luminaries such as Richard McDougall and Jeff Buell will join the rich lineup over these three days. The attendee list includes Steve Jones, Tim Ford, Arnie Rowland, Wendy Pastrick, Eddie Wuerch, Robert Davis, Sean McCown, Allan Hirt, Geoff Hiten, Randy Knight, Chris Shaw, Jason Strate, Brandon Leach, Shawn Meyers, and Melissa Connors. Alumni guests include David Klee, Mike Corey, and Denny Cherry.

 

Run Containers on vSphere with Project Photon and Project Lightwave

When you think of VMware, virtualization clearly jumps to mind. But take a step back and virtualization is really a means to an end. IT pros don't earn their salary because they run virtual machines; they earn it because VMs support application services that are essential to the business, ultimately contributing to the bottom line. VMware is focused on providing the best place to run any application; from LAMP stacks to business-critical workloads to big data analytics, vSphere can handle it all.

Project Photon

Two open source projects were just announced by the Cloud-Native Apps group: Project Photon and Project Lightwave. Both will be foundational elements for running Linux containers and supporting next-generation application architectures. This marks a big milestone in the lifecycle of VMware Cloud-Native Apps. At first glance the projects may seem far more relevant to application developers than to the traditional vSphere audience, but there really is a great tie-in to the Software-Defined Data Center.

If you're a vSphere administrator, an important part of your role is supporting the developers who create the apps that run on your infrastructure. There is a shift underway among developers right now: a move from the traditional waterfall model to agile, continuous integration. For a specific example of the change in mindset from previous software development processes, check out The Twelve-Factor App to see why the container enthusiasm really starts to make sense.

Today, customers trust their Software-Defined Data Center based on VMware infrastructure for any app. It would be a shame if a new platform for applications came along and brought back the silos of yesteryear. This is why vSphere admins should care about next-generation applications and the corresponding infrastructure. The container runtime becomes another essential component of the infrastructure, and it should be integrated for seamless operation. With Photon, VMware is going to make it easy to run containers alongside all of the other workloads - no silos here!

Photon is going to be available in places where developers expect to find it.  For example, many developers use HashiCorp Vagrant as an easy means of pulling down standardized VM images from a central repository.  A Photon image will be available there and elsewhere, enabling the same container runtime on laptops, in the datacenter, and in public clouds.

Administrators will like the fact that Photon has a small footprint because it is not weighed down with all of the packages typically found on a Linux system, and one can draw parallels with the VMware ESXi thin hypervisor.  Less is more when it comes to infrastructure – fewer patches, less administration, and improved SLAs are among the key benefits.

The companion open source project - Lightwave - is an authorization and authentication platform with origins in the vSphere platform. It provides multi-master replication for scalable HA and flexible topology choices to accommodate any architecture.

There is great integration between Lightwave and Photon. In fact, Lightwave is designed to run directly on Photon instances; no general-purpose OS is needed. Take a look at this demo video, in which a new Lightwave domain is created, Photon clients are joined to the domain, and SSH logins are authenticated against directory credentials, eliminating the need to manage local user accounts.

Linux containers are all the rage right now, but it’s not a zero-sum proposition.  Containers run great on vSphere and VMware is investing accordingly.  VMware SDDC administrators can be confident that their platform is, and will be, the best for any application - with the security, manageability, and governance that enterprises need.

vSphere Replication Target Storage Consumption

vSphere Replication is an asynchronous, host-based replication feature that is included with vSphere Essentials Plus Kit and higher editions. It can be used as a standalone solution for simple, storage-agnostic, cost-effective virtual machine replication. vSphere Replication also serves as a replication component for VMware vCenter Site Recovery Manager (SRM) and VMware vCloud Air Disaster Recovery. When replication is configured for a powered-on virtual machine, vSphere Replication starts replicating the files that make up the virtual machine from the source location to the target location. A question that comes up sometimes is "How much storage will be consumed by the virtual machine at the target location?" As with many questions like this, the short answer is "It depends." :)
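Before getting into the specifics, a back-of-the-envelope model helps frame what it depends on. The sketch below is a simplification, not an official VMware sizing formula: replication transfers the blocks that are actually in use, so a replica kept thin consumes roughly the guest's used space, while a thick replica consumes the full provisioned size, and retaining multiple points in time adds space proportional to the churn between recovery points.

```python
# Back-of-the-envelope sketch (a simplification, not an official VMware
# sizing formula) of replica storage consumption at the target datastore.

def estimate_target_gb(provisioned_gb: float, used_gb: float,
                       thin_target: bool) -> float:
    """Rough estimate of one replica disk's consumption at the target."""
    return used_gb if thin_target else provisioned_gb

# Example: a 100 GB virtual disk with 40 GB actually written in the guest.
print(estimate_target_gb(100, 40, thin_target=True))   # ~40 GB as a thin replica
print(estimate_target_gb(100, 40, thin_target=False))  # 100 GB as a thick replica
```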

Continue reading

vSphere Hardening Guide 6.0 Public Beta 1 available

I’m happy to announce that the vSphere 6 Hardening Guide Public Beta 1 is now available.

The guide is being provided as an Excel spreadsheet. I'm also making a PDF available for easier viewing. In addition, I've included an Excel spreadsheet of the guidelines that have moved out of the guide and into the documentation. THIS IS INCOMPLETE. We are still working on some of that content. (That's why this is a beta!)

Please read the blog post below on the changes that have been made to the guide. There are LOTS of changes, and the post explains them all.

vSphere 6.0 Hardening Guide – Overview of coming changes | VMware vSphere Blog - VMware Blogs

Continue reading

SDS - The Missing Link - Storage Automation for Application Service Catalogs

Automation technologies are a fundamental dependency for all aspects of the Software-Defined Data Center. The use of automation not only increases the overall productivity of the software-defined data center, but it can also accelerate the adoption of today's modern operating models.

In recent years, a subset of the core pillars of the software-defined data center has seen a great deal of improvement with the help of automation. The same can't be said about storage. The lack of management flexibility and capable automation frameworks has kept storage infrastructure from delivering operational value and efficiencies similar to those available from the compute and network pillars.

VMware’s software-defined storage technologies and its storage policy-based management framework (SPBM) deliver the missing piece of the puzzle for storage infrastructure in the software-defined data center.
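To make the idea concrete, here is a toy model of how policy-based placement reasons about storage. It illustrates the concept only and is not the vSphere API; the datastore names and capability keys are invented for the example.

```python
# Toy illustration of storage policy-based management (SPBM) concepts.
# NOT the vSphere API: datastore names and capability keys are invented.

datastores = {
    "gold-vvol":   {"replication": True,  "flash": True,  "encryption": True},
    "silver-nfs":  {"replication": True,  "flash": False, "encryption": False},
    "bronze-vmfs": {"replication": False, "flash": False, "encryption": False},
}

def is_compatible(policy: dict, capabilities: dict) -> bool:
    """A datastore is compatible when it satisfies every policy requirement."""
    return all(capabilities.get(key) == want for key, want in policy.items())

# A policy declares what the workload needs; the platform finds compliant storage.
tier1_policy = {"replication": True, "flash": True}
print([name for name, caps in datastores.items()
       if is_compatible(tier1_policy, caps)])   # -> ['gold-vvol']
```

Automation frameworks can consume these same policies, so a service catalog request maps to a policy rather than to a specific array or LUN.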

Challenges - Old School

Traditional data centers are operated, managed, and consumed in models that are mapped to silos and statically defined infrastructure resources. However, businesses lean heavily on their technology investments to be agile and responsive in order to gain a competitive edge or reduce their overhead expenses. As such, there is a fundamental shift taking place within the IT industry, a shift that aims to change the way in which data centers are operated, managed, and consumed.

In today's hardware-defined data centers, external storage arrays drive the de facto standards for how storage is consumed. The limits of the data center are tied to the limits of the infrastructure. The consumers of storage resources face a number of issues because traditional storage systems are primarily deployed in silos, each with unique features and a unique management model. Each has its own constructs for delivering storage (LUNs, volumes), regularly handled through separate workflows and even different teams, which means that delivering new resources and services takes significantly longer.

Critical data services are often tied to specific arrays, obstructing standardized, aligned approaches across multiple storage types. Overall, creating a consistent operational model across multiple storage systems remains a challenge. Because storage capabilities are tied to static definitions, there are no performance guarantees, multi-tenant workloads risk impacting one another, all disks are treated the same, and OPEX is tied to the highest common denominator.

Continue reading

Virtual Volumes and the SDDC

I saw a question the other day that asked “Can someone explain what the big deal is about Virtual Volumes?” A fair question.

The shortest, easiest answer is that VVols offer per-VM management of storage that helps deliver a software-defined data center.

That, however, is a pretty big statement that requires some unpacking. Rawlinson has done a great job of showcasing Virtual Volumes already, and has talked about how it simplifies storage management, puts VMs in charge of their own storage, and gives us more fine-grained control over VM storage. I will dive into more detail on the technical capabilities in a future post, but first let's take a broader look at why this really is an important shift in the way we do VM storage; the toy sketch below makes the contrast concrete.
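This sketch illustrates only the granularity shift and is not the vSphere API; the VM names and service levels are invented for the example.

```python
# Toy illustration (not the vSphere API) of per-LUN service levels versus
# per-VM storage policies. All names and tiers are invented.

# LUN-centric model: every VM placed on a LUN inherits that LUN's single
# service level, whether the VM needs it or not.
lun_service_level = {"LUN-42": "gold"}
vms_on_lun = ["web01", "db01", "test01"]

# VVols-style model: each VM (really, each of its storage objects) carries
# its own policy, independent of its neighbors on the same storage.
vm_policy = {"web01": "silver", "db01": "gold", "test01": "bronze"}

for vm in vms_on_lun:
    print(f"{vm}: LUN model -> {lun_service_level['LUN-42']}, "
          f"per-VM model -> {vm_policy[vm]}")
```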

Continue reading

vSphere Virtual Volumes Webinar Series: Hitachi

On Wednesday, April 15, Hitachi is hosting a one-hour technical webinar to discuss how Hitachi Storage for VMware Virtual Volumes can take customers on a reliable enterprise journey to a software-defined, policy-controlled data center.

The webinar covers more than just the technical aspects of Virtual Volumes; it also addresses the operational value and efficiency Hitachi delivers with its unique implementation, including the technical and implementation details of Hitachi's multi-protocol support and the storage capabilities offered to virtual machines and their individual objects.

Webinar attendees will learn about:

  • The simplification of storage related operations for vSphere administrators
  • The increase in manageability for the vSphere infrastructure, and greater levels of agility and efficiency driven by a policy-based management and operating model

The event will be led by Paul Morrissey, Director of Product Management, Storage, Virtualization & Application at Hitachi Data Systems, and myself.

Register for the event using the link below, and don't miss it:

Delivering Simplified IT with VMware vSphere Virtual Volumes and Hitachi

- Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVols) and other Software-defined Storage technologies, as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

SRM - Array Based Replication vs. vSphere Replication

SRM supports two different replication technologies: storage array-based replication and vSphere Replication. One of the key decisions when implementing SRM is which technology to use, and for which VMs. The two technologies can be used together in an SRM environment, though not to protect the same VM. Given that, what are the differences, and why would you use one over the other? The table in the full post provides all the answers you need:

Continue reading

Oracle Licensing Discussion - The Definitive Collateral Collection

The Oracle licensing webinar was broadcast on March 19, 2015, and the replay is available at the link below. The webinar, hosted by Database Trends and Applications (DBTA), was delivered as a collaborative effort between VMware and a number of VMware partners. Don Sullivan moderated the event, which included individual presentations of approximately 15 minutes each, followed by a robust question-and-answer session. The presenters, who delivered their own customized corporate presentations, included the world-renowned founder of House of Brick, Dave Welch, followed by License Consulting founder Daniel Hesselink. The Independent Oracle Users Group (IOUG) also joined us on the eve of their flagship event, Collaborate, taking place in April at Mandalay Bay in Las Vegas. Dan Young, the President of the new IOUG VMware Special Interest Group (SIG) and the Director of Database Services at Indiana University, joined the conversation as well. An article describing the event is included below, along with the contact information for these partners.

The "Understanding Oracle Licensing, Certification and Support" guide, updated in March 2015 is included in this list along with the results of the IOUG membership survey from 2014 and a link to sign up for the VMware IOUG SIG.

1. Oracle on VMware Licensing Webinar or Oracle on VMware Licensing Webinar Direct Link to DBTA

2. The article corresponding to the webinar

3. Updated "Understanding Oracle Licensing, Certification and Support" VMware guide

4. List of the Oracle licensing consulting partners: House of Brick (Dave Welch) and License Consulting (Daniel Hesselink)

5. IOUG Oracle virtualization platform decision survey report

6. IOUG VMware Special Interest Group (SIG) sign-up