
Monthly Archives: August 2015

VMware & Rackspace Collaborate on an Interoperable OpenStack Cloud Architecture

This week at VMworld 2015 in San Francisco, VMware and Rackspace announced a new interoperable OpenStack cloud architecture. OK, that’s a big description for something that is actually very simple in concept, and highly valuable to customers who want to quickly get up and running with production-grade OpenStack clouds without a lot of the complexity associated with OpenStack implementations.

The basic premise of this architecture is that the end game for organizations is, and should be, a focus on delivering OpenStack Infrastructure as a Service to their users. If you write your applications and infrastructure automation using standard OpenStack APIs, then that automation should work with any OpenStack cloud regardless of the underlying technologies. This promise of OpenStack has enormous value for customers. If your application and infrastructure automation are portable, your business can move across clouds, leverage regional OpenStack clouds to expand into new geographies, and swap out vendors to meet changing business requirements.
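As a hedged illustration of that portability (the endpoints, credentials, and resource names below are invented for this sketch), the same CLI-driven automation can be retargeted at a different OpenStack cloud just by changing the standard authentication environment variables:

# Point the standard OpenStack clients at cloud A (e.g., a VMware Integrated OpenStack endpoint).
export OS_AUTH_URL=https://vio.example.com:5000/v2.0
export OS_TENANT_NAME=engineering
export OS_USERNAME=demo
export OS_PASSWORD=secret

# To target cloud B (e.g., a Rackspace Private Cloud endpoint), only the variables change;
# the automation below stays exactly the same:
# export OS_AUTH_URL=https://rpc.example.com:5000/v2.0

# Standard OpenStack API calls that run unchanged against either cloud.
nova boot --image ubuntu-14.04 --flavor m1.small web-01
neutron net-list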

With the interoperable OpenStack cloud architecture, customers can start with either VMware Integrated OpenStack or Rackspace Private Cloud powered by OpenStack. Customers can build all their infrastructure automation using standard OpenStack APIs on top of these platforms. Our two companies will work to ensure this automation works with the respective OpenStack clouds. This is in line with the direction the OpenStack Foundation is taking to enable multi-cloud environments via identity federation.

We are excited to announce the interoperable OpenStack cloud architecture and look forward to engaging with customers. Here’s what the architecture looks like:

VMware + Rackspace Interoperable OpenStack Architecture

For customers that want to run heterogeneous infrastructure underneath the OpenStack API layer, we believe the best path is a multi-vendor OpenStack deployment. In this way, their environments are optimized for the underlying infrastructure, which improves operations and simplifies deployment and management of their OpenStack clouds. We are collaborating with Rackspace on exactly this type of multi-vendor architecture for OpenStack – one that removes lock-in at the infrastructure layer.

Amr

Introducing VMware Integrated OpenStack 2

It is that magical time of the year again, when VMware is honored to host more than 23,000 attendees at VMworld 2015 in San Francisco. The event is an annual destination for organizations looking to learn more about technology, innovation and how they can be more awesome in their jobs.

We are excited to announce VMware Integrated OpenStack 2 just six months after we released version 1.0 for general availability. The new release is expected to be available for download before the end of Q3 2015. Here’s what’s new in this release:

  • Kilo-based: VMware Integrated OpenStack 2.0 will be based on the OpenStack Kilo release, making it current with upstream OpenStack code.
  • Seamless OpenStack Upgrade: VMware Integrated OpenStack 2.0 will introduce an industry-first seamless upgrade capability between OpenStack releases. Customers will now be able to upgrade from V1.0 (Icehouse) to V2.0 (Kilo), and even roll back if anything goes wrong, in a more operationally efficient manner.
  • Additional Language Support: VMware Integrated OpenStack 2.0 will now be available in six more languages: German, French, Traditional Chinese, Simplified Chinese, Japanese and Korean.
  • LBaaS: Load Balancing as a Service will be supported through VMware NSX.
  • Ceilometer Support: VMware Integrated OpenStack 2.0 will now support Ceilometer with MongoDB as the backend database.
  • App-Level Auto Scaling using Heat: Auto scaling will enable users to set up metrics that scale application components up or down. This will enable development teams to address unpredictable changes in demand for the app services. Ceilometer will provide the alarms and triggers, Heat will orchestrate the creation (or deletion) of scale-out components, and LBaaS will provide load balancing for the scale-out components.
  • Backup and Restore: VMware Integrated OpenStack 2.0 will include the ability to back up and restore OpenStack services and configuration data.
  • Advanced vSphere Integration: VMware Integrated OpenStack 2.0 will expose vSphere Windows Guest Customization. VMware admins will be able to specify various attributes, such as the ability to generate new SIDs, assign admin passwords for the VM, manage computer names, etc. There will also be added support for more granular placement of VMs by leveraging vSphere features such as affinity and anti-affinity settings.
  • Qcow2 Image Support: VMware Integrated OpenStack 2.0 will support the popular qcow2 virtual machine image format (see the upload sketch after this list).
  • Available through our vCloud Air Network Partners: Customers will be able to use OpenStack on top of VMware through any of the service providers in our vCloud Air Network.
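As a small, hedged sketch of the qcow2 support mentioned above (file and image names are placeholders, using the Kilo-era Glance CLI), uploading a qcow2 image looks like any other Glance upload:

# Upload a qcow2 disk image into Glance; the file name is illustrative.
glance image-create --name "ubuntu-14.04-qcow2" \
  --disk-format qcow2 \
  --container-format bare \
  --is-public True \
  --file ubuntu-14.04-server.qcow2

# Confirm the image is available to boot instances from.
glance image-list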

Feel free to join us at the many OpenStack sessions at VMworld in SF, and for details about the VMworld sessions, check here.

We also encourage you to check out all of the great information we’ve put together for VMware Integrated OpenStack.

We look forward to seeing you there.

The OpenStack @ VMware Team

VMware Integrated OpenStack Video Series: Working with Instances

In our last installment, we discussed the simplicity of the VMware Integrated OpenStack deployment process. Today, we will discuss how VMware Integrated OpenStack users can provision virtual machines. First, we need to get familiar with some OpenStack terminology:

  • Instance – a running virtual machine in your environment. The OpenStack Nova service provides users with the ability to manage hypervisors and deploy virtual machines.
  • Image – similar in concept to a VM template. The OpenStack Glance service maintains a collection of images from which users will deploy their instances.
  • Volume – this is an additional virtual disk (VMDK) that is attached to a running instance. Volumes can be added to instances ad hoc via the OpenStack Cinder service.
  • Flavor – an allocation of resources (e.g., number of vCPUs, storage, RAM).
  • Security Group – rules governing network access to your deployed instance (ex: this instance may be accessed via TCP port 22 from a certain IP range).
  • Network – the VMware vSphere port group to which your instance will be attached. Port groups are automatically created by the OpenStack Neutron service.
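Each of the concepts above maps directly to a CLI command. As a quick sketch (your environment will list different entries), you can see what is available in your project before deploying anything:

nova flavor-list       # flavors: the vCPU/RAM/disk allocations you can choose from
glance image-list      # images you can deploy instances from
neutron net-list       # networks (vSphere port groups) an instance can attach to
nova secgroup-list     # security groups and their access rules
cinder list            # volumes in your project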

OpenStack emphasizes the capability for users to manage their infrastructure programmatically through REST APIs, and this is exhibited in the multiple ways that a user can deploy an instance. The Horizon GUI provides the capability to launch instances with a point-and-click interface. The Nova CLI provides users with simple commands to deploy their instances, and these commands can be combined in shell scripts.
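For example, here is a hedged sketch of an instance deployment with the Nova CLI; the image and flavor names are placeholders, and the network UUID is the one used in the API example further below:

# Boot an instance named "apitest" from an image, flavor, network, and security group.
nova boot apitest \
  --image ubuntu-14.04 \
  --flavor m1.small \
  --nic net-id=a722cb2b-f041-40b1-ad6a-74a27d30539a \
  --security-groups default

# Watch the instance go from BUILD to ACTIVE.
nova list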

For users who want even more control and flexibility over instance deployment, the REST APIs can be leveraged. The important thing to note is that regardless of the interface the user selects, the REST API is utilized behind the scenes. For example, if I use the nova boot CLI command, it translates my simple inputs into an HTTP request that the Nova service will understand.

If you would like to see the API calls generated by your CLI commands, you can use the "--debug" option with the CLI tools (ex: nova --debug boot …). An example HTTP request generated by the nova boot CLI command is included below:

curl -g -i -X POST https://vio-dashboard.eng.vmware.com:8774/v2/b228bcefad9f487fb6ae4821bfb90130/servers \
  -H "User-Agent: python-novaclient" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "X-Auth-Token: {SHA1}c1ef2534845b985dc4c52b803e357c08daea265b" \
  -d '{
    "server": {
      "name": "apitest",
      "imageRef": "0723d0ac-9a08-49f5-9160-97efe05aa6ca",
      "flavorRef": "2",
      "max_count": 1,
      "min_count": 1,
      "networks": [{"uuid": "a722cb2b-f041-40b1-ad6a-74a27d30539a"}],
      "security_groups": [{"name": "default"}]
    }
  }'

My instance name ("apitest") may seem too generic, and it's possible that another user may use the same name. Not to worry: instance names do not need to be unique, because OpenStack identifies all resources, including instances, by unique identifiers. In the sample code above, my source image, flavor, and network are all identified by their unique identifiers. What about vCenter? In vCenter, my virtual machine's name includes its OpenStack identifier:

 

How vCenter Displays an OpenStack Instance
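To confirm the mapping from the CLI side, the same identifier is visible in the instance details; this is a hedged sketch, and the output will vary in your environment:

# List instances along with their UUIDs.
nova list

# The "id" field in the instance details is the UUID that shows up
# as part of the virtual machine's name in vCenter.
nova show apitest | grep " id "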

As we saw in the code above, the user specifies the source image, flavor, network, and security group during instance deployment. In the background, the user’s credentials and the interactions between the various OpenStack components are authenticated by the OpenStack Identity service (Keystone). The following graphic provides an illustration of these interactions:

 

VMware Integrated OpenStack Component Interaction
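For example, the X-Auth-Token header in the earlier nova boot request is a token issued by Keystone. A hedged sketch of requesting one directly against the Keystone v2.0 API (the endpoint and credentials are placeholders) looks like this:

# Request a token from Keystone; the access.token.id value in the response is
# what the other OpenStack services expect in the X-Auth-Token header.
curl -s -X POST https://vio-keystone.example.com:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "demo",
                "passwordCredentials": {"username": "demo", "password": "secret"}}}'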

Check out the following video to see instance deployment in action with the Horizon GUI and the Nova CLI:

 

Stay tuned for next week’s blog post when we discuss working with OpenStack networks! In the meantime, you can learn more on the VMware Product Walkthrough site and on the VMware Integrated OpenStack product page.

OpenStack @ VMworld US 2015!

At VMworld US 2015, there are many sessions for attendees to learn more about what VMware is doing with OpenStack.

Don’t miss out on hearing about best practices for running OpenStack on the vSphere platform, including lessons learned from deployments. All the OpenStack-related sessions are listed at the end of this post.

We also have an updated Hands-On Lab for you to try, SPL-SDC-1620 – OpenStack with VMware vSphere and NSX, and the accompanying Expert-Led Workshop, ELW-SDC-1620 – OpenStack with VMware vSphere and NSX Workshop.

One more thing: VMware Integrated OpenStack will be featured in the DevOps @ VMware program on Tuesday, September 1 at 5 PM. Come check that out as well.

See you in San Francisco!

OpenStack @ VMworld sessions, panels, and group discussions

  1. INF6108 – Something Broke, What Now? Managing and Troubleshooting OpenStack Environments
  2. MGT5151 – vRealize Automation or VMware Integrated OpenStack or Both?
  3. NET5836 – OpenStack with NSX Architecture Deep Dive
  4. NET6609-GD – NSX Networking for OpenStack on vSphere
  5. SDDC4955 – VMware Integrated OpenStack (VIO) on Federation Enterprise Hybrid Cloud
  6. SDDC5094 – A Technical Deep Dive into VMware Integrated OpenStack
  7. SDDC5113 – Everything You Need to Know About VMware + OpenStack
  8. SDDC5566 – Successful DevOps for the Hybrid Cloud with vRealize Automation & VMware Integrated OpenStack
  9. SDDC5839 – vRealize Automation or OpenStack? Uncovering the Right IaaS for Your Business

VMware Integrated OpenStack Video Series: OpenStack Deployment

Today’s entry is the start of a blog series that will cover many aspects of VMware Integrated OpenStack.

OpenStack deployments usually have at least one physical server or virtual machine that is designated to be the “build server”. This build server deploys and configures the various components that make up the control plane including the Nova services that manage the hypervisor components, the Neutron networking services, and so on.

VMware Integrated OpenStack also provides a build server that is referred to as the OpenStack Management Server (OMS). The OMS is packaged in an OVA that also contains an Ubuntu VM template. During OpenStack deployments, the OMS clones the VM template to build the OpenStack control plane (ex: controller, database cluster, etc.). The following image illustrates the components that get deployed on the management cluster of your OpenStack deployment.

VMware Integrated OpenStack Control Plane

The OpenStack deployment process happens in two phases:

  1. The VMware Integrated OpenStack vApp deployment
  2. The OpenStack control plane deployment

Both phases of the deployment happen within the VMware vSphere Web Client so that IT administrators can use a familiar interface to deploy and manage their OpenStack installation.  The following videos demonstrate the complete VMware Integrated OpenStack deployment process.

The VMware Integrated OpenStack vApp deployment process:

 

The OpenStack control plane deployment process:

 

Stay tuned for next week’s blog post about user interactions with OpenStack! In the meantime, you can learn more on the VMware Product Walkthrough site and on the VMware Integrated OpenStack product page.