
Tag Archives: VMware Integrated OpenStack

16 Partners, One Live Demo – OpenStack Barcelona Interop Challenge

 

During Wednesday morning’s keynote session at the OpenStack Summit in Barcelona, I will be on stage along with several other vendors to show off some of our latest work on interoperability between OpenStack clouds. We will be demonstrating a single workload running successfully on over a dozen different vendors’ OpenStack products without modification, including VMware Integrated OpenStack 3.0. The idea got started back in April at the last OpenStack Summit, when our friends at IBM publicly challenged vendors to demonstrate that their products were interoperable.


VMware has long been a proponent of fostering interoperability between OpenStack clouds.  I currently co-chair the Interop Working Group (formerly known as the DefCore Committee), and VMware Integrated OpenStack 3.0 is an approved OpenStack-Powered product that is compliant with the 2016.08 interoperability guideline, the newest and strictest guideline approved by the OpenStack Foundation Board of Directors.  We also helped produce the Interop Working Group’s first ever report on interoperability issues.  So why do we care about interoperability?  Shouldn’t everything built on OpenStack behave the same anyhow?  Well, to quote the previously mentioned report on interoperability issues:

 

“OpenStack is tremendously flexible, feature-rich, powerful software that can be used to create clouds that fit a wide variety of use cases including software development, web services and e-commerce, network functions virtualization (NFV), video processing, and content delivery to name a few. Commercial offerings built on OpenStack are available as public clouds, installable software distributions, managed private clouds, appliances, and services. OpenStack can be deployed on thousands of combinations of underpinning storage, network, and compute hardware and software. Because of the incredible amount of flexibility OpenStack offers and the constraints of the many use cases it can address, interoperability between OpenStack clouds is not always assured: due to various choices deployers make, different clouds may have some inconsistent behaviors.  One of the goals of the [Interop Working Group]’s work is to create high interoperability standards so that end users of clouds can expect certain behaviors to be consistent between different OpenStack-Powered Clouds and products.”

 

Think of it this way: another amazingly flexible, powerful thing we use daily is electricity. Electricity is pretty much the same stuff no matter who supplies it to you or what you are using it for, but the way you consume it might be different for different use cases. The outlet I plug my laptop into at home is a different shape and supplies a different voltage than the one my electric oven is connected to, since the oven needs a lot more juice to bake my cookies than my laptop does to type up a blog post. My home’s air conditioner does not even have a plug: it is wired directly into the house’s circuit breaker. I consume most of my electricity as a service provided by my power company, but I can also generate some of my power with solar panels I own myself, as long as their output can be connected to my power grid. Moreover, to power up my laptop here in Barcelona, I brought along a plug adapter, since Europe’s power grid is built on a different set of standards and requirements. However, even though there are some differences, there are many commonalities: electricity is delivered over metal wiring, terminated at some wall socket, most of the world uses one of a few different voltage ranges, and you pay for it based on consumption. OpenStack is similar: an OpenStack deployment built for NFV workloads might have some different characteristics and interfaces exposed than one built as a public compute cloud.

 

What makes the Interop Challenge interesting is that it is complementary to the work of the Interop Working Group in that it looks at interoperability in a slightly different light. To date, the Interop Working Group has mostly focused its efforts on API-level interoperability. It does so by ensuring that products bearing the OpenStack-Powered mark pass a set of community-maintained Tempest tests to prove that they expose a set of capabilities (things like booting up a VM with the Nova v2 API or getting a list of available images using the Glance v2 API). Products bearing the OpenStack-Powered logo are also required to use designated sections of upstream code, so consumers know they are getting community-developed code driving those capabilities. While the Interop Working Group’s guidelines look primarily at the server side of things, the Interop Challenge addresses a slightly different aspect of interoperability: workload portability. Rather than testing a particular set of APIs, the Interop Challenge takes a client-side approach by running a real workload against different clouds—in this case, a LAMP stack application with a load-balanced web server tier and a database backend tier, all deployed via Ansible. The idea was to take a typical application with commonly-used deployment tools and prove that it “just works” across several different OpenStack clouds.

 

In other words, the guidelines produced by the Interop Working Group assure you that certain capabilities are available to end users (just as I can be reasonably confident that any hotel room I walk into will have a socket in the wall from which I can get electricity). The Interop Challenge complements that by looking at a more practical question: it verifies that I can plug in my laptop and get some work done.

 

Along the way, participants also hoped to begin defining some best practices for making workloads more portable among OpenStack clouds, to account for some of the differences that are a natural side effect of OpenStack’s flexibility. For example, we found that the LAMP stack workload was more portable if we let users specify certain attributes of the cloud they intended to use – such as the name of the network the instances should be attached to, the image and flavor that should be used to boot up instances, and block device or network interface names that would be utilized by that image. Even though we will only be showing one particular workload on stage, that one workload serves as a starting point to help flesh out more best practices in the future.
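To give a flavor of what that parameterization looks like, here is a minimal sketch in Ansible using the os_server module shipped with Ansible 2.x; every name and value below is a hypothetical example, not the actual Interop Challenge playbook:

```yaml
# vars.yml: cloud-specific inputs the user supplies for each target cloud
# (all names and values below are hypothetical examples)
network_name: private
image_name: ubuntu-14.04-server
flavor_name: m1.medium

# Playbook task that boots an instance using those per-cloud values
- name: Boot a web tier instance on the target cloud
  os_server:
    name: web01
    image: "{{ image_name }}"
    flavor: "{{ flavor_name }}"
    network: "{{ network_name }}"
    state: present
```

Keeping the cloud-specific names in a per-cloud variables file means the same playbook can run unmodified against any of the participating clouds.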

 

If you want to learn more about VMware’s work on interoperability or about VMware Integrated OpenStack, see us at the keynote or stop by our booth at the OpenStack Summit. If you’re ready to deploy OpenStack today, download it now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-On-Lab, no installation required.


*This article was written by Mark Voelker – OpenStack Architect at VMware

Tired of Waiting? Deploy OpenStack in 15 Minutes or Less

Watch this video to learn how to deploy OpenStack in Compact Management Mode in under 15 minutes


If you’re ready to try VIO, take it for a spin with the Hands-on Lab, which provides a step-by-step walkthrough of deploying OpenStack in Compact Management Mode in under fifteen minutes.

Deploying OpenStack challenges even the most seasoned, skilled IT organizations: integrations, configurations, testing, re-testing, stress testing, and more. For many, deploying OpenStack becomes an IT ‘science project’ in which the light at the end of the tunnel dims with each passing month.

VMware Integrated OpenStack takes a different approach, reducing the redundancy and confusion of deploying OpenStack with the new Compact Management Control Plane. With the Compact Mode UI, you wait minutes, not months. Enterprises seeking to evaluate OpenStack, or those ready to build OpenStack clouds in the most cost-efficient manner, can now deploy in as little as 15 minutes.

 

The architecture for VMware Integrated OpenStack is optimized to support compact architecture mode, reducing support needs, overall resource costs, and the operational complexity that keeps enterprises from completing their OpenStack adoption.

The most recent update to VMware Integrated OpenStack focuses on ease of use and a major benefit to administrators: access and integration to the VMware ecosystem. Seamless integration with the family of VMware products lets administrators leverage their current VMware tools to enhance their OpenStack deployment, combined with the ability to manage workloads through developer-friendly OpenStack APIs.



If you’re ready to deploy OpenStack today, download it now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-On-Lab, no installation required.


You’ll be surprised what you can accomplish in 15 minutes

Introducing Senlin – a new tool for speedy, load-balanced OpenStack clustering

Senlin is a new OpenStack project that provides a generic clustering service for OpenStack clouds. It’s capable of managing homogeneous objects exposed by other OpenStack components, such as Nova, Heat, and Cinder, making it of interest to anyone using, or thinking of using, VMware Integrated OpenStack.

VMware OpenStack architect Mark Voelker, along with VMware colleague Xinhui Li and Qiming Teng of IBM, offer a helpful introduction to Senlin in their 2016 OpenStack Summit session, now viewable here.

 

Voelker opens by reviewing the generic requirements for OpenStack clustering, which include simple manageability, expandability on demand, load-balancing, customizability to real-life use cases, and extensibility.

 

OpenStack already offers limited cluster management capabilities through Heat’s orchestration service, he notes. But Heat’s mission is to orchestrate composite cloud apps using a declarative template format through an OpenStack-native API. While functions like auto-scaling, high availability, and load balancing are complementary to that mission, having those functions all in a single service isn’t ideal.

“We thought maybe we should think about cluster management as a first class service that everything else could tie into,” Voelker recalls, which is where Senlin comes in.

 

Teng then describes Senlin’s origin: it started as an effort to build clustering within Heat, but soon shifted to offloading Heat’s autoscaling capabilities into a separate project that could expand OpenStack’s autoscaling offerings more comprehensively, becoming OpenStack’s first dedicated clustering service.

 

Senlin is designed to be scalable, load-balanced, highly-available, and manageable, Teng explains, before outlining its server architecture and detailing the operations it supports. “Senlin can manage almost any object,” he says. “It can be another server, a Heat stack, a single volume or floating IP protocol, we don’t care. We wanted to just build a foundational service allowing you to manage any type of resource.”
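To give a concrete sense of that, here is a minimal sketch of a Senlin profile spec for Nova servers, the kind of object a cluster might be built from (the property values are hypothetical examples):

```yaml
# server-profile.yaml: a Senlin profile spec describing what each
# cluster member should look like (values are hypothetical)
type: os.nova.server
version: 1.0
properties:
  name: web-server
  flavor: m1.small
  image: ubuntu-14.04-server
  networks:
    - network: private
```

A profile like this is registered with Senlin once; the clustering service then uses it to create, scale, and recover identical members.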

To end the session, Li offers a demo of how Senlin creates a resilient, auto-scaling cluster with both high availability and load balancing in as little as five minutes.

 

If you want to learn more about clustering for OpenStack clouds created with VMware Integrated OpenStack (VIO), you can find expert assistance at our product homepage. Also check out our Hands-on Lab, or try VIO for yourself by downloading and installing VMware Integrated OpenStack directly.

Next Generation Security Services in OpenStack

OpenStack is quickly and steadily positioning itself as a great Infrastructure-as-a-Service solution for the Enterprise. Originally conceived for that proverbial DevOps Cloud use case (and as a private alternative to AWS), the OpenStack framework has evolved to add rich Compute, Network and Storage services to fit several enterprise use cases. This evolution can be evidenced by the following initiatives:

1) A greater number of commercial distributions available today, in addition to Managed Services and/or DIY OpenStack.
2) Diverse and expanded application and OS support vs. just Cloud-Native apps (a.k.a “pets vs. cattle”).
3) Advanced network connectivity options (routable Neutron topologies, dynamic routing support, etc.).
4) More storage options from traditional Enterprise storage vendors.

This is definitely great news, but one area where OpenStack has lagged behind is security. As of today, the only robust option for application security offered in OpenStack is Neutron Security Groups. The basic idea is that OpenStack Tenants can be in control of their own firewall rules, which are then applied and enforced in the dataplane by technologies like Linux iptables, OVS conntrack or, as is the case with NSX for vSphere, a stateful and scalable Distributed Firewall with vNIC-level resolution operating on each and every ESXi hypervisor.

Neutron Security Groups were designed for intra and inter-tier L3/L4 protection within the same application environment (the so-called “East-West” traffic).
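To make the tenant-facing model concrete, here is a minimal sketch of an East-West rule expressed with the OpenStack CLI (the group name and CIDR are hypothetical, and exact flags vary by client version):

```
# Create a security group for the web tier (hypothetical name)
openstack security group create web-tier

# Allow the app tier subnet to reach the web tier on TCP 8080
openstack security group rule create --ingress --protocol tcp \
    --dst-port 8080 --remote-ip 10.0.2.0/24 web-tier
```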

In addition to Neutron Security Groups, projects like Firewall-as-a-Service (FWaaS) are also trying to onboard next-generation security services onto these OpenStack Clouds, and there is an interesting roadmap taking shape on the horizon. The future looks great, but until OpenStack gets there, what are the implementation alternatives available today? How can Cloud Architects combine the benefits of the OpenStack framework and its appealing API consumption model with security services that provide more insight and visibility into the application traffic? In other words, how can OpenStack Cloud admins offer next-generation security right now, beyond the basic IP/TCP/UDP inspection offered in Neutron?

The answer is: With VMware NSX.

NSX natively supports and embeds an in-kernel redirection technology called Network Extensibility, or NetX. Third party ecosystem vendors write solutions against this extensibility model, following a rigorous validation process, to deliver elegant and seamless integrations. Once the solution is implemented, the notion is simply beautiful: leverage the NSX policy language, the same language that made NSX into the de facto solution for micro-segmentation, to “punt” interesting traffic toward the partner solution in question. This makes it possible to have protocol-level visibility for East-West traffic. This approach also allows you to create a firewall rule-set that looks like your business and not like your network. Application attributes such as VM name, OS type or any arbitrary vCenter object can be used to define said policies, irrespective of location, IP address or network topology. Once the partner solution receives the traffic, then the security admins can apply deep traffic inspection, visibility and monitoring techniques to it.


How does all of the above relate to OpenStack, you may be wondering? Well, the process is extremely simple:

1) First, integrate OpenStack and NSX using the various upstreamed Neutron plugins, or better yet, get out-of-the-box integration by deploying VMware’s OpenStack distro, VMware Integrated OpenStack (VIO), which is free for existing VMware customers.
2) Next, integrate NSX and the Partner Solution in question following documented configuration best practices. The list of active ecosystem partners can be found here.
3) Proceed to create an NSX Security policy to classify the application traffic by using the policy language mentioned above. This approach follows a wizard-based provisioning process to select which VMs will be subject to deep level inspection with Service Composer.
4) Use the Security Partner management console to create protocol-level security policies, such as application level firewalling, web reputation filtering, malware protection, antivirus protection and many more.
5) Launch Nova instances from OpenStack without a Neutron Security Group attached to them. This step is critical. Remember that we are delegating security management to the Security Admin, not the Tenant. Neutron Security Groups do not apply in this context.
6) Test and verify that your security policy is applied as designed.


This all assumes that the Tenant has relinquished control of the firewall to the security admin and that all security operations are controlled by the firewall team, which is a very common Enterprise model.

There are some Neutron enhancements in the works, such as Flow Classifier and Service Chaining, that are looking to “split” the security consumption between admins and tenants by promoting these redirection policies to the Neutron API layer, thus allowing a Tenant (or a Security admin) to selectively redirect traffic without bypassing Neutron itself. This implementation, however, is very basic compared to what NSX can do natively. We are actively monitoring this work and studying opportunities for future integration. In the meantime, the approach outlined above can be used to get the best of both worlds: the APIs you want (OpenStack) with the infrastructure you trust (vSphere and NSX).

In the next blog post we will show an actual working integration example with one of our Security Technology Partners, Fortinet, using VIO and NSX NetX technology.

Author: Marcos Hernandez
Principal Engineer, CCIE#8283, VCIX, VCP-NV
hernandezm@vmware.com
@netvirt

Issues With Interoperability in OpenStack & How DefCore is Addressing Them

Interoperability is built into the founding conception of OpenStack. But as the platform has gained popularity, it’s also become ever more of a challenge.

“There’s a lot of different ways to consume OpenStack and it’s increasingly important that we figure out ways to make things interoperable across all those different methods of consumption,” notes VMware’s Mark Voelker in a presentation at the most recent OpenStack Summit (view the slide set here).

 

Voelker, a VMware OpenStack architect and co-chair of the OpenStack Foundation’s DefCore Committee, shares the stage with OpenStack Foundation interoperability engineer Chris Hoge. Together they offer an overview of the integration challenges OpenStack faces today, and point to the work DefCore is doing to help deliver on the OpenStack vision. For anyone working, or planning to work, with VMware Integrated OpenStack (VIO), the talk is a great backgrounder on what’s being done to ensure that VIO integrates as well with non-VMware OpenStack technologies as it does with VMware’s own.

Hoge begins by outlining DefCore’s origins as a working group founded to fulfill the OpenStack Foundation mandate for a “faithful implementation test suite to ensure compatibility and interoperability for products.” DefCore has since issued five guidelines that products can now be certified as following, allowing them to carry the OpenStack Powered logo.

After explaining what it takes to meet the DefCore guidelines, Hoge reviews issues that remain unresolved. “The good news about OpenStack is that it’s incredibly flexible. There are any number of ways you can configure your OpenStack Cloud. You have your choice of hypervisors, storage drivers, network drivers – it’s a really powerful platform,” he observes. But that very richness and flexibility also makes it harder to ensure that two instances of OpenStack will work well together, he explains.

 

Among areas with issues are image operations, networking, policy and configuration discovery, API iteration, provability, and project documentation, reports Voelker. Discoverability and how to map capabilities to APIs are also a major concern, as is lack of awareness about DefCore’s guidelines. “There’s still some confusion about what kind of things people should be taking into account when they are making technical choices,” Hoge adds.

The OpenStack Foundation is therefore working to raise the profile of interoperability as a requirement and awareness of the meaning behind the “OpenStack Powered” logo. DefCore itself is interacting closely with developers and vendors in the community to address the integration challenges they’ve identified and enforce a measurable standard on new OpenStack contributions.

 

“Awareness is half the battle,” notes Voelker, before he and Hoge outline the conversations DefCore is currently leading, outcomes they’ve already achieved, and what DefCore is doing next – watch for a report on top interoperability issues soon, more work on testing, and a discussion on new guidelines for NFV-ready clouds.

 

If you are interested in how VMware Integrated OpenStack (VIO) conforms with DefCore standards, you can find more information and experts to contact on our product homepage. You can also check out our Hands-on Lab, or try VIO for yourself and download and install VMware Integrated OpenStack directly.

VMware Integrated OpenStack 3.0 Announced. See What’s In It

On 9/30/2016, VMware announced VMware Integrated OpenStack 3.0 at VMworld in Las Vegas. We are truly excited about our latest OpenStack distribution, which gives our customers the new features and enhancements included in the latest Mitaka release, an optimized management control plane architecture, and the ability to leverage existing workloads in their OpenStack clouds.

VIO 3.0 is available for download here (login may be required).

New features include:

  • OpenStack Mitaka Support
    VMware Integrated OpenStack 3.0 customers can leverage the great features and enhancements in the latest OpenStack release. Mitaka improves manageability and scalability and delivers a better user experience. To learn more about the Mitaka release, visit the OpenStack.org site at https://www.openstack.org/software/mitaka/
  • Easily Import Existing Workloads
    The ability to directly import vSphere VMs into OpenStack and run critical Day 2 operations against them via OpenStack APIs enables you to quickly move existing development projects or production workloads to the OpenStack framework.
  • Compact Management Control Plane
    Building on enhancements from previous releases, organizations looking to evaluate OpenStack or to build OpenStack clouds for branch locations quickly and cost effectively can easily deploy in as little as 15 minutes. The VMware Integrated OpenStack 3.0 architecture has been optimized to support a compact architecture mode that dramatically reduces the infrastructure footprint saving resource costs and overall operational complexity.

If you are at VMworld 2016 in Las Vegas, we invite you to attend the following sessions to hear how our customers are using VMware Integrated OpenStack and learn more details about this great release.

VMware Integrated OpenStack 3.0

VMWorld 2016 VMware Integrated OpenStack Sessions:

  • MGT7752 – OpenStack in the Real World: VMware Integrated OpenStack 3.0 Customer Panel
  • MGT7671 – What’s New in VMware Integrated OpenStack Version 3.0!
  • NET8109 – Amadeus’s Journey Building a Software-Defined Data Center with VMware Integrated OpenStack and NSX
  • NET8343 – OpenStack Networking in the Enterprise: Real-Life Use Cases
  • NET8832 – The Role of VIO and NSX in Virtualizing the Telecoms Infrastructure
  • SEC9618-SPO – Deep Dive: Extending L4-L7 Security Controls for VMware NSX and VMware Integrated OpenStack (VIO) Environments with Fortinet Next Generation

Try VMware Integrated OpenStack Today

Sign up to be notified when VMware Integrated OpenStack 3.0 is available.

OpenStack Summit 2016 Re-Cap – Experts from VMware and HedgeServ Outline the Operational Advantages of VMware Integrated OpenStack  

VMware Integrated OpenStack (VIO) offers a simple but powerful path to deploying OpenStack clouds and is a clear win for developers. But what about the operations side?

 

Presenting at the 2016 OpenStack Summit, VMware’s Santhosh Sundararaman, along with Isa Berisha and Thomas McAteer of HedgeServ, the #1 provider of technical management services to the hedge fund industry, make the case for VIO from an operator’s perspective.

Their session opens with HedgeServ Global Director of Platform Engineering Isa Berisha describing how the company uses OpenStack and why it picked VIO to run HedgeServ’s OpenStack deployment.

Adopting OpenStack was the company’s best option for meeting the rapidly escalating demand it was facing, Berisha explains. But his team was stymied by the limitations of the professional and managed-services OpenStack deployments available to them, which were rife with issues of quality, expense, speed of execution, and vendor instability. Eventually they decided to try VIO, despite it being new at that point and little known in the industry.

 

“We downloaded it, deployed it, and something crazy happened: it just worked,” recalls Berisha. “If you’ve tried to do your own OpenStack deployments, you can understand how that feels.”

HedgeServ now uses VIO to run up to 1,000 large Windows instances (averaging 64GB) managed by a team of just two. vSphere and ESXi’s speed and their ability to handle hosts with up to 10TB of RAM and hundreds of physical cores have been key elements of HedgeServ’s success so far, explains senior engineer McAteer. But VIO’s storage and management capabilities have also been essential, as have its stability and the fact that it’s backed by VMware support and easily upgraded and updated live.

Simplifying operations around OpenStack and making it easier to maintain OpenStack in a production deployment has been a major focus for VMware’s VIO development team, adds VMware product manager Sundararaman in the second half of the presentation.

“There are a lot of workflows that we’ve added to make it really easy to operate OpenStack,” he says.

 

As an example, Sundararaman demos VIO’s OpenStack upgrade process, showing how it’s achieved entirely via the VIO control panel in a vSphere web client plugin. VIO uses the blue-green upgrade paradigm to stand up a completely new control plane based on the new distribution, he explains, and then migrates the data and configurations from the old to the new control plane, while leaving the deployed workloads untouched.

“We ran and upgraded VIO from 1.0 to 2.0 without talking to Santhosh,” adds McAteer. “That’s a big deal in the OpenStack world but it just, again, worked. The blue-green nature of the upgrade path really makes that very easy.”

 

View the VMware + HedgeServ OpenStack Summit session here.

 

To try VMware Integrated OpenStack yourself, check out our free Hands-on Lab. Or take it a step further and download and install VMware Integrated OpenStack today.

 

 

VMware Integrated OpenStack 2.5 is GA – What’s New?

We are very excited about this newest release of VMware Integrated OpenStack, VIO 2.5. This release continues to advance VIO as the easiest and fastest route to build an OpenStack cloud on top of vSphere, NSX, and Virtual SAN. So, what’s in this release? Continue reading to learn more about the latest features in VMware Integrated OpenStack 2.5, which is available for download now.

  1. Seamlessly Leverage Existing VM Templates
  2. Smaller Management Footprint
  3. Support for vSphere Standard Edition with NSX
  4. Troubleshooting & Monitoring Out of the Box
  5. Neutron Layer 2 Gateway Support
  6. Optimized for NFV


OpenStack CLI Utilities On Your Windows Desktop

It seems like the majority of OpenStack users tend to work with a Linux or Mac desktop. What about the Windows users? Fortunately, the OpenStack CLI utilities we know and love (like python-novaclient, python-glanceclient, etc.) run just as well on Windows as they do on Mac and Linux. There are, however, a few differences in how you set your environment variables and how you format your commands.
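As a quick illustration, here is a minimal sketch of the Windows side (all credential values are hypothetical): in cmd.exe you use set where bash uses export.

```
:: Set OpenStack credentials in cmd.exe with "set" (values are hypothetical);
:: on Linux/Mac you would put "export" lines in an openrc file instead.
set OS_AUTH_URL=https://openstack.example.com:5000/v2.0
set OS_TENANT_NAME=demo
set OS_USERNAME=demo-user
set OS_PASSWORD=secret

:: The familiar clients then work as usual; just remember that cmd.exe
:: quoting uses double quotes, not the single quotes common in Linux examples.
nova list
glance image-list
```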

VMware Integrated OpenStack Video Series: Heat Orchestration

OpenStack includes an orchestration service (Heat) that allows users to define their application infrastructure via one or more template files. Users can either leverage the native OpenStack Heat Orchestration Template (HOT) format or the Amazon Web Services (AWS) CloudFormation format.

You may be wondering, “What’s the point of using Heat when I already have access to the OpenStack APIs/CLIs for automation purposes?” Well, a significant benefit of using Heat is infrastructure lifecycle management.

Let’s discuss what that means by examining the virtual infrastructure that could be used to host a multi-tier application that consists of a web server, an application server, and a database server.

Multi-Tier Application Infrastructure

It is reasonable to simply use the Nova API directly to deploy the three instances in this infrastructure. However, there are other application components to consider. Most likely, these instances will be on private networks (perhaps one network per application tier). The application developer also needs to account for the router that connects to the outside world, and the floating IP that will be assigned to the web server so that users can access the application.

So, with this simple application infrastructure, the number of components is already piling up:

  • Three instances
  • One router
  • Three tenant networks
  • One floating IP

Making one-off API/CLI calls to deploy these components is fine during development. However, what happens when you’re ready to go to production? What if performance testing shows that our deployment requires multiple instances at each infrastructure tier?

It would be great to have a single deployment mechanism to provision the application infrastructure from detailed, static files that leave zero room for error. Thanks to the simplicity of the YAML format, your HOT files can also serve as a documentation source for IT operations runbooks. These are just a couple of the benefits that can come from using Heat for your application infrastructure deployments.
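To make that concrete, here is a minimal HOT sketch of just the web tier from the example above: one instance on a tenant network, a router uplinked to an external network, and a floating IP. All names, CIDRs, and default values are hypothetical.

```yaml
heat_template_version: 2015-10-15

parameters:
  image:
    type: string
    default: ubuntu-14.04-server   # hypothetical image name
  flavor:
    type: string
    default: m1.medium
  external_network:
    type: string
    default: ext-net               # hypothetical external network name

resources:
  web_net:
    type: OS::Neutron::Net

  web_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: {get_resource: web_net}
      cidr: 10.0.1.0/24

  router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info:
        network: {get_param: external_network}

  router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router: {get_resource: router}
      subnet: {get_resource: web_subnet}

  web_port:
    type: OS::Neutron::Port
    properties:
      network: {get_resource: web_net}

  web_server:
    type: OS::Nova::Server
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}
      networks:
        - port: {get_resource: web_port}

  web_fip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: {get_param: external_network}
      port_id: {get_resource: web_port}
```

The whole stack is then created, updated, or torn down as a single unit, for example with openstack stack create -t web-tier.yaml my-app.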

The following video provides a detailed walkthrough of using the OpenStack orchestration service.

 

Stay tuned for the next installment covering OpenStack security groups! In the meantime, you can learn more on the VMware Product Walkthrough site and on the VMware Integrated OpenStack product page.