
Tag Archives: OpenStack Foundation

OpenStack Boston Summit VMware Sessions Recap

Watch below to experience VMware’s Speaker Sessions at this year’s OpenStack Summit in Boston!


OpenStack & VMware: Getting the Best of Both

Speaker: Andrew Pearce

Come and understand the true value to your organization of combining OpenStack and VMware. In this session you will understand the value of having a DefCore / OpenStack Powered solution that enables your developers to provision IaaS the way they want, using the tools they want. In addition, you will be able to enable your operations team to continue to use the tools, resources, and methodology they rely on to ensure that your organization has a production-grade environment to support your developers. Deploying OpenStack, and getting the advantages of OpenStack, does not need to be a rip-and-replace strategy. See how other customers have had their cake and eaten it too.


OpenStack and VMware: Enterprise-Grade IaaS Built on Proven Foundation

Speakers: Xiao Hu Gao & Hari Kannan 

Running production workloads on OpenStack requires a rock-solid IaaS running on a trusted infrastructure platform. Think about upgrading, patching, managing the environment, high availability, disaster recovery, security, and the list goes on. VMware delivers a top-notch OpenStack distribution that gives you all of the above and much more. Come to this session to see (with a demo) how you can easily and quickly deploy OpenStack for your dev/test as well as production workloads.


Is Neutron Challenging to You? Learn How VMware NSX is the Solution for Regular OpenStack Network & Security Services and Kubernetes

Speakers: Dmitri Desmidt, Yves Fauser

Neutron is challenging in many respects. The main ones reported by OpenStack admins are complex implementation of network and security services, high availability, management/operation/troubleshooting, and scale. Additionally, with new Kubernetes and container deployments, security between containers and management of container traffic is a new headache. VMware NSX offers a plugin for all Neutron OpenStack installations on ESXi and KVM hypervisors. Learn in this session, with multiple live demos, how the VMware NSX plugin resolves all of these Neutron challenges in an easy way.


Digital Transformation with OpenStack for Modern Service Providers

Speakers: Misbah Mahmoodi, Kenny Lee

The pace of technological change is accelerating at an exponential rate. With the advent of 5G networks and IoT, Communications Service Providers' success depends not only on their ability to adapt to changes quickly but to do so faster than competitors. Speed is of the essence in developing new services, deploying them to subscribers, delivering a superior Quality of Experience, and increasing operational efficiency with lowered cost structures. To adapt and remain competitive, CSPs face important questions as they explore the digital transformation of their business and infrastructure, and how they can leverage NFV, OpenStack, and open hardware platforms to accelerate change and modernization.


Running Kubernetes on a Thin OpenStack

Speakers: Mayan Weiss & Hari Kannan 

Kubernetes is leading the container mindshare, and the OpenStack community has built integrations to support it. However, running production workloads on Kubernetes is still a challenge. What if there were a production-ready, multi-tenant K8s distro? Dream no more. Come to this session to see how we adapted OpenStack + K8s to provide container networking, persistent storage, RBAC, LBaaS, and more on the VMware SDDC.


OpenStack and OVN: What’s New with OVS 2.7

Speakers: Russell Bryant, Ben Pfaff, Justin Pettit

OVN is a virtual networking project built by the Open vSwitch community. OpenStack can use OVN as its backend networking implementation for Neutron, and OVN and its Neutron integration are ready for use in OpenStack deployments.

This talk will cover the latest developments in the OVN project and the latest release, part of OVS 2.7. Enhancements include better performance, improved debugging capabilities, and more flexible L3 gateways. We will take a look ahead at the next set of things we expect to work on for OVN, which includes logging for OVN ACLs (security groups), encrypted tunnels, native DNS integration, and more.

We will also cover some performance comparison results for OVN versus the original OVS support in Neutron (ML2/OVS). Finally, we will discuss how to deploy OpenStack with OVN or migrate an existing deployment from ML2/OVS to OVN.


DefCore to Interop and Back Again: OpenStack Programs and Certifications Explained

Speakers: Mark Voelker & Egle Sigler

OpenStack Interop (formerly DefCore) guidelines have been in place for two years now, and anyone wanting to use the OpenStack logo must pass these guidelines. How are guidelines created and updated? How would your favorite project be added to them? How can you guarantee that your OpenStack deployment will comply with the new guidelines? In this session we will cover the OpenStack Interop guidelines and their components, as well as explain how they are created and updated.


Senlin: An Ideal Bridge Between NFV Orchestrator and OpenStack

Speakers: Xinhui Li, Ethan Lynn, Yanyan Hu

Resource management is a top requirement in the NFV field. Usually, the orchestrator takes responsibility for parsing a virtual network function into different virtual deployment units (VDUs) to deploy and operate over the cloud. Senlin, positioned as a clustering resource manager since its inception, can be the ideal bridge between the NFV orchestrator and OpenStack: it uses a consolidated model, mapped directly to a VDU, to interact with backend services such as Nova, Neutron, and Cinder for compute, network, and storage resources per the orchestrator's demand, and it provides rich operational functions such as auto-scaling, load balancing, and auto-healing. We use a popular vIMS-type VNF to illustrate how to easily deploy a VNF on OpenStack and manage it in a scalable and flexible way.


High Availability and Scalability Management of VNF

Speakers: Haiwei Xu, Xinhui Li, XueFeng Liu

Network function virtualization (NFV) is growing rapidly and is widely adopted by many telecom enterprises. In OpenStack, Tacker takes responsibility for building a Generic VNF Manager (VNFM) and an NFV Orchestrator (NFVO) to deploy and operate Network Services and Virtual Network Functions (VNFs) on the infrastructure platform. For VNFs that work as a load balancer or a firewall, Tacker needs to consider the availability of each VNF to ensure they are not overloaded or out of service. To prevent VNFs from being overloaded or going down, Tacker needs to make VNFs highly available and auto-scaling. So in fact the VNFs for a given function should not be a single node, but a cluster.

That raises the problem of cluster management. In the OpenStack environment there is a clustering service called Senlin, which provides scalability management and HA functions for nodes; these features exactly fit Tacker's requirements.

In this talk we will give you a general introduction of this feature.


How an Interop Capability Becomes Part of the OpenStack Interop Guidelines

Speakers: Rochelle Grober, Mark Voelker, Luz Cazares

The OpenStack Interop Working Group (formerly DefCore) produces the OpenStack Powered (TM) Guidelines (a.k.a. Interoperability Guidelines). But how do we decide what goes into a guideline? How do we define these so-called “Capabilities”? And how does the team “score” them? Attend this session to learn what we mean by “capability”, the requirements a capability must meet, and the process the group follows to grade those capabilities. And, you know what, let's score your favorite capability live.


OpenStack Interoperability Challenge and Interoperability Workgroup Updates: The Adventure Continues

Speakers: Brad Topol, Mark Voelker, Tong Li

The OpenStack community has been driving initiatives on two sides of the interoperability coin: workload portability and API/code standards for OpenStack Powered products. The first phase of the OpenStack Interoperability Challenge culminated with a Barcelona Summit Keynote demo comprised of 16 vendors all running the same enterprise workload to illustrate that OpenStack enables workload portability across OpenStack clouds. Building on this momentum for its second phase, the multi-vendor Interop Challenge team has selected new advanced workloads based on Kubernetes and NFV applications to flush out portability issues in these commonly deployed workloads. Meanwhile, the recently formed Interop Working Group continues to roll out new Guidelines, drive new initiatives, and is considering expanding its scope to cover more vertical use cases. In this presentation, we describe the progress, challenges, and lessons learned from both of these efforts.

16 Partners, One Live Demo – OpenStack Barcelona Interop Challenge

 

During Wednesday morning's keynote session at the OpenStack Summit in Barcelona, I will be on stage along with several other vendors to show off some of our latest work on interoperability between OpenStack clouds. We will be demonstrating a single workload running successfully on over a dozen different vendors' OpenStack products without modification, including VMware Integrated OpenStack 3.0. The idea got started back in April at the last OpenStack Summit, when our friends at IBM challenged vendors to publicly demonstrate that their products were interoperable.


VMware has long been a proponent of fostering interoperability between OpenStack clouds.  I currently co-chair the Interop Working Group (formerly known as the DefCore Committee), and VMware Integrated OpenStack 3.0 is an approved OpenStack-Powered product that is compliant with the 2016.08 interoperability guideline, the newest and strictest guideline approved by the OpenStack Foundation Board of Directors.  We also helped produce the Interop Working Group’s first ever report on interoperability issues.  So why do we care about interoperability?  Shouldn’t everything built on OpenStack behave the same anyhow?  Well, to quote the previously mentioned report on interoperability issues:

 

“OpenStack is tremendously flexible, feature-rich, powerful software that can be used to create clouds that fit a wide variety of use cases including software development, web services and e-commerce, network functions virtualization (NFV), video processing, and content delivery to name a few. Commercial offerings built on OpenStack are available as public clouds, installable software distributions, managed private clouds, appliances, and services. OpenStack can be deployed on thousands of combinations of underpinning storage, network, and compute hardware and software. Because of the incredible amount of flexibility OpenStack offers and the constraints of the many use cases it can address, interoperability between OpenStack clouds is not always assured: due to various choices deployers make, different clouds may have some inconsistent behaviors.  One of the goals of the [Interop Working Group]’s work is to create high interoperability standards so that end users of clouds can expect certain behaviors to be consistent between different OpenStack-Powered Clouds and products.”

 

Think of it this way: another amazingly flexible, powerful thing we use daily is electricity. Electricity is pretty much the same stuff no matter who supplies it to you or what you are using it for, but the way you consume it might be different for different use cases. The outlet I plug my laptop into at home is a different shape and supplies a different voltage than the one my electric oven is connected to, since the oven needs a lot more juice to bake my cookies than my laptop does to type up a blog post. My home's air conditioner does not even have a plug: it is wired directly into the house's circuit breaker. I consume most of my electricity as a service provided by my power company, but I can also generate some of my own power with solar panels, as long as their output is connected to my power grid. Moreover, to power up my laptop here in Barcelona, I brought along a plug adapter, since Europe's power grid is based on a different set of standards and requirements. However, even though there are some differences, there are many commonalities: electricity is delivered over metal wiring and terminated at some wall socket, most of the world uses one of a few different voltage ranges, and you pay for it based on consumption. OpenStack is similar: an OpenStack deployment built for NFV workloads might have some different characteristics and interfaces exposed than one built as a public compute cloud.

 

What makes the Interop Challenge interesting is that it is complementary to the work of the Interop Working Group in that it looks at interoperability in a slightly different light. To date, the Interop Working Group has mostly focused its efforts on API-level interoperability. It does so by ensuring that products bearing the OpenStack-Powered mark pass a set of community-maintained Tempest tests to prove that they expose a set of capabilities (things like booting up a VM with the Nova v2 API or getting a list of available images using the Glance v2 API). Products bearing the OpenStack-Powered logo are also required to use designated sections of upstream code, so consumers know they are getting community-developed code driving those capabilities. While the Interop Working Group's guidelines look primarily at the server side of things, the Interop Challenge addresses a slightly different aspect of interoperability: workload portability. Rather than testing a particular set of APIs, the Interop Challenge took a client-side approach by running a real workload against different clouds; in this case, a LAMP stack application with a load-balanced web server tier and a database backend tier, all deployed via Ansible. The idea was to take a typical application with commonly used deployment tools and prove that it “just works” across several different OpenStack clouds.
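To make those two capability examples concrete, here is a minimal, illustrative sketch using the openstacksdk Python library. It is not the Tempest-based tooling the Interop Working Group actually uses for certification, and the cloud name, image, and flavor below are hypothetical placeholders for whatever your own clouds.yaml defines.

```python
# Illustrative only: exercise two interop capabilities against any
# OpenStack-Powered cloud -- list images (Glance v2) and boot a server (Nova v2).
import openstack

conn = openstack.connect(cloud="mycloud")  # "mycloud" is a placeholder clouds.yaml entry

# Capability: image list via the Glance v2 API
for image in conn.image.images():
    print(image.name)

# Capability: server create via the Nova v2 API
server = conn.create_server(
    name="interop-demo",
    image="ubuntu-16.04",   # placeholder image name
    flavor="m1.small",      # placeholder flavor name
    wait=True,
    auto_ip=True,
)
print("Booted", server.name, "at", server.public_v4)
```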

 

In other words, the guidelines produced by the Interop Working Group assure you that certain capabilities are available to end users (just as I can be reasonably confident that any hotel room I walk into will have a socket in the wall from which I can get electricity). The Interop Challenge complements that by looking at a more complete use case: it verifies that I can plug in my laptop and get some work done.

 

Along the way, participants also hoped to begin defining some best practices for making workloads more portable among OpenStack clouds, to account for some of the differences that are a natural side effect of OpenStack's flexibility. For example, we found that the LAMP stack workload was more portable if we let users specify certain attributes of the cloud they intended to use, such as the name of the network the instances should be attached to, the image and flavor that should be used to boot up instances, and the block device or network interface names used by that image. Even though we will only be showing one particular workload on stage, that one workload serves as a starting point to help flesh out more best practices in the future.
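As a rough illustration of that practice, here is a sketch in Python with openstacksdk in which every cloud-specific attribute is a caller-supplied parameter. The actual challenge workload used Ansible variables for the same purpose; none of the names below come from the challenge itself.

```python
# Illustrative only: keep cloud-specific values (network, image, flavor) as
# parameters so the same provisioning code runs unmodified on different clouds.
import openstack

def boot_portable_instance(cloud_name, network, image, flavor, name="lamp-web"):
    conn = openstack.connect(cloud=cloud_name)
    return conn.create_server(
        name=name,
        image=image,      # image names differ between clouds, so never hard-code them
        flavor=flavor,    # likewise for flavors
        network=network,  # likewise for the tenant network to attach to
        wait=True,
    )

# The same call works against different clouds once the per-cloud values change:
# boot_portable_instance("cloud-a", network="private", image="ubuntu-16.04", flavor="m1.small")
# boot_portable_instance("cloud-b", network="tenant-net", image="xenial", flavor="small")
```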

 

If you want to learn more about VMware's work on interoperability or about VMware Integrated OpenStack, see us at the keynote or stop by our booth at the OpenStack Summit. If you're ready to deploy OpenStack today, download it now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-On-Lab, no installation required.


*This article was written by Mark Voelker – OpenStack Architect at VMware

Tired of Waiting? Deploy OpenStack in 15 Minutes or Less

Watch this video to learn how to deploy OpenStack in Compact Management Mode in under 15 minutes


If you're ready to try VIO, take it for a spin with the Hands-on Lab, which provides a step-by-step walkthrough of how to deploy OpenStack in Compact Management Mode in under fifteen minutes.

Deploying OpenStack, with its integrations, configurations, testing, re-testing, stress testing, and more, challenges even the most seasoned, skilled IT organizations. For many, deploying OpenStack becomes an IT ‘science project’ in which the light at the end of the tunnel dims with each passing month.

VMware Integrated OpenStack takes a different approach, reducing the redundancy and confusion of deploying OpenStack with the new Compact Management control plane. With the Compact Mode UI, you wait minutes, not months. Enterprises seeking to evaluate OpenStack, or those ready to build OpenStack clouds in the most cost-efficient manner, can now deploy in as little as 15 minutes.

 

The architecture of VMware Integrated OpenStack is optimized to support compact architecture mode, reducing support needs, overall resource costs, and the operational complexity that keeps enterprises from completing their OpenStack adoption.

The most recent update to VMware Integrated OpenStack focuses on ease of use and an immense benefit to administrators: access to and integration with the VMware ecosystem. The seamless integration of the family of VMware products allows administrators to leverage their current VMware products to enhance their OpenStack deployment, combined with the ability to manage workloads through developer-friendly OpenStack APIs.



If you’re ready to deploy OpenStack today, download it now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-On-Lab, no installation required.


You’ll be surprised what you can accomplish in 15 minutes

OpenStack Summit 2016 Re-Cap – Speeding Up Developer Productivity with OpenStack and Open Source Tools

Some developers may avoid utilizing an OpenStack cloud, despite the advantages they stand to gain, because they have already established automation workflows using popular open source tools with public cloud providers.

But in a joint presentation at the 2016 OpenStack Summit, VMware Senior Technical Marketing Manager Trevor Roberts Jr. and VMware NSX Engineering Architect Scott Lowe explain how developers can take advantage of OpenStack clouds using the same open source tools that they already know.

Their talk runs through a configuration sequence, including image management, dev/test, and production deployment, showing how standard open source tools that developers already use for non-OpenStack deployments can run in exactly the same way with an OpenStack cloud. In this case, they discuss using Packer for image building, Vagrant and Docker Machine for software development and testing, and Terraform for production-grade deployments.

“This is a way of using an existing tool that you’re already comfortable and familiar with to start consuming your OpenStack cloud,” says Roberts, before demoing an OpenStack customized image build with Packer and then using that image to create and deploy an OpenStack-provisioned instance with Vagrant.
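For readers following along without the demo environment, here is a rough, hypothetical sketch of the underlying OpenStack calls that the Packer OpenStack builder and the Vagrant OpenStack provider plugin drive, written with the openstacksdk Python library. The image file path, names, and flavor are placeholders, not values from the talk.

```python
# Not the Packer/Vagrant workflow shown in the demo itself, just the underlying
# OpenStack calls those tools automate: upload a freshly built image, then boot
# a dev/test instance from it.
import openstack

conn = openstack.connect(cloud="mycloud")   # placeholder clouds.yaml entry

# Roughly what an image-build step leaves behind: a reusable custom image.
image = conn.create_image(
    "webapp-base-v1",                     # image name (placeholder)
    filename="output/webapp-base.qcow2",  # build artifact path (placeholder)
    disk_format="qcow2",
    container_format="bare",
    wait=True,
)

# Roughly what a "vagrant up" against an OpenStack cloud does:
# boot a dev/test instance from that image.
server = conn.create_server(
    name="dev-test-01",
    image=image.id,
    flavor="m1.small",   # placeholder flavor
    wait=True,
    auto_ip=True,
)
print("Instance ready at", server.public_v4)
```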

 

Lowe next profiles Docker Machine, which provisions instances of the Docker Engine for testing, and shows how you can use Docker Machine to spin up instances of Docker inside an OpenStack cloud.

Lastly, Lowe demos Terraform, which offers an infrastructure-as-code approach to production deployments on multiple platforms (it's similar to Heat for OpenStack), creating an entire OpenStack infrastructure with a single click, including a new network and router, and launching multiple new instances with floating IP addresses for each, ready for pulling down containers as required.
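To make that sequence concrete, here is a hedged sketch of the same provisioning flow (network, subnet, router, instances, floating IPs) expressed as direct openstacksdk calls rather than the Terraform configuration used in the talk. All names, the CIDR, and the external network are assumptions for illustration only.

```python
# Illustrative only: the network/router/instance/floating-IP sequence the
# Terraform demo creates, written as plain OpenStack API calls.
import openstack

conn = openstack.connect(cloud="mycloud")   # placeholder clouds.yaml entry

net = conn.create_network("demo-net")
subnet = conn.create_subnet(net.id, cidr="10.0.0.0/24", ip_version=4,
                            enable_dhcp=True)

# Router uplinked to the provider/external network, with the new subnet attached.
ext_net = conn.get_network("ext-net")       # placeholder external network name
router = conn.create_router("demo-router", ext_gateway_net_id=ext_net.id)
conn.add_router_interface(router, subnet_id=subnet.id)

# Launch a couple of instances, each getting a floating IP, as in the demo.
for i in range(2):
    conn.create_server(
        name=f"demo-{i}",
        image="ubuntu-16.04",   # placeholder image
        flavor="m1.small",      # placeholder flavor
        network=net.id,
        wait=True,
        auto_ip=True,           # allocates and attaches a floating IP
    )
```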

 

“It’s very simple to use these development tools with OpenStack,” adds Roberts. “It’s just a matter of making sure your developers feel comfortable with the environment and letting them know there are plugins out there – you don’t have to keep using other infrastructures.”

As Roberts notes in the presentation, VMware itself is “full in on OpenStack – we contribute to projects that aren’t even in our own distribution just to help out the community.” Meanwhile, VMware’s own OpenStack solution – VMware Integrated OpenStack (VIO) – offers a DefCore-compliant OpenStack distribution specifically configured to use open source drivers to manage VMware infrastructure technologies, further aiding the adoption process for developers already familiar with vSphere, vRealize, and NSX.

 

For more information on VIO, check out the VMware Integrated OpenStack (VIO) product homepage, or the VIO Hands-on Lab. If you hold a current license for vSphere Enterprise Plus, vSphere Operations Management, or vSphere Standard with NSX Advanced, you can download VIO for free.

OpenStack 2.5: VMware Integrated OpenStack 2.5 is GA – What’s New?

We are very excited about the newest release of VMware Integrated OpenStack, version 2.5. This release continues to advance VIO as the easiest and fastest route to build an OpenStack cloud on top of vSphere, NSX, and Virtual SAN. So, what's in this release? Continue reading to learn more about the latest features in VMware Integrated OpenStack 2.5, which is available for download now.

  1. Seamlessly Leverage Existing VM Templates
  2. Smaller Management Footprint
  3. Support for vSphere Standard Edition with NSX
  4. Troubleshooting & Monitoring Out of the Box
  5. Neutron Layer 2 Gateway Support
  6. Optimized for NFV


VMware and OpenStack – Dawning Of A New Day

Yesterday was an exciting day for Team OpenStack at VMware. Our CEO Pat Gelsinger used the VMworld opening keynote to make two exciting announcements that surely caught the attention of developers, IT and the technology community at large. First, he announced VMware Integrated OpenStack, our new distribution of the open source OpenStack code. Then, Pat announced a new collaboration with Google, Docker and Pivotal highlighting our commitment to making sure container-based solutions run great on VMware infrastructure.

OpenStack distributions? Open source software partnerships? This doesn’t seem to fit the oversimplified story you often hear in the media about “open source vs. VMware.”

In reality, the story is more nuanced, and in fact, over the past few years, VMware has developed a track record of embracing open software frameworks where they provide clear value to our customers. To understand the bigger picture, we first need to talk about the key role open frameworks are playing in the new era of software development.

Enabling Developer Agility Through Open Frameworks

Developers of next-generation applications have embraced a fully automated model of accessing data center infrastructure via APIs. When building these new apps, they use APIs to provision their apps, APIs to scale those apps up and down, and APIs to release the resources when they are done. Ultimately, this is about enabling agility: allowing them to build and modify their applications more quickly, thereby moving their business forward faster.

To simplify their lives, developers of modern apps don’t want to deal with the data center infrastructure directly. Rather, they want to leverage a framework that layers on top of that infrastructure, and gives them a more abstract model against which they build their application. These frameworks are often open standards or based on open source, because open frameworks have an easier time establishing mindshare and creating an ecosystem of associated tools, libraries, etc. and because open frameworks carry the potential to offer significantly improved workload portability across varied types of infrastructure.

Open frameworks come in all shapes and sizes, ranging from Java frameworks (e.g., Spring) to data analytics (e.g., Hadoop), Infrastructure-as-a-Service (e.g., OpenStack), Platform-as-a-Service (e.g., Cloud Foundry), and containers (e.g., Docker). VMware has helped lead the creation of several of these frameworks, while others are examples of where we recognized good work started elsewhere and moved to add support within our solutions. Either way, if our customers see strong potential in a framework, we have taken steps to enable them through products such as the vFabric Suite (Spring), vSphere Big Data Extensions (Hadoop), Pivotal CF (Cloud Foundry), and now VMware Integrated OpenStack.

VMware’s OpenStack Involvement

Part of enabling any open framework for customers is having skin in the game. VMware is investing its own development resources to help build the OpenStack framework. VMware has been a Gold Member of the OpenStack Foundation since 2012, and we are one of the largest companies contributing to OpenStack, adding code to integrate our technologies such as VMware vSphere and VMware NSX™ and to enhance the project as a whole. In fact, in the latest release of OpenStack (Icehouse), VMware was the #4 contributor to the official set of “integrated” OpenStack projects, which are the core OpenStack projects like Nova, Neutron, Cinder, Glance, Keystone, Horizon, Swift, etc. that most people recognize. While such numbers only tell part of the story, they are an indicator that VMware is investing considerably in OpenStack integration, and that data is available for all to see here.

VMware is committed to making sure that the best way to run OpenStack is on VMware. Whether a customer wants to consume VMware technologies as components along with the open source code or a partner OpenStack distribution, or chooses to use VMware Integrated OpenStack, they will achieve a new level of agility for developers by offering them powerful, vendor-neutral OpenStack APIs on top of VMware’s enterprise-class infrastructure.

Dan Wendlandt
Director of OpenStack Product Management

Our New Blog: VMware, OpenStack and the Software-Defined Data Center

By: Amr Abdelrazik

Developers of next-generation applications have embraced a fully automated model of accessing cloud infrastructure via APIs. They use APIs to provision their apps, APIs to scale those apps up and down, and APIs to release the resources when they are done.
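Purely as an illustration of that provision/scale/release lifecycle, here is a minimal sketch using the openstacksdk Python library against a generic OpenStack cloud. The cloud name, image, and flavor are placeholders, not VMware-specific values or anything prescribed by this post.

```python
# Illustrative only: the API-driven lifecycle described above --
# provision instances, scale them out, and release them when done.
import openstack

conn = openstack.connect(cloud="mycloud")   # placeholder clouds.yaml entry

# Provision: boot the first app instance.
conn.create_server(name="app-1", image="ubuntu-16.04", flavor="m1.small",
                   wait=True, auto_ip=True)

# Scale up: add more instances under the same name prefix.
for i in range(2, 4):
    conn.create_server(name=f"app-{i}", image="ubuntu-16.04",
                       flavor="m1.small", wait=True, auto_ip=True)

# Release: tear everything down when the work is done.
for server in conn.list_servers():
    if server.name.startswith("app-"):
        conn.delete_server(server.id, wait=True, delete_ips=True)
```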


OpenStack is an open source framework that layers on top of virtual and physical infrastructure to provide a set of open, vendor-neutral infrastructure-as-a-service (IaaS) APIs, along with a related set of tools and services, to facilitate this developer-centric access. OpenStack allows IT to deliver this public-cloud-like API experience to their developers on their private cloud, without necessarily giving up control of their infrastructure to the public cloud.

Nobody can disagree that OpenStack does have a tidal wave of marketing momentum behind it. And for the past several years, VMware has been continuously contributing to the community, all while listening to and engaging with customers about how and why OpenStack is something they felt they needed. The answers are as varied as the technologies from which you can choose to build an OpenStack cloud.

But with all its benefits, if an OpenStack cloud is not built on top of reliable, secure, and high-performance virtual infrastructure, running it as a production-grade cloud can be quite challenging. This often requires building a large team of developers and administrators with deep experience in OpenStack, Linux, and Python programming, or paying hefty sums to bring in outside consultants with that expertise. Likewise, in the absence of the right management tools, operating an OpenStack cloud can be highly labor intensive and require an investment in custom-built tools. Both factors have limited the ability of enterprises to adopt OpenStack.

We believe VMware’s software-defined data center technologies can help accelerate OpenStack in the enterprise. VMware is committed to making it easy for IT to deploy OpenStack on VMware’s enterprise-grade compute, network, and storage technologies, and enabling customers to leverage our management tools to deliver key capabilities in areas such as troubleshooting, log management, capacity planning and billback/chargeback. The end result will be OpenStack infrastructures that give developers the tools they want, and give IT the reliable and easily managed data center infrastructure they need.

So now that you know what we’ve done so far, where are we going from here? That’s what this blog will be about. We will, of course, use this blog to communicate information about future products and plans, but we are also looking to engage with our customers and ecosystem about the developments, trends and adoption of OpenStack as a whole. We will offer you a forward-thinking vision – from inside and outside of VMware – coupled with tangible and actionable information about how to build enterprise-grade OpenStack with the help of VMware’s technologies.

To get the ball rolling, if you happen to be one of the 23,000 folks VMware is lucky enough to be hosting at VMworld 2014 next week in San Francisco, here are a couple of OpenStack sessions that you don’t want to miss:

VMworld 2014

Spotlight Session:

  SDDC1580-S: What You Need to Know About OpenStack + VMware (Monday, Aug. 25, 11:30 a.m.)

Break-Out Sessions with Deep-Dive OpenStack Content:

  STO1491: From Clouds to Bits: Exploring the Software Defined Storage Lifecycle (Monday, Aug. 25, 2:30 p.m.)
  SDDC2370: Why OpenStack Runs Best with the vCloud Suite (Tuesday, Aug. 26, 2:30 p.m.)
  NET1592: Under the Hood: Network Virtualization with OpenStack Neutron and VMware NSX (Wednesday, Aug. 27, 9:30 a.m.)
  SDDC2198: VMware OpenStack End-to-End Demo (Wednesday, Aug. 27, 2:00 p.m.)

Hands-on-Lab (also available online 24/7):

  SPL-SDC-1420: OpenStack with VMware vSphere and NSX (all day in the Hands-on-Labs area)

We will also have an OpenStack booth within the VMware section on the expo floor, so please do drop by and say hello!