Author Archives: Guido Areces

OpenStack Summit Barcelona 2016 Session Recap

Watch below to experience VMware’s Speaker Sessions at this year’s OpenStack Summit in Barcelona!


Rakuten and VMware: How We Got to Enterprise Grade, Production Ready OpenStack
Speaker: Chris Murray

“Our story on how we achieved our OpenStack production private cloud using VMware Integrated OpenStack, NSX, vRealize Log Insight and Operations. Supporting yesterday’s legacy ‘pet’ applications all enterprises have and today’s cloud-ready ‘cattle’. We share the path we took and explain the decisions we made along the way. We also look forward at what we have planned next.”


Integrated OpenStack with NSX Policy Redirection for NFV: Technical Deep Dive & Demo
Speakers: Vanessa Little, Marcos Hernandez

“Join us for a lecture on how to build efficient service chains with intelligent policy-based routing in VMware Integrated OpenStack with NSX. With version 2.5 of VMware Integrated OpenStack (VIO), VMware introduced the ability to integrate VMware NSX. With that integration and the NSX-V version of VMware NSX you can now take advantage of NSX-integrated security solutions which can layer in L4-L7 advanced security controls. VMware NSX and Fortinet FortiGate-VMX are closely integrated with the direct purpose of introducing L4-L7 advanced security controls for the VMware Software-Defined Data Center (SDDC). Automated deployment/orchestration, dynamic grouping, policy and policy redirection enable granular security controls for your mission-critical applications. Please join Fortinet experts for an in-depth discussion of examples, implementation and use cases on how to utilize this integrated security solution with your VMware NSX and Integrated OpenStack (VIO) environments.”


Production-Ready Clouds with VMware NSX Networking and OpenStack
Speaker: Dimitri Desmidt

“Do you have challenges with Neutron (reliability, performance, operation, flexibility)? Learn how VMware NSX delivers stable, production-ready networking for your OpenStack environment. VMware NSX works with and enhances Neutron from key OpenStack distributions like VMware Integrated OpenStack and more.

VMware NSX improves and enhances Neutron:

  • on the reliability side with stable and high-availability network and security services
  • on the performance side with distributed routing, DPDK support, and distributed control plane
  • on the flexibility side with BGP dynamic routing support
  • on the operation side, with built-in advanced management and troubleshooting tools

Attendees will leave this session with a firm picture of the existing VMware-backed networking solutions for OpenStack, their common features and differences.”

 


OpenStack + VMware : Deploy, Upgrade & Operate Powerful Production OpenStack Cloud in Minutes!
Speaker: Mark Voelker

“VMware has been working rigorously to address some of the most difficult challenges of OpenStack such as installation, upgrade, and patching. We have solved almost all operational challenges of OpenStack. In this session, you will learn how you can build a powerful OpenStack cloud in a matter of minutes and then operate the cloud with the same simplicity. Even the most daunting challenge of OpenStack, upgrade, has been elegantly solved using the blue-green paradigm, so that you can not only upgrade but also cleanly roll back at any point during the upgrade. Come join us for an insightful session on how you can build and operate an OpenStack private cloud in a matter of minutes!”


Sharing our Success and Vision for OpenStack Private Clouds
Speakers: Pete Cruz, Santosh Suderman

“VMware and OpenStack have come a long way together. We started with the mission that “VMware products should be one of the best ways to run an OpenStack private cloud”. We are happy to share our success with that mission. We will share our customer success stories, and highlight community contributions from everyone that made some of the most powerful OpenStack private clouds a reality. We will also share our vision for OpenStack + VMware as we look towards 2017 and beyond. Come join us as we share our strategy and vision, and get excited to build OpenStack clouds leveraging your VMware investments.”

https://www.youtube.com/watch?time_continue=1&v=R3Gn-eZxdLE

 

If you’re ready to deploy OpenStack today, download it now and get started, or try our VMware Integrated OpenStack Hands-on Lab, no installation required.

How To Efficiently Derive Value From VMware Integrated OpenStack

One of our recent posts on this blog covered how VMware Integrated OpenStack (VIO) can be deployed in less than 15 minutes thanks to an easy and friendly wizard-driven deployment.

That post also mentioned recent updates to VIO that focus on ease of use and consumption, including integration with your existing vSphere environment.

 

This article explores that vSphere integration in greater detail, focusing on two features designed to help customers start deriving value from VIO quickly.


vSphere Templates as OpenStack Images

 

An OpenStack cloud without any image is like a physical server without an operating system – not so useful!

 

One of the first elements you want to seed your cloud with is images, so users and developers can start building applications. In a private cloud environment, cloud admins will want to expose a list of standard OS (operating system, not OpenStack…) images to be used to that end; in other words, OS master images.

 

When VIO is deployed on top of an existing vSphere environment, these OS master images are generally already present in the virtualization layer as vSphere templates. A great deal of engineering hours has gone into creating and configuring those images to reflect the specific needs of a given organization in terms of security, compliance or regulatory requirements: OS hardening, customization, agent installation, etc.

 

What if you were able to reuse those vSphere templates and turn them into OpenStack images and hence preserve all of your master OS configurations across all of your cloud deployments?

VIO supports this capability out of the box (see diagram below) and enables users to leverage their existing vSphere templates by adding them to their OpenStack deployment as Glance images, which can then be booted as OpenStack instances or used to create bootable Cinder volumes.

 

The beauty of this feature is that it is done without copying the template into the Glance datastore. The media exists in only one place (the original datastore where the template is stored); VIO creates a “pointer” from the OpenStack image object to the vSphere template, saving us from the tedious and possibly lengthy process of copying media from one location to another (OS images tend to be pretty large in corporate environments).

 

This feature is available through the glance CLI only and here are the high-level steps that need to be performed to create an image:

– First: create an OpenStack image

– Second: note the image ID and specify a location pointing towards the vSphere template

– Third: the new image, for example “corporate-windows-2012-r2”, will then show up in the Images section of the Horizon dashboard, from which instances can be launched.
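As an illustrative sketch of those steps (the image name, vCenter address and paths below are hypothetical, and exact options can vary by VIO release, so verify against the configuration guide):

```shell
# Step 1: create an empty image object (no media is uploaded).
# Name and formats below are illustrative only.
glance image-create --name "corporate-windows-2012-r2" \
  --disk-format vmdk --container-format bare

# Step 2: attach a location that points at the existing vSphere template,
# using the image ID returned by the previous command.
glance location-add <image-id> \
  --url "vi://<vcenter-host>/<datacenter>/vm/<template-name>"
```

Because only a pointer is stored, the second command returns quickly regardless of template size.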

Note: cloud admins will have to make sure those OS images have the cloud-init package installed before they can be fully used in the OpenStack environment. If cloud-init needs to be installed, this can be done either before or after the import into Glance.

Run the video below for a detailed tutorial on the configuration steps, including CLI commands:

Finally, here’s the section in the official configuration guide: http://tinyurl.com/hx4z4jt


Importing vSphere VMs into OpenStack

 

A frequent request from customers deploying VIO on their existing vSphere implementation is “Can I import my existing VMs into my OpenStack environment?”

 

The business rationale for this request is that IT wants to be consistent and offer a similar level of service and user experience to both the new applications deployed through the OpenStack framework and the existing workloads currently running under a vSphere management plane only. They basically want users in charge of existing applications to enjoy capabilities such as self-service, lifecycle management, automation, etc., and hence avoid creating a two-tier IT offering.

 

VIO supports this capability by allowing users to quickly import vSphere VMs into VIO and start managing them as instances through standard OpenStack APIs. This feature is also available through CLI only and leverages the newly released VMware DCLI toolset.

 

Here are the high-level steps for importing an existing VM under OpenStack:

– First, list the “Unmanaged” VMs in vCenter (i.e. VMs not managed by VIO)

– Import one of those VMs into a specific project/tenant in OpenStack

– The system will then generate a UUID for the newly created instance and the instance will show up in Horizon where it can be managed like any other running one.
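A hypothetical DCLI session for those steps might look like the following. The command namespace and options here are paraphrased and may differ by VIO release; confirm the exact syntax with DCLI’s built-in help before use:

```shell
# List vCenter VMs not yet managed by VIO (namespace is illustrative)
dcli com vmware vio vm unmanaged list --cluster <compute-cluster>

# Import one of the listed VMs into a specific OpenStack project;
# the system returns the UUID of the newly created instance
dcli com vmware vio vm unmanaged importvm --vm <vm-id> --tenant <project-name>
```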

 

 

We hope you enjoyed reading this article and that those features will make you want to go ahead and discover VIO!

If you’re ready to deploy OpenStack today, download it now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-on Lab, no installation required.


This article was written by Hassan Hamade, a Cloud Solution Architect at VMware in the EMEA SDDC technology practice team.

16 Partners, One Live Demo – OpenStack Barcelona Interop Challenge

 

During Wednesday morning’s keynote session at the OpenStack Summit in Barcelona, I will be on stage along with several other vendors to show off some of our latest work on interoperability between OpenStack clouds. We will be demonstrating a single workload running successfully on over a dozen different vendors’ OpenStack products without modification, including VMware Integrated OpenStack 3.0. The idea got started back in April at the last OpenStack Summit, when our friends at IBM challenged vendors to publicly demonstrate that their products were interoperable.


VMware has long been a proponent of fostering interoperability between OpenStack clouds.  I currently co-chair the Interop Working Group (formerly known as the DefCore Committee), and VMware Integrated OpenStack 3.0 is an approved OpenStack-Powered product that is compliant with the 2016.08 interoperability guideline, the newest and strictest guideline approved by the OpenStack Foundation Board of Directors.  We also helped produce the Interop Working Group’s first ever report on interoperability issues.  So why do we care about interoperability?  Shouldn’t everything built on OpenStack behave the same anyhow?  Well, to quote the previously mentioned report on interoperability issues:

 

“OpenStack is tremendously flexible, feature-rich, powerful software that can be used to create clouds that fit a wide variety of use cases including software development, web services and e-commerce, network functions virtualization (NFV), video processing, and content delivery to name a few. Commercial offerings built on OpenStack are available as public clouds, installable software distributions, managed private clouds, appliances, and services. OpenStack can be deployed on thousands of combinations of underpinning storage, network, and compute hardware and software. Because of the incredible amount of flexibility OpenStack offers and the constraints of the many use cases it can address, interoperability between OpenStack clouds is not always assured: due to various choices deployers make, different clouds may have some inconsistent behaviors.  One of the goals of the [Interop Working Group]’s work is to create high interoperability standards so that end users of clouds can expect certain behaviors to be consistent between different OpenStack-Powered Clouds and products.”

 

Think of it this way: another amazingly flexible, powerful thing we use daily is electricity.  Electricity is pretty much the same stuff no matter who supplies it to you or what you are using it for, but the way you consume it might be different for different use cases.  The outlet I plug my laptop into at home is a different shape and supplies a different voltage than the one my electric oven is connected to, since the oven needs a lot more juice to bake my cookies than my laptop does to type up a blog post.  My home’s air conditioner does not even have a plug: it is wired directly into the house’s circuit breaker.  I consume most of my electricity as a service provided by my power company, but I can also generate some of my power with solar panels I own myself, as long as their output can be connected to my power grid.  Moreover, to power up my laptop here in Barcelona, I brought along a plug adapter since Europe’s power grid is based on a different set of standards and requirements.  However, even though there are some differences, there are many commonalities: electricity is delivered over metal wiring, terminated at some wall socket, most of the world uses one of a few different voltage ranges, and you pay for it based on consumption.  OpenStack is similar: an OpenStack deployment built for NFV workloads might have some different characteristics and interfaces exposed than one made as a public compute cloud.

 

What makes the Interop Challenge interesting is that it is complementary to the work of the Interop Working Group in that it looks at interoperability in a slightly different light.  To date, the Interop Working Group has mostly focused its efforts on API-level interoperability.  It does so by ensuring that products bearing the OpenStack-Powered mark pass a set of community-maintained Tempest tests to prove that they expose a set of capabilities (things like booting up a VM with the Nova v2 API or getting a list of available images using the Glance v2 API).  Products bearing the OpenStack-Powered logo are also required to use designated sections of upstream code, so consumers know they are getting community-developed code driving those capabilities.  While the Interop Working Group’s guidelines look primarily at the server side of things, the Interop Challenge addresses a slightly different aspect of interoperability: workload portability.  Rather than testing a particular set of APIs, the Interop Challenge took a client-side approach by running a real workload against different clouds—in this case, a LAMP stack application with a load-balanced web server tier and a database backend tier, all deployed via Ansible.  The idea was to take a typical application with commonly-used deployment tools and prove that it “just works” across several different OpenStack clouds.

 

In other words, the guidelines produced by the Interop Working Group assure you that certain capabilities are available to end users (just as I can be reasonably confident that any hotel room I walk into will have a socket in the wall from which I can get electricity).  The Interop Challenge complements that by testing the end-to-end use case: it verifies that I can plug in my laptop and get some work done.

 

Along the way, participants also hoped to begin defining some best practices for making workloads more portable among OpenStack clouds, to account for some of the differences that are a natural side effect of OpenStack’s flexibility.  For example, we found that the LAMP stack workload was more portable if we let users specify certain attributes of the cloud they intended to use, such as the name of the network the instances should be attached to, the image and flavor that should be used to boot up instances, and the block device or network interface names that would be utilized by that image.  Even though we will only be showing one particular workload on stage, that one workload serves as a starting point to help flesh out more best practices in the future.
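One hedged sketch of that pattern uses Ansible’s `os_server` module (the variable and file names below are illustrative, not taken from the actual Interop Challenge playbooks): cloud-specific attributes are lifted into user-supplied variables so the same play runs unmodified on different clouds.

```yaml
# vars/cloud.yml - per-cloud inputs supplied by the user (names illustrative)
image_name: ubuntu-16.04
flavor_name: m1.medium
network_name: private-net
```

```yaml
# Playbook task - identical across clouds once the variables above are set
- name: Boot a web-tier instance
  os_server:
    name: web01
    image: "{{ image_name }}"
    flavor: "{{ flavor_name }}"
    network: "{{ network_name }}"
```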

 

If you want to learn more about VMware’s work on interoperability or about VMware Integrated OpenStack, see us at the keynote or stop by our booth at the OpenStack Summit. And if you’re ready to deploy OpenStack today, download it now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-on Lab, no installation required.


This article was written by Mark Voelker, OpenStack Architect at VMware.

Tired of Waiting? Deploy OpenStack in 15 Minutes or Less

Watch this video to learn how to deploy OpenStack in Compact Management Mode in under 15 minutes


If you’re ready to try VIO, take it for a spin with the Hands-on Lab, which provides a step-by-step walkthrough of how to deploy OpenStack in Compact Management Mode in under fifteen minutes.

Deploying OpenStack challenges even the most seasoned, skilled IT organizations, with integrations, configurations, testing, re-testing, stress testing and more. For many, deploying OpenStack appears as an IT ‘science project’, wherein the light at the end of the tunnel dims with each passing month.

VMware Integrated OpenStack takes a different approach, reducing the redundancies and confusion of deploying OpenStack with the new compact management control plane. With Compact Mode, you wait minutes, not months. Enterprises seeking to evaluate OpenStack, or those ready to build OpenStack clouds in the most cost-efficient manner, can now deploy in as little as 15 minutes.

 

The architecture for VMware Integrated OpenStack is optimized to support compact architecture mode, reducing support needs, overall resource costs, and the operational complexity that keeps enterprises from completing their OpenStack adoption.

The most recent update to VMware Integrated OpenStack focuses on the ease of use and the immense benefit to administrators – access and integration to the VMware ecosystem. The seamless integration of the family of VMware products allows the administrators to leverage their current VMware products to enhance their OpenStack, in combination with the ability to manage workloads through developer friendly OpenStack APIs.



If you’re ready to deploy OpenStack today, download it now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-on Lab, no installation required.


You’ll be surprised what you can accomplish in 15 minutes

Apples To Oranges: Why vSphere & VIO are Best Bets for OpenStack Adoption

OpenStack doesn’t mandate defaults for compute, network and storage, which frees you to select the best technology. For many VMware customers, the best choice will be vSphere to provide OpenStack Nova compute capabilities.

 

It is commonly asserted that KVM is the only hypervisor to use in an OpenStack deployment. Yet every significant commercial OpenStack distro supports vSphere. The reasons for this broad support are clear.

Costs for commercial KVM are comparable to vSphere. In addition, vSphere has tremendous added benefits: widely available and knowledgeable staff, vastly simplified operations, and proven lifecycle management that can keep up with OpenStack’s rapid release cadence.

 

Let’s talk first about cost. Traditional, commercial KVM has a yearly recurring support subscription price. Red Hat OpenStack Platform (Standard, 2 sockets) can be found online at $11,611/year, making the 3-year cost around $34,833[i]. VMware vSphere with Operations Management Enterprise Plus (multiplied by 2 to match Red Hat’s socket-pair pricing) for 3 years, plus the $200/CPU/year VMware Integrated OpenStack SnS, is $14,863[ii]. Even when a customer uses vCloud Suite Advanced, costs are on par with Red Hat. (Red Hat has often compared prices using VMware’s vCloud Suite Enterprise license to exaggerate cost differences.)

 

 

When 451 Research[iii] compared distro costs based on a “basket” of total costs in 2015 they found that commercial distros had a cost that was close to regular virtualization. And if VMware Integrated OpenStack (VIO) is the point of comparison, the costs would likely be even closer. The net-net is that cost turns out not to be a significant differentiator when it comes to commercial KVM compared with vSphere. This brings us to the significant technical and operational benefits vSphere brings to an OpenStack deployment.

 

In the beginning, it was assumed that OpenStack apps would build in the resiliency that used to be provided by a vSphere environment, thus allowing vSphere to be removed. As the OpenStack project has matured, capabilities such as VMware vMotion and DRS (Distributed Resource Scheduler) have risen in importance to end users. Regardless of the application, the stability and reliability of the underlying infrastructure matter.

 

There are two sets of reasons to adopt OpenStack on vSphere.

 

First, you can use VIO to quickly (minutes or hours instead of days or weeks) build a production-grade, operational OpenStack environment with the IT staff you already have, leveraging the battle-tested infrastructure your staff already knows and relies on. No other distro uses a rigorously tested combination of best-in-class compute (vSphere Ent+ for Nova), network (NSX for Neutron), and storage (VSAN for Cinder).

 

Second, only VMware, a long-time (since 2012), active (consistently a top 10 code contributor) OpenStack community member provides BOTH the best underlying infrastructure components AND the ongoing automation and operational tools needed to successfully manage OpenStack in production.

 

In many cases, it all adds up to vSphere being the best choice for production OpenStack.

 


[i] http://www.kernelsoftware.com/products/catalog/red_hat.html
[ii] http://store.vmware.com/store/vmware/en_US/cat/ThemeID.2485600/categoryID.66071400
[iii] https://451research.com/images/Marketing/press_releases/CPI_PR_05.01.15_FINAL.pdf


This article was written by Cameron Sturdevant, Product Line Manager at VMware

Introducing Senlin – a new tool for speedy, load-balanced OpenStack clustering

 Senlin is a new OpenStack project that provides a generic clustering service for OpenStack clouds. It’s capable of managing homogeneous objects exposed by other OpenStack components, including Nova, Heat, or Cinder, making it of interest to anyone using, or thinking of using, VMware Integrated OpenStack.

VMware OpenStack architect Mark Voelker, along with VMware colleague Xinhui Li and Qiming Teng of IBM, offer a helpful introduction to Senlin in their 2016 OpenStack Summit session, now viewable here.

 

Voelker opens by reviewing the generic requirements for OpenStack clustering, which include simple manageability, expandability on demand, load-balancing, customizability to real-life use cases, and extensibility.

 

OpenStack already offers limited cluster management capabilities through Heat’s orchestration service, he notes. But Heat’s mission is to orchestrate composite cloud apps using a declarative template format through an OpenStack-native API. While functions like auto-scaling, high availability, and load balancing are complementary to that mission, having those functions all in a single service isn’t ideal.

“We thought maybe we should think about cluster management as a first class service that everything else could tie into,” Voelker recalls, which is where Senlin comes in.

 

Teng then describes Senlin’s origins: it started as an effort to build clustering within Heat, but soon moved to offloading Heat’s autoscaling capabilities into a separate project that expanded OpenStack’s autoscaling offerings more comprehensively, becoming OpenStack’s first dedicated clustering service.

 

Senlin is designed to be scalable, load-balanced, highly-available, and manageable, Teng explains, before outlining its server architecture and detailing the operations it supports. “Senlin can manage almost any object,” he says. “It can be another server, a Heat stack, a single volume or floating IP protocol, we don’t care. We wanted to just build a foundational service allowing you to manage any type of resource.”
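As a rough sketch of that model (command syntax paraphrased from the python-senlinclient of that era and best verified with `senlin help`; the spec files named here are hypothetical): you first define a profile describing the object to be cloned, then build a cluster of instances of it.

```shell
# Create a profile from a spec file describing, e.g., a Nova server
senlin profile-create -s server_spec.yaml web-profile

# Create a cluster of three nodes based on that profile
senlin cluster-create -p web-profile -c 3 web-cluster
```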

To end the session, Li offers a demo of how Senlin creates a resilient, auto-scaling cluster with both high availability and load balancing in as little as five minutes.

 

If you want to learn more about clustering for OpenStack clouds created with VMware Integrated OpenStack (VIO) you can find expert assistance at our product homepage. Also check out our Hands-on Lab, or try VIO for yourself by downloading and installing VMware Integrated OpenStack direct.

Issues With Interoperability in OpenStack & How DefCore is Addressing Them

Interoperability is built into the founding conception of OpenStack. But as the platform has gained popularity, it’s also become ever more of a challenge.

“There’s a lot of different ways to consume OpenStack and it’s increasingly important that we figure out ways to make things interoperable across all those different methods of consumption,” notes VMware’s Mark Voelker in a presentation at the most recent OpenStack Summit (view the slide set here).

 

Voelker, a VMware OpenStack architect and co-chair of the OpenStack Foundation’s DefCore Committee, shares the stage with OpenStack Foundation interoperability engineer Chris Hoge. Together they offer an overview of the integration challenges OpenStack faces today, and point to the work DefCore is doing to help deliver on the OpenStack vision. For anyone working, or planning to work, with VMware Integrated OpenStack (VIO), the talk is a great backgrounder on what’s being done to ensure that VIO integrates as well with non-VMware OpenStack technologies as it does with VMware’s own.

Hoge begins by outlining DefCore’s origins as a working group founded to fulfill the OpenStack Foundation mandate for a “faithful implementation test suite to ensure compatibility and interoperability for products.” DefCore has since issued five guidelines that products can be certified against, allowing them to carry the OpenStack Powered logo.

After explaining what it takes to meet the DefCore guidelines, Hoge reviews issues that remain unresolved. “The good news about OpenStack is that it’s incredibly flexible. There are any number of ways you can configure your OpenStack Cloud. You have your choice of hypervisors, storage drivers, network drivers – it’s a really powerful platform,” he observes. But that very richness and flexibility also makes it harder to ensure that two instances of OpenStack will work well together, he explains.

 

Among the areas with issues are image operations, networking, policy and configuration discovery, API iteration, provability, and project documentation, reports Voelker. Discoverability and how to map capabilities to APIs are also major concerns, as is a lack of awareness about DefCore’s guidelines. “There’s still some confusion about what kind of things people should be taking into account when they are making technical choices,” Hoge adds.

The OpenStack Foundation is therefore working to raise the profile of interoperability as a requirement, and awareness of the meaning behind the “OpenStack Powered” logo. DefCore itself is interacting closely with developers and vendors in the community to address the integration challenges they’ve identified and enforce a measurable standard on new OpenStack contributions.

 

“Awareness is half the battle,” notes Voelker, before he and Hoge outline the conversations DefCore is currently leading, outcomes they’ve already achieved, and what DefCore is doing next – watch for a report on top interoperability issues soon, more work on testing, and a discussion on new guidelines for NFV-ready clouds.

 

If you are interested in how VMware Integrated OpenStack (VIO) conforms with DefCore standards, you can find more information and experts to contact on our product homepage. You can also check out our Hands-on Lab, or try VIO for yourself by downloading and installing VMware Integrated OpenStack directly.

OpenStack Summit 2016 Re-Cap – Experts from VMware and HedgeServ Outline the Operational Advantages of VMware Integrated OpenStack  

VMware Integrated OpenStack (VIO) offers a simple but powerful path to deploying OpenStack clouds and is a clear win for developers. But what about the operations side?

 

Presenting at the 2016 OpenStack Summit, VMware’s Santhosh Sundararaman, along with Isa Berisha and Thomas McAteer of HedgeServ, the #1 provider of technical management services to the hedge fund industry, makes the case for VIO from an operator’s perspective.

Their session opens with HedgeServ Global Director of Platform Engineering Isa Berisha describing how the company uses OpenStack and why it picked VIO to run HedgeServ’s OpenStack deployment.

Adopting OpenStack was the company’s best option for meeting the rapidly escalating demand it was facing, Berisha explains. But his team was stymied by the limitations of the professional and managed services OpenStack deployments available to them, which were rife with issues of quality, expense, speed of execution, and vendor instability. Eventually they decided to try VIO, despite it being new at that point and little known in the industry.

 

“We downloaded it, deployed it, and something crazy happened: it just worked,” recalls Berisha. “If you’ve tried to do your own OpenStack deployments, you can understand how that feels.”

HedgeServ now uses VIO to run up to 1,000 large Windows instances (averaging 64GB) managed by a team of just two. vSphere and ESXi’s speed, and their ability to handle hosts with up to 10TB of RAM and hundreds of physical cores, have been key elements of HedgeServ’s success so far, explains senior engineer McAteer. But VIO’s storage and management capabilities have also been essential, as have its stability and the fact that it’s backed by VMware support and easily upgraded and updated live.

Simplifying operations around OpenStack and making it easier to maintain OpenStack in a production deployment has been a major focus for VMware’s VIO development team, adds VMware product manager Sundararaman in the second half of the presentation.

“There are a lot of workflows that we’ve added to make it really easy to operate OpenStack,” he says.

 

As an example, Sundararaman demos VIO’s OpenStack upgrade process, showing how it’s achieved entirely via the VIO control panel in a vSphere web client plugin. VIO uses the blue-green upgrade paradigm to stand up a completely new control plane based on the new distribution, he explains, and then migrates the data and configurations from the old to the new control plane, while leaving the deployed workloads untouched.

“We ran and upgraded VIO from 1.0 to 2.0 without talking to Santhosh,” adds McAteer. “That’s a big deal in the OpenStack world but it just, again, worked. The blue-green nature of the upgrade path really makes that very easy.”

 

View the VMware + HedgeServ OpenStack Summit session here.

 

To try VMware Integrated OpenStack yourself, check out our free Hands-on Lab. Or take it a step further and download and install VMware Integrated OpenStack today.

 

 

OpenStack Summit 2016 Re-Cap – Speeding Up Developer Productivity with OpenStack and Open Source Tools

Some developers may avoid utilizing an OpenStack cloud, despite the advantages they stand to gain, because they have already established automation workflows using popular open source tools with public cloud providers.

But in a joint presentation to the 2016 OpenStack Summit, VMware Senior Technical Marketing Manager Trevor Roberts Jr. and VMware NSX Engineering Architect Scott Lowe explain how developers can take advantage of OpenStack clouds using the same open source tools that they already know.

Their talk runs through a configuration sequence, including image management, dev/test, and production deployment, showing how standard open source tools that developers already use for non-OpenStack deployments can run in exactly the same way with an OpenStack cloud. In this case, they discuss using Packer for image building, Vagrant and Docker Machine for software development and testing, and Terraform for production-grade deployments.

“This is a way of using an existing tool that you’re already comfortable and familiar with to start consuming your OpenStack cloud,” says Roberts, before demoing an OpenStack customized image build with Packer and then using that image to create and deploy an OpenStack-provisioned instance with Vagrant.
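A minimal Packer template using its OpenStack builder looks something like the fragment below. The image, flavor, and network values are placeholders for your own cloud, and credentials come from the usual OS_* environment variables:

```json
{
  "builders": [
    {
      "type": "openstack",
      "image_name": "ubuntu-16.04-custom",
      "source_image_name": "ubuntu-16.04-cloud",
      "flavor": "m1.small",
      "ssh_username": "ubuntu",
      "networks": ["<your-network-uuid>"]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["sudo apt-get update", "sudo apt-get install -y nginx"]
    }
  ]
}
```

Running `packer build template.json` then boots a build instance in the cloud, runs the provisioner, and snapshots the result into Glance as a new image.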

Lowe next profiles Docker Machine, which provisions instances of the Docker Engine for testing, and shows how you can use Docker Machine to spin up instances of Docker inside an OpenStack cloud.
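The Docker Machine OpenStack driver reads its settings from OS_* environment variables or from flags; a typical invocation (flavor, image, and network names are placeholders, and this is a sketch rather than a runnable recipe) looks roughly like:

```shell
# Provision a VM in the OpenStack cloud and install Docker Engine on it.
docker-machine create -d openstack \
  --openstack-flavor-name m1.small \
  --openstack-image-name ubuntu-16.04-cloud \
  --openstack-net-name private \
  --openstack-floatingip-pool public \
  --openstack-ssh-user ubuntu \
  dev-docker-01

# Point the local docker CLI at the new remote engine.
eval "$(docker-machine env dev-docker-01)"
docker run hello-world
```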

Lastly, Lowe demos Terraform, which takes an infrastructure-as-code approach to production deployments on multiple platforms (it’s similar to Heat for OpenStack), creating an entire OpenStack infrastructure in a single run, including a new network and router, and launching multiple new instances, each with its own floating IP address, ready to pull down containers as required.
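With Terraform's OpenStack provider, that kind of topology is expressed declaratively. A trimmed example follows; the image, flavor, and CIDR values are placeholders, and credentials again come from OS_* environment variables:

```hcl
# Credentials are read from OS_* environment variables.
provider "openstack" {}

resource "openstack_networking_network_v2" "app" {
  name = "app-net"
}

resource "openstack_networking_subnet_v2" "app" {
  name       = "app-subnet"
  network_id = "${openstack_networking_network_v2.app.id}"
  cidr       = "10.0.0.0/24"
}

resource "openstack_compute_instance_v2" "web" {
  count       = 2
  name        = "web-${count.index}"
  image_name  = "ubuntu-16.04-cloud"
  flavor_name = "m1.small"
  network {
    uuid = "${openstack_networking_network_v2.app.id}"
  }
}
```

A single `terraform apply` creates the network, subnet, and both instances in dependency order; `terraform destroy` tears the whole stack down again.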

“It’s very simple to use these development tools with OpenStack,” adds Roberts. “It’s just a matter of making sure your developers feel comfortable with the environment and letting them know there are plugins out there – you don’t have to keep using other infrastructures.”

As Roberts notes in the presentation, VMware itself is “full in on OpenStack – we contribute to projects that aren’t even in our own distribution just to help out the community.” Meanwhile, VMware’s own OpenStack solution – VMware Integrated OpenStack (VIO) – offers a DefCore-compliant OpenStack distribution specifically configured to use open source drivers to manage VMware infrastructure technologies, further aiding the adoption process for developers already familiar with vSphere, vRealize, and NSX.

For more information on VIO, check out the VMware Integrated OpenStack (VIO) product homepage, or the VIO Hands-on Lab. If you hold a current license for vSphere Enterprise Plus, vSphere Operations Management, or vSphere Standard with NSX Advanced, you can download VIO for free.

OpenStack Summit 2016 Re-Cap – A Guide to Practical OpenStack Network Virtualization using OVN

OVN (pronounced “oven”) is a rapidly growing, open source solution being developed by the Open vSwitch (OVS) community that provides network virtualization for OVS. While OVN isn’t designed to work with VMware Integrated OpenStack, it’s another open source project to which VMware has been devoting time and effort, and it’s definitely worth knowing about.


For a good sense of how OVN is progressing, check out this talk by four OVS community members at the 2016 OpenStack Summit. They explain how OVN works and why it’s worth trying.

VMware OVS developer Ben Pfaff kicks things off with an overview of network virtualization, emphasizing the value of being able to abstract a physical network and of making network provisioning self-service.

Fellow VMware engineer and core OVS and OVN developer Justin Pettit next outlines OVN’s capabilities and stresses its compatibility with the platforms that OVS already works with. When it comes to OpenStack, he reports, “the best integration that we have right now is with OpenStack Neutron but we plan to have it work with other CMSes . . . and you can do everything that you would want through the command line or through database calls that you can do through Neutron.”
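For example, the same sort of logical topology Neutron would create can be built directly against OVN’s northbound database with the `ovn-nbctl` command-line tool (the switch and port names below are arbitrary, and the commands assume a running OVN deployment):

```shell
# Create a logical switch with two ports and bind MAC/IP addresses.
ovn-nbctl ls-add sw0
ovn-nbctl lsp-add sw0 sw0-port1
ovn-nbctl lsp-set-addresses sw0-port1 "00:00:00:00:00:01 10.0.0.11"
ovn-nbctl lsp-add sw0 sw0-port2
ovn-nbctl lsp-set-addresses sw0-port2 "00:00:00:00:00:02 10.0.0.12"

# Inspect the resulting northbound configuration.
ovn-nbctl show
```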

Like OVS, OVN is open source and vendor-neutral, and has quickly gained support from a diverse group of vendors including VMware, IBM, Red Hat, and eBay. The goal is to match OVS production quality and keep OVN’s design simple but scalable to thousands of hypervisors. “We hope it becomes the preferred method for most people who want to use OVS or networking in general,” Pettit says.

If successful, OVN will expand OVS, help improve Neutron’s functionality, and significantly reduce the development burden on Neutron for OVS integration. Add an improved architecture built around ‘logical flows’ and configuration coordinated through databases, and it’s set to outperform existing OVS networking plugins, Pfaff argues.

The same goes for security, adds Ryan Moats of IBM – OVN now uses a connection tracker, letting OVS manage stateful connections itself and significantly speeding up security group throughput. OVN’s L3 design also does all L3 processing in OVS, further improving performance.

The fourth speaker, Han Zhou of eBay, outlines how the group overcame a series of bottlenecks to scale the OVN control plane to 2,000 hypervisors, 20,000 VIF ports and 200 logical switches operating at once.

The team then highlights ongoing scale improvements and profiles the OVN Neutron plugin. “We will run this in our public cloud,” says IBM’s Moats before outlining OVN deployment and what to look for in the upcoming OVN release. Finally, all four speakers invite their audience to contribute to OVN, and try it out for themselves.

VMware Integrated OpenStack is also available for testing in VMware’s Hands-on Lab. Or download it for free with a current license for vSphere Enterprise Plus, vSphere Operations Management, or NSX with vSphere Standard.