
Tag Archives: Developers

16 Partners, One Live Demo – OpenStack Barcelona Interop Challenge

 

During Wednesday morning’s keynote session at the OpenStack Summit in Barcelona, I will be on stage along with several other vendors to show off some of our latest work on interoperability between OpenStack clouds.  We will be demonstrating a single workload running successfully, without modification, on over a dozen different vendors’ OpenStack products, including VMware Integrated OpenStack 3.0.  The idea got started back in April at the last OpenStack Summit, when our friends at IBM publicly challenged vendors to demonstrate that their products were interoperable.


VMware has long been a proponent of fostering interoperability between OpenStack clouds.  I currently co-chair the Interop Working Group (formerly known as the DefCore Committee), and VMware Integrated OpenStack 3.0 is an approved OpenStack-Powered product that is compliant with the 2016.08 interoperability guideline, the newest and strictest guideline approved by the OpenStack Foundation Board of Directors.  We also helped produce the Interop Working Group’s first ever report on interoperability issues.  So why do we care about interoperability?  Shouldn’t everything built on OpenStack behave the same anyhow?  Well, to quote the previously mentioned report on interoperability issues:

 

“OpenStack is tremendously flexible, feature-rich, powerful software that can be used to create clouds that fit a wide variety of use cases including software development, web services and e-commerce, network functions virtualization (NFV), video processing, and content delivery to name a few. Commercial offerings built on OpenStack are available as public clouds, installable software distributions, managed private clouds, appliances, and services. OpenStack can be deployed on thousands of combinations of underpinning storage, network, and compute hardware and software. Because of the incredible amount of flexibility OpenStack offers and the constraints of the many use cases it can address, interoperability between OpenStack clouds is not always assured: due to various choices deployers make, different clouds may have some inconsistent behaviors.  One of the goals of the [Interop Working Group]’s work is to create high interoperability standards so that end users of clouds can expect certain behaviors to be consistent between different OpenStack-Powered Clouds and products.”

 

Think of it this way: another amazingly flexible, powerful thing we use daily is electricity.  Electricity is pretty much the same stuff no matter who supplies it to you or what you are using it for, but the way you consume it might be different for different use cases.  The outlet I plug my laptop into at home is a different shape and supplies a different voltage than the one my electric oven is connected to, since the oven needs a lot more juice to bake my cookies than my laptop does to type up a blog post.  My home’s air conditioner does not even have a plug: it is wired directly into the house’s circuit breaker.  I consume most of my electricity as a service provided by my power company, but I can also generate some of my own power with solar panels, as long as their output can be connected to my home’s grid.  Moreover, to power up my laptop here in Barcelona, I brought along a plug adapter, since Europe’s power grid is built on a different set of standards and requirements.  However, even though there are some differences, there are many commonalities: electricity is delivered over metal wiring, terminated at some wall socket, most of the world uses one of a few different voltage ranges, and you pay for it based on consumption.  OpenStack is similar: an OpenStack deployment built for NFV workloads might have some different characteristics and interfaces exposed than one built as a public compute cloud.

 

What makes the Interop Challenge interesting is that it is complementary to the work of the Interop Working Group in that it looks at interoperability in a slightly different light.  To date, the Interop Working Group has mostly focused its efforts on API-level interoperability.  It does so by ensuring that products bearing the OpenStack-Powered mark pass a set of community-maintained Tempest tests to prove that they expose a set of capabilities (things like booting up a VM with the Nova v2 API or getting a list of available images using the Glance v2 API).  Products bearing the OpenStack-Powered logo are also required to use designated sections of upstream code, so consumers know they are getting community-developed code driving those capabilities.  While the Interop Working Group’s guidelines look primarily at the server side of things, the Interop Challenge addresses a slightly different aspect of interoperability: workload portability.  Rather than testing a particular set of APIs, the Interop Challenge took a client-side approach by running a real workload against different clouds—in this case, a LAMP stack application with a load-balanced web server tier and a database backend tier, all deployed via Ansible.  The idea was to take a typical application with commonly-used deployment tools and prove that it “just works” across several different OpenStack clouds.

 

In other words, the guidelines produced by the Interop Working Group assure you that certain capabilities are available to end users (just as I can be reasonably confident that any hotel room I walk into will have a socket in the wall from which I can get electricity).  The Interop Challenge complements that by testing a more realistic use case: it verifies that I can plug in my laptop and get some work done.

 

Along the way, participants also hoped to begin defining some best practices for making workloads more portable among OpenStack clouds, to account for some of the differences that are a natural side effect of OpenStack’s flexibility.  For example, we found that the LAMP stack workload was more portable if we let users specify certain attributes of the cloud they intended to use – such as the name of the network the instances should be attached to, the image and flavor that should be used to boot up instances, and the block device or network interface names that would be utilized by that image.  Even though we will only be showing one particular workload on stage, that one workload serves as a starting point to help flesh out more best practices in the future.
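In rough terms, that parameterization pattern looks like this (a minimal Python sketch; the cloud profiles, image names, and flavor names below are hypothetical, and a real implementation would hand these values to Ansible or an OpenStack SDK rather than just building a dict):

```python
# Cloud-specific attributes are supplied by the user rather than hard-coded,
# so the same workload definition can run unmodified on different clouds.
def build_boot_request(workload, cloud_profile):
    """Merge a cloud-agnostic workload spec with per-cloud settings."""
    return {
        "name": workload["name"],
        "image": cloud_profile["image"],      # image names differ per cloud
        "flavor": cloud_profile["flavor"],    # flavor names differ per cloud
        "network": cloud_profile["network"],  # network to attach instances to
        "count": workload["instance_count"],
    }

# A cloud-agnostic description of the LAMP-style workload...
lamp = {"name": "lamp-web", "instance_count": 3}

# ...and two hypothetical per-cloud profiles supplied by the user.
cloud_a = {"image": "ubuntu-16.04", "flavor": "m1.medium", "network": "private-net"}
cloud_b = {"image": "ubuntu-xenial", "flavor": "standard.2", "network": "tenant-vlan-10"}

requests = [build_boot_request(lamp, c) for c in (cloud_a, cloud_b)]
```

The workload definition never changes; only the small per-cloud profile does, which is what makes the same Ansible playbooks portable across vendors.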

 

If you want to learn more about VMware’s work on interoperability or about VMware Integrated OpenStack, see us at the keynote or stop by our booth at the OpenStack Summit. And if you’re ready to deploy OpenStack today, download it now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-On-Lab, no installation required.


*This article was written by Mark Voelker – OpenStack Architect at VMware

Tired of Waiting? Deploy OpenStack in 15 Minutes or Less

Watch this video to learn how to deploy OpenStack in Compact Management Mode in under 15 minutes


If you’re ready to try VIO, take it for a spin with the Hands-on Lab, which provides a step-by-step walkthrough of how to deploy OpenStack in Compact Management Mode in under fifteen minutes.

Deploying OpenStack challenges even the most seasoned, skilled IT organizations, with integrations, configurations, testing, re-testing, stress testing, and more. For many, deploying OpenStack looks like an IT ‘science project’, in which the light at the end of the tunnel dims with each passing month.

VMware Integrated OpenStack takes a different approach, reducing the redundancy and confusion of deploying OpenStack with the new Compact Management Control Plane. With the Compact Mode UI, you wait minutes, not months. Enterprises seeking to evaluate OpenStack, or those ready to build OpenStack clouds in the most cost-efficient manner, can now deploy in as little as 15 minutes.

 

The architecture for VMware Integrated OpenStack is optimized to support compact architecture mode, reducing support needs, overall resource costs, and the operational complexity that keeps enterprises from completing their OpenStack adoption.

The most recent update to VMware Integrated OpenStack focuses on ease of use and a major benefit to administrators: access to and integration with the VMware ecosystem. Seamless integration with the family of VMware products allows administrators to leverage the VMware products they already own to enhance their OpenStack deployment, combined with the ability to manage workloads through developer-friendly OpenStack APIs.



If you’re ready to deploy OpenStack today, download it now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-On-Lab, no installation required.


You’ll be surprised what you can accomplish in 15 minutes

Apples To Oranges: Why vSphere & VIO are Best Bets for OpenStack Adoption

OpenStack doesn’t mandate defaults for compute, network and storage, which frees you to select the best technology. For many VMware customers, the best choice will be vSphere to provide OpenStack Nova compute capabilities.

 

It is commonly asserted that KVM is the only hypervisor to use in an OpenStack deployment. Yet every significant commercial OpenStack distro supports vSphere. The reasons for this broad support are clear.

Costs for commercial KVM are comparable to vSphere. In addition, vSphere has tremendous added benefits: widely available and knowledgeable staff, vastly simplified operations, and proven lifecycle management that can keep up with OpenStack’s rapid release cadence.

 

Let’s talk first about cost. Traditional, commercial KVM has a yearly recurring support subscription price. Red Hat OpenStack Platform (Standard, 2 sockets) can be found online at $11,611/year, making the three-year cost around $34,833[i]. Three years of VMware vSphere with Operations Management Enterprise Plus (doubled to match Red Hat’s socket-pair pricing), plus the $200/CPU/year VMware Integrated OpenStack SnS, comes to $14,863[ii]. Even when a customer uses vCloud Suite Advanced, costs are on par with Red Hat. (Red Hat has often compared prices using VMware’s vCloud Suite Enterprise license to exaggerate cost differences.)
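The arithmetic behind those figures is easy to check (a quick Python sketch using the list prices cited in the footnotes; the SnS line assumes two CPUs to match the socket-pair comparison):

```python
# Three-year totals from the list prices cited above.
years = 3

# Red Hat OpenStack Platform (Standard, 2 sockets): yearly subscription.
rhosp_per_year = 11_611
rhosp_total = rhosp_per_year * years      # 3-year Red Hat total

# VIO SnS at $200/CPU/year, assuming 2 CPUs to mirror socket-pair pricing.
vio_sns = 200 * 2 * years

# The cited $14,863 VMware total = vSOM Enterprise Plus (x2, 3 yr) + VIO SnS.
vmware_total = 14_863
vsom_component = vmware_total - vio_sns   # back out the vSOM portion

savings = rhosp_total - vmware_total      # difference over three years
```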

 

 

When 451 Research[iii] compared distro costs based on a “basket” of total costs in 2015, they found that commercial distros had a cost close to that of regular virtualization. And if VMware Integrated OpenStack (VIO) is the point of comparison, the costs would likely be even closer. The net-net is that cost turns out not to be a significant differentiator between commercial KVM and vSphere. This brings us to the significant technical and operational benefits vSphere brings to an OpenStack deployment.

 

In the beginning, it was assumed that OpenStack apps would build in the resiliency that used to be assumed from a vSphere environment, thus allowing vSphere to be removed. As the OpenStack project has matured, capabilities such as VMware vMotion and DRS (Distributed Resource Scheduler) have risen in importance to end users. Regardless of the application, the stability and reliability of the underlying infrastructure matter.

 

There are two sets of reasons to adopt OpenStack on vSphere.

 

First, you can use VIO to quickly (minutes or hours instead of days or weeks) build a production-grade, operational OpenStack environment with the IT staff you already have, leveraging the battle-tested infrastructure your staff already knows and relies on. No other distro uses a rigorously tested combination of best-in-class compute (vSphere Ent+ for Nova), network (NSX for Neutron), and storage (VSAN for Cinder).

 

Second, only VMware, a long-time (since 2012) and active (consistently a top-10 code contributor) OpenStack community member, provides BOTH the best underlying infrastructure components AND the ongoing automation and operational tools needed to successfully manage OpenStack in production.

 

In many cases, it all adds up to vSphere being the best choice for production OpenStack.

 


[i] http://www.kernelsoftware.com/products/catalog/red_hat.html
[ii] http://store.vmware.com/store/vmware/en_US/cat/ThemeID.2485600/categoryID.66071400
[iii] https://451research.com/images/Marketing/press_releases/CPI_PR_05.01.15_FINAL.pdf


This article was written by Cameron Sturdevant, Product Line Manager at VMware

OpenStack Summit 2016 Re-Cap – Experts from VMware and HedgeServ Outline the Operational Advantages of VMware Integrated OpenStack  

VMware Integrated OpenStack (VIO) offers a simple but powerful path to deploying OpenStack clouds and is a clear win for developers. But what about the operations side?

 

Presenting at the 2016 OpenStack Summit, VMware’s Santhosh Sundararaman and Isa Berisha and Thomas McAteer of HedgeServ, the #1 provider of technical management services to the hedge fund industry, make the case for VIO from an operator’s perspective.

Their session opens with HedgeServ Global Director of Platform Engineering Isa Berisha describing how the company uses OpenStack and why it picked VIO to run HedgeServ’s OpenStack deployment.

Adopting OpenStack was the company’s best option for meeting the rapidly escalating demand it was facing, Berisha explains. But his team was stymied by the limitations of the professional and managed-services OpenStack deployments available to them, which were rife with issues of quality, expense, speed of execution, and vendor instability. Eventually they decided to try VIO, despite it being new at that point and little known in the industry.

 

“We downloaded it, deployed it, and something crazy happened: it just worked,” recalls Berisha. “If you’ve tried to do your own OpenStack deployments, you can understand how that feels.”

HedgeServ now uses VIO to run up to 1,000 large Windows instances (averaging 64GB) managed by a team of just two. vSphere and ESXi’s speed, and their ability to handle hosts with up to 10TB of RAM and hundreds of physical cores, have been key elements of HedgeServ’s success so far, explains senior engineer McAteer. But VIO’s storage and management capabilities have also been essential, as have its stability and the fact that it’s backed by VMware support and easily upgraded and updated live.

Simplifying operations around OpenStack and making it easier to maintain OpenStack in a production deployment has been a major focus for VMware’s VIO development team, adds VMware product manager Sundararaman in the second half of the presentation.

“There are a lot of workflows that we’ve added to make it really easy to operate OpenStack,” he says.

 

As an example, Sundararaman demos VIO’s OpenStack upgrade process, showing how it’s achieved entirely via the VIO control panel in a vSphere web client plugin. VIO uses the blue-green upgrade paradigm to stand up a completely new control plane based on the new distribution, he explains, and then migrates the data and configurations from the old to the new control plane, while leaving the deployed workloads untouched.
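The blue-green pattern described above can be sketched abstractly (a simplified Python model; the class and method names are hypothetical stand-ins for VIO's actual upgrade workflow):

```python
class ControlPlane:
    """Toy model of an OpenStack control plane; workloads live outside it."""
    def __init__(self, version):
        self.version = version
        self.data = {}

def blue_green_upgrade(blue, new_version):
    """Stand up a new ('green') control plane, copy state over, then switch."""
    green = ControlPlane(new_version)   # 1. deploy a fresh control plane
    green.data = dict(blue.data)        # 2. migrate data and configuration
    return green                        # 3. cut over; 'blue' remains for rollback

blue = ControlPlane("1.0")
blue.data = {"tenants": ["hedgeserv"], "workloads_untouched": True}
green = blue_green_upgrade(blue, "2.0")
```

The key property is that the old control plane is never modified in place: deployed workloads keep running throughout, and rollback is just a matter of pointing back at the untouched "blue" side.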

“We ran and upgraded VIO from 1.0 to 2.0 without talking to Santhosh,” adds McAteer. “That’s a big deal in the OpenStack world but it just, again, worked. The blue-green nature of the upgrade path really makes that very easy.”

 

View the VMware + HedgeServ OpenStack Summit session here.

 

To try VMware Integrated OpenStack yourself, check out our free Hands-on Lab. Or take it a step further and download and install VMware Integrated OpenStack today.

 

 

OpenStack Summit 2016 Re-Cap – Speeding Up Developer Productivity with OpenStack and Open Source Tools

Some developers may avoid utilizing an OpenStack cloud, despite the advantages they stand to gain, because they have already established automation workflows using popular open source tools with public cloud providers.

But in a joint presentation at the 2016 OpenStack Summit, VMware Senior Technical Marketing Manager Trevor Roberts Jr. and VMware NSX Engineering Architect Scott Lowe explain how developers can take advantage of OpenStack clouds using the same open source tools that they already know.

Their talk runs through a configuration sequence, including image management, dev/test, and production deployment, showing how standard open source tools that developers already use for non-OpenStack deployments can run in exactly the same way with an OpenStack cloud. In this case, they discuss using Packer for image building, Vagrant and Docker Machine for software development and testing, and Terraform for production-grade deployments.

“This is a way of using an existing tool that you’re already comfortable and familiar with to start consuming your OpenStack cloud,” says Roberts, before demoing an OpenStack customized image build with Packer and then using that image to create and deploy an OpenStack-provisioned instance with Vagrant.

 

Lowe next profiles Docker Machine, which provisions instances of the Docker Engine for testing, and shows how you can use Docker Machine to spin up instances of Docker inside an OpenStack cloud.

Lastly, Lowe demos Terraform, which takes an infrastructure-as-code approach to production deployments on multiple platforms (it’s similar to Heat for OpenStack), creating an entire OpenStack infrastructure with a single command, including a new network and router, and launching multiple new instances with floating IP addresses for each, ready for pulling down containers as required.

 

“It’s very simple to use these development tools with OpenStack,” adds Roberts. “It’s just a matter of making sure your developers feel comfortable with the environment and letting them know there are plugins out there – you don’t have to keep using other infrastructures.”

As Roberts notes in the presentation, VMware itself is “full in on OpenStack – we contribute to projects that aren’t even in our own distribution just to help out the community.” Meanwhile, VMware’s own OpenStack solution – VMware Integrated OpenStack (VIO) – offers a DefCore-compliant OpenStack distribution specifically configured to use open source drivers to manage VMware infrastructure technologies, further aiding the adoption process for developers already familiar with vSphere, vRealize, and NSX.

 

For more information on VIO, check out the VMware Integrated OpenStack (VIO) product homepage, or the VIO Hands-on Lab. If you hold a current license for vSphere Enterprise Plus, vSphere Operations Management, or vSphere Standard with NSX Advanced, you can download VIO for free.

VMware and OpenStack – Dawning Of A New Day

Yesterday was an exciting day for Team OpenStack at VMware. Our CEO Pat Gelsinger used the VMworld opening keynote to make two announcements that surely caught the attention of developers, IT, and the technology community at large. First, he announced VMware Integrated OpenStack, our new distribution of the open source OpenStack code. Then, Pat announced a new collaboration with Google, Docker, and Pivotal highlighting our commitment to making sure container-based solutions run great on VMware infrastructure.

OpenStack distributions? Open source software partnerships? This doesn’t seem to fit the oversimplified story you often hear in the media about “open source vs. VMware.”

In reality, the story is more nuanced, and in fact, over the past few years, VMware has developed a track record of embracing open software frameworks where they provide clear value to our customers. To understand the bigger picture, we first need to talk about the key role open frameworks are playing in the new era of software development.

Enabling Developer Agility Through Open Frameworks

Developers of next-generation applications have embraced a fully automated model of accessing data center infrastructure via APIs. When building these new apps, they use APIs to provision their apps, APIs to scale those apps up and down, and APIs to release the resources when they are done. Ultimately, this is about enabling agility: allowing them to build and modify their applications more quickly, thereby moving their business forward faster.
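That provision-scale-release lifecycle maps onto a handful of API calls (a Python sketch against a stubbed client; real code would use an OpenStack SDK, and the class and method names here are illustrative only):

```python
class StubCloudClient:
    """Stand-in for an IaaS API client; tracks instances in memory."""
    def __init__(self):
        self.instances = 0

    def provision(self, count):   # deploy the app's instances
        self.instances += count

    def scale(self, delta):       # scale up or down on demand
        self.instances = max(0, self.instances + delta)

    def release(self):            # hand resources back when done
        self.instances = 0

api = StubCloudClient()
api.provision(2)      # build the app
api.scale(+3)         # scale it up under load
api.scale(-1)         # scale back down
peak = api.instances  # instances currently running
api.release()         # release resources when the work is done
```

The point is that every step of the application's life, from first deployment to teardown, is driven through the same programmatic interface rather than through tickets or manual provisioning.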

To simplify their lives, developers of modern apps don’t want to deal with the data center infrastructure directly. Rather, they want to leverage a framework that layers on top of that infrastructure, and gives them a more abstract model against which they build their application. These frameworks are often open standards or based on open source, because open frameworks have an easier time establishing mindshare and creating an ecosystem of associated tools, libraries, etc. and because open frameworks carry the potential to offer significantly improved workload portability across varied types of infrastructure.

Open frameworks come in all shapes and sizes, ranging from Java frameworks (e.g., Spring) to data analytics (e.g., Hadoop), Infrastructure-as-a-Service (e.g., OpenStack), Platform-as-a-Service (e.g., Cloud Foundry), and containers (e.g., Docker). VMware has helped lead the creation of several of these frameworks, while others are examples of where we recognized good work started elsewhere and moved to add support within our solutions. Either way, if our customers see strong potential in a framework, we have taken steps to enable them through products such as the vFabric Suite (Spring), vSphere Big Data Extensions (Hadoop), Pivotal CF (Cloud Foundry), and now VMware Integrated OpenStack.

VMware’s OpenStack Involvement

Part of enabling any open framework for customers is having skin in the game. VMware is investing its own development resources to help build the OpenStack framework. VMware has been a gold member of the OpenStack Foundation since 2012, and we are one of the largest companies contributing to OpenStack, adding code to integrate our technologies such as VMware vSphere and VMware NSX™ and to enhance the project as a whole. In fact, in the latest release of OpenStack (Icehouse), VMware was the #4 contributor to the official set of “integrated” OpenStack projects, the core projects like Nova, Neutron, Cinder, Glance, Keystone, Horizon, and Swift that most people recognize. While such numbers only tell part of the story, they are an indicator that VMware is investing considerably in OpenStack integration, and that data is available for all to see here.

VMware is committed to making sure that the best way to run OpenStack is on VMware. Whether a customer wants to consume VMware technologies as components along with the open source code or a partner OpenStack distribution, or chooses to use VMware Integrated OpenStack, they will achieve a new level of developer agility by offering their developers powerful, vendor-neutral OpenStack APIs on top of VMware’s enterprise-class infrastructure.

Dan Wendlandt
Director of OpenStack Product Management

Our New Blog: VMware, OpenStack and the Software-Defined Data Center

By: Amr Abdelrazik

Developers of next-generation applications have embraced a fully automated model of accessing cloud infrastructure via APIs. They use APIs to provision their apps, APIs to scale those apps up and down, and APIs to release the resources when they are done.


OpenStack is an open source framework that layers on top of virtual and physical infrastructure to provide an open, vendor-neutral set of infrastructure-as-a-service (IaaS) APIs, along with a related set of tools and services that facilitate this developer-centric access. OpenStack allows IT to deliver this public-cloud-like API experience to their developers on a private cloud, without necessarily giving up control of their infrastructure to the public cloud.

Nobody can deny that OpenStack has a tidal wave of marketing momentum behind it. And for the past several years, VMware has been continuously contributing to the community, all while listening to and engaging with customers about how and why OpenStack is something they felt they needed. The answers are as varied as the technologies from which you can choose to build an OpenStack cloud.

But with all its benefits, if an OpenStack cloud is not built on top of reliable, secure, and high-performance virtual infrastructure, running it as a production-grade cloud can be quite challenging. This often requires building a large team of developers and administrators with deep experience in OpenStack, Linux, and Python programming, or paying hefty sums to bring in outside consultants with that expertise. Likewise, in the absence of the right management tools, operating an OpenStack cloud can be highly labor-intensive and require an investment in custom-built tools. Both factors have limited the ability of enterprises to adopt OpenStack.

We believe VMware’s software-defined data center technologies can help accelerate OpenStack in the enterprise. VMware is committed to making it easy for IT to deploy OpenStack on VMware’s enterprise-grade compute, network, and storage technologies, and enabling customers to leverage our management tools to deliver key capabilities in areas such as troubleshooting, log management, capacity planning and billback/chargeback. The end result will be OpenStack infrastructures that give developers the tools they want, and give IT the reliable and easily managed data center infrastructure they need.

So now that you know what we’ve done so far, where are we going from here? That’s what this blog will be about. We will, of course, use this blog to communicate information about future products and plans, but we are also looking to engage with our customers and ecosystem about the developments, trends and adoption of OpenStack as a whole. We will offer you a forward-thinking vision – from inside and outside of VMware – coupled with tangible and actionable information about how to build enterprise-grade OpenStack with the help of VMware’s technologies.

To get the ball rolling, if you happen to be one of the 23,000 folks VMware is lucky enough to be hosting at VMworld 2014 next week in San Francisco, here are a couple of OpenStack sessions that you don’t want to miss:

VMworld 2014

Spotlight Session:

SDDC1580-S: What You Need to Know About OpenStack + VMware (Monday, Aug. 25, 11:30 a.m.)

Break-Out Sessions with Deep-Dive OpenStack Content:

STO1491: From Clouds to Bits: Exploring the Software Defined Storage Lifecycle (Monday, Aug. 25, 2:30 p.m.)

SDDC2370: Why OpenStack Runs Best with the vCloud Suite (Tuesday, Aug. 26, 2:30 p.m.)

NET1592: Under the Hood: Network Virtualization with OpenStack Neutron and VMware NSX (Wednesday, Aug. 27, 9:30 a.m.)

SDDC2198: VMware OpenStack End-to-End Demo (Wednesday, Aug. 27, 2:00 p.m.)

Hands-on Lab (also available online 24/7):

SPL-SDC-1420: OpenStack with VMware vSphere and NSX (all day in the Hands-on-Labs area)

We will also have an OpenStack booth within the VMware section on the expo floor, so please do drop by and say hello!