
Introducing Senlin – a new tool for speedy, load-balanced OpenStack clustering

Senlin is a new OpenStack project that provides a generic clustering service for OpenStack clouds. It’s capable of managing homogeneous objects exposed by other OpenStack components such as Nova, Heat, and Cinder, making it of interest to anyone using, or thinking of using, VMware Integrated OpenStack.

VMware OpenStack architect Mark Voelker, along with VMware colleague Xinhui Li and Qiming Teng of IBM, offer a helpful introduction to Senlin in their 2016 OpenStack Summit session, now viewable here.

 

Voelker opens by reviewing the generic requirements for OpenStack clustering, which include simple manageability, expandability on demand, load-balancing, customizability to real-life use cases, and extensibility.

 

OpenStack already offers limited cluster management capabilities through Heat’s orchestration service, he notes. But Heat’s mission is to orchestrate composite cloud apps using a declarative template format through an OpenStack-native API. While functions like auto-scaling, high availability, and load balancing are complementary to that mission, having them all in a single service isn’t ideal.
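For context, a Heat stack is built from a declarative HOT template. The minimal sketch below is an illustrative assumption rather than anything from the talk: it uses the openstacksdk orchestration proxy with a hypothetical template, stack name, image, and flavor to show how such a template can be launched programmatically.

```python
# Hypothetical sketch: launching a tiny Heat stack with openstacksdk.
# The clouds.yaml entry name, template body, image, and flavor are
# illustrative placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

template = {
    "heat_template_version": "2016-04-08",
    "resources": {
        "web_server": {
            "type": "OS::Nova::Server",
            "properties": {"image": "ubuntu-16.04", "flavor": "m1.small"},
        }
    },
}

stack = conn.orchestration.create_stack(name="demo-stack", template=template)
conn.orchestration.wait_for_status(stack, status="CREATE_COMPLETE")
```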

“We thought maybe we should think about cluster management as a first class service that everything else could tie into,” Voelker recalls, which is where Senlin comes in.

 

Teng then describes Senlin’s origins: it started as an effort to build clustering within Heat, but the team soon decided to offload Heat’s autoscaling capabilities into a separate project that could address OpenStack autoscaling more comprehensively. Senlin thus became OpenStack’s first dedicated clustering service.

 

Senlin is designed to be scalable, load-balanced, highly-available, and manageable, Teng explains, before outlining its server architecture and detailing the operations it supports. “Senlin can manage almost any object,” he says. “It can be another server, a Heat stack, a single volume or floating IP protocol, we don’t care. We wanted to just build a foundational service allowing you to manage any type of resource.”

To end the session, Li offers a demo of how Senlin creates a resilient, auto-scaling cluster with both high availability and load balancing in as little as five minutes.
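For readers who want a feel for the workflow Li demonstrates, here is a minimal sketch using openstacksdk’s clustering proxy. The profile spec, names, and sizes are hypothetical placeholders rather than the demo’s actual configuration, and the scaling call at the end is an assumption about the SDK’s cluster-action helpers.

```python
# Illustrative Senlin workflow: a profile describing the managed object,
# a cluster built from it, and a manual scale-out action.
import openstack

conn = openstack.connect(cloud="mycloud")

# The profile tells Senlin what kind of object to manage -- here a Nova server.
profile = conn.clustering.create_profile(
    name="web-profile",
    spec={
        "type": "os.nova.server",
        "version": "1.0",
        "properties": {
            "name": "web-node",
            "image": "ubuntu-16.04",
            "flavor": "m1.small",
            "networks": [{"network": "private"}],
        },
    },
)

# A cluster of homogeneous nodes created from that profile.
cluster = conn.clustering.create_cluster(
    name="web-cluster",
    profile_id=profile.id,
    desired_capacity=2,
    min_size=1,
    max_size=5,
)
conn.clustering.wait_for_status(cluster, status="ACTIVE")

# Grow the cluster by one node; auto-scaling and load-balancing behavior
# comes from Senlin policies attached to the cluster.
conn.clustering.scale_out_cluster(cluster, count=1)
```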

 

If you want to learn more about clustering for OpenStack clouds created with VMware Integrated OpenStack (VIO), you can find expert assistance at our product homepage. Also check out our Hands-on Lab, or try VIO for yourself by downloading and installing VMware Integrated OpenStack directly.

Next Generation Security Services in OpenStack

OpenStack is quickly and steadily positioning itself as a great Infrastructure-as-a-Service solution for the Enterprise. Originally conceived for the proverbial DevOps Cloud use case (and as a private alternative to AWS), the OpenStack framework has evolved to add rich Compute, Network and Storage services that fit several enterprise use cases. This evolution is evident in the following developments:

1) More commercial distributions are available today, in addition to Managed Services and/or DIY OpenStack.
2) Diverse and expanded application and OS support vs. just Cloud-Native apps (a.k.a “pets vs. cattle”).
3) Advanced network connectivity options (routable Neutron topologies, dynamic routing support, etc.).
4) More storage options from traditional Enterprise storage vendors.

This is definitely great news, but one area where OpenStack has lagged behind is security. As of today, the only robust option for application security offered in OpenStack is Neutron Security Groups. The basic idea is that OpenStack Tenants are in control of their own firewall rules, which are then applied and enforced in the dataplane by technologies like Linux iptables, OVS conntrack or, as is the case with NSX for vSphere, a stateful and scalable Distributed Firewall with vNIC-level resolution operating on each and every ESXi hypervisor.

Neutron Security Groups were designed for intra- and inter-tier L3/L4 protection within the same application environment (the so-called “East-West” traffic).
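As a point of reference, here is a minimal, hypothetical sketch of that tenant-managed model using openstacksdk (the group name, rule, and CIDR are illustrative): the tenant defines the group and its L3/L4 rules, and the chosen backend enforces them in the dataplane.

```python
# Illustrative Neutron Security Group: one ingress rule allowing HTTPS from
# a peer tier's subnet. Names and addresses are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

sg = conn.network.create_security_group(
    name="app-web",
    description="Allow HTTPS between the web and app tiers",
)

conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction="ingress",
    ethertype="IPv4",
    protocol="tcp",
    port_range_min=443,
    port_range_max=443,
    remote_ip_prefix="10.0.1.0/24",  # example peer-tier subnet
)
```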

In addition to Neutron Security Groups, projects like Firewall-as-a-Service (FWaaS) are also trying to bring next-generation security services to these OpenStack Clouds, and there is an interesting roadmap taking shape on the horizon. The future looks great, but while OpenStack gets there, what are the implementation alternatives available today? How can Cloud Architects combine the benefits of the OpenStack framework and its appealing API consumption model with security services that provide more insight and visibility into the application traffic? In other words, how can OpenStack Cloud admins offer next-generation security right now, beyond the basic IP/TCP/UDP inspection offered in Neutron?

The answer is: With VMware NSX.

NSX natively supports and embeds an in-kernel redirection technology called Network Extensibility, or NetX. Third party ecosystem vendors write solutions against this extensibility model, following a rigorous validation process, to deliver elegant and seamless integrations. Once the solution is implemented, the notion is simply beautiful: leverage the NSX policy language, the same language that made NSX into the de facto solution for micro-segmentation, to “punt” interesting traffic toward the partner solution in question. This makes it possible to have protocol-level visibility for East-West traffic. This approach also allows you to create a firewall rule-set that looks like your business and not like your network. Application attributes such as VM name, OS type or any arbitrary vCenter object can be used to define said policies, irrespective of location, IP address or network topology. Once the partner solution receives the traffic, then the security admins can apply deep traffic inspection, visibility and monitoring techniques to it.


How does all of the above relate to OpenStack, you may be wondering? Well, the process is extremely simple:

1) First, integrate OpenStack and NSX using the various upstreamed Neutron plugins or, better yet, get out-of-the-box integration by deploying VMware’s OpenStack distro, VMware Integrated OpenStack (VIO), which is free for existing VMware customers.
2) Next, integrate NSX and the Partner Solution in question following documented configuration best practices. The list of active ecosystem partners can be found here.
3) Proceed to create an NSX Security Policy that classifies the application traffic using the policy language mentioned above. Service Composer provides a wizard-based provisioning process for selecting which VMs will be subject to deep inspection.
4) Use the Security Partner management console to create protocol-level security policies, such as application level firewalling, web reputation filtering, malware protection, antivirus protection and many more.
5) Launch Nova instances from OpenStack without a Neutron Security Group attached to them (see the sketch after this list). This step is critical. Remember that we are delegating security management to the Security Admin, not the Tenant. Neutron Security Groups do not apply in this context.
6) Test and verify that your security policy is applied as designed.
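For step 5, the sketch below shows what a tenant-side instance launch might look like with openstacksdk; the image, flavor, and network names are hypothetical, and the point is simply that the tenant does not attach its own Neutron Security Groups in this model.

```python
# Illustrative instance launch for the NSX-delegated security model.
# Image, flavor, and network names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

image = conn.compute.find_image("ubuntu-16.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("web-tier")

server = conn.compute.create_server(
    name="web-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    # Per step 5, the tenant does not attach Neutron Security Groups here;
    # classification and deep inspection are handled by the NSX security
    # policy and the partner solution.
)
conn.compute.wait_for_server(server)
```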


This all assumes that the Tenant has relinquished control of the firewall to the security admin and that all security operations are controlled by the firewall team, which is a very common Enterprise model.

There are some Neutron enhancements in the works, such as Flow Classifier and Service Chaining, that are looking to “split” the security consumption between admins and tenants by promoting these redirection policies to the Neutron API layer, thus allowing a Tenant (or a Security admin) to selectively redirect traffic without bypassing Neutron itself. This implementation, however, is very basic when compared to what NSX can do natively. We are actively monitoring this work and studying opportunities for future integration. In the meantime, the approach outlined above can be used to get the best of both worlds: the APIs you want (OpenStack) with the infrastructure you trust (vSphere and NSX).

In the next blog post we will show an actual working integration example with one of our Security Technology Partners, Fortinet, using VIO and NSX NetX technology.

Author: Marcos Hernandez
Principal Engineer, CCIE#8283, VCIX, VCP-NV
hernandezm@vmware.com
@netvirt

Issues With Interoperability in OpenStack & How DefCore is Addressing Them

Interoperability is built into the founding conception of OpenStack. But as the platform has gained popularity, it’s also become ever more of a challenge.

“There’s a lot of different ways to consume OpenStack and it’s increasingly important that we figure out ways to make things interoperable across all those different methods of consumption,” notes VMware’s Mark Voelker in a presentation to the most recent OpenStack Summit (view the slide set here).

 

Voelker, a VMware OpenStack architect and co-chair of the OpenStack Foundation’s DefCore Committee, shares the stage with OpenStack Foundation interoperability engineer Chris Hoge. Together they offer an overview of the integration challenges OpenStack faces today, and point to the work DefCore is doing to help deliver on the OpenStack vision. For anyone working, or planning to work, with VMware Integrated OpenStack (VIO), the talk is a great backgrounder on what’s being done to ensure that VIO integrates as well with non-VMware OpenStack technologies as it does with VMware’s own.

Hoge begins by outlining DefCore’s origins as a working group founded to fulfill the OpenStack Foundation mandate for a “faithful implementation test suite to ensure compatibility and interoperability for products.” DefCore has since issued five guidelines that products can now be certified against, allowing them to carry the OpenStack Powered logo.

After explaining what it takes to meet the DefCore guidelines, Hoge reviews issues that remain unresolved. “The good news about OpenStack is that it’s incredibly flexible. There are any number of ways you can configure your OpenStack Cloud. You have your choice of hypervisors, storage drivers, network drivers – it’s a really powerful platform,” he observes. But that very richness and flexibility also makes it harder to ensure that two instances of OpenStack will work well together, he explains.

 

Among areas with issues are image operations, networking, policy and configuration discovery, API iteration, provability, and project documentation, reports Voelker. Discoverability and how to map capabilities to APIs are also a major concern, as is lack of awareness about DefCore’s guidelines. “There’s still some confusion about what kind of things people should be taking into account when they are making technical choices,” Hoge adds.

The OpenStack Foundation is therefore working to raise the profile of interoperability as a requirement and to build awareness of the meaning behind the “OpenStack Powered” logo. DefCore itself is interacting closely with developers and vendors in the community to address the integration challenges they’ve identified and to enforce a measurable standard on new OpenStack contributions.

 

“Awareness is half the battle,” notes Voelker, before he and Hoge outline the conversations DefCore is currently leading, outcomes they’ve already achieved, and what DefCore is doing next – watch for a report on top interoperability issues soon, more work on testing, and a discussion on new guidelines for NFV-ready clouds.

 

If you are interested in how VMware Integrated OpenStack (VIO) conforms with DefCore standards, you can find more information and experts to contact on our product homepage. You can also check out our Hands-on Lab, or try VIO for yourself by downloading and installing VMware Integrated OpenStack directly.

VMware Integrated OpenStack 3.0 Announced Today. See What’s Coming

Today VMware announced VMware Integrated OpenStack 3.0 at VMworld in Las Vegas. We are truly excited about our latest OpenStack distribution, which gives our customers the new features and enhancements included in the latest Mitaka release, an optimized management control plane architecture, and the ability to leverage existing workloads in your OpenStack cloud.

We expect VMware Integrated OpenStack 3.0 later this year. Sign up to be notified when it’s available. New features include:

  • OpenStack Mitaka Support
    VMware Integrated OpenStack 3.0 customers can leverage the great features and enhancements in the latest OpenStack release. Mitaka focuses on improved manageability, scalability, and user experience. To learn more about the Mitaka release, visit the OpenStack.org site at https://www.openstack.org/software/mitaka/
  • Easily Import Existing Workloads
    The ability to directly import vSphere VMs into OpenStack and run critical Day 2 operations against them via OpenStack APIs enables you to quickly move existing development projects or production workloads to the OpenStack framework.
  • Compact Management Control Plane
    Building on enhancements from previous releases, the VMware Integrated OpenStack 3.0 architecture has been optimized to support a compact architecture mode that dramatically reduces the infrastructure footprint, saving resource costs and reducing operational complexity. Organizations looking to evaluate OpenStack, or to build OpenStack clouds for branch locations quickly and cost-effectively, can deploy in as little as 15 minutes.

If you are at VMworld 2016 in Las Vegas, we invite you to attend the following sessions to hear how our customers are using VMware Integrated OpenStack and to learn more details about this great upcoming release.


VMworld 2016 VMware Integrated OpenStack Sessions:

  • MGT7752 – OpenStack in the Real World: VMware Integrated OpenStack 3.0 Customer Panel
  • MGT7671 – What’s New in VMware Integrated OpenStack Version 3.0!
  • NET8109 – Amadeus’s Journey Building a Software-Defined Data Center with VMware Integrated OpenStack and NSX
  • NET8343 – OpenStack Networking in the Enterprise: Real-Life Use Cases
  • NET8832 – The Role of VIO and NSX in Virtualizing the Telecoms Infrastructure
  • SEC9618-SPO – Deep Dive: Extending L4-L7 Security Controls for VMware NSX and VMware Integrated OpenStack (VIO) Environments with Fortinet Next Generation

Try VMware Integrated OpenStack Today

Sign up to be notified when VMware Integrated OpenStack 3.0 is available.

OpenStack Summit 2016 Re-Cap – Experts from VMware and HedgeServ Outline the Operational Advantages of VMware Integrated OpenStack  

VMware Integrated OpenStack (VIO) offers a simple but powerful path to deploying OpenStack clouds and is a clear win for developers. But what about the operations side?

 

Presenting at the 2016 OpenStack Summit, VMware’s Santhosh Sundararaman, along with Isa Berisha and Thomas McAteer of HedgeServ, the #1 provider of technical management services to the hedge fund industry, make the case for VIO from an operator’s perspective.

Their session opens with HedgeServ Global Director of Platform Engineering Isa Berisha describing how the company uses OpenStack and why it picked VIO to run HedgeServ’s OpenStack deployment.

Adopting OpenStack was the company’s best option for meeting the rapidly escalating demand it was facing, Berisha explains. But his team was stymied by the limitations of the professional and managed services OpenStack deployments available to them, which were rife with issues of quality, expense, speed of execution, and vendor instability. Eventually they decided to try VIO, despite it being new at that point and little known in the industry.

 

“We downloaded it, deployed it, and something crazy happened: it just worked,” recalls Berisha. “If you’ve tried to do your own OpenStack deployments, you can understand how that feels.”

HedgeServ now uses VIO to run up to 1,000 large Windows instances (averaging 64GB) managed by a team of just two. vSphere and ESXi’s speed and their ability to handle hosts with up to 10TB of RAM and hundreds of physical cores have been key elements of HedgeServ’s success so far, explains senior engineer McAteer. But VIO’s storage and management capabilities have also been essential, as have its stability and the fact that it’s backed by VMware support and can easily be upgraded and updated live.

Simplifying operations around OpenStack and making it easier to maintain OpenStack in a production deployment has been a major focus for VMware’s VIO development team, adds VMware product manager Sundararaman in the second half of the presentation.

“There are a lot of workflows that we’ve added to make it really easy to operate OpenStack,” he says.

 

As an example, Sundararaman demos VIO’s OpenStack upgrade process, showing how it’s achieved entirely via the VIO control panel in a vSphere web client plugin. VIO uses the blue-green upgrade paradigm to stand up a completely new control plane based on the new distribution, he explains, and then migrates the data and configurations from the old to the new control plane, while leaving the deployed workloads untouched.

“We ran and upgraded VIO from 1.0 to 2.0 without talking to Santhosh,” adds McAteer. “That’s a big deal in the OpenStack world but it just, again, worked. The blue-green nature of the upgrade path really makes that very easy.”

 

View the VMware + HedgeServ OpenStack Summit session here.

 

To try VMware Integrated OpenStack yourself, check out our free Hands-on Lab. Or take it a step further and download and install VMware Integrated OpenStack today.

 

 

OpenStack Summit 2016 Re-Cap – Speeding Up Developer Productivity with OpenStack and Open Source Tools

Some developers may avoid using an OpenStack cloud, despite the advantages they stand to gain, because they have already established automation workflows using popular open source tools with public cloud providers.

But in a joint presentation to the 2016 OpenStack Summit, VMware Senior Technical Marketing Manager Trevor Roberts Jr. and VMware NSX Engineering Architect Scott Lowe explain how developers can take advantage of OpenStack clouds using the same open source tools that they already know.

Their talk runs through a configuration sequence, including image management, dev/test, and production deployment, showing how standard open source tools that developers already use for non-OpenStack deployments can run in exactly the same way with an OpenStack cloud. In this case, they discuss using Packer for image building, Vagrant and Docker Machine for software development and testing, and Terraform for production grade deployments.

“This is a way of using an existing tool that you’re already comfortable and familiar with to start consuming your OpenStack cloud,” says Roberts, before demoing an OpenStack customized image build with Packer and then using that image to create and deploy an OpenStack-provisioned instance with Vagrant.
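Under the hood, the Packer OpenStack builder and the Vagrant OpenStack provider drive the same Glance and Nova APIs that any other client would. As a rough illustration (not part of the demo), the hypothetical openstacksdk snippet below registers a freshly built image with Glance and waits for it to become active; the file path, image name, and formats are assumptions.

```python
# Illustrative equivalent of what an image-build pipeline does against Glance:
# upload a built artifact and wait for it to become active.
import openstack

conn = openstack.connect(cloud="mycloud")

image = conn.create_image(
    "web-base-image",
    filename="output/web-base.qcow2",  # hypothetical Packer build artifact
    disk_format="qcow2",
    container_format="bare",
    wait=True,  # block until Glance reports the image as active
)
print(image.id, image.status)
```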

 

Lowe next profiles Docker Machine, which provisions instances of the Docker Engine for testing, and shows how you can use Docker Machine to spin up instances of Docker inside an OpenStack cloud.

Lastly, Lowe demos Terraform, which takes an infrastructure-as-code approach to production deployments across multiple platforms (similar in spirit to Heat for OpenStack), creating an entire OpenStack infrastructure in one step, including a new network and router, and launching multiple new instances, each with its own floating IP address, ready for pulling down containers as required.
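For a sense of what that demo builds, here is a hypothetical openstacksdk sketch of the equivalent infrastructure the Terraform OpenStack provider manages: a network and subnet, a router uplinked to an external network, and an instance with a floating IP. All names and CIDRs are placeholders, and this is an illustration of the underlying API calls rather than the demo’s actual Terraform configuration.

```python
# Illustrative equivalent of a small Terraform-style OpenStack topology.
import openstack

conn = openstack.connect(cloud="mycloud")

net = conn.network.create_network(name="app-net")
subnet = conn.network.create_subnet(
    network_id=net.id, name="app-subnet", ip_version=4, cidr="192.168.10.0/24"
)

ext = conn.network.find_network("external")  # existing provider/external network
router = conn.network.create_router(
    name="app-router", external_gateway_info={"network_id": ext.id}
)
conn.network.add_interface_to_router(router, subnet_id=subnet.id)

# Boot an instance on the new network; auto_ip asks the SDK to allocate and
# attach a floating IP for external reachability.
server = conn.create_server(
    "app-01",
    image="ubuntu-16.04",
    flavor="m1.small",
    network=net.id,
    auto_ip=True,
    wait=True,
)
print(server.name, server.public_v4)
```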

 

“It’s very simple to use these development tools with OpenStack,” adds Roberts. “It’s just a matter of making sure your developers feel comfortable with the environment and letting them know there are plugins out there – you don’t have to keep using other infrastructures.”

As Roberts notes in the presentation, VMware itself is “full in on OpenStack – we contribute to projects that aren’t even in our own distribution just to help out the community.” Meanwhile, VMware’s own OpenStack solution, VMware Integrated OpenStack (VIO), is a DefCore-compliant OpenStack distribution specifically configured to use open source drivers to manage VMware infrastructure technologies, further aiding the adoption process for developers already familiar with vSphere, vRealize, and NSX.

 

For more information on VIO, check out the VMware Integrated OpenStack (VIO) product homepage, or the VIO Hands-on Lab. If you hold a current license for vSphere Enterprise Plus, vSphere Operations Management, or vSphere Standard with NSX Advanced, you can download VIO for free.

OpenStack Summit 2016 Re-Cap – A Guide to Practical OpenStack Network Virtualization using OVN

OVN (pronounced “oven”) is a rapidly growing, open source solution being developed by the Open vSwitch (OVS) community that provides network virtualization for OVS. While OVN isn’t designed to work with VMware Integrated OpenStack, it’s another open source project to which VMware has been devoting time and effort, and it is definitely worth knowing about.


For a good sense of how OVN is progressing, check out this talk by four OVS community members at the 2016 OpenStack Summit. They explain how OVN works and why it’s worth trying.

 

VMware OVS developer Ben Pfaff kicks things off with an overview of network virtualization, emphasizing the value of being able to abstract a physical network and of making network provisioning self-service.

Fellow VMware engineer and core OVS and OVN developer Justin Pettit next outlines OVN’s capabilities and stresses its compatibility with the platforms that OVS already works with. When it comes to OpenStack, he reports, “the best integration that we have right now is with OpenStack Neutron but we plan to have it work with other CMSes . . . and you can do everything that you would want through the command line or through database calls that you can do through Neutron.”

 

Like OVS, OVN is open source and vendor-neutral, and has quickly gained support from a diverse group of vendors including VMware, IBM, Red Hat, and eBay among others. The goal is to match OVS production quality and keep OVN’s design simple but scalable to 1,000s of hypervisors. “We hope it becomes the preferred method for most people who want to use OVS or networking in general,” Pettit says.

If successful, OVN will expand OVS, help improve Neutron’s functionality, and significantly reduce the development burden on Neutron for OVS integration. Add an improved architecture built around ‘logical flows’ and configuration coordinated through databases, and it’s set to outperform existing OVS networking plugins, Pfaff argues.

 

The same goes for security, adds Ryan Moats of IBM – OVN now uses a connection tracker, letting OVS manage stateful connections itself and speeding security group throughput significantly. Its L3 security group design also does all L3 processing in OVS, further improving performance.

The fourth speaker, Han Zhou of eBay, outlines how the group overcame a series of bottlenecks to scale the OVN control plane to 2,000 hypervisors, 20,000 VIF ports, and 200 logical switches operating at once.

The team then highlights ongoing scale improvements and profiles the OVN Neutron plugin. “We will run this in our public cloud,” says IBM’s Moats before outlining OVN deployment and what to look for in the upcoming OVN release. Finally, all four speakers invite their audience to contribute to OVN, and try it out for themselves.

 
VMware Integrated OpenStack is also available for testing in VMware’s Hands-on Lab. Or download it for free with a current license for vSphere Enterprise Plus, vSphere Operations Management, or NSX with vSphere Standard.

OpenStack Summit 2016 Re-Cap – An Introduction to OpenStack for VMware Administrators

So, you’re a VMware administrator providing IT resources to developers who want API-driven access to your compute, network, and storage infrastructure. If that’s the case, you should definitely check out the VMware Integrated OpenStack (VIO) product homepage, the VIO Hands-on Lab, or go ahead and download and install VIO yourself.

But you might first want to view VMware Senior Technical Marketing Manager Trevor Roberts Jr.’s talk at the 2016 OpenStack Summit.

 

In “OpenStack for VMware Administrators,” Roberts offers a valuable overview of what makes VMware technologies ideal for running OpenStack workloads and explains how VIO supplies everything you need to install, upgrade, and operate an OpenStack cloud on top of the VMware technologies you already own.

He opens with a review of VMware’s longtime commitment to OpenStack development and a reminder that the cloud platform must run on some kind of virtual infrastructure that supplies the underlying hypervisor, networking, and storage. VMware’s infrastructure technologies, he notes, are just as valid an option for OpenStack clouds as any other infrastructure platform.

 

VIO is a distribution of OpenStack that enables the open source drivers for VMware infrastructure by default, reliably connecting your cloud with signature VMware products like vSphere for compute and storage, NSX for networking, and vRealize management solutions for operations.

Most crucially, Roberts argues, VIO lets you pair your current VMware solutions with your new, OpenStack cloud, saving the time and cost of adopting additional infrastructure technologies and increasing the scalability, availability, and reliability of your existing applications.

 

VMware supports customers whether they prefer a tightly-integrated approach (using VIO) or a more loosely-integrated framework (using another OpenStack solution, for example). “We want you to be successful with OpenStack on vSphere regardless of the distribution that you use,” Roberts says.

But VIO offers many distinct advantages for administrators already running VMware technology, he explains, including speed, reliability, ease of use, a single point of contact for support, and regular upgrades.

Roberts then hands over to Ken Rugg, CEO of database-as-a-service (DBaaS) platform company Tesora, to show how VIO is extended through partnerships with specialist organizations. Tesora enables and simplifies access to up to 15 popular databases from an OpenStack cloud via an enterprise-hardened version of OpenStack Trove.

The presentation ends with a brief demonstration of how to create a Trove database instance with Tesora – showing how the combination of Tesora + VIO makes it simple to provision a database for your application without deploying, configuring, and managing the database binaries yourself.

OpenStack Summit 2016 Re-Cap – Amadeus’ OpenStack Journey: Building a Private Cloud with VMware Integrated OpenStack and NSX.

How does a company build a private enterprise cloud using VMware Integrated OpenStack and NSX? You’ll find a great example in this 2016 OpenStack Summit presentation by VMware NSX product manager Sai Chaitanya and Arthur Knopper, associate director of the Amadeus IT Group.

The Amadeus IT Group is a multi-national IT service provider to the global travel industry with over 3 billion euros in revenue. Two years ago it embarked on a transformation project to modernize its infrastructure.

In their talk, Chaitanya and Knopper outline some of the business drivers for the project, which included readying their infrastructure to deploy next-generation cloud-native applications based on containers and building an entirely new, highly-reliable hotel guest reservation system using Red Hat OpenShift PaaS.
Those drivers established a set of business requirements, such as speeding service delivery, introducing end-to-end automation and ensuring 99.999% service uptime, along with technical requirements that included a fault-resilient application architecture based on OpenShift and Kubernetes, and fast and automatic provisioning using OpenStack Heat.

Knopper details the variety of options (public cloud, alternative service providers etc.) that Amadeus considered for meeting their requirements. But their best option, he explains, was to build a product architecture featuring an underlying VMware infrastructure running OpenStack workloads via VIO and NSX.

VMware’s technical reliability and the support it offered were crucial factors, says Knopper, as was Amadeus’ ability to leverage its existing experience with vSphere to get the project moving quickly.

 
The results have been impressive. Where it used to take weeks to bring up an application, Knopper notes, “with the solution we have at hand, this has been reduced down to around 50 minutes.” The new approach delivers the fault tolerance required and lets Amadeus deliver more frequent updates to their end users.

The talk winds up with suggestions for best practices for building private OpenStack clouds with VIO based on Amadeus’ experience, and an outline of their plans for continued technical improvement in partnership with VMware.

“What’s really important for success with OpenStack is having a clear driver for what you are trying to do, and then translating that into clear requirements,” emphasizes Chaitanya in conclusion. “Then if you have a very clear execution plan and break it into phases, your chances of success are high.”

 

To try VMware Integrated OpenStack for yourself, check out our Hands-on Lab, or download and install VMware Integrated OpenStack direct.

OpenStack Summit Barcelona 2016 Session Voting Open Until August 8!

The OpenStack Summit session proposals are available online for attendees to vote on. You can access this site to search for VMware-related sessions and to cast your vote.

NOTE: At this time, the Summit team has disabled direct linking to sessions. If that changes, I’ll be sure to update the list below with the direct URLs.

The following list comprises the sessions and speakers who plan on speaking at the Barcelona Summit on behalf of VMware, organized by category. If one or more of the topics catch your interest, your votes would be appreciated:

Evaluating OpenStack

OpenStack for VMware Administrators
Speaker: Trevor Roberts Jr

Case Studies

Amadeus’s journey building a Software Defined Data Center with VMware VIO and NSX
Speakers: Sai Chaitanya, Arthur Knopper

Case Study of an OpenStack Deployment in China
Speaker: Gavin Lu

Architectural Decisions

The Many Personas of Interoperability: Why Operators, Vendors, End Users, & OpenStack Devs Care
Speaker: Mark Voelker

IT Strategy

Mode 1, Mode 2, Mode 1.5, Pets, Cattle: How to tame the menagerie with OpenStack
Speakers: Santhosh Sundararaman, Giridhar Jayavelu

Storage

Tesora and VMware present – A Complete Guide to Running Your Own DBaaS using OpenStack Trove
Speakers: Arvind Soni, Doug Shelley

How To & Best Practices

Skipping OpenStack Releases: (You Don’t) Gotta Catch ‘Em All
Speakers: Mark Voelker, Sidharth Surana, Karol Stepniewski

Analyzing OpenStack Performance : A case study from large scale OpenStack testing.
Speaker: Arvind Soni

Addressing Open Issues in Container Orchestration using OpenStack
Speakers: Santhosh Sundararaman, Giridhar Jayavelu

Infrastructure Updates with OpenStack on vSphere
Speaker: Trevor Roberts Jr

Project Updates

Native HTML5 consoles for VMware
Speaker: Radoslav Gerganov

Cluster Run: Resource Pool Operations Made Easy
Speakers: Xinhui Li, Qiming Teng, Mark Voelker

Upstream Development

Lessons from the Developer Cloud – OpenStack Innovation Center Success Stories
Speakers: Antonio Ojea, Arvind Soni, Justin Shepard, Travis Broughton

Networking

Next generation security and service chaining with NSX and Fortinet
Speakers: Marcos Hernandez, Elie Bitton

Tenant Networks vs. Provider Networks in the Private Cloud Context
Speaker: Marcos Hernandez

OVN – Moving into Production
Speakers: Russell Bryant, Ben Pfaff, Justin Pettit

Telecom / NFV Operations

NFV Considerations for OpenStack on VMware Infrastructure
Speaker: Giridhar Jayavelu

VMware Integrated OpenStack with Gigaspaces Cloudify Orchestration for NFV – technical deep dive
Speaker: Vanessa Little

Design Case Study – VoLTE core solution with Cloudify and Athonet on VMware Integrated OpenStack
Speaker: Vanessa Little

VNF Service Modeling and Chaining on VMware Integrated OpenStack using TOSCA/YANG
Speaker: Ran Ziv

Design Case Study – IMS Core deployment with Metaswitch and Cloudify on VMware Integrated OpenStack
Speaker: Vanessa Little

How to help your networking peers roll out NFV solutions and be a hero while at it.
Speakers: Jambi Ganbar, Arvind Soni

Hands-on Workshops

Hands on Lab: Operating & Upgrading OpenStack
Speaker: Santhosh Sundararaman

Developer Tools

Speeding up Developer Productivity with OpenStack and Open Source Tools
Speakers: Trevor Roberts Jr, Scott Lowe

Products & Services

VMware NSX and Mirantis OpenStack integration
Speakers: Igo Zinovik, Dimitri Desmidt, Andrian Noga

HPE Helion Openstack and VMware NSX Networking
Speakers: Gary Kotton, Ed Bak

See you in Barcelona!