Home > Blogs > OpenStack Blog for VMware

VMware, Open Source and OpenStack

Last week at VMworld, VMware’s biggest event of the year, I attended a few sessions on various topics related to open source, and was impressed with the number of people who showed interest in those sessions. Our customers are looking to leverage open source products on top of VMware technologies, and VMware is more active in the open source community than one might think.

Source: https://vmware.github.io/

We, at VMware, use open source in our products, make thousands of contributions every year to many upstream projects, and create new open source projects that are used by many. Open source projects created by VMware include Open vSwitch, Harbor, and Clarity, and the list goes on. You can learn about additional projects here.

VMware’s investment in open source makes a lot of sense when you think about it. First, we would like to influence and engage with our customers, who might be looking at open source projects to improve the way they work (see Clarity for example). Second, we would like to improve our products and tools based on feedback and support from the community. And lastly, a lot of growth is happening at the leading edge of technology, and we want to leverage that opportunity.

One of the most important open source projects VMware is involved in is OpenStack. At VMworld last week, we announced our new release of VMware Integrated OpenStack, the OpenStack distribution from VMware. In the last few years we have been working hard to deliver an OpenStack distribution that would seamlessly work on VMware SDDC, without you having to spend hours on customization or professional services.

History of Working with the OpenStack Community

VMware has a history of open source contributions to the OpenStack community starting in 2010. Initially it was via the Nicira team’s work on Open vSwitch (OVS) (Nicira was acquired by VMware). Later, it was via other projects including Nova, Neutron, Cinder, Glance, and Ceilometer. We are the #1 contributor to the Neutron project and the #6 contributor to the Nova project. In addition, we share all our Compute, Network, and Storage drivers with the community.

Source: http://stackalytics.com/

Compliance with Interop Working Group guidelines

VMware Integrated OpenStack complies with the interoperability guidelines defined by the OpenStack Interop Working Group. This group drafts the guidelines that include a list of capabilities that a “true OpenStack” cloud must expose to end users, a list of tests they must pass in order to prove it, and a list of designated sections of the upstream codebase they must use to provide those capabilities. For example, automation tools that leverage the OpenStack APIs should work on VMware Integrated OpenStack as they would on any other OpenStack distribution. Interoperability prevents vendor lock-in because it allows you to easily switch from your current OpenStack deployment to a different vendor’s distribution.

One area where developers may have been concerned in the past is image formats, since the VMware platform currently utilizes OVA, VMDK, and ISO disk formats with Glance.  However, tools exist to convert from other formats to the formats we have adopted (for example: qemu-img to convert qcow2 to VMDK). In addition, significant community work in the area of image building with projects like Diskimage Builder and Packer enables users to auto-generate a VMware-compatible image relatively quickly.

VMware is committed to keeping VMware Integrated OpenStack open by ensuring all its drivers are open source, ensuring vendor interoperability based on the Interop Working Group guidelines, and remaining a very active participant in the OpenStack community.

VMware Integrated OpenStack 4.0: What’s New

VMware announced the VMware Integrated OpenStack 4.0 Data Center edition at VMworld in Las Vegas. We are truly excited about our latest OpenStack distribution, which gives our customers the new features and enhancements included in the Ocata release, along with a bundled container platform option. For OpenStack cloud admins, the 4.0 Data Center edition is also about enhanced platform performance and manageability, increased scale, and advanced networking.

New Features Include:

OpenStack Features available in Newton + Ocata:

VIO 4.0 is based on the upstream Ocata release. Ocata is the first release in which Cells v2 is the default deployment configuration for OpenStack Nova; a single cell is supported in Ocata. Cell support enables future scale-out of an OpenStack cloud in a more distributed fashion. The placement service, introduced in the Newton release, is now an essential part of VIO 4.0 in determining the optimum placement of VMs. Not to be mistaken for VMware DRS, the OpenStack placement service allows a cloud admin to set up pools of resources and then set up allocations for resource providers. VM placement policies can be built on top of those resources for optimal placement of VMs (additional blogs to follow). A rough sketch of querying the placement API directly is shown below.
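
This is a minimal sketch, not VIO-specific guidance: it assumes a valid Keystone token in $TOKEN and a placement endpoint reachable at its default port on a host named controller (both are assumptions to adapt to your deployment):

# curl -s -H "X-Auth-Token: $TOKEN" http://controller:8778/resource_providers | python -m json.tool

Each entry returned is a resource provider the scheduler can draw on when placing VMs.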

New capabilities in OpenStack Horizon include enhanced workload placements, LBaaSv2, and Heat template versions, to name a few. Heat template versions provide users with a list of available template versions and the functions available for a particular template version.

Resource tagging, Cinder availability zones, enhanced Cinder snapshots, and Heat templates with conditions are some of the other notable upstream enhancements available in the VIO 4.0 release.

vRealize Automation Integration

vRealize Automation integration is another great example of how VMware empowers customers to leverage existing investments in infrastructure management and tooling. The integration provides enterprise customers the ability to consume VIO resources with governance. Using vRA XaaS blueprints, a cloud admin can automate OpenStack user and project creation, governance-based Heat template deployment, and other common aspects of VIO consumption through vRA governance. Once OpenStack resources are on-boarded, vRA integration allows cloud admins and users to view the VIO Horizon dashboard directly from the vRA portal using SSO integration with vIDM.

Networking Advanced Capabilities

VIO 4.0 greatly simplifies network addressing and reachability management by leveraging dynamic routing. Instead of relying on NAT to provide address uniqueness, cloud admins can leverage Neutron address pools or the get-me-a-network feature to define a scope of unique address space. Tenants needing unique address space can allocate subnets from this pool without worrying about overlapping with another tenant. With BGP routing, another new VIO 4.0 feature, cloud admins can enable end-to-end connectivity dynamically without managing low-level static routes. A sketch of the corresponding CLI workflow is shown below.
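
As a rough sketch with the OpenStack CLI (the pool name, network name, and prefixes are illustrative):

# openstack subnet pool create --share --pool-prefix 10.10.0.0/16 --default-prefix-length 24 tenant-pool

# openstack subnet create --network app-net --subnet-pool tenant-pool app-subnet

The get-me-a-network flow can be exercised in a single step with:

# openstack network auto allocated topology create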

Enhanced Neutron availability zone support allows OpenStack tenants to place NSX ESG workloads onto different physical clusters, across different racks, for increased availability. Finally, Firewall-as-a-Service and guest VLAN tagging are some of the other major Neutron enhancements.

Enhanced Platform Support

We are extremely proud of multi-vCenter support in VIO 4.0. Multi-VC support with NSX-T gives VIO customers the ability to define multiple fault/availability zones, avoiding a single point of failure. Multi-VC can also be used to scale out VIO by adding more vCenters upon reaching concurrency or total object limits.

Enterprise workloads require both horizontal and vertical scaling. While horizontal scaling is made simple through Heat or Terraform, vertical scaling often requires a downtime/outage window. With VIO 4.0, cloud admins can offer Glance images that support live resize: OpenStack tenants can increase the CPU, memory, and disk of their virtual machines without powering them down. VIO 4.0 also provides increased resiliency with vCenter HA and LVM support on the OMS server to allow flexible storage growth.

Enterprise-Grade Containers

Finally, VIO offers enterprise-grade Kubernetes with built-in security, HA, and scale (up or down). Out of the box, VIO provides cloud admins with simplified day 1 deployment automation for Kubernetes with multi-tenancy and user management. Once deployed, VIO Kubernetes integrates easily with the SDDC vRealize suite of products, solving day 2 operational challenges in container life cycle management, monitoring, and logging. Persistent storage, load balancing, and container networking powered by VMware NSX are also standard out of the box.

Adopting agile processes is a key driver in helping businesses digitally transform. It is changing not only the way applications are coded, but also the process by which they are built and operated. In the new DevOps-driven era, infrastructure admins and developers are solving the same problem: faster time to value. VIO 4.0 is the answer for any organization looking to digitally transform its business.

VIO 4.0 Data Center edition will enable DevOps teams to build and deliver:

  • Container based micro-services, in addition to traditional VM based workloads
  • End-to-end infrastructure automation leveraging existing tools
  • OpenStack deployment scale out using multi-VC, OpenStack placement API and Cells v2
  • Advanced Neutron and container networking to simplify addressing and reachability while ensuring application security
  • Solutions to Day 2 operational challenges in infrastructure life cycle management

Supported by rock-solid VMware SDDC infrastructure, VIO enables businesses to achieve faster time to value.

Try VMware Integrated OpenStack Today

Take a free test drive, no installation required, with the VMware Integrated OpenStack Hands-on Lab. Try out the latest VIO 4.0 HOL if you are attending VMworld Vegas or Barcelona.


Introducing VMware Integrated OpenStack 4.0

We’re excited to announce the new release of VMware Integrated OpenStack 4.0 today at VMworld US 2017, as part of the VMware SDDC story. You can read more about it here.

VMware Integrated OpenStack (VIO) is an OpenStack distribution supported by VMware, optimized to run on top of VMware’s SDDC infrastructure. In the past few months we have been hard at work, adding additional enterprise grade capabilities into VIO, making it even more robust, scalable and secure, yet keeping it easy to deploy, operate and use.

VMware Integrated OpenStack 4.0 is based on Ocata, and some of the highlights include:

Containers support – users can run VMs alongside containers on VIO. Out-of-the-box container support enables developers to consume Kubernetes APIs, leveraging all the enterprise grade capabilities of VIO such as multi-tenancy, persistent volumes, high availability (HA), and so on.

Integration with vRealize Automation – vRealize Automation customers can now embed OpenStack components in blueprints. They can also manage their OpenStack deployments through the Horizon UI as a tab in vRealize Automation. This integration provides additional governance as well as single-sign-on for users.

Multi vCenter support – customers can manage multiple VMware vCenters with a single VIO deployment, for additional scale and isolation.

Additional capabilities for better performance and scale, such as live resize of VMs (changing RAM, CPU and disk without shutting down the VM), Firewall as a Service (FWaaS), CPU pinning and more.

Our customers use VMware Integrated OpenStack for a variety of use cases, including:

Developer cloud – providing public cloud-like user experience to developers, as well as more choice of consumption (Web UI, CLI or API), self-service and programmable access to VMware infrastructure. With the new container management support, developers will be able to consume Kubernetes APIs.
IaaS platform for enterprise automation – adding automation and self-service provisioning on top of best-of-breed VMware SDDC.
Advanced, programmable network – leveraging network virtualization with VMware NSX for advanced network capabilities.

Our customers tell us (consistently) that VIO is easy to deploy (“it just worked!”) and manage. Since it’s deployed on top of VMware virtualization technologies, they are able to deploy and manage it by themselves, without hiring new people or professional services. Their development and DevOps teams like VIO because it gives them the agility and user experience they want, with self-service and standard OpenStack APIs.

In most cases, in a short amount of time (a few weeks!), customers trust VIO enough to run their business-critical applications, such as e-commerce websites or online travel systems, in production.

VMware Integrated OpenStack will be available as a standalone product later this quarter. For more information go to our website, check out the product walkthrough and try out the hands-on lab.

If you are attending VMworld, please stop by our booth (#1139) to see demos and speak with OpenStack specialists. We’re looking forward to seeing you!

OpenStack Sessions at VMworld 2017 Las Vegas

Don’t Miss Out!

VMworld 2017 Las Vegas is just around the corner and we can’t wait to meet our customers and partners, and explore all the great sessions, workshops and activities planned for next week. With over 500 sessions across all categories, it may be overwhelming to work out which sessions are most beneficial for you. Here is the list of all the OpenStack related sessions; make sure you register and mark your calendar in advance so you don’t miss out!

In addition, make sure to stop by the VMware Integrated OpenStack (VIO) booth (#1139) to learn more and see a demo or two.

Monday, Aug 28, 11:30 a.m. – 1:00 p.m. | South Pacific Ballroom, Lower Level, HOL 5

[ELW182001U] VMware Integrated OpenStack (VIO) – Getting Started Workshop
Monday, Aug 28, 1:00 p.m. – 2:00 p.m. | Islander C, Lower Level

[MGT2609BU] VMware Integrated OpenStack: What’s New
It is not OpenStack or VMware; it is OpenStack on VMware.
Come and learn what is new in VMware Integrated OpenStack and our plans for the future of OpenStack on the software-defined data center.
Monday, Aug 28, 2:00 p.m. – 3:00 p.m. | Islander F, Lower Level

[LDT1844BU] Open Source at VMware: A Key Ingredient to Our Success and Yours
Open-source components are part of practically every software product or service today. VMware products are no exception. And increasingly, IT departments are presented with many application roll-out requests that include large open-source components as part of the infrastructure on which they rely. From OpenStack to Docker to Kubernetes and beyond, open source is a reality of the enterprise environment. VMware is investing in open source both as a user of many components (and contributor to many of those projects) and as a creator of many successful open-source projects such as Open vSwitch, Harbor, Clarity, and many more. This session will talk about the what, the why, and the how of our engagement in open source: our vision and strategy and why all this is critically important for our customers.
Monday, Aug 28, 3:15 p.m. – 4:00 p.m. | Meet the Experts, 2nd floor foyer, Table #5
Wednesday, Aug 30, 2:15 p.m. – 3:00 p.m. | Meet the Experts, 2nd floor foyer, Table #5
Thursday, Aug 31, 11:45 a.m. – 12:30 p.m. | Meet the Experts, 2nd floor foyer, Table #5

[MTE4733U] Implementing OpenStack with VIO
Meet Xiao Gao, VMware Integrated OpenStack expert. Bring your questions!
Tuesday, Aug 29, 12:15 p.m. – 1:00 p.m. | Meet the Experts, 2nd floor foyer, Table #7

[MTE4803U] OpenStack in the Enterprise with Marcos Hernandez
Speak with Expert Marcos Hernandez about the benefits of running OpenStack in private Cloud environments.
Tuesday, Aug 29, 4:00 p.m. – 5:00 p.m. | Oceanside D, Level 2

[MGT1785PU] OpenStack in the Real World: VMware Integrated OpenStack Customer Session
More and more customers are looking to leverage OpenStack to add automation and provide open API to their application development teams. In this session, VMware Integrated OpenStack customers will share their OpenStack journey and the benefits VMware Integrated OpenStack provides to development teams and IT.
Tuesday, Aug 29, 4:00 p.m. – 4:15 p.m. | VMvillage – VMTN Community Theater

[VMTN6664U] Networking and Security Challenges in OpenStack
Decided it’s time to implement OpenStack to build your cloud? Have you tested in the lab, evaluated the various distributions available, and hired a specialized team for OpenStack? But when the time arrives to put it into production, Neutron is not integrating with your physical network? If this story closely resembles what you have been facing, this TechTalk is critical for you to understand the challenges of networking and security with any OpenStack distribution and what solutions are missing for your cloud to fully work. NOTE: Community TechTalk taking place in VMvillage.
Tuesday, Aug 29, 5:30 p.m. – 6:30 p.m. | Mandalay Bay Ballroom B, Level 2

[NET1338BU] VMware Integrated OpenStack and NSX Integration Deep Dive
OpenStack offers a very comprehensive set of Network and Security workflows provided by a core project called Neutron. Neutron can leverage VMware NSX as a backend to bring advanced services to the applications owned by OpenStack. In this session we will cover the use cases for Neutron, and the various topologies available in OpenStack with NSX, with a focus on security. We will walk you through a number of design considerations leveraging Neutron Security Groups and the NSX Stateful Distributed Firewall integration, along with Service Chaining in NSX for Next Generation Security Integration, all available today.
Wednesday, Aug 30, 8:00 a.m. – 9:00 a.m. | Surf A, Level 2

[FUT3076BU] Simplifying Your Open-Source Cloud With VMware
Open source or VMware? Clearly, you can’t have both, right? Wrong. As open-source, cloud-based solutions continue to evolve, IT leaders are challenged with the adoption and implementation of large-scale deployments such as OpenStack and network function virtualization from both a business and technical perspective. Learn how VMware’s solutions can simplify existing open-source innovation, resulting in new levels of operations, standardization (app compatibility), and delivery of enterprise support.
Wednesday, Aug 30, 2:00 p.m. – 3:00 p.m. | Surf A, Level 2

[FUT1744BU] The Benefits of VMware Integrated OpenStack for Your NFV Platform
Communication Service Providers (CSPs) embracing network functions virtualization (NFV) are building platforms with three imperatives in mind: service agility, service uptime and platform openness. These capabilities require the cloud platform they choose to be able to easily model, deploy and modify a service, to run it on a tightly-integrated robust virtual infrastructure and migrate the service horizontally across cloud platforms when/if needed. Come to this session to learn about VIO, a VMware-supported OpenStack (OS) distribution, at the heart of the VMware NFV platform and how it can help CSPs meet those requirements. We will look in detail at the role of VIO as virtual infrastructure manager as well as its native integration with the other components of the VMware software-defined data center architecture (vSphere, NSX and VSAN).
Thursday, Aug 31, 10:45 a.m. – 11:30 a.m. | Meet the Experts, 2nd floor foyer, Table #8

[MTE4832U] How VMware IT Operates VMware Integrated OpenStack
with Cloud Architect Chris Mutchler
Learn from VMware IT’s implementation of VMware Integrated OpenStack.

VMware Integrated OpenStack Glance Image Best Practices

A production cloud isn’t very useful unless users can run the virtual machine images their applications require. A cloud image is a single file containing a virtual disk with an operating system installed. For many organizations, the simplest way to obtain a virtual machine image is to download a prebuilt base cloud image with a pre-packaged version of cloud-init to support user-data injection. Once downloaded, an organization can leverage tools such as Packer to further customize and harden the base image before rolling it to production. Most operating system projects and vendors maintain official images for direct download. Openstack.org maintains a list of the most commonly used images here.
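
As a quick illustration, using the official CentOS 7 cloud image as one example (any image from that list works the same way), download it and inspect it with qemu-img:

# wget https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2

# qemu-img info CentOS-7-x86_64-GenericCloud.qcow2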

Recently we received some queries about the proper way to import prebuilt QCOW2 native cloud images into VMware Integrated OpenStack. Images imported correctly but would not boot successfully. Common symptoms are “no Operating System found” messages generated by the virtual machine’s BIOS, the guest OS hanging during the boot cycle, or DHCP failure when trying to acquire an IP address. After further analysis, the problems were caused either by older upstream tooling or by simple adjustments required in the cloud image to match the vSphere environment. Specifically:

  • Some storage vendors need the streamOptimized image format.
  • Guest images attempt to write the boot log to ttyS0, but the serial interface is not available on the VM.
  • Defects exist in earlier versions of the qemu-img tool when creating streamOptimized images.
  • DHCP binding failure caused by Predictable Network Interface Naming.

To overcome these issues, we came up with the following set of best practices to help you simplify the image import process.  I thought it would be a good idea to share our recommendations so others can avoid running into similar issues.

1). In VIO 3.x and earlier, serial console output is not enabled. When booting an image that requires serial console support, use libguestfs to edit grub.cfg and remove all references to “console=ttyS0”. Libguestfs provides a suite of tools for accessing and editing VM disk images. Once installed, the guestmount command-line tool can be used to mount QCOW2-based images. By default, the disk image mounts in read-write mode. More info on libguestfs here.

# guestmount -a xxx-cloudimg-amd64.img -m /dev/sda1 /mnt

# vi /mnt/boot/grub/grub.cfg

# umount /mnt

2). VMware vSAN requires all images to be in streamOptimized format. When converting to VMDK format, use the -o flag to specify the subformat as streamOptimized:

# qemu-img convert -f qcow2 -O vmdk -o subformat=streamOptimized -o adapter_type=lsilogic xxx-server-cloudimg-amd64.img xxx-server-cloudimg-amd64.vmdk ; printf '\x03' | dd conv=notrunc of=xxx-server-cloudimg-amd64.vmdk bs=1 seek=$((0x4))

A few additional items to call out:

  • “lsilogic” is the recommended adapter type. Although it is possible to set the adapter type during image upload into Glance, we recommend always setting the adapter type as part of the image conversion process.
  • Older versions of the qemu-img tool contain a bug that causes problems with the streamOptimized subformat. The following command can be run after converting an image to correct the problem: printf '\x03' | dd conv=notrunc of=xxx-server-cloudimg-amd64.vmdk bs=1 seek=$((0x4)). It is harmless to execute the printf even if you’re using a version of qemu-img that has the fix: all it does is set the VMDK version to “3”, which a fixed version of qemu-img will already have done. If you are not sure which version you have, apply the printf command; a quick way to verify the result is sketched below.
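
To verify, inspect the version byte at offset 4 of the converted VMDK header. This is a minimal sketch, assuming the xxd utility is available on the build host:

# xxd -s 4 -l 1 xxx-server-cloudimg-amd64.vmdk

A value of 03 at that offset confirms the streamOptimized VMDK version has been set.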

3). In the case of CentOS, the udev rule ln -s /dev/null /etc/udev/rules.d/80-net-name-slot.rules shipped as part of the image bundle is ignored during CentOS image boot-up, and Predictable Network Interface Naming is enabled as a result. Our recommendation is to disable predictable naming using grub. You can find more information in my previous blog.

4). Finally, with the CirrOS QCOW2 image, preserve the adapter type as ‘ide’ during the QCOW2 to VMDK conversion process. There’s currently an upstream bug open.

# qemu-img convert -f qcow2 -O vmdk /var/www/images/cirros-0.3.5-x86_64-disk.img /var/www/images/cirros-0.3.5-x86_64-disk.idk.vmdk

qemu-img defaults to IDE if no adapter type is specified.

Once converted, you can look at the image metadata and validate information such as disk and image type before uploading into the Glance image repository. Image metadata can be viewed by displaying the first 20 lines of the VMDK:

# cat xxx-server-cloudimg-amd64.vmdk | head -20

You can add the newly converted image into Glance using the OpenStack CLI or Horizon. Set the public flag when the image is ready for end-user consumption.

OpenStack CLI:

# openstack image create --disk-format vmdk --public --file ./xxx-server-cloudimg-amd64.vmdk --property vmware_adaptertype='lsiLogic' --property vmware_disktype='streamOptimized' <Image display name>
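
To confirm the upload and the attached properties (the display name is whatever you chose above):

# openstack image show <Image display name>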

Horizon: the same image can be uploaded through the Horizon Images panel, supplying the disk format and the vmware_* properties as image metadata (the screen captures of this workflow are omitted here).

Your cloud is only as useful as the application and virtual machine images you can support. By following the simple best-practice guidelines above, you will deliver a better experience to your end users by offering more virtual machine varieties with significantly reduced lead time.

Visit us at VMworld in Las Vegas; we have a large number of demos and speaking sessions planned:

MGT2609BU:  VMware Integrated OpenStack 4.0: What’s New
MGT1785BU:  OpenStack in the Real World: VMware Integrated OpenStack Customer Panel
NET1338BU:  VMware Integrated OpenStack and NSX Integration Deep Dive
FUT3076BU:  Simplifying Your Open-Source Cloud With VMware
LDT2834BU:  Running Hybrid Applications: Mainframes to Containers
SPL182001U:  VMware Integrated OpenStack (VIO) – Getting Started
ELW182001U: VMware Integrated OpenStack (VIO) – Getting Started
SPL188602U: vCloud Network Functions Virtualization – Advanced Topics
LDT1844BU: Open Source at VMware: A Key Ingredient to Our Success and Yours

OpenStack Boston Summit VMware Sessions Recap

Watch below to experience VMware’s Speaker Sessions at this year’s OpenStack Summit in Boston!


OpenStack & VMware Getting the Best of Both

Speaker: Andrew Pearce

Come and understand the true value to your organization of combining OpenStack and VMware. In this session you will understand the value of having a DefCore / OpenStack Powered solution to enable your developers to provision IaaS in the way that they want, using the tools that they want. In addition, you will be able to enable your operations team to continue to utilize the tools, resources, and methodology they use to ensure that your organization has a production-grade environment to support your developers. Deploying OpenStack, and getting the advantages of OpenStack, does not need to be a rip-and-replace strategy. See how other customers have had their cake and eaten it too.


OpenStack and VMware: Enterprise-Grade IaaS Built on Proven Foundation

Speakers: Xiao Hu Gao & Hari Kannan 

Running production workloads on OpenStack requires a rock-solid IaaS running on a trusted infrastructure platform. Think about upgrading, patching, managing the environment, high availability, disaster recovery, security, and the list goes on. VMware delivers a top-notch OpenStack distribution that gives you all of the above and much more. Come to this session to see (with a demo) how you can easily and quickly deploy OpenStack for your dev/test as well as production workloads.


Is Neutron Challenging to You? Learn How VMware NSX is the Solution for Regular OpenStack Network & Security Services and Kubernetes

Speakers: Dmitri Desmidt, Yves Fauser

Neutron is challenging in many aspects. The main ones reported by OpenStack admins are: complex implementation of network and security services, high-availability, management/operation/troubleshooting, scale. Additionally, with new Kubernetes and Containers deployments, security between containers and management of container traffic is a new headache. VMware NSX offers a plugin for all Neutron OpenStack installations for ESXi and KVM hypervisors. Learn in this session with multiple live demos how VMware NSX plugin resolves all the Neutron challenges in an easy way.


Digital Transformation with OpenStack for Modern Service Providers

Speakers: Misbah Mahmoodi, Kenny Lee

The pace of technological change is accelerating at an exponential rate. With the advent of 5G networks and IoT, Communications Service Providers’ success depends not only on their ability to adapt to changes quickly but to do so faster than competitors. Speed is of the essence in developing new services, deploying them to subscribers, delivering a superior Quality of Experience, and increasing operational efficiency with lowered cost structures. For CSPs to adapt and remain competitive, they are faced with important questions as they explore the digital transformation of their business and infrastructure, and how they can leverage NFV, OpenStack, and open hardware platforms to accelerate change and modernization.


Running Kubernetes on a Thin OpenStack

Speakers: Mayan Weiss & Hari Kannan 

Kubernetes is leading the container mindshare, and the OpenStack community has built integrations to support it. However, running production workloads on Kubernetes is still a challenge. What if there were a production-ready, multi-tenant K8s distro? Dream no more. Come to this session to see how we adapted OpenStack + K8s to provide container networking, persistent storage, RBAC, LBaaS, and more on VMware SDDC.


OpenStack and OVN: What’s New with OVS 2.7

Speakers: Russel Bryant, Ben Pfaff, Justin Pettit

OVN is a virtual networking project built by the Open vSwitch community. OpenStack can make use of OVN as its backend networking implementation for Neutron. OVN and its Neutron integration are ready for use in OpenStack deployments.

This talk will cover the latest developments in the OVN project and the latest release, part of OVS 2.7. Enhancements include better performance, improved debugging capabilities, and more flexible L3 gateways. We will take a look ahead at the next set of things we expect to work on for OVN, which includes logging for OVN ACLs (security groups), encrypted tunnels, native DNS integration, and more.

We will also cover some of the performance comparison results of OVN as compared with the original OVS support in Neutron (ML2/OVS). Finally, we will discuss how to deploy OpenStack with OVN or migrate an existing deployment from ML2/OVS to OVN.


DefCore to Interop and Back Again: OpenStack Programs and Certifications Explained

Speakers: Mark Voelker & Egle Sigler

OpenStack Interop (formerly DefCore) guidelines have been in place for two years now, and anyone wanting to use the OpenStack logo must pass these guidelines. How are guidelines created and updated? How would your favorite project be added to one? How can you guarantee that your OpenStack deployment will comply with the new guidelines? In this session we will cover the OpenStack Interop guidelines and components, as well as explain how they are created and updated.


Senlin: An Ideal Bridge Between NFV Orchestrator and OpenStack

Speakers: Xinhui Li, Ethan Lynn, Yanyan Hu

Resource management is a top requirement in the NFV field. Usually, the orchestrator takes responsibility for parsing a virtual network function into different virtual units (VDUs) to deploy and operate over the cloud. Senlin, positioned as a clustering resource manager since its inception, can be the ideal bridge between an NFV orchestrator and OpenStack: it uses a consolidated model, directly mapped to a VDU, to interact with different backend services like Nova, Neutron, and Cinder for compute, network, and storage resources per the orchestrator’s demand, and it provides rich operational functions like auto-scaling, load-balancing, and auto-healing. We use a popular vIMS-type VNF to illustrate how to easily deploy a VNF on OpenStack and manage it in a scalable and flexible way.


High Availability and Scalability Management of VNF

Speakers: Haiwei Xu, Xinhui Li, XueFeng Liu

Network function virtualization (NFV) is growing rapidly and is widely adopted by many telecom enterprises. In OpenStack, Tacker takes responsibility for building a generic VNF Manager (VNFM) and an NFV Orchestrator (NFVO) to deploy and operate Network Services and Virtual Network Functions (VNFs) on the infrastructure platform. For VNFs that work as a load balancer or a firewall, Tacker needs to consider the availability of each VNF to ensure they are not overloaded or out of service. To prevent VNFs from being overloaded or going down, Tacker needs to make VNFs highly available and auto-scaling. So in fact the VNFs serving a given function should not be a single node, but a cluster.

That raises the problem of cluster management. In the OpenStack environment there is a clustering service called Senlin which provides scalability management and HA functions for nodes; those features are an exact fit for Tacker’s requirements.

In this talk we will give you a general introduction of this feature.


How an Interop Capability Becomes Part of the OpenStack Interop Guidelines

Speakers: Rochelle Grober, Mark Voelker, Luz Cazares

The OpenStack Interop Working Group (formerly DefCore) produces the OpenStack Powered (TM) Guidelines (a.k.a. Interoperability Guidelines). But how do we decide what goes into the guideline? How do we define these so-called “Capabilities”? And how does the team “score” them? Attend this session to learn what we mean by “capability”, the requirements a capability must meet, and the process the group follows to grade those capabilities… And, you know what, let’s score your favorite thing live.


OpenStack Interoperability Challenge and Interoperability Workgroup Updates: The Adventure Continues

Speakers: Brad Topol, Mark Voelker, Tong Li

The OpenStack community has been driving initiatives on two sides of the interoperability coin: workload portability and API/code standards for OpenStack Powered products. The first phase of the OpenStack Interoperability Challenge culminated with a Barcelona Summit Keynote demo comprised of 16 vendors all running the same enterprise workload to illustrate that OpenStack enables workload portability across OpenStack clouds. Building on this momentum for its second phase, the multi-vendor Interop Challenge team has selected new advanced workloads based on Kubernetes and NFV applications to flush out portability issues in these commonly deployed workloads. Meanwhile, the recently formed Interop Working Group continues to roll out new Guidelines, drive new initiatives, and is considering expanding its scope to cover more vertical use cases. In this presentation, we describe the progress, challenges, and lessons learned from both of these efforts.

Making OpenStack Neutron Better for Everyone

This blog post was created by Scott Lowe, VMware Engineering Architect in the Office of the CTO. Scott is an SDN expert and a published author. You can find more information about him at http://blog.scottlowe.org/

Additional comments and reviews: Xiao Gao, Gary Kotton and Marcos Hernandez.


In any open source project, there’s often a lot of work that has to happen “in the background,” so to speak, out of the view of the users that consume that open source project. This work often involves improvements in the performance, modularity, or supportability of the project without the addition of new features or new functionality. Sometimes this work is intended to help “pay technical debt” that has accumulated over the life of the project. As a result, users of the project may remain blissfully unaware of the significant work involved in such efforts. However, the importance of these “invisible” efforts cannot be overstated.

One such effort within the OpenStack community is called neutron-lib (more information is available here). In a nutshell, neutron-lib is about two things:

  1. It aims to build a common networking library that Neutron and all Neutron sub-projects can leverage, with the eventual goal of breaking all dependencies between sub-projects.
  2. It pays down accumulated technical debt in the Neutron project by refactoring and enhancing code as it is moved to this common library.

To a user—using that term in this instance to refer to anyone using the OpenStack Neutron code—this doesn’t result in visible new features or functionality. However, this is high-priority work that benefits the entire OpenStack community, and benefits OpenStack overall by enhancing the supportability and stability of the code base over the long term.

Why do we bring this up? Well, it’s recently come to my attention that people may be questioning VMware’s commitment to the OpenStack projects. Since they don’t see new features and new functionality emerging, users may think that VMware has simply moved away from OpenStack.

Nothing could be further from the truth. VMware is deeply committed to OpenStack, often in ways, like the neutron-lib effort, that are invisible to users of OpenStack. It can be easy at times to overlook a vendor’s contributions to an open source project when those contributions don’t directly result in new features or new functionality. Nevertheless, these contributions are critically important for the long-term success and viability of the project. It’s not glorious work, but it’s important work that benefits the OpenStack community and OpenStack users.

Being a responsible member of an open source community means not only doing the work that garners lots of attention, but also doing the work that needs to be done. Here at VMware, we’re striving to be responsible members of the OpenStack community, tackling efforts, in conjunction and close cooperation with the community, that not only benefit VMware but that benefit the OpenStack community, the ecosystem, and the users.

In a future post, I’ll focus on some of the contributions VMware is making that will result in new functionality or new features. Until then, if you’d like more information, please visit http://www.vmware.com/products/openstack.html or contact us and follow us on Twitter @VMware_OS.

Finally, don’t forget to visit our booth at the OpenStack Summit in Boston, May 8-12, 2017.

How to Deal with DHCP Failure Caused by Consistent Network Device Naming (VIO)

While testing the latest CentOS 7 QCOW2 cloud image, we ran into an issue where the guest operating system wasn’t able to obtain a DHCP IP address after a successful boot. After some troubleshooting, we quickly realized the NIC name was assigned based on predictable consistent network device naming (CNDN). You can read more about CNDN here. The network script required to bring up the network interface was missing from /etc/sysconfig/network-scripts; only the default ifcfg-eth0 script was present. The network interface remained DOWN since the interface script wasn’t available, so the Linux dhclient couldn’t bind to the interface, hence the DHCP failure.

To fix the symptom, we simply edited and renamed the interface script to reflect the predictable name, then restarted networking. But since this problem will show up again when booting a new VM, we need a permanent fix in the image template.

It turns out predictable naming was intended to be disabled in the CentOS 7 cloud image by a udev rule symlinking /etc/udev/rules.d/80-net-name-slot.rules to /dev/null (the screen capture of the rule is omitted here).

The system ignored this setting during boot-up and predictable naming was enabled as a result.

There are multiple ways to work around this:

Solution 1 – Update Default GRUB to Disable CNDN:

1) To restore the old naming convention, edit the /etc/default/grub file and add net.ifnames=0 and biosdevname=0 at the end of the GRUB_CMDLINE_LINUX variable:

Example: GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.keymap=us crashkernel=auto rd.lvm.lv=centos/root vconsole.font=latarcyrheb-sun16 rhgb quiet net.ifnames=0 biosdevname=0"

2) Review the new configuration by printing the output to STDOUT:

# grub2-mkconfig

3) Update the grub2 configuration after review:

# grub2-mkconfig -o /boot/grub2/grub.cfg

 

Solution 2: Enable Network Manager

1) Install Network Manager:

# yum install NetworkManager

2) Start Network Manager

# service NetworkManager start

3) Run chkconfig to ensure Network Manager starts after system reboot

# chkconfig NetworkManager on

Solution 3: Create a Custom Udev Rule

We will create a udev rule to override the unintended predictable name.

1) Create a new 80-net-name-slot.rules in /etc/udev/rules.d/

# touch /etc/udev/rules.d/80-net-name-slot.rules

2) Add the line below to the new 80-net-name-slot.rules:

NAME=="", ENV{ID_NET_NAME_SLOT}!="", NAME="eth0"

Final Implementation

All three solutions solve the problem. Approach #1 involves updating the GRUB config, so handle it with care. Solution #2 is a very hands-off approach, allowing NetworkManager to control interface states. Most sysadmins have a love/hate relationship with NetworkManager, however: it simplifies management of WiFi interfaces but can lead to unpredictable behavior in interface states. The most common concern is interfaces brought up by NetworkManager when they should be down, because the sysadmin is not ready to turn up those NICs yet. The OpenStack community has reported cloud-init timing-related issues as well, although we didn’t have any problems enabling it on the CentOS 7 cloud image. Solution #3 needs to align with overall deployment requirements in a multi-NIC environment.

In reality, CNDN was designed to solve NIC naming issues in a physical server environment. It stops being useful with virtual workloads: most cloud workloads deploy with a single NIC, and that NIC is always eth0. Consequently, disabling CNDN makes sense, and solution #1 is what we recommend.

Once the CentOS VM image is in the desired state, create a snapshot, then refer to the OpenStack documentation to upload it into Glance. As a shortcut to validate the new image, instead of creating a snapshot, downloading it, and uploading it back into Glance, it is perfectly fine to boot a VM directly from the snapshot. Please refer to the VIO documentation for recommended steps. A sketch of this shortcut is shown below.
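
As a rough sketch with the OpenStack CLI (the server, image, flavor, and network names here are illustrative):

# openstack server image create --name centos7-template centos7-vm

# openstack server create --image centos7-template --flavor m1.small --network app-net centos7-test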

Be sure to test this out on your VMware Integrated OpenStack setup today. If you don’t have VIO yet, try it on our VMware Integrated OpenStack Hands-On-Lab, no installation required.

OpenStack Summit:

We will be at the OpenStack Summit in Boston. If you are attending the conference, swing by the VMware booth or attend one of our many sessions:

OpenStack and VMware – Use the Right Foundation for Containers

Digital Transformation with OpenStack for Modern Service Providers

Is Neutron challenging to you – Learn how VMware NSX is the solution for regular OpenStack Network & Security services and Kubernetes

OpenStack and OVN – What’s New with OVS 2.7 

DefCore to Interop and back again: OpenStack Programs and Certifications Explained

Senlin, an ideal bridge between NFV Orchestrator and OpenStack 

High availability and scalability management of VNF

How an Interop Capability becomes part of the OpenStack Interop Guidelines

OpenStack Interoperability Challenge and Interoperability Workgroup Updates: The Adventure Continues

Lightning Talk:

OpenStack and VMware: getting the best of both.

Demos:

Station 1: VMware NSX & VMware Integrated OpenStack

Station 2: NFV & VMware Integrated OpenStack

 

VMware Integrated OpenStack 3.1 GA. What’s New!

VMware announced general availability (GA) of VMware Integrated OpenStack 3.1 on Feb 21, 2017. We are truly excited about our latest OpenStack distribution, which gives our customers enhanced stability on top of the Mitaka release and a streamlined user experience with Single Sign-On support through VMware Identity Manager. For OpenStack cloud admins, the 3.1 release is also about enhanced integrations that allow cloud admins to further take advantage of battle-tested vSphere infrastructure and operations tooling, providing enhanced security, OpenStack API performance monitoring, brownfield workload migration, and seamless upgrade between central and distributed OpenStack management control planes.

VIO 3.1 is available for download here.  New features include:

  • Support for the latest versions of VMware products. VMware Integrated OpenStack 3.1 supports and is fully compatible with VMware vSphere 6.5, VMware NSX for vSphere 6.3, and VMware NSX-T 1.1. To learn more about vSphere 6.5, visit here; for NSX for vSphere 6.3 and NSX-T, visit here.
  • NSX Policy Support in Neutron. NSX administrators can define security policies that the OpenStack cloud admin shares with cloud users. Depending on the policy set by the OpenStack cloud admin, users can either create their own rules, bounded by predefined ones that can’t be overridden, or use only the predefined rules. The NSX provider policy feature allows infrastructure admins to enable enhanced security insertion and assurance that all workloads are developed and deployed based on standard IT security policies.
  • New NFV Features. Further expanding on VIO 3.0’s ability to leverage existing workloads in your OpenStack cloud, you can now import vSphere VMs with NSX network backing into VMware Integrated OpenStack. The ability to import vSphere VM workloads into OpenStack and run critical day 2 operations against them via OpenStack APIs enables you to quickly move existing development projects or production workloads to the OpenStack framework. VM import steps can be found here. In addition, full passthrough using VMware DirectPath I/O is supported.
  • Seamless update from compact mode to HA mode. If you are updating from VMware Integrated OpenStack 3.0 that is deployed in compact mode to 3.1, you can seamlessly transition to an HA deployment during the update. Upgrade docs can be found here.
  • Single Sign-On integration with VMware Identity Manager. You can now streamline authentication for your OpenStack deployment by integrating it with VMware Identity Manager.  SSO integration steps can be found here.
  • Profiling enhancements. Instead of writing data into Ceilometer, OpenStack OSProfiler can now leverage vRealize Log Insight to store profile data. This approach provides enhanced scalability for OpenStack API performance monitoring. Detailed steps for enabling OpenStack profiling can be found here.

Try VMware Integrated OpenStack Today

Take a free test drive, no installation required, with the VMware Integrated OpenStack Hands-on Lab.

Take Advantage of Nova Flavor Extra-specs and vSphere QoS to Make Delivering SLAs Much Simpler

Resource and over-subscription management are always among the most challenging tasks facing a cloud admin. To deliver a guaranteed SLA, one method OpenStack cloud admins have used is to create separate compute aggregates with different allocation/over-subscription ratios. Production workloads that require guaranteed CPU, memory, or storage are placed into a non-oversubscribed aggregate with 1:1 over-subscription; dev workloads may be placed into a best-effort aggregate with N:1 over-subscription. While this simplistic model accomplishes its purpose of an SLA guarantee on paper, it comes with huge CapEx and/or high overhead for capacity management and augmentation. Worse yet, because host-aggregate-level over-subscription in OpenStack is simply static metadata consumed by the nova scheduler during VM placement, not real-time VM state or consumption, huge resource imbalances within a compute aggregate and noisy neighbor issues within a nova compute host are common occurrences.

New workloads can be placed on a host running close to capacity (real-time consumption) while the remaining hosts run idle, due to differences in application characteristics and usage patterns. The lack of automated day 2 resource re-balancing further exacerbates the issue. To provide white-glove treatment to critical tenants and workloads, cloud admins must deploy additional tooling to discover basic VM-to-hypervisor mapping based on OpenStack project IDs. This is both expensive and ineffective in meeting SLAs.

Over-subscription works if resource consumption can be tracked and balanced across a compute cluster. Noisy neighbor issues can be solved only if the underlying infrastructure supports quality of service (QoS). By leveraging OpenStack Nova flavor extra-spec extensions along with vSphere’s industry-proven per-VM resource allocation controls (expressed using shares, limits, and reservations), OpenStack cloud admins can deliver enhanced QoS while maintaining uniform consumption across a compute cluster. It is possible to leverage image metadata to deliver QoS as well, but this blog will focus on Nova flavor extra-specs.

The VMware Nova flavor extension to OpenStack was first introduced upstream in Kilo and is officially supported in VIO release 2.0 and above. Additional requirements are outlined below:

  • Requires VMware Integrated OpenStack version 2.0.x or greater
  • Requires vSphere version 6.0 or greater
  • Network Bandwidth Reservation requires NIOC version 3
  • VMware Integrated OpenStack access as a cloud administrator

Resource reservations can be set for following resource categories:

  • CPU (MHz)
  • Memory (MB)
  • Disk IO (IOPS)
  • Network Bandwidth (Mbps)

Within each resource category, the cloud admin has the option to set:

  • Limit – Upper bound; utilization may not exceed the limit
  • Reservation – Guaranteed minimum reservation
  • Share Level – The allocation level. This can be ‘custom’, ‘high’, ‘normal’, or ‘low’.
  • Shares – If ‘custom’ is used, this is the number of shares.

Complete Nova flavor extra-spec details and deployment options can be found here. The vSphere resource management capabilities and configuration guidelines are a great reference as well and can be found here.

Let’s look at an example using Hadoop to demonstrate VM resource management with flavor extra-specs. Data flows from Kafka into HDFS; every 30 minutes a batch job consumes the newly ingested data. Exact details of the Hadoop workflow are outside the scope of this blog; if you are not familiar with Hadoop, some details can be found here. Resources required for this small-scale deployment are outlined below:

| Node Type          | Cores (reserved – max) | Memory (reserved – max) | Disk | Network Limit |
|--------------------|------------------------|-------------------------|------|---------------|
| Master / Name Node | 4                      | 16 G                    | 70 G | 500 Mbps      |
| Data Node          | 4                      | 16 G                    | 70 G | 1000 Mbps     |
| Kafka              | 0.4 – 2                | 2 – 4 G                 | 25 G | 100 Mbps      |

Based on the above requirements, the cloud admin needs to create Nova flavors matching the maximum CPU / memory / disk requirements for each Hadoop component. Most OpenStack admins should be very familiar with this process; a sketch is shown below.

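A minimal sketch of this step in place of the original screen capture (flavor names are illustrative, IDs are auto-generated, and the RAM / disk / vCPU sizes follow the table above):

# nova flavor-create hadoop-master auto 16384 70 4

# nova flavor-create hadoop-datanode auto 16384 70 4

# nova flavor-create kafka auto 4096 25 2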

Based on the reservation amounts, attach the corresponding Nova extra specs to each flavor (a sketch follows):

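Again as a sketch in place of the original screen capture: the quota:* keys are the VMware driver’s flavor extra specs; CPU reservations are expressed in MHz (a 2,600 MHz per-core clock is assumed here, so 4 fully reserved cores become 10400 MHz), memory reservations in MB, and network limits in Mbps:

# nova flavor-key hadoop-datanode set quota:cpu_reservation=10400 quota:memory_reservation=16384 quota:vif_limit=1000

# nova flavor-key hadoop-master set quota:cpu_reservation=10400 quota:memory_reservation=16384 quota:vif_limit=500

# nova flavor-key kafka set quota:cpu_reservation=1040 quota:memory_reservation=2048 quota:vif_limit=100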

Once the extra specs are mapped, confirm the settings using the standard nova flavor-show command (example below):

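For example, using the illustrative flavor name from the sketch above:

# nova flavor-show hadoop-datanode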

In just three simple steps, the resource reservation settings are complete. Any new VM provisioned using the new flavors from OpenStack (API, command line, or Horizon GUI) will have its resource requirements passed to vSphere (existing VMs can be migrated using the nova rebuild feature).

Instead of best effort, vSphere will guarantee resources based on the Nova flavor extra-spec definition. Specific to our example, 4 vCPU / 16 G / max 1G network throughput will be reserved for each DataNode, the NameNode gets 4 vCPU / 16 G / max 500M throughput, and Kafka nodes will have 20% vCPU / 50% memory reserved. Instances boot into "Error" state if the requested resources are not available, ensuring existing workload SLAs are not violated. You can see the resource reservations created by the vSphere Nova driver reflected in the vCenter interface:

The vCenter screenshots that accompanied the original post (omitted here) showed the resulting settings for: Name Node CPU / memory, Name Node network bandwidth, Data Node CPU / memory, Data Node network bandwidth, Kafka node CPU / memory, and Kafka node network bandwidth.

vSphere will enforce strict admission control based on real-time resource allocation and load. New workloads will be admitted only if the SLA can be honored for both new and existing applications. Once a workload is deployed, in conjunction with vSphere DRS, workload rebalancing can happen automatically between hypervisors to ensure optimal host utilization (future blog) and avoid noisy neighbor issues. Both features are available out of the box; no customization is required.

By taking advantage of vSphere VM resource reservation capabilities, OpenStack cloud admins can finally enjoy the superior capacity and over-subscription capabilities a cloud environment offers. Instead of deploying excess hardware, cloud admins can control when and where additional hardware is needed based on real-time application consumption and growth. The ability to consolidate, simplify, and control your infrastructure will help reduce power and space requirements and eliminate the need for out-of-box customization in tooling or operational monitoring. I invite you to test out Nova extra-specs in your VIO environment today, or encourage your IT team to try our VMware Integrated OpenStack Hands-On-Lab, no installation required.