
Tag Archives: OpenStack

Barbican Consumption and Operational Maintenance

VMware Integrated OpenStack (VIO) announced official support for Barbican, the OpenStack secrets manager, in version 5.1. With Barbican, cloud operators can offer Key Management as a Service by leveraging the Barbican API and command line (CLI) to manage X.509 certificates, keys, and passwords. The basic Barbican workflow is relatively simple: invoke the secret-store plugin to encrypt a secret on store and decrypt it on retrieval. In addition to generic secrets management, some OpenStack projects integrate with Barbican natively to provide enhanced security on top of its base offering. This blog introduces Barbican consumption and operational maintenance through the use of Neutron Load Balancer as a Service (LBaaS).

Understanding Policies

Barbican scopes the ownership of a secret at the OpenStack project level. For each API call, OpenStack checks that the project ID of the token matches the project ID stored as the secret owner. Further, Barbican uses roles and policies to determine access to secrets. The following roles are defined in Barbican:

  • Admin – Project administrator. This user has full access to all resources owned by the project for which the admin role is scoped.
  • Creator – Users with this role are allowed to create and delete resources, but cannot delete other users’ resources within the same project. They are also allowed full access to existing secrets owned by the project in scope.
  • Observer – Users with this role are allowed access to existing resources but are not allowed to upload new secrets or delete existing secrets.
  • Audit – Users with this role are only allowed access to the resource metadata, so they are unable to decrypt secrets.

VIO 5.1 ships with the “admin” and “creator” roles out of the box. A project member must be assigned the creator role to consume Barbican. Based on these roles, Barbican defines a set of rules, or policies, for access control; only operations specified by the matching rule are permitted.
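For example, a cloud admin can grant the creator role with the standard Keystone role assignment (the project and user names here are hypothetical):

openstack role add --project demo --user alice creator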

While the policy framework works well, secrets management is never one size fits all, and there are limitations if fine-grained control is required. Scenarios such as granting a specific user access to a particular secret, or uploading a secret that only the uploader can access, require OpenStack ACLs. Please refer to the ACL API User Guide for full details.
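As a quick sketch of what fine-grained control looks like (the secret name cert1 is hypothetical), a secret owner can inspect and extend the ACL on a single secret with the Barbican CLI:

openstack acl get $(openstack secret list | awk '/ cert1 / {print $2}')

openstack acl user add -u <user UUID> $(openstack secret list | awk '/ cert1 / {print $2}')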

Supported Plugin

The Barbican key manager service leverages secret-store plugins to allow authorized users to store secrets. VIO 5.1 supports two types of plugins: simple crypto and KMIP. Only a single plugin can be active for a VIO deployment. Secret stores can be software-based, such as a software token, or hardware devices such as a hardware security module (HSM).

Simple crypto plugin

The simple crypto plugin uses a single symmetric key, stored locally on the VIO controller in the /etc/barbican/barbican.conf file, to encrypt and decrypt secrets. This plugin also leverages the local Barbican database and stores user secrets as encrypted blobs in that database. The reliance on a local text file and database for storage is considered insecure, and the upstream community therefore considers the simple crypto plugin suitable for development and testing workloads only.

Secret store KMIP plugins

The KMIP plugin stores secrets securely in an external KMIP-enabled device. The Barbican database, instead of storing encrypted secrets, maintains location references to secrets for later retrieval. Client certificate-based authentication is the recommended approach to integrate the plugin with the KMIP-enabled device.

A cloud operator must use the VIOCLI to specify a plugin:

KMIP:

sudo viocli barbican --secret-store-plugin KMIP \
--host kmip-server --port kmip-port \
--ca-certs ca-cert-file [--certfile local-cert-file --keyfile local-key-file --user kmip-user --password kmip-password]

Simple Crypto:

sudo viocli barbican --secret-store-plugin simple_crypto

Example Barbican Consumption:

One of the most commonly requested use cases specific to VIO is Barbican integration with Neutron LBaaS to offer HTTPS offload. This is a five-step process, and we will review each step in detail.

  1. Install KMIP server (Greenfield only)
  2. Integrate KMIP using VIOCLI
  3. ACL update
  4. Workflow to create secret
  5. Workflow to create LBaaSv2

Please note that you must use the OpenStack API or CLI for step #4; Horizon support for Barbican is not available.

Install KMIP server

A production Barbican deployment requires a KMIP server. In a greenfield deployment, Dell EMC CloudLink is a popular solution that VMware vSAN customers leverage to enable vSAN storage encryption. CloudLink includes both a key management server (KMS) and the ability to control, monitor, and encrypt secrets across a hybrid cloud environment. Additional details on CloudLink are available from the VMware Solution Exchange.

Integrate KMIP using VIOCLI

To integrate with CloudLink KMS or any other KMIP-based secret store, simply log in to the VIO OMS server and issue the following VIOCLI command:

Configure Barbican to use the KMIP plugin.

viocli barbican --secret-store-plugin KMIP \
--user viouser \
--password VMware**** \
--host <KMIP host IP> \
--ca-certs /home/viouser/viouser_key_cert/ca.pem \
--certfile /home/viouser/viouser_key_cert/cert.pem \
--keyfile /home/viouser/viouser_key_cert/key.pem --port 5696

Successful completion of the VIOCLI command performs the following set of actions:

  • neutron.conf is updated to include a Barbican-specific service_auth account.
  • Barbican environment-specific information provided via VIOCLI is applied.
  • Barbican service endpoints are defined on HAProxy.

ACL updates based on consumption

Neutron LBaaS relies on a Barbican service account to read and push certificates and keys stored in Barbican containers to a load balancer. The Barbican service user is an admin member of the service project, part of the OpenStack local domain. The default Barbican security policy does not allow an admin or member of one project to access secrets stored in a different project. In order for the Barbican service user to access and push certificates and keys, tenant users must grant access to the service account. There are two ways to allow access:

Option 1:

1). The tenant creator gives the Barbican service user access using the OpenStack ACL command. The cloud administrator needs to supply the UUID of the Barbican service account.

openstack acl user add -u <barbican_service_account UUID> $(openstack secret list | awk '/ cert1 / {print $2}')

Repeat this command for each certificate, key, and container you want to provide Neutron access to.
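If there are many objects, a short shell loop over every secret owned by the project can achieve the same result. This is a sketch: it assumes the "Secret href" column name used by the Barbican CLI and grants the service account access to every secret in the project, which may be broader than you intend.

for ref in $(openstack secret list -f value -c "Secret href"); do
  openstack acl user add -u <barbican_service_account UUID> "$ref"
done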

Option 2:

2). If cloud administrators are comfortable providing Neutron with access to secrets without users granting access to individual objects, they may elect to modify the Barbican policy file. Implementing this policy change means that tenants won’t need to add the Neutron Barbican service_user to every object, which makes the process of creating TERMINATED_HTTPS listeners easier. Administrators should understand and be comfortable with the security implications of this action before implementing this approach. To perform the policy change, use a custom playbook to change the following line in the Barbican policy.json file:

From:   "secret:get": "rule:secret_non_private_read or rule:secret_project_creator or rule:secret_project_admin or rule:secret_acl_read",

To:   "secret:get": "rule:secret_non_private_read or rule:secret_project_creator or rule:secret_project_admin or rule:secret_acl_read or role:admin",

Please refer to my previous blog on custom-playbook.

Workflow to Create Secret:

This step assumes you have pre-created certificates and keys. If you have not created keys and certificates before, please refer to this blog for details. To follow the steps outlined below, make sure to name your output files accordingly (server.crt and server.key).
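If you only need a throwaway pair for testing, a minimal openssl sketch such as the following produces files with the expected names (the common name is hypothetical; adjust it to your environment):

openssl genrsa -out server.key 2048

openssl req -new -x509 -key server.key -out server.crt -days 365 -subj '/CN=lb.example.test'

To upload the certificate: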

openstack secret store --name='certificate' \
--payload="$(cat server.crt)" \
--secret-type=passphrase

Most of the options are fairly self-explanatory; passphrase indicates plain text. Repeat the same command for the key:

openstack secret store --name='private_key' \
--payload="$(cat server.key)" \
--secret-type=passphrase

You can confirm by listing all secrets:
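For example, the Barbican CLI shows each secret's href, name, and type:

openstack secret list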

Finally, create a TLS container pointing to both the private key and certificate secrets:

openstack secret container create --name='tls_container' --type='certificate' \
  --secret="certificate=$(openstack secret list | awk '/ certificate / {print $2}')" \
  --secret="private_key=$(openstack secret list | awk '/ private_key / {print $2}')"

Workflow to create LBaaSv2

With the Barbican service up and running and ACLs configured to allow retrieval of secrets, let's create a load balancer and have it pull the certificate and key from the KMS server. The load balancer creation workflow does not change with Barbican. When creating a listener, be sure to specify TERMINATED_HTTPS as the protocol and the URL of the TLS container stored in Barbican.

Please note:  

  1. If you are testing Barbican against NSX-T, NSX Manager must be running version 2.2 or higher.
  2. This example assumes pre-created test VMs, a T1 router, a logical switch, and subnets.
  • Create a TLS-enabled LB:

neutron lbaas-loadbalancer-create \
$(neutron subnet-list | awk '/ {subnet name} / {print $2}') \
--name lb1

  • Create listener with TLS

neutron lbaas-listener-create --loadbalancer lb1 \
--protocol-port 443 \
--protocol TERMINATED_HTTPS \
--name listener1 \
--default-tls-container=$(openstack secret list | awk '/ tls_container / {print $2}')

  • Create pool:

neutron lbaas-pool-create \
--name pool1 \
--protocol HTTP \
--listener listener1 \
--lb-algorithm ROUND_ROBIN

  • Add members:

neutron lbaas-member-create pool1 \
--address <address1> \
--protocol-port 80 \
--subnet $(neutron subnet-list | awk '/ test-sub / {print $2}')

neutron lbaas-member-create pool1 \
--address <address2> \
--protocol-port 80 \
--subnet $(neutron subnet-list | awk '/ test-sub / {print $2}')

You can associate a floating IP address with the load balancer VIP for services requiring external access.
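A minimal sketch with the Neutron CLI (the external network name is an assumption; the VIP port lookup uses the load balancer created above):

neutron floatingip-create <external network> \
--port-id $(neutron lbaas-loadbalancer-show lb1 -f value -c vip_port_id)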

To test out the new LB service, simply curl the URL using the floating IP:

viouser@oms:~$  curl -k https://192.168.120.130

VMware Integrated OpenStack DNSaaS – Designate Deepdive

Written by Xiao Gao, with valuable feedback from Damon Li.

DNS is essential to nearly every cloud application, and it is almost unimaginable to launch any cloud service without a robust DNS implementation. Booting a VM in OpenStack takes seconds. However, for most OpenStack operators, once a VM is booted, the first step towards production is to manually create a ticket to register the IP address with the corporate DNS. This registration process can often take a few days. Why give users the power to boot a VM only to have them submit an IT support ticket for a DNS entry? Delegating the responsibility for maintaining DNS records to the application owners, entirely based on self-service (DNSaaS), reduces the load on IT teams and gives users the power to do what they want. DNS is so fundamental to the application lifecycle that it should just happen.

One of the most requested features from the VIO user community has been self-service for DNS records, and we are proud to deliver OpenStack Designate as part of VIO 5.0. OpenStack Designate is the OpenStack equivalent of AWS Route 53. There are three ways to consume Designate in VIO 5.0.

Designate Architecture

Architecturally Designate consists of the following components:

  • API service – It is the consumption layer of Designate.  API service is also responsible for validation of API input.
  • Sink – Sink is a notification event listener. It generates simple DNS forward lookup A records based on Nova and Neutron notification events.
  • Central Process – The business logic handler. Central Process is responsible for user and permission validation. It also manages access to the Designate database and dispatches requests.
  • Pool Manager – Manages the states of the DNS servers. The Pool Manager divides DNS servers into ‘pools’ (PowerDNS, BIND9, etc.) so that zones within Designate can be split across different sets of backend servers. The Pool Manager is also responsible for making sure that backend DNS servers are in sync with the Designate database.
  • Designate-zone-manager – A zone shard is a collection of zones allocated based on the first three characters of the zone UUID. The Zone Manager handles all tasks relating to the zone shard.
  • Backend – Software plugins for common DNS servers (PowerDNS, BIND9, etc.). The Pool Manager loads the corresponding plugin based on the type of backend.
  • MiniDNS – Serves information from the database to DNS servers via NOTIFY and zone transfer. Most importantly, do not expose MiniDNS to end users.

Component Mapping in VIO

The table below highlights the mapping of Designate services to the VIO control VM(s):

Designate Consumption

VIO 5.0 supports BIND9, PowerDNS (version 4+), and Infoblox backends. Since DNS is so foundational, there is no such thing as greenfield for Designate. Once Designate is enabled, there are a few strategies for inserting it into your private cloud. Graham Hayes (OpenStack Designate PTL) and a few others gave an excellent talk on this topic. You can find their talk here:

https://www.youtube.com/watch?v=tD_XlSfnGZ0

General recommendations are to start small and expand.  One common multi-phase approach is:

  • Phase I – Delegate – Pick a subzone and delegate it to Designate. In the delegation phase, the type of backend can differ from the existing production platform.
  • Phase II – Integrate – Deploy a second pool that mirrors the production DNS, and migrate a subset of users/projects.
  • Phase III – Converge – Migrate production zones onto Designate-owned DNS servers.

Configure Designate

Getting started with Phase I is simple. At a high level, follow the steps below:

1). Log in to the vSphere UI and select Home > VMware Integrated OpenStack > Manage > Settings > Configure Designate.

Note: The backend DNS server must be accessible from the VIO control plane.

2). Configure your DNS server. The VIO QA team has certified BIND, PowerDNS, and Infoblox.

3). Consume – VIO currently does not support provider-network-based DNS registration, only floating IPs. A recordset is created when a floating IP is associated with a VM and deleted after disassociation. Customers running BGPaaS / no-NAT can leverage static records to insert entries into DNS:

openstack recordset create --records <IP> --type A <domain>
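As a concrete (hypothetical) example, assuming a delegated zone named app.example.com., creating the zone and an A record might look like the following; flag names vary slightly between python-designateclient versions (--record vs --records), so check openstack recordset create --help:

openstack zone create --email dnsadmin@example.com app.example.com.

openstack recordset create app.example.com. web01.app.example.com. --type A --record 10.10.10.25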

I have created a video recording to demonstrate basic consumption of Designate.

 

There is no one-size-fits-all solution for Phases II and III. Each organization may adopt a different implementation strategy based on the operational processes and application availability requirements unique to their organization. Some organizations may never implement beyond Phase I; you are not alone. We recommend consulting with the VMware PSO team to work out the most optimal implementation based on your unique requirements. Also, we welcome your input. Feel free to share your experience on our VIO community page, or leave a note at the end of this blog with your thoughts.

VMware Integrated OpenStack 5.0: What’s New

VMware today announced VMware Integrated OpenStack (VIO) 5.0. We are truly excited about our latest OpenStack distribution as VMware is one of the first companies to support and provide enhanced stability on top of the newest OpenStack Queens Release.  Available in both Carrier and Data Center Editions, VIO 5.0 enables customers to take advantage of advancements in Queens to support mission-critical workloads, and adds support for the latest versions of VMware products including vSphere, vSAN, and NSX.

For our Telco/NFV customers, VIO 5.0 is about delivering scale and availability for hybrid applications across VM and container-based workloads using a single VIM (Virtual Infrastructure Manager). Also for NFV operators, VIO 5.0 will help fast-track a path towards edge computing with VIO-in-a-box, secure multi-tenant isolation, and accelerated network performance using the enhanced NSX-T VDS (N-VDS). For VIO Data Center customers, advanced security, a simplified user experience, and advanced networking with DNSaaS have been at the top of the wish list. We are excited to bring those features to VIO 5.0.

VIO 5.0 NFV Feature Details:

Advanced Kubernetes Support:

Enhanced Kubernetes support:  VIO 5.0 ships with Kubernetes version 1.9.2.  In addition to the latest upstream K8S release, integration with latest NSX-T 2.2 release is also included. VIO Kubernetes customers can leverage the same Enhanced N-VDS via Multus CNI plugin to achieve significant improvements in container response time, reduced network latencies and breakthrough network performance.

Heterogeneous Cluster using Node Group:  Now you can have different types of worker nodes in the same cluster. Extending the cluster node profiles feature introduced in VIO 4.1, a cluster can now have multiple node groups, each mapping to a single node profile. Instead of building isolated special purpose Kubernetes clusters, a cloud admin can introduce a new node group(s) to accommodate heterogeneous applications such as machine learning, artificial intelligence, and video encoding.  If resource usage exceeds the node group limit, VIO 5.0 supports cluster scaling at a node group level.  With node groups, cloud admins can address cluster capacity based on application requirements, allowing the most efficient use of available resources.

Enhanced Cluster Manageability:  vkube heal and ssh allow you to SSH directly into any node of a given cluster and to recover failed cluster nodes based on ETCD state, or from a cluster backup in the case of complete failure.

Advanced Networking:

N-VDS:  Also known as NSX-T VDS in Enhanced Data-path mode. Enhanced, because N-VDS runs in DPDK mode and allows containers and VMs to achieve significant improvements in response time, reduced network latencies, and breakthrough network performance. With performance similar to SR-IOV, while maintaining the operational simplicity of virtualized NICs, NFV customers can have their cake and eat it too.

NSX-V Search domain:  A new configuration setting in the NSX-V will enable the admin to configure a global search domain. Tenants will use this search domain if there is no other search domain set on the subnet.

NSX-V Exclusive DHCP server per Project:  Instead of a DHCP edge shared across multiple projects based on subnet, an exclusive DHCP edge provides the ability to assign dedicated DHCP servers per network segment. An exclusive DHCP server provides better tenant isolation and allows an admin to determine customer impact for maintenance windows, etc.

NSX-T availability zone (AZ):  An availability zone is used to make network resources highly available by grouping network nodes that run services like DHCP, L3, NAT, and others. Users can associate applications with an availability zone for high availability. In previous releases Neutron AZs were supported against NSX-V; we are extending this support to NSX-T as well.

Security and Metering:

Keystone Federation:   Federated Identity provides a way to securely use existing credentials to access cloud resources such as servers, volumes, and databases, across multiple endpoints across multiple authorized clouds using a single set of credentials.  VIO5 supports Keystone to Keystone (K2K) federation by designating a central Keystone instance as an Identity Provider (IdP), interfacing with LDAP or an upstream SAML2 IdP.  Remote Keystone endpoints are configured as Service Providers (SP), propagating authentication requests to the central Keystone.  As part of Keystone Federation enhancement, we will also support 3rd party IdP in addition to the existing support for vIDM.

Gnocchi:   Gnocchi is the project name of a TDBaaS (Time Series Database as a Service) project that was initially created under the Ceilometer umbrella. Rather than storing raw data points, it aggregates them before storing them.  Because Gnocchi computes all the aggregations at ingestion, data retrieval is exceptionally speedy.  Gnocchi resolves performance bottlenecks in Ceilometer’s legacy architecture by providing an extremely robust foundation for the metric storage required for billing and monitoring.  The legacy Ceilometer API service has been deprecated by upstream and is no longer available in Queens.  Instead, the Ceilometer API and functionality has been broken out into the Aodh, Panko, and Gnocchi services, all of which are fully supported in VIO 5.0.

Default Drop Policy:   Enable this feature to ensure that traffic to a port that has no security groups and has port security enabled is always discarded.

End to End Encryption:  The cloud admin now has the option to enable API encryption for internal API calls in addition to the existing encryption on public OpenStack endpoints.  When enabled, all internal OpenStack API calls will be sent over HTTPS using strong TLS 1.2 encryption.  Encryption on internal endpoints helps avoid man-in-the-middle attacks if the management network is compromised.

Performance and Manageability:

VIO-in-a-box:  Also known as the “Tiny” deployment. Instead of separate physical clusters for management and compute, VMware Integrated OpenStack control and data plane can now consolidate on a single physical server.   This drastically reduces the footprint of a deployment and is ideal for Edge Computing scenarios where power and space is a concern.  VIO-in-a-box can be preconfigured manually or fully automated with OMS API.

Hardware Acceleration:  GPUs are synonymous with artificial intelligence and machine learning.  vGPU support gives OpenStack operators the same benefits for graphics-intensive workloads as traditional enterprise applications: specifically resource consolidation, increased utilization, and simplified automation. The video RAM on the GPU is carved up into portions.  Multiple VM instances can be scheduled to access available vGPUs.  Cloud admins determine the amount of vGPU each VM can access based on VM flavors.  There are various ways to carve vGPU resources. Refer to the NVIDIA GRID vGPU user guide for additional detail on this topic.  

OpenStack at Scale:  VMware Integrated OpenStack 5.0 features improved scale, having been tested and validated to run 500 hosts and 15,000 VMs in a region. This release will also introduce support for multiple regions at once as well as monitoring and metrics at scale.

Elastic TvDC:  A Tenant Virtual Datacenter (TvDC) can extend across multiple clusters in VIO 5.0.  Extending on support of single cluster TvDC’s introduced in VIO 4.0, VIO 5.0 allows a TvDC to span across multiple clusters.  Cloud admins can create several resource pools across multiple clusters assigning the same name, project-id, and unique provider-id. When tenants launch a new instance, the OpenStack scheduler and placement engine will schedule VM request to any of the resource pools mapped to the TvDC.

VMware at OpenStack Summit 2018:

VMware is a Premier Sponsor of OpenStack Summit 2018, which runs May 21-24 at the Vancouver Convention Centre in Vancouver, BC, Canada. If you are attending the Summit in person, we invite you to stop by VMware’s booth (located at A16) for feature demonstrations of VMware Integrated OpenStack 5 as well as VMware NSX and VMware vCloud NFV. Hands-on training is also available (RSVP required). A complete schedule of VMware breakout sessions, lightning talks, and training presentations can be found here.

Leverage OpenStack for your Production Workloads

In my previous blog I wrote about VMware’s involvement in open source. The proliferation of open source projects in recent years has influenced how people think about technology, and how technology is being adopted in organizations, for a few reasons. First, open source is more accessible – developers can download projects from github to their laptops and quickly start using them. Second, open source delivers cutting edge capabilities, and companies leverage that to increase the pace of innovation. Third, developers love the idea that they can influence, customize and fix the code of the tools they’re using.  Many companies are now adopting the “open source first” strategy with the hope that they will not only speed up innovation but also cut costs, as open source is free.

However, while developers increasingly adopt open source, it often doesn’t come easy to DevOps and IT teams, who carry the heavy burden of bringing applications from the developer laptop to production. These teams have to think about stability, performance, security, upgrades, patching, and the list goes on. In those cases, enterprises are often happy to pay for an enterprise-grade version of the product, in which all those things are already taken care of.

When applications are ready to move to production…

OpenStack is a great example. Many organizations are keen to run their applications on top of an open source platform, also known to be the industry standard. But that doesn’t come without deployment and manageability challenges. That’s where VMware provides more value to customers.

VMware Integrated OpenStack (VIO) makes it easier for IT to deploy and run an OpenStack cloud on top of their existing VMware infrastructure. Combining VIO with the enterprise-grade capabilities of the VMware stack provides customers with the most reliable and production ready OpenStack solution. There are three key reasons for this statement: a) VMware provides best-of-breed, production ready OpenStack-compatible infrastructure; b) VIO is fully tested for both – business continuity and compatibility; and c) VMware delivers capabilities for day 2 operations. Let me go into details for each of the three.

Best-of-breed OpenStack-compatible infrastructure

First, VMware Integrated OpenStack is optimized to run on top of VMware Software Defined Data Center (SDDC), leveraging all the enterprise-grade capabilities of VMware technologies such as high availability, scalability, security and so on.

  • vSphere for Nova Compute: VIO takes advantage of vSphere capabilities such as Dynamic Resource Scheduling (DRS) to achieve optimal VM density and vMotion to protect tenant workloads against failures.
  • VMware NSX for Neutron: advanced networking services with massive scale and throughput, and with rich set of capabilities such as private networks, floating IPs, logical routing, load balancing, security groups and micro-segmentation.
  • VMware vSAN/3rd party storage for Cinder/Glance: VIO works with any vSphere-validated storage (we have the largest hardware compatibility list in the industry). VIO also brings Advanced Storage Policies through VMware vSAN.

Battle hardened and tested

OpenStack can be deployed on many combinations of storage, network, and compute hardware and software, and from multiple vendors. Testing all combinations is a challenge and often times customers who choose the DIY route will have to test their combination of hardware and software for production workloads. VMware Integrated OpenStack, on the other hand, is battle-hardened and tested against all VMware virtualization technologies to ensure the best possible user experience from deployment to management (upgrades, patching, etc.) to usage. In addition, VMware provides the broadest hardware compatibility coverage in the industry today (that has been tested in production environments).

Key capabilities for Day-2 Operations

VMware Integrated OpenStack brings operations capabilities to OpenStack users.  For example, built-in command line interface (CLI) tools enable you to troubleshoot and monitor your OpenStack deployment and the status of OpenStack services. Pre-defined workflows automate common OpenStack operations such as adding/removing capacity, configuration changes, and patching.

In addition, out-of-the-box integrations with vRealize Operations, vRealize Log Insight, and vRealize Business for Cloud provide monitoring, troubleshooting, and cost visibility for your OpenStack infrastructure.

Finally, to add to all of this, another benefit is that our customers have only one vendor and one support number to call in case of a problem. No finger pointing, no need to handle different support plans. Easy!

To learn more, visit the VIO web page and product feature walkthrough.

Introducing VMware Integrated OpenStack 4.0

We’re excited to announce the new release of VMware Integrated OpenStack 4.0 today at VMworld US 2017, as part of the VMware SDDC story. You can read more about it here.

VMware Integrated OpenStack (VIO) is an OpenStack distribution supported by VMware, optimized to run on top of VMware’s SDDC infrastructure. In the past few months we have been hard at work, adding additional enterprise grade capabilities into VIO, making it even more robust, scalable and secure, yet keeping it easy to deploy, operate and use.

VMware Integrated OpenStack 4.0 is based on Ocata, and some of the highlights include:

Containers support – users can run VMs alongside containers on VIO. Out-of-the-box container support enables developers to consume Kubernetes APIs, leveraging all the enterprise grade capabilities of VIO such as multi-tenancy, persistent volumes, high availability (HA), and so on.

Integration with vRealize Automation – vRealize Automation customers can now embed OpenStack components in blueprints. They can also manage their OpenStack deployments through the Horizon UI as a tab in vRealize Automation. This integration provides additional governance as well as single-sign-on for users.

Multi vCenter support – customers can manage multiple VMware vCenters with a single VIO deployment, for additional scale and isolation.

Additional capabilities for better performance and scale, such as live resize of VMs (changing RAM, CPU and disk without shutting down the VM), Firewall as a Service (FWaaS), CPU pinning and more.

Our customers use VMware Integrated OpenStack for a variety of use cases, including:

Developer cloud – providing public cloud-like user experience to developers, as well as more choice of consumption (Web UI, CLI or API), self-service and programmable access to VMware infrastructure. With the new container management support, developers will be able to consume Kubernetes APIs.
IaaS platform for enterprise automation – adding automation and self-service provisioning on top of best-of-breed VMware SDDC.
Advanced, programmable network – leveraging network virtualization with VMware NSX for advanced network capabilities.

Our customers tell us (consistently) that VIO is easy to deploy (“it just worked!”) and manage. Since it’s deployed on top of VMware virtualization technologies, they are able to deploy and manage it by themselves, without hiring new people or professional services. Their development and DevOps teams like VIO because it gives them the agility and user experience they want, with self-service and standard OpenStack APIs.

In most cases, in a short amount of time (few weeks!) customers trust VIO enough to run their business-critical applications, such as e-commerce website or online travel system, in production.

VMware Integrated OpenStack will be available as a standalone product later this quarter. For more information go to our website, check out the product walkthrough and try out the hands-on lab.

If you are attending VMworld, please stop by our booth (#1139) to see demos and speak with OpenStack specialists. We’re looking forward to seeing you!

OpenStack Sessions at VMworld 2017 Las Vegas

Don’t Miss Out!

VMworld 2017 Las Vegas is just around the corner and we can’t wait to meet our customers and partners and explore all the great sessions, workshops, and activities planned for next week. With over 500 sessions across all categories, it may be overwhelming to figure out which sessions are most beneficial for you. Here is the list of all the OpenStack-related sessions; make sure you register and mark your calendar in advance so you don’t miss out!

In addition, make sure to stop by the VMware Integrated OpenStack (VIO) booth (#1139) to learn more and see a demo or two.

When/Where

Description

Monday, Aug 28, 11:30 a.m. – 1:00 p.m. | South Pacific Ballroom, Lower Level, HOL 5

[ELW182001U] VMware Integrated OpenStack (VIO) – Getting Started Workshop
Monday, Aug 28, 1:00 p.m. – 2:00 p.m. | Islander C, Lower Level

[MGT2609BU] VMware Integrated OpenStack: What’s New
It is not OpenStack or VMware; it is OpenStack on VMware.
Come and learn what is new in VMware Integrated OpenStack and our plans for the future of OpenStack on the software-defined data center.
Monday, Aug 28, 2:00 p.m. – 3:00 p.m. | Islander F, Lower Level

[LDT1844BU] Open Source at VMware: A Key Ingredient to Our Success and Yours
Open-source components are part of practically every software product or service today. VMware products are no exception. And increasingly, IT departments are presented with many application roll-out requests that include large open-source components as part of the infrastructure on which they rely. From OpenStack to Docker to Kubernetes and beyond, open source is a reality of the enterprise environment. VMware is investing in open source both as a user of many components (and contributor to many of those projects) and as a creator of many successful open-source projects such as Open vSwitch, Harbor, Clarity, and many more. This session will talk about the what, the why, and the how of our engagement in open source: our vision and strategy and why all this is critically important for our customers.
Monday, Aug 28, 3:15 p.m. – 4:00 p.m. | Meet the Experts, 2nd floor foyer, Table #5 Wednesday, Aug 30, 2:15 p.m. – 3:00 p.m. | Meet the Experts, 2nd floor foyer, Table #5 Thursday, Aug 31, 11:45 a.m. – 12:30 p.m. | Meet the Experts, 2nd floor foyer, Table #5

[MTE4733U] Implementing OpenStack with VIO
Meet Xiao Gao, VMware Integrated OpenStack expert. Bring your questions!
Tuesday, Aug 29, 12:15 p.m. – 1:00 p.m. | Meet the Experts, 2nd floor foyer, Table #7

[MTE4803U] OpenStack in the Enterprise with Marcos Hernandez
Speak with Expert Marcos Hernandez about the benefits of running OpenStack in private Cloud environments.
Tuesday, Aug 29, 4:00 p.m. – 5:00 p.m. | Oceanside D, Level 2

[MGT1785PU] OpenStack in the Real World: VMware Integrated OpenStack Customer Session
More and more customers are looking to leverage OpenStack to add automation and provide open API to their application development teams. In this session, VMware Integrated OpenStack customers will share their OpenStack journey and the benefits VMware Integrated OpenStack provides to development teams and IT.
Tuesday, Aug 29, 4:00 p.m. – 4:15 p.m. | VMvillage – VMTN Community Theater

[VMTN6664U] Networking and Security Challenges in OpenStack
Decided it’s time to implement OpenStack to build your cloud? Have you tested in the lab, evaluated the various distributions available, and hired a specialized team for OpenStack? But when the time comes to put it into production, Neutron is not integrating with your physical network? If this story closely resembles what you have been facing, this TechTalk is critical for you to understand the challenges of networking and security with any OpenStack distribution and what solutions are missing for your cloud to fully work. NOTE: Community TechTalk taking place in VMvillage.
Tuesday, Aug 29, 5:30 p.m. – 6:30 p.m. | Mandalay Bay Ballroom B, Level 2

[NET1338BU] VMware Integrated OpenStack and NSX Integration Deep Dive
OpenStack offers a very comprehensive set of Network and Security workflows provided by a core project called Neutron. Neutron can leverage VMware NSX as a backend to bring advanced services to the applications owned by OpenStack. In this session we will cover the use cases for Neutron, and the various topologies available in OpenStack with NSX, with a focus on security. We will walk you through a number of design considerations leveraging Neutron Security Groups and the NSX Stateful Distributed Firewall integration, along with Service Chaining in NSX for Next Generation Security Integration, all available today.
Wednesday, Aug 30, 8:00 a.m. – 9:00 a.m. | Surf A, Level 2

[FUT3076BU] Simplifying Your Open-Source Cloud With VMware
Open source or VMware? Clearly, you can’t have both, right? Wrong. As open-source, cloud-based solutions continue to evolve, IT leaders are challenged with the adoption and implementation of large-scale deployments such as OpenStack and network function virtualization from both a business and technical perspective. Learn how VMware’s solutions can simplify existing open-source innovation, resulting in new levels of operations, standardization (app compatibility), and delivery of enterprise support.
Wednesday, Aug 30, 2:00 p.m. – 3:00 p.m. | Surf A, Level 2

[FUT1744BU] The Benefits of VMware Integrated OpenStack for Your NFV Platform
Communication Service Providers (CSPs) embracing network functions virtualization (NFV) are building platforms with three imperatives in mind: service agility, service uptime and platform openness. These capabilities require the cloud platform they choose to be able to easily model, deploy and modify a service, to run it on a tightly-integrated robust virtual infrastructure and migrate the service horizontally across cloud platforms when/if needed. Come to this session to learn about VIO, a VMware-supported OpenStack (OS) distribution, at the heart of the VMware NFV platform and how it can help CSPs meet those requirements. We will look in detail at the role of VIO as virtual infrastructure manager as well as its native integration with the other components of the VMware software-defined data center architecture (vSphere, NSX and VSAN).
Thursday, Aug 31, 10:45 a.m. – 11:30 a.m. | Meet the Experts, 2nd floor foyer, Table #8

[MTE4832U] How VMware IT Operates VMware integrated OpenStack
with Cloud Architect Chris Mutchler
Learn from VMware IT’s implementation of VMware’s Integrated OpenStack.

OpenStack Boston Summit VMware Sessions Recap

Watch below to experience VMware’s Speaker Sessions at this year’s OpenStack Summit in Boston!


OpenStack & VMware Getting the Best of Both

Speaker: Andrew Pearce

Come and understand the true value to your organization of combining OpenStack and VMware. In this session you will understand the value of having a DefCore / OpenStack Powered solution to enable your developers to provision IaaS in the way that they want, using the tools that they want. In addition, you will be able to enable your operations team to continue to utilize the tools, resources, and methodology that they use to ensure that your organization has a production-grade environment to support your developers. Deploying OpenStack, and getting the advantages of OpenStack, does not need to be a rip-and-replace strategy. See how other customers have had their cake and eaten it too.


OpenStack and VMware: Enterprise-Grade IaaS Built on Proven Foundation

Speakers: Xiao Hu Gao & Hari Kannan 

Running production workloads on OpenStack requires a rock solid IaaS running on a trusted infrastructure platform. Think about upgrading, patching, managing the environment, high availability, disaster recovery, security and the list goes on. VMware delivers a top-notch OpenStack distribution that allows you all of the above and much more. Come to this session to see (with a demo) how you can easily and quickly deploy OpenStack for your dev test as well as production workloads.


Is Neutron Challenging to You? Learn How VMware NSX is the Solution for Regular OpenStack Network & Security Services and Kubernetes

Speakers: Dmitri Desmidt, Yves Fauser

Neutron is challenging in many aspects. The main ones reported by OpenStack admins are: complex implementation of network and security services, high-availability, management/operation/troubleshooting, scale. Additionally, with new Kubernetes and Containers deployments, security between containers and management of container traffic is a new headache. VMware NSX offers a plugin for all Neutron OpenStack installations for ESXi and KVM hypervisors. Learn in this session with multiple live demos how VMware NSX plugin resolves all the Neutron challenges in an easy way.


 Digital Transformation with OpenStack for Modern Service Providers

Speakers: Misbah Mahmoodi, Kenny Lee

The pace of technological change is accelerating at an exponential rate. With the advent of 5G networks and IoT, Communications Service Providers’ success depends not only on their ability to adapt to changes quickly but to do so faster than competitors. Speed is of the essence in developing new services, deploying them to subscribers, delivering a superior quality of experience, and increasing operational efficiency with lowered cost structures. For CSPs to adapt and remain competitive, they are faced with important questions as they explore the digital transformation of their business and infrastructure, and how they can leverage NFV, OpenStack, and open hardware platforms to accelerate change and modernization.


Running Kubernetes on a Thin OpenStack

Speakers: Mayan Weiss & Hari Kannan 

Kubernetes is leading the container mindshare and OpenStack community has built integrations to support it. However, running production workloads on Kubernetes is still a challenge. What if there was a production ready, multi-tenant K8s distro? Dream no more. Come to this session to see how we adapted OpenStack + K8s to provide container networking, persistent storage, RBAC, LBaaS and more on VMware SDDC.


OpenStack and OVN: What’s New with OVS 2.7

Speakers: Russel Bryant, Ben Pfaff, Justin Pettit

OVN is a virtual networking project built by the Open vSwitch community. OpenStack can make use of OVN as its backend networking implementation for Neutron. OVN and its Neutron integration are ready for use in OpenStack deployments.

This talk will cover the latest developments in the OVN project and the latest release, part of OVS 2.7. Enhancements include better performance, improved debugging capabilities, and more flexible L3 gateways. We will take a look ahead at the next set of things we expect to work on for OVN, which includes logging for OVN ACLs (security groups), encrypted tunnels, native DNS integration, and more.

We will also cover some of the performance comparison results of OVN as compared with the original OVS support in Neutron (ML2/OVS). Finally, we will discuss how to deploy OpenStack with OVN or migrate an existing deployment from ML2/OVS to OVN.


DefCore to Interop and Back Again: OpenStack Programs and Certifications Explained

Speakers: Mark Voelker & Egle Sigler

OpenStack Interop (formerly DefCore) guidelines have been in place for two years now, and anyone wanting to use the OpenStack logo must pass these guidelines. How are guidelines created and updated? How would your favorite project be added to them? How can you guarantee that your OpenStack deployment will comply with the new guidelines? In this session we will cover the OpenStack Interop guidelines and components, as well as explain how they are created and updated.


Senlin: An ideal Bridge Between NFV Orchestrator and OpenStack

Speakers: Xinhui Li, Ethan Lynn, Yanyan Hu

Resource management is a top requirement in the NFV field. Usually, the orchestrator takes responsibility for parsing a virtual network function into different virtual deployment units (VDUs) to deploy and operate over the cloud. Senlin, positioned as a clustering resource manager since its inception, can be the ideal bridge between an NFV orchestrator and OpenStack: it uses a consolidated model that maps directly to a VDU to interact with different backend services such as Nova, Neutron, and Cinder for compute, network, and storage resources per the orchestrator’s demand, and it provides rich operational functions like auto-scaling, load balancing, and auto-healing. We use a popular vIMS-type VNF to illustrate how to easily deploy a VNF on OpenStack and manage it in a scalable and flexible way.


High Availability and Scalability Management of VNF

Speakers: Haiwei Xu, Xinhui Li, XueFeng Liu

Network function virtualization (NFV) is growing rapidly and is widely adopted by many telecom enterprises. In OpenStack, Tacker takes responsibility for building a generic VNF Manager (VNFM) and an NFV Orchestrator (NFVO) to deploy and operate Network Services and Virtual Network Functions (VNFs) on the infrastructure platform. For VNFs that work as a load balancer or a firewall, Tacker needs to consider the availability of each VNF to ensure they are not overloaded or out of service. To prevent VNFs from being overloaded or going down, Tacker needs to make VNFs highly available and auto-scaling. So in fact the VNFs of a certain function should not be a single node, but a cluster.

That raises the problem of cluster management. In the OpenStack environment there is a clustering service called Senlin which provides scalability management and HA functions for nodes; those features are exactly what Tacker requires.

In this talk we will give you a general introduction of this feature.


How an Interop Capability Becomes Part of the OpenStack Interop Guidelines

Speakers: Rochelle Grober, Mark Voelker, Luz Cazares

OpenStack Interop Working Group (formerly DefCore) produces the OpenStack Powered (TM) Guidelines (a.k.a. Interoperability Guidelines). But how do we decide what goes into the guideline? How do we define these so-called “Capabilities”? And how does the team “score” them? Attend this session to learn what we mean by “Capability”, the requirements a capability must meet, and the process the group follows to grade those capabilities… And, you know what, let’s score your favorite thing live.


OpenStack Interoperability Challenge and Interoperability Workgroup Updates: The Adventure Continues

Speakers: Brad Topol, Mark Voelker, Tong Li

The OpenStack community has been driving initiatives on two sides of the interoperability coin: workload portability and API/code standards for OpenStack Powered products. The first phase of the OpenStack Interoperability Challenge culminated with a Barcelona Summit Keynote demo comprised of 16 vendors all running the same enterprise workload to illustrate that OpenStack enables workload portability across OpenStack clouds. Building on this momentum for its second phase, the multi-vendor Interop Challenge team has selected new advanced workloads based on Kubernetes and NFV applications to flush out portability issues in these commonly deployed workloads. Meanwhile, the recently formed Interop Working Group continues to roll out new Guidelines, drive new initiatives, and is considering expanding its scope to cover more vertical use cases. In this presentation, we describe the progress, challenges, and lessons learned from both of these efforts.

Making OpenStack Neutron Better for Everyone

This blog post was created by Scott Lowe, VMware Engineering Architect in the Office of the CTO. Scott is an SDN expert and a published author. You can find more information about him at http://blog.scottlowe.org/

Additional comments and reviews: Xiao Gao, Gary Kotton and Marcos Hernandez.


In any open source project, there’s often a lot of work that has to happen “in the background,” so to speak, out of the view of the users that consume that open source project. This work often involves improvements in the performance, modularity, or supportability of the project without the addition of new features or new functionality. Sometimes this work is intended to help “pay technical debt” that has accumulated over the life of the project. As a result, users of the project may remain blissfully unaware of the significant work involved in such efforts. However, the importance of these “invisible” efforts cannot be overstated.

One such effort within the OpenStack community is called neutron-lib (more information is available here). In a nutshell, neutron-lib is about two things:

  1. It aims to build a common networking library that Neutron and all Neutron sub-projects can leverage, with the eventual goal of breaking all dependencies between sub-projects.
  2. It pays down accumulated technical debt in the Neutron project by refactoring and enhancing code as it is moved into this common library.

To a user—using that term in this instance to refer to anyone using the OpenStack Neutron code—this doesn’t result in visible new features or functionality. However, this is high-priority work that benefits the entire OpenStack community, and benefits OpenStack overall by enhancing the supportability and stability of the code base over the long term.

Why do we bring this up? Well, it’s recently come to my attention that people may be questioning VMware’s commitment to the OpenStack projects. Since they don’t see new features and new functionality emerging, users may think that VMware has simply moved away from OpenStack.

Nothing could be further from the truth. VMware is deeply committed to OpenStack, often in ways, like the neutron-lib effort, that are invisible to users of OpenStack. It can be easy at times to overlook a vendor’s contributions to an open source project when those contributions don’t directly result in new features or new functionality. Nevertheless, these contributions are critically important for the long-term success and viability of the project. It’s not glorious work, but it’s important work that benefits the OpenStack community and OpenStack users.

Being a responsible member of an open source community means not only doing the work that garners lots of attention, but also doing the work that needs to be done. Here at VMware, we’re striving to be responsible members of the OpenStack community, tackling efforts, in conjunction and close cooperation with the community, that not only benefit VMware but that benefit the OpenStack community, the ecosystem, and the users.

In a future post, I’ll focus on some of the contributions VMware is making that will result in new functionality or new features. Until then, if you’d like more information, please visit http://www.vmware.com/products/openstack.html or contact us and follow us on Twitter @VMware_OS.

Finally, don’t forget to visit our booth at the OpenStack Summit in Boston, May 8-12 2017.

How to Deal with DHCP Failure Caused by Consistent Network Device Naming (VIO)

While testing the latest CentOS 7 QCOW2 cloud image, we ran into an issue where the guest operating system wasn’t able to obtain a DHCP IP address after a successful boot. After some troubleshooting, we quickly realized the NIC name was assigned based on predictive consistent network device naming (CNDN). You can read more about CNDN here. The network script required to bring up the network interface was missing from /etc/sysconfig/network-scripts; only the default ifcfg-eth0 script was present. The network interface remained in DOWN status since its interface script wasn’t available, so the Linux dhclient couldn’t bind to the interface, hence the DHCP failure.

To fix the symptom, we simply edited and renamed the interface script to reflect the predictive name, then restarted networking. But since this problem will show up again when booting a new VM, we need a permanent fix in the image template.
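For reference, the manual fix looked roughly like this; the predictive name ens192 is hypothetical, so use the name reported by ip link inside the guest:

mv /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-ens192

sed -i 's/^DEVICE=.*/DEVICE=ens192/' /etc/sysconfig/network-scripts/ifcfg-ens192

systemctl restart network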

It turns out predictive naming was intended to be disabled in the CentOS 7 cloud image by the udev rule below:

(Screenshot of the shipped udev rule omitted.)

The system ignored this setting during bootup and predictive naming was enabled as a result.

There are multiple ways to work around this:

Solution 1 – Update Default GRUB to Disable CNDN:

1). To restore the old naming convention, you can edit the /etc/default/grub file and add net.ifnames=0 and biosdevname=0 at the end of the GRUB_CMDLINE_LINUX variable:

Example:   GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.keymap=us crashkernel=auto rd.lvm.lv=centos/root vconsole.font=latarcyrheb-sun16 rhgb quiet net.ifnames=0 biosdevname=0"

2) Review the new configuration by printing output to STDOUT

# grub2-mkconfig

3) Update the grub2 configuration after review:

# grub2-mkconfig -o /boot/grub2/grub.cfg

 

Solution 2: Enable Network Manager

1) Install Network Manager:

# yum install NetworkManager

2) Start Network Manager

# service NetworkManager start

3) Run chkconfig to ensure Network Manager starts after system reboot

# chkconfig NetworkManager on

Solution 3: Create a Custom Udev Rule

We will create a udev rule to override the unintended predictive name.

1) Create a new 80-net-name-slot.rules in /etc/udev/rules.d/

# touch /etc/udev/rules.d/80-net-name-slot.rules

2). Add the line below to the new 80-net-name-slot.rules:

NAME=="", ENV{ID_NET_NAME_SLOT}!="", NAME="eth0"

Final Implementation

All three solutions solved the problem. Approach #1 involves updating the GRUB config, so handle it with care. Solution #2 is a very hands-off approach, allowing NetworkManager to control interface states. Most sysadmins have a love/hate relationship with NetworkManager, however: it simplifies management of WiFi interfaces but can lead to unpredictable behavior in interface states. The most common concern is interfaces being brought up by NetworkManager when they should stay down because the sysadmin is not ready to turn those NICs up yet. The OpenStack community has reported cloud-init timing-related issues as well, although we didn’t have any problems enabling it on the CentOS 7 cloud image. Solution #3 needs to align with overall deployment requirements in a multi-NIC environment.

In reality, CNDN was designed to solve NIC naming issues in a physical server environment. It stops being useful with virtual workloads. Most cloud workloads deploy with a single NIC, and that NIC is always eth0. Consequently, disabling CNDN makes sense, and solution #1 is what we recommend.

Once the CentOS VM image is in the desired state, create a snapshot, then refer to the OpenStack documentation to upload it into Glance. As a shortcut to validate the new image, instead of creating a snapshot and downloading and re-uploading it into Glance, it is perfectly fine to boot a VM directly from the snapshot. Please refer to the VIO documentation for recommended steps.
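For reference, a generic Glance upload with the OpenStack CLI looks roughly like this (the file and image names are hypothetical; VIO typically expects vSphere-friendly formats such as streamOptimized VMDK or ISO, so check the VIO documentation for the exact disk format and properties):

openstack image create centos7-fixed \
--disk-format qcow2 --container-format bare \
--file CentOS-7-x86_64-GenericCloud-fixed.qcow2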

Be sure to test this out on your VMware Integrated OpenStack setup today. If you don’t have VIO yet, try it in our VMware Integrated OpenStack Hands-On Lab; no installation required.

OpenStack Summit:

We will be at the OpenStack Summit in Boston. If you are attending the conference, swing by the VMware booth or attend one of our many sessions:

OpenStack and VMware – Use the Right Foundation for Containers

Digital Transformation with OpenStack for Modern Service Providers

Is Neutron challenging to you – Learn how VMware NSX is the solution for regular OpenStack Network & Security services and Kubernetes

OpenStack and OVN – What’s New with OVS 2.7 

DefCore to Interop and back again: OpenStack Programs and Certifications Explained

Senlin, an ideal bridge between NFV Orchestrator and OpenStack 

High availability and scalability management of VNF

How an Interop Capability becomes part of the OpenStack Interop Guidelines

OpenStack Interoperability Challenge and Interoperability Workgroup Updates: The Adventure Continues

Lightning Talk:

Openstack and VMware getting the best of both. 

Demos:

Station 1: VMware NSX & VMware Integrated OpenStack

Station 2: NFV & VMware Integrated OpenStack

 

VMware Integrated OpenStack 3.1 GA. What’s New!

VMware announced general availability (GA) of VMware Integrated OpenStack 3.1 on Feb 21, 2017. We are truly excited about our latest OpenStack distribution, which gives our customers enhanced stability on top of the Mitaka release and a streamlined user experience with Single Sign-On support through VMware Identity Manager. For OpenStack cloud admins, the 3.1 release is also about enhanced integrations that allow cloud admins to further take advantage of battle-tested vSphere infrastructure and operations tooling, providing enhanced security, OpenStack API performance monitoring, brownfield workload migration, and seamless upgrade between central and distributed OpenStack management control planes.

VIO 3.1 is available for download here.  New features include:

  • Support for the latest versions of VMware products. VMware Integrated OpenStack 3.1 supports and is fully compatible with VMware vSphere 6.5, VMware NSX for vSphere 6.3, and VMware NSX-T 1.1. To learn more about vSphere 6.5, visit here; for NSX for vSphere 6.3 and NSX-T, visit here.
  • NSX Policy Support in Neutron. NSX administrators can define security policies that the OpenStack Cloud Admin shares with cloud users. Users can either create their own rules, bounded by the predefined ones that cannot be overridden, or use only the predefined rules, depending on the policy set by the OpenStack Cloud Admin. The NSX provider policy feature allows infrastructure admins to enable enhanced security insertion and to ensure all workloads are developed and deployed based on standard IT security policies.
  • New NFV Features. Further expanding on the VIO 3.0 capability to leverage existing workloads in your OpenStack cloud, you can now import vSphere VMs with NSX network backing into VMware Integrated OpenStack. The ability to import vSphere VM workloads into OpenStack and run critical Day 2 operations against them via OpenStack APIs enables you to quickly move existing development projects or production workloads to the OpenStack framework. VM import steps can be found here. In addition, full passthrough using VMware DirectPath I/O is supported.
  • Seamless update from compact mode to HA mode. If you are updating from VMware Integrated OpenStack 3.0 that is deployed in compact mode to 3.1, you can seamlessly transition to an HA deployment during the update. Upgrade docs can be found here.
  • Single Sign-On integration with VMware Identity Manager. You can now streamline authentication for your OpenStack deployment by integrating it with VMware Identity Manager.  SSO integration steps can be found here.
  • Profiling enhancements.  Instead of writing data into Ceilometer, OpenStack OSprofiler can now leverage vRealize Log Insight to store profile data. This approach provides enhanced scalability for OpenStack API performance monitoring. Detailed steps on enabling OpenStack Profiling can be found here.

Try VMware Integrated OpenStack Today