VMware today announced VMware Integrated OpenStack (VIO) 5.0. We are truly excited about our latest OpenStack distribution as VMware is one of the first companies to support and provide enhanced stability on top of the newest OpenStack Queens Release. Available in both Carrier and Data Center Editions, VIO 5.0 enables customers to take advantage of advancements in Queens to support mission-critical workloads, and adds support for the latest versions of VMware products including vSphere, vSAN, and NSX.
For our Telco/NFV customers, VIO 5.0 is about delivering scale and availability for hybrid applications across VM- and container-based workloads using a single VIM (Virtual Infrastructure Manager). VIO 5.0 will also help NFV operators fast-track a path toward edge computing with VIO-in-a-box, secure multi-tenant isolation, and accelerated network performance using the enhanced NSX-T VDS (N-VDS). For VIO Data Center customers, advanced security, a simplified user experience, and advanced networking with DNSaaS have long topped the wish list, and we are excited to bring those features to VIO 5.0.
VIO 5.0 NFV Feature Details:
Advanced Kubernetes Support:
Enhanced Kubernetes support: VIO 5.0 ships with Kubernetes version 1.9.2. In addition to the latest upstream K8S release, integration with the latest NSX-T 2.2 release is also included. VIO Kubernetes customers can leverage the same Enhanced N-VDS via the Multus CNI plugin to achieve significant improvements in container response time, reduced network latency, and breakthrough network performance. We also support using Red Hat Enterprise Linux as the K8S cluster image.
Heterogeneous Clusters using Node Groups: Now you can have different types of worker nodes in the same cluster. Extending the cluster node profiles feature introduced in VIO 4.1, a cluster can now have multiple node groups, each mapping to a single node profile. Instead of building isolated special-purpose Kubernetes clusters, a cloud admin can introduce new node groups to accommodate heterogeneous applications such as machine learning, artificial intelligence, and video encoding. If resource usage exceeds a node group's limit, VIO 5.0 supports cluster scaling at the node-group level. With node groups, cloud admins can grow cluster capacity based on application requirements, allowing the most efficient use of available resources.
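The node-group idea is easiest to see in a small model. The sketch below is purely illustrative (the class names, profile names, and scaling rules are our own, not the VIO API): each group maps to one node profile, and scaling touches only the group that needs capacity.

```python
from dataclasses import dataclass, field

@dataclass
class NodeGroup:
    """One node group in a cluster, mapped to a single node profile."""
    name: str
    profile: str        # e.g. a GPU-backed or memory-optimized profile (hypothetical names)
    nodes: int
    max_nodes: int

@dataclass
class Cluster:
    name: str
    groups: dict = field(default_factory=dict)

    def add_group(self, group: NodeGroup) -> None:
        self.groups[group.name] = group

    def scale_group(self, name: str, delta: int) -> int:
        """Scale one node group without touching the others."""
        g = self.groups[name]
        g.nodes = max(1, min(g.max_nodes, g.nodes + delta))
        return g.nodes

cluster = Cluster("hybrid-apps")
cluster.add_group(NodeGroup("general", "default-profile", nodes=3, max_nodes=10))
cluster.add_group(NodeGroup("ml", "gpu-profile", nodes=2, max_nodes=4))

cluster.scale_group("ml", +2)           # capacity added only where the ML workload needs it
print(cluster.groups["ml"].nodes)       # 4
print(cluster.groups["general"].nodes)  # unchanged: 3
```

The point of the model: one cluster serves heterogeneous applications, and scaling decisions are made per group rather than per cluster.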
Enhanced Cluster Manageability: vkube ssh and vkube heal allow you to SSH directly into any node of a given cluster and to recover failed cluster nodes based on etcd state, or from a cluster backup in the case of complete failure.
N-VDS: Also known as the NSX-T VDS in enhanced data-path mode. Enhanced, because N-VDS runs in DPDK mode and allows containers and VMs to achieve significant improvements in response time, reduced network latencies, and breakthrough network performance. With performance similar to SR-IOV while maintaining the operational simplicity of virtualized NICs, NFV customers can have their cake and eat it too.
NSX-V Search Domain: A new configuration setting in the NSX-V plugin enables the admin to configure a global search domain. Tenants will use this search domain if no other search domain is set on the subnet.
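The fallback behavior is simple precedence logic; a minimal sketch (our own illustration, with made-up domain names):

```python
def effective_search_domain(subnet_domain, global_domain):
    """A subnet-level search domain wins; tenants fall back to the
    globally configured domain only when the subnet sets none."""
    return subnet_domain or global_domain

# Hypothetical domains, for illustration only
print(effective_search_domain(None, "corp.example.com"))                  # corp.example.com
print(effective_search_domain("tenant.example.com", "corp.example.com"))  # tenant.example.com
```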
NSX-V Exclusive DHCP Server per Project: Instead of a shared DHCP edge serving a subnet across multiple projects, an exclusive DHCP edge provides the ability to assign dedicated DHCP servers per network segment. An exclusive DHCP server provides better tenant isolation and allows an admin to determine customer impact for maintenance windows and similar operations.
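The isolation difference comes down to what the DHCP edge is keyed on. A toy model (our own naming scheme, not the actual NSX-V edge identifiers):

```python
def dhcp_edge_for(project_id, network_id, exclusive=False):
    """Shared mode keys the DHCP edge on the network only, so multiple
    projects can land on the same edge; exclusive mode keys it per
    project and network, yielding a dedicated edge per tenant."""
    if exclusive:
        return f"dhcp-edge-{project_id}-{network_id}"
    return f"dhcp-edge-{network_id}"

# Shared: two projects on the same network share one DHCP edge
assert dhcp_edge_for("projA", "net1") == dhcp_edge_for("projB", "net1")
# Exclusive: each project gets its own edge, so maintenance on one
# tenant's DHCP server cannot affect the other
assert dhcp_edge_for("projA", "net1", True) != dhcp_edge_for("projB", "net1", True)
```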
NSX-T Availability Zone (AZ): An availability zone makes network resources highly available by grouping network nodes that run services such as DHCP, L3, NAT, and others. Users can associate applications with an availability zone for high availability. In previous releases, Neutron availability zones were supported with NSX-V; we are extending this support to NSX-T as well.
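Conceptually, an AZ is a label on network nodes that constrains where a service may be scheduled. A toy placement sketch (node and zone names are hypothetical):

```python
# Network nodes grouped into availability zones (illustrative data)
network_nodes = {
    "edge-node-1": "az-east",
    "edge-node-2": "az-east",
    "edge-node-3": "az-west",
}

def schedule_service(service, az):
    """Place a network service (DHCP, L3, NAT, ...) only onto nodes
    belonging to the requested availability zone."""
    candidates = [node for node, zone in network_nodes.items() if zone == az]
    if not candidates:
        raise RuntimeError(f"no network node available in {az}")
    return candidates  # the service can run redundantly across all AZ members

print(schedule_service("dhcp", "az-east"))  # ['edge-node-1', 'edge-node-2']
```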
Security and Metering:
Keystone Federation: Federated identity provides a way to securely use existing credentials to access cloud resources such as servers, volumes, and databases across multiple endpoints in multiple authorized clouds, using a single set of credentials. VIO 5.0 supports Keystone-to-Keystone (K2K) federation by designating a central Keystone instance as an Identity Provider (IdP), interfacing with LDAP or an upstream SAML2 IdP. Remote Keystone endpoints are configured as Service Providers (SPs), propagating authentication requests to the central Keystone. As part of this Keystone federation enhancement, we also support third-party IdPs in addition to the existing support for vIDM.
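Under the hood, Keystone federation translates assertions from the remote IdP into local users and groups through JSON mapping rules. The sketch below shows the general shape of such a mapping as a Python dict; the group and domain names are made-up examples, and a real deployment would tailor the remote attribute matching to its IdP.

```python
import json

# Illustrative Keystone federation mapping: each rule pairs "remote"
# assertion matchers with the "local" identity they map onto.
mapping = {
    "rules": [
        {
            "local": [
                {"user": {"name": "{0}"}},  # {0} = first matched remote attribute
                {"group": {"name": "federated_users",        # hypothetical group
                           "domain": {"name": "Default"}}},
            ],
            "remote": [
                {"type": "openstack_user"},  # attribute asserted by the IdP
            ],
        }
    ]
}

print(json.dumps(mapping, indent=2))
```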
Gnocchi: Gnocchi is the project name of a TDBaaS (Time Series Database as a Service) project that was initially created under the Ceilometer umbrella. Rather than storing raw data points, it aggregates them before storing them. Because Gnocchi computes all the aggregations at ingestion, data retrieval is exceptionally speedy. Gnocchi resolves performance bottlenecks in Ceilometer's legacy architecture by providing an extremely robust foundation for the metric storage required for billing and monitoring. The legacy Ceilometer API service has been deprecated upstream and is no longer available in Queens. Instead, the Ceilometer API and functionality have been broken out into the Aodh, Panko, and Gnocchi services, all of which are fully supported in VIO 5.0.
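The aggregate-on-ingest idea can be illustrated with a toy model (this is our own sketch of the principle, not Gnocchi's actual storage engine): aggregates are maintained as points arrive, so a query is a plain lookup with nothing left to compute.

```python
from statistics import mean

class IngestAggregator:
    """Toy model of aggregate-on-ingest: instead of serving queries from
    raw points, keep per-window aggregates (mean/min/max) up to date."""

    def __init__(self, granularity=300):
        self.granularity = granularity  # window size in seconds
        self.windows = {}               # window start -> raw values buffer
        self.aggregates = {}            # window start -> precomputed aggregates

    def ingest(self, timestamp, value):
        window = timestamp - (timestamp % self.granularity)
        self.windows.setdefault(window, []).append(value)
        vals = self.windows[window]
        # aggregation happens at ingestion time, not at query time
        self.aggregates[window] = {"mean": mean(vals), "min": min(vals), "max": max(vals)}

    def query(self, window):
        return self.aggregates[window]  # O(1): nothing to compute on read

agg = IngestAggregator(granularity=300)
agg.ingest(600, 10.0)   # both samples fall in the window starting at 600
agg.ingest(700, 20.0)
print(agg.query(600))   # {'mean': 15.0, 'min': 10.0, 'max': 20.0}
```

The trade-off is the one the paragraph above describes: a little work per write buys near-constant-time reads for billing and monitoring queries.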
Default Drop Policy: Enable this feature to ensure that traffic to a port that has no security groups and has port security enabled is always discarded.
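The decision table is small enough to state directly. A simplified sketch of the logic (our own model; real security-group evaluation is of course rule-by-rule rather than a blanket allow):

```python
def port_allows_traffic(security_groups, port_security_enabled, default_drop=True):
    """With the default drop policy on, a port that has port security
    enabled but no security groups discards all traffic instead of
    falling through to an implicit allow."""
    if not port_security_enabled:
        return True   # port security disabled: filtering is bypassed entirely
    if security_groups:
        return True   # security-group rules decide (simplified to allow here)
    return not default_drop

assert port_allows_traffic([], port_security_enabled=True) is False          # dropped
assert port_allows_traffic(["web-sg"], port_security_enabled=True) is True   # SG decides
assert port_allows_traffic([], port_security_enabled=False) is True          # unfiltered
```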
End to End Encryption: The cloud admin now has the option to enable API encryption for internal API calls in addition to the existing encryption on public OpenStack endpoints. When enabled, all internal OpenStack API calls will be sent over HTTPS using strong TLS 1.2 encryption. Encryption on internal endpoints helps avoid man-in-the-middle attacks if the management network is compromised.
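For a sense of what "strong TLS 1.2 encryption" means on the client side, here is a minimal Python sketch that builds an SSL context refusing anything below TLS 1.2, similar in spirit to what internal-endpoint encryption enforces between services (this is standard-library illustration, not VIO configuration):

```python
import ssl

# A client context that validates certificates and refuses TLS < 1.2
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
assert ctx.verify_mode == ssl.CERT_REQUIRED  # default context verifies the peer
```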
Performance and Manageability:
VIO-in-a-box: Also known as the "Tiny" deployment. Instead of separate physical clusters for management and compute, the VMware Integrated OpenStack control and data plane can now be consolidated on a single physical server. This drastically reduces the footprint of a deployment and is ideal for edge computing scenarios where power and space are a concern. VIO-in-a-box can be preconfigured manually or fully automated with the OMS API.
Hardware Acceleration: GPUs are synonymous with artificial intelligence and machine learning. vGPU support gives OpenStack operators the same benefits for graphics-intensive workloads as traditional enterprise applications: specifically resource consolidation, increased utilization, and simplified automation. The video RAM on the GPU is carved up into portions. Multiple VM instances can be scheduled to access available vGPUs. Cloud admins determine the amount of vGPU each VM can access based on VM flavors. There are various ways to carve vGPU resources. Refer to the NVIDIA GRID vGPU user guide for additional detail on this topic.
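The carving arithmetic is straightforward: a fixed-size vGPU profile divides the physical GPU's video RAM into a whole number of slices, and the flavor assigned to a VM determines which slice size it gets. A toy calculation (the 24 GB card and the profile names are made-up examples; consult the NVIDIA GRID documentation for real profiles):

```python
def carve_vgpus(gpu_vram_gb, profile_vram_gb):
    """How many vGPU slices of a given profile fit on one physical GPU."""
    return gpu_vram_gb // profile_vram_gb

# Hypothetical 24 GB GPU carved into fixed-size profiles
profiles = {"grid_small": 2, "grid_medium": 4, "grid_large": 8}  # GB per vGPU
for name, vram in profiles.items():
    print(f"{name}: {carve_vgpus(24, vram)} vGPUs per physical GPU")

assert carve_vgpus(24, 4) == 6
```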
OpenStack at Scale: VMware Integrated OpenStack 5.0 features improved scale, having been tested and validated to run 500 hosts and 15,000 VMs in a region. This release also introduces support for multiple regions at once, as well as monitoring and metrics at scale.
Elastic TvDC: A Tenant Virtual Datacenter (TvDC) can extend across multiple clusters in VIO 5.0. Extending the single-cluster TvDC support introduced in VIO 4.0, VIO 5.0 allows a TvDC to span multiple clusters. Cloud admins can create several resource pools across multiple clusters, assigning the same name and project-id but a unique provider-id to each. When tenants launch a new instance, the OpenStack scheduler and placement engine will schedule the VM request to any of the resource pools mapped to the TvDC.
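The placement behavior can be sketched with a toy model: the TvDC is the set of resource pools sharing a name and project-id, and the scheduler may satisfy a request from any pool with capacity (pool names and the capacity field are our own illustration, not the actual placement-engine data model):

```python
# One TvDC backed by resource pools in two clusters: same name and
# project-id (implied), unique provider-id per pool.
pools = [
    {"provider_id": "cluster1-pool", "free_vcpus": 2},
    {"provider_id": "cluster2-pool", "free_vcpus": 8},
]

def place_instance(pools, vcpus):
    """Pick any resource pool mapped to the TvDC that still has capacity."""
    for pool in pools:
        if pool["free_vcpus"] >= vcpus:
            pool["free_vcpus"] -= vcpus
            return pool["provider_id"]
    raise RuntimeError("TvDC out of capacity across all clusters")

print(place_instance(pools, 4))  # cluster2-pool: the first pool lacks capacity
```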
VMware at OpenStack Summit 2018:
VMware is a Premier Sponsor of OpenStack Summit 2018, which runs May 21-24 at the Vancouver Convention Centre in Vancouver, BC, Canada. If you are attending the Summit in person, we invite you to stop by VMware's booth (located at A16) for feature demonstrations of VMware Integrated OpenStack 5 as well as VMware NSX and VMware vCloud NFV. Hands-on training is also available (RSVP required). The complete schedule of VMware breakout sessions, lightning talks, and training presentations can be found here.