VMware announced general availability (GA) of VMware Integrated OpenStack (VIO) 4.1 on January 18th, 2018. We are truly excited about our latest OpenStack distribution, which gives our customers enhanced stability on top of the Ocata release and support for the latest versions of VMware products across vSphere, vSAN, and NSX-V/NSX-T (including NSX-T LBaaSv2). For OpenStack Cloud Admins, the 4.1 release is also about enhanced control: control over API throughput, virtual machine bandwidth (QoS), deployment form factor, and user management across multiple LDAP domains. For Kubernetes Admins, 4.1 is about enhanced tooling: control-plane backup and recovery, integration with Helm and Heapster for simplified application deployment and monitoring, and centralized log forwarding. Finally, VIO deployment automation has never been more straightforward thanks to the newly documented OMS API.
4.1 Feature Details:
- Support for the latest versions of VMware products – VIO 4.1 supports and is fully compatible with VMware vSphere 6.5 U1, vSAN 6.6.1, VMware NSX for vSphere (NSX-V) 6.3.5, and VMware NSX-T 2.1. To learn more, see the release notes for vSphere 6.5 U1, NSX-V 6.3.5, and NSX-T 2.1.
- Public OMS API – The management server APIs used to automate deployment and lifecycle management of VMware Integrated OpenStack are now available for general consumption. Users can perform tasks such as provisioning an OpenStack cluster, starting and stopping the cluster, and gathering support bundles using the public OMS API. Users can also leverage the Swagger UI to check and validate API availability and specs:
API Base URL: https://[oms_ip]:8443/v1
Swagger UI: https://[oms_ip]:8443/swagger-ui.html
Swagger Docs: https://[oms_ip]:8443/v2/api-docs
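As a quick sketch, the Swagger spec above can be pulled with curl to discover the available operations. The IP, credentials, and the /v1/clusters path below are illustrative assumptions; confirm the exact endpoints and authentication scheme in the Swagger UI for your deployment.

```shell
# Placeholder OMS address; -k skips verification of the self-signed certificate.
OMS_IP=192.0.2.10

# Retrieve the API spec (URL documented above) and pretty-print it.
curl -k "https://${OMS_IP}:8443/v2/api-docs" | python -m json.tool | head -40

# Hypothetical example call against the v1 base URL, e.g. listing clusters.
curl -k -u admin:'<password>' "https://${OMS_IP}:8443/v1/clusters"
```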
- HAProxy rate limiting – Cloud Admins have the option to enable API rate limiting for public-facing API access. If the incoming API request rate exceeds the configured limit, clients receive a 429 error with a Retry-After header that indicates how long to wait. Update the custom.yml deployment configuration file to enable the HAProxy rate-limiting feature.
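A well-behaved client should honor that Retry-After header. A minimal sketch, with a placeholder endpoint URL:

```shell
# Placeholder for any rate-limited, public-facing OpenStack API URL.
ENDPOINT="https://openstack.example.com:5000/v3"

# Capture the HTTP status code and response headers of a request.
status=$(curl -s -o /dev/null -w '%{http_code}' -D headers.txt "$ENDPOINT")

if [ "$status" = "429" ]; then
    # Extract the wait duration (in seconds) announced by HAProxy and retry,
    # falling back to 5 seconds if the header is absent.
    wait=$(awk -F': ' 'tolower($1)=="retry-after" {print $2}' headers.txt | tr -d '\r')
    sleep "${wait:-5}"
    curl -s "$ENDPOINT"
fi
```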
- Neutron QoS – Before VIO 4.1, Nova image or flavor extra specs controlled network QoS against the vCenter VDS. With VIO 4.1, Cloud administrators can leverage Neutron QoS to create a QoS policy and map it to one or more ports or logical switches. Any virtual machine associated with the port or logical switch inherits the predefined bandwidth policy.
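The workflow looks roughly like this with the standard OpenStack client (policy, port, and network names are illustrative; limits are in kilobits):

```shell
# Create a QoS policy and attach a bandwidth-limit rule to it.
openstack network qos policy create bw-limit
openstack network qos rule create --type bandwidth-limit \
    --max-kbps 10000 --max-burst-kbits 8000 bw-limit

# Map the policy to a single port, or to a whole network (logical switch);
# VMs attached to that port or network inherit the bandwidth policy.
openstack port set --qos-policy bw-limit web-port
openstack network set --qos-policy bw-limit tenant-net
```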
- Native NSX-T Load Balancer as a Service (LBaaS) – Before VIO 4.1, NSX-T customers had to bring their own NGINX or a third-party load balancer for application load balancing. With VIO 4.1, NSX-T LBaaSv2 can be provisioned through either Horizon or the Neutron LBaaS API. Each load balancer must map to an NSX-T Tier 1 logical router (LR); a missing LR, or an LR without a valid uplink, is not a supported topology.
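A sketch of the LBaaSv2 API workflow using the Neutron CLI (subnet name, addresses, and ports are illustrative; the subnet is assumed to sit behind a Tier 1 LR with a valid uplink):

```shell
# Create the load balancer on a tenant subnet.
neutron lbaas-loadbalancer-create --name web-lb tenant-subnet

# Add an HTTP listener, a round-robin pool, and a backend member.
neutron lbaas-listener-create --name web-listener --loadbalancer web-lb \
    --protocol HTTP --protocol-port 80
neutron lbaas-pool-create --name web-pool --listener web-listener \
    --protocol HTTP --lb-algorithm ROUND_ROBIN
neutron lbaas-member-create --subnet tenant-subnet --address 10.0.0.11 \
    --protocol-port 80 web-pool
```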
- Multiple-domain LDAP backend – VMware Integrated OpenStack 4.1 supports SQL plus one or more domains as identity sources, up to a maximum of 10 domains; each domain can belong to a different authentication backend. Cloud administrators can create, update, and delete domains and grant or revoke domain administrator rights. A domain administrator is a local administrator delegated to manage resources such as users, quotas, and projects for a specific domain. VIO 4.1 supports both AD and OpenDirectory as authentication backends.
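Delegating a domain administrator can be sketched with standard Keystone v3 commands (domain and user names are illustrative; the LDAP backend itself is configured per domain in Keystone's domain-specific configuration):

```shell
# Create a domain and a local admin user scoped to it.
openstack domain create engineering
openstack user create --domain engineering eng-admin --password '<password>'

# Grant the admin role on the domain, delegating management of its
# users, quotas, and projects to eng-admin.
openstack role add --domain engineering --user eng-admin admin
```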
4.1 NFV and Kubernetes Features:
- VIO-in-a-box – Also known as a Tiny deployment. Instead of separate physical clusters for management and compute, a VIO deployment can now be consolidated onto a single physical server. VIO-in-a-box drastically reduces the footprint and is suitable for environments that have neither high-availability requirements nor large workloads. VIO-in-a-box can be configured manually or fully automated with the OMS API, and shipped as a single-RU appliance to any manned or unmanned data center where space, capacity, and availability of onsite support are the biggest concerns.
- VM Import – Further expanding on VM import capabilities, you can now import vSphere VMs with multiple disks and NICs. Any VMDK not classified as the VM root disk is imported as a Cinder volume. Existing networks are imported as provider networks with access restricted to the given tenant. The ability to import vSphere VM workloads into OpenStack and run critical Day 2 operations against them via OpenStack APIs is the foundation we are setting for future sophisticated use cases around availability. Refer to the VIO documentation for VM import instructions.
- CPU policy for latency-sensitive workloads – Latency-sensitive workloads often require dedicated reservations of CPU, memory, and network. In 4.1, we introduced CPU policy configuration using the Nova flavor extra spec 'hw:cpu_policy'. Setting this policy determines how vCPUs are mapped to an instance.
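For example, a flavor with dedicated CPU pinning can be defined as follows (flavor name and sizing are illustrative):

```shell
# Create a flavor and set the CPU policy extra spec.
openstack flavor create --vcpus 4 --ram 8192 --disk 40 lat-sensitive
openstack flavor set --property hw:cpu_policy=dedicated lat-sensitive

# 'dedicated' pins each guest vCPU to a host CPU, which suits
# latency-sensitive workloads; 'shared' (the default) lets vCPUs float.
```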
- Networking passthrough – Traditionally, Nova flavor or image extra specs defined the workflow for hardware passthrough, without direct involvement of Neutron. VIO 4.1 introduces Neutron-based network passthrough device configuration. The Neutron-based approach allows Cloud administrators to control and manage network settings such as the MAC, IP, and QoS of a passthrough network device. Although both options will continue to be available, the recommendation going forward is to leverage the Neutron workflow for network devices and Nova extra specs for all other hardware passthrough devices. Refer to upstream and VMware documentation for details.
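A minimal sketch of the Neutron-based workflow, assuming a network and flavor that already exist (names are illustrative): the port is created with a direct VNIC type so Neutron owns its MAC, IP, and any attached QoS policy, and the instance is then booted against that port.

```shell
# Create a passthrough (direct) port managed by Neutron.
openstack port create --network tenant-net --vnic-type direct sriov-port

# Boot the instance using that pre-created port.
openstack server create --flavor m1.large --image ubuntu-16.04 \
    --nic port-id=sriov-port passthrough-vm
```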
- Enhanced Kubernetes support – VIO 4.1 ships with Kubernetes version 1.8.1. In addition to the latest upstream release, integration with two widely adopted application deployment and monitoring tools, Helm and Heapster, is standard out of the box. VIO 4.1 with NSX-T 2.1 also allows you to consume Kubernetes network security policies.
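With Helm available, deploying an application is a couple of commands. A sketch using a public chart (chart and release names are illustrative; assumes Helm 2, contemporary with Kubernetes 1.8, where Tiller runs in-cluster):

```shell
# Install Tiller into the cluster (Helm 2) and refresh chart repos.
helm init
helm repo update

# Deploy a chart as a named release, then verify its pods.
helm install stable/wordpress --name blog
kubectl get pods
```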
- VIO Kubernetes support bundle – Opening support tickets couldn't be simpler with the VIOK support bundle. Using a single command that specifies the start and end dates, VIO Kubernetes captures logs from all components required to diagnose tenant-impacting issues within the specified time range.
- VIO Kubernetes Log Insight integration – Cloud administrators can specify the FQDN of a Log Insight instance as the logging server. The current release supports a single logging server.
- VIO Kubernetes control plane backup / restore – Kubernetes admins can perform cluster-level backups from the VIOK management VM. Each successful backup produces a compressed tar backup file.