
Author Archives: mmahmoodi

Introducing VMware Integrated OpenStack 7.0

VMware recently announced VMware Integrated OpenStack 7.0. VMware Integrated OpenStack delivers out-of-the-box OpenStack functionality and an easy configuration workflow. We are truly excited about our latest OpenStack distribution, as this release enables customers to take advantage of advancements in the upstream Train release and is fully compatible with VMware vSphere 7.0 and NSX-T Data Center 3.0.

For our Telco customers, VMware Integrated OpenStack 7.0 offers a 5G-ready platform that meets the demands of their 5G core and other workloads. VMware Integrated OpenStack 7.0 is built with VMware Tanzu Kubernetes Grid underpinning its control plane, providing resilience in addition to availability. Upgrading directly from VMware Integrated OpenStack 5.1 or 6.0 to VMware Integrated OpenStack 7.0 is seamless, with zero data plane downtime. We are excited to bring these features to you in VMware Integrated OpenStack 7.0!

VMware Integrated OpenStack 7.0 Feature Details:

Feature Enhancements:

OpenStack Release:

  • Alignment with upstream OpenStack Train release

Seamless Integration with VMware SDDC:

  • Interop with the latest vSphere 7.0, vSAN and NSX-T 3.0
  • vRealize Operations Management Pack
  • vRealize Log Insight

Nova:

  • Selective vCPU Pinning: Previously, VMs set to high latency sensitivity required full CPU reservation, and all vCPUs were pinned to physical cores. In VMware Integrated OpenStack 7.0, you can specify which vCPUs need to be pinned to physical cores. This feature supports mixing high-performing applications and non-critical applications while providing better CPU resource utilization, improving consolidation ratios and virtualization ROI for Telco NFV workloads (see the sketch below).
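As an illustration, the sketch below creates a Nova flavor that requests dedicated (pinned) vCPUs through flavor extra specs using openstacksdk. The cloud name is an assumption, and the upstream hw:cpu_policy key is shown for pinning in general; the VIO 7.0 extra-spec syntax that selects which specific vCPUs to pin is not shown here, so consult the VIO documentation for the exact key and value format.

```python
# Hypothetical sketch: a flavor requesting pinned vCPUs via extra specs,
# using openstacksdk (assumes a configured clouds.yaml entry named "vio").
import openstack

conn = openstack.connect(cloud="vio")  # "vio" is an assumed cloud name

flavor = conn.compute.create_flavor(
    name="pinned.medium", ram=8192, vcpus=4, disk=40)

# "hw:cpu_policy" is the standard upstream key for dedicated (pinned) vCPUs.
# The VIO-specific key for selective pinning is not shown; check the VIO docs.
conn.compute.create_flavor_extra_specs(flavor, {"hw:cpu_policy": "dedicated"})
```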

Neutron:

  • Support for the NSX-T 3.0 Policy API for L3: enhances the intent-based API and policy UI to retrieve runtime information on the gateways.
  • Network Trunk services: The trunk service plugin is enabled by default with the Neutron NSX-T Policy Plugin. It enables VNFs to connect to many networks at once through a single vNIC and to dynamically connect to or disconnect from networks. It also provides containers with network isolation inside Nova instances (a sketch follows this list) and allows:
    • Multiple networks to be connected to an instance using a single virtual NIC
    • Multiple networks to be presented to an instance by connecting it to a single port
  • IPv6 Support:
    • Full IPv6 data plane support
    • FWaaS: Leverages the Tier-1 Edge Firewall, is fully compatible with the Neutron FWaaS API spec, and supports the FWaaS v2 API in VMware Integrated OpenStack 7.0
    • LBaaS: The neutron-lbaas and neutron-lbaas-dashboard projects were deprecated in the Queens OpenStack release cycle. To align with the OpenStack community, in VMware Integrated OpenStack 7.0 the LBaaS service has been migrated from Neutron LBaaS to Octavia while preserving the LBaaS v2 API.
  • SRIOV Network Redundancy Support: Provides high availability for VM connectivity through SRIOV networks by ensuring virtual functions are scheduled to separate physical functions
  • EVPN and L3 Multicast: Configured through NSX-T (transparent to VIO)
  • VRF Lite Support: Support for VRF Lite in NSX-T 3.0 provides multi-tenant data plane isolation through Virtual Routing and Forwarding (VRF) on the Tier-0 gateway, with a separate routing table, NAT, and firewall within each VRF, eliminating the need for separate Tier-0 deployments for multiple tenants.
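As referenced in the trunk services item above, the following is a minimal sketch of the Neutron trunk workflow using openstacksdk. The cloud and network names are assumptions; the workflow itself (a parent port carrying VLAN-tagged subports) is standard Neutron trunking.

```python
# One parent port carries VLAN-tagged subports, so a VNF reaches several
# networks through a single vNIC. Network names are assumptions.
import openstack

conn = openstack.connect(cloud="vio")

parent = conn.network.create_port(network_id=conn.network.find_network("mgmt").id)
child = conn.network.create_port(network_id=conn.network.find_network("data-101").id)

trunk = conn.network.create_trunk(
    name="vnf-trunk",
    port_id=parent.id,
    sub_ports=[{
        "port_id": child.id,
        "segmentation_type": "vlan",
        "segmentation_id": 101,
    }],
)
# Boot the Nova instance with the parent port; traffic tagged VLAN 101 inside
# the guest reaches the "data-101" network through the same vNIC.
```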

VMware Integrated OpenStack Management Cluster update:

Management API: VMware Integrated OpenStack 7.0 introduces a public API to automate VMware Integrated OpenStack management. The fully open, Kubernetes-based lifecycle management APIs use CRD extensions and provide a public API for day-1 deployment and day-2 management of VMware Integrated OpenStack services (see the illustrative sketch below).
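For illustration only, the sketch below uses the Kubernetes Python client to list custom resources on the VIO management cluster. The API group, version, and plural names are placeholders, not the actual VIO CRD names; the point is simply that the management objects are ordinary Kubernetes custom resources that standard tooling can read.

```python
# Illustrative only: reading CRD-backed management objects with the official
# Kubernetes Python client. Group/version/plural below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig for the VIO management cluster
api = client.CustomObjectsApi()

objs = api.list_namespaced_custom_object(
    group="example.vmware.com",   # placeholder API group
    version="v1",                 # placeholder version
    namespace="openstack",        # assumed namespace
    plural="examples",            # placeholder CRD plural
)
for item in objs.get("items", []):
    print(item["metadata"]["name"])
```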

Python3:

  • All OpenStack components run on Python 3

VMware Integrated OpenStack Control Plane Update: The dedicated Kubernetes cluster underpinning the VMware Integrated OpenStack control plane has been upgraded to Tanzu Kubernetes Grid (Kubernetes 1.17.2).

Platform Lifecycle Automation:

  • Skip-release upgrades:
    • Support upgrade from 6.0 to 7.0
    • Support upgrade from 5.1 to 7.0 directly
  • Patch management:
    • Patches are delivered as container images rolled out via Kubernetes
    • viocli command set for patch management

VMware Integrated OpenStack LCM UI Enhancement:

  • Admins can see each service’s desired vs. observed state
  • Easier to create Neutron availability zones directly from the UI

VMware Integrated OpenStack CLI updates:  Enhanced CLI includes:

  • Ability to start/stop each individual service, with changes automatically propagated using a single command on the Custom Resource (CR)
  • More secure backups
  • Bash completion and CLI shortcuts added to LCM

VMware Integrated OpenStack 7.0 Configuration Maximums: VMware configuration maximums are published at https://configmax.vmware.com/

 

VMware vCloud Director 9.7 Appliance Installation

Starting with version 9.7, the vCloud Director appliance includes an embedded PostgreSQL database with a high availability (HA) function. By contrast, vCloud Director on Linux uses an external database that must be installed and configured before you install vCloud Director.

You can create a vCloud Director server group by deploying one or more instances of the vCloud Director appliance, with the first member as a primary cell and subsequent members as standby or vCD application cells. You deploy the vCloud Director appliance by using the vSphere Client (HTML5), the vSphere Web Client (Flex), or VMware OVF Tool.

This blog post describes deploying a vCloud Director server group using VMware OVF Tool (a hedged command sketch follows below). You can also use Deploy OVF Template from vCenter to deploy the appliances using a single OVA file that offers five deployment configurations to choose from: primary node (small and large), standby node (small and large), and vCD cell application node. The large vCloud Director primary appliance size is suitable for production systems, while the small size is suitable for lab or test systems.
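As mentioned above, here is a hedged sketch of driving VMware OVF Tool from Python. The ovftool flags themselves (--deploymentOption, --net:, --prop:, and so on) are standard ovftool options, but the deployment-option identifier, network names, and OVF property keys below are placeholders; take the real values from the OVA's ovftool probe output and the vCloud Director installation guide.

```python
# Hedged sketch: deploying the primary appliance with ovftool via Python.
# Option IDs, network names, and property keys are placeholders.
import subprocess

cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--powerOn",
    "--name=vcd-cell-01",
    "--datastore=vsanDatastore",
    "--deploymentOption=primary-small",            # placeholder option ID
    "--net:eth0 Network=VM-Network-HTTP",          # placeholder network mapping
    "--net:eth1 Network=VM-Network-DB",            # placeholder network mapping
    "--prop:vami.ip0.VMware_vCloud_Director=10.0.0.11",  # placeholder property
    "VMware_vCloud_Director-9.7.x.ova",
    "vi://vcenter.example.com/Datacenter/host/Cluster",  # ovftool prompts for credentials
]
subprocess.run(cmd, check=True)
```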

For more details, see the vCloud Director Installation, Configuration, and Upgrade Guide.

Note:

  • Mixed vCloud Director installations on Linux and vCloud Director appliance deployments in one server group are unsupported.
  • The vCloud Director appliance does not support external databases.

To create a deployment with a database HA cluster, you deploy one instance of the vCloud Director appliance as a primary cell and two instances as standby cells (in this example we deploy only one standby cell). For a production environment, however, we recommend deploying two standby cells. vCD application cells connect to the database in the primary cell.

 

The recommended datastore for the appliance deployment is a vSAN datastore with a predefined storage policy.

Regarding network configuration, starting with version 9.7 the vCloud Director appliance is deployed with two networks, eth0 and eth1, so that you can isolate HTTP traffic from database traffic. Different services listen on one or both of the corresponding network interfaces. The first interface (eth0) is primarily used for services such as HTTP (ports 80, 443), console proxy (port 8443), and JMX (ports 61611, 61616). The second interface (eth1) is used for services such as database communication (port 5432). Both interfaces are used for services such as SSH and the management UI. Ensure you define these interface subnets based on your testbed requirements. Here we use IP addresses in the same subnet for both interfaces.

Note:

After you deploy the vCloud Director appliance, you cannot change the eth0 and eth1 network IP addresses or the hostname of the appliance. If you want the vCloud Director appliance to have different addresses or hostname, you must deploy a new appliance.

To see whether the deployment succeeded, you can use the eth0 IP address of the primary cell to log in to the provider admin portal, or you can check the status of the cell by logging in to the appliance management user interface at https://vcd_ip_address:5480.

To deploy standby cells, use the same steps to deploy an appliance with the Standby (small) deployment configuration. Once deployed, the embedded databases are configured in replication mode with the primary database.

Note:

After the initial standby appliance deployment, the replication manager begins synchronizing its database with the primary appliance database. During this time, the vCloud Director database and therefore the vCloud Director UI are unavailable.

Once deployment is successful, you can verify the database high availability cluster status by logging in to either cell’s provider admin portal or appliance management user interface.

What to do next:

If a standby cell is not in a running state, deploy a new standby cell.

If the primary cell is not in a running state, follow the procedure Recover from a Primary Database Failure in a High Availability Cluster.

Detailed step by step demo videos can be found on our Telco YouTube Channel

 

VMware vCloud Director Integration with vCenter and NSX

Once the vCloud Director appliance is deployed and configured, register vCenter with vCloud Director from the vCloud Director provider admin portal.

Step 1: Log in to the vCloud Director service provider admin portal at https://VCD_FQDN_or_IP_address/provider with administrator credentials >> vSphere Resources >> vCenters >> Add vCenter Server.

Step 2: Depending on your environment, you can skip or add the NSX-V Manager instance that is associated with the vCenter Server, and complete the registration.

Step 3: Once vCenter is registered, you can register NSX-T separately, as it is independent of vCenter. You can register NSX-T from the admin portal >> vSphere Resources >> NSX-T Managers >> Register NSX-T Manager.

Once registration is complete, you can perform various operations to utilize the NSX-T features. For example, network pool creation is the same as with VXLAN. You can also import all the networks defined in NSX-T.

 

Now you can create a Provider VDC (PVDC), which is a collection of compute, memory, and storage resources from a vCenter Server instance that relies on NSX Data Center for vSphere or NSX-T Data Center for network resources. A Provider VDC provides resources to organization VDCs. A Provider VDC can be created using the vCloud Director API, a powerful and easy-to-use way of getting information about organizations, VDCs, networking, and vApps. In this blog post, the Postman REST API client is used for making the vCloud Director API calls.

 

Step 1: Authentication. Get the “x-vcloud-authorization” token value, which is required for subsequent REST API calls (a Python sketch of this step is shown below).
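A minimal sketch of this authentication step in Python (instead of Postman) follows. The vCloud API version in the Accept header depends on your vCloud Director release (check GET /api/versions), and the host name and credentials are assumptions.

```python
# Sketch: obtain the x-vcloud-authorization token and reuse it on later calls.
import requests

VCD = "https://vcd.example.com"   # assumed vCD address
HEADERS = {"Accept": "application/*+xml;version=31.0"}  # version depends on your release

resp = requests.post(
    f"{VCD}/api/sessions",
    auth=("administrator@system", "password"),  # assumed system admin credentials
    headers=HEADERS,
    verify=False,  # lab only; use proper certificates in production
)
resp.raise_for_status()
token = resp.headers["x-vcloud-authorization"]

# The token is then sent on every subsequent call, for example listing the
# registered vCenter Server instances (Step 3):
vims = requests.get(
    f"{VCD}/api/admin/extension/vimServerReferences",
    headers={**HEADERS, "x-vcloud-authorization": token},
    verify=False,
)
print(vims.status_code)
```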

Step 2: Retrieve the NSX-T Manager registered to this cloud.

Step 3: Retrieve the list of vCenter servers registered to this cloud.

Step 4: Retrieve the resources on a vCenter Server

Step 5: Retrieve a list of resource pools from a vCenter Server

Step 6: Create a Provider VDC backed by NSX-T Data Center.

Step 7: Finally, verify the Provider VDC from the vCloud Director Service Provider Admin portal.

Next, you can create organization VDCs and external networks based on your environment.

Detailed step by step demos can be found on our Telco YouTube Channel

Multi-Tenancy with vCloud Director and NSX-T

This blog post walks through the steps to achieve secure multi-tenancy with vCloud Director and NSX-T. The reference topology below is used to show the network resource isolation. As shown below, we will create two tenants: Tenant A with two VMs and Tenant B with one VM.

Network isolation is achieved with the advanced networking capabilities of NSX-T Data Center, which provides fully isolated and secure traffic paths across workloads and the tenant switching and routing fabric. As described in Multi-Tenancy Design Objectives, NSX-T Data Center introduces a two-tiered routing architecture enabling the management of networks at the provider (Tier-0) and tenant (Tier-1) tiers. As shown in the reference topology above, the provider routing tier is attached to the physical network for North-South traffic, while the tenant routing context connects to the provider Tier-0 and manages East-West communication. In vCloud Director, each organization VDC has a single Tier-1 distributed router that provides the intra-tenant routing capabilities.

 

Step 1: From the vCloud Director admin portal, create two organizations, one for each tenant: Tenant A and Tenant B.

Step 2: Create two organization VDCs, one for each tenant, using the wizard as follows:

Step 3: On NSX-T, create two logical switches using overlay networks and two uplink logical switches using VLANs, one set for each tenant.

Step 4: On NSX-T, create two Tier-0 routers, one for each tenant: Tenant A (high-availability mode Active-Active) and Tenant B (high-availability mode Active-Standby).

 

Step 5: On NSX-T, create two Tier-1 routers, one for each tenant: Tenant A and Tenant B (an API sketch of this step follows below).
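As referenced in Step 5, the same Tier-1 routers can be created against the NSX-T Manager API rather than the UI. This is a sketch only: the manager address and credentials are assumptions, and connecting each Tier-1 router to its Tier-0 router and edge cluster is done in subsequent calls.

```python
# Sketch: create one Tier-1 logical router per tenant via the NSX-T MP API.
import requests

NSX = "https://nsxmgr.example.com"   # assumed NSX-T Manager address
AUTH = ("admin", "password")         # assumed credentials

for tenant in ("TenantA", "TenantB"):
    body = {
        "resource_type": "LogicalRouter",
        "display_name": f"T1-{tenant}",
        "router_type": "TIER1",
    }
    r = requests.post(f"{NSX}/api/v1/logical-routers",
                      json=body, auth=AUTH, verify=False)
    r.raise_for_status()
    print(r.json()["id"])
```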

Step 6: On NSX-T, create uplink router ports on each of the Tier-0 routers, for both Tenant A and Tenant B, using the uplink logical switches created earlier.

Step 7: Enable route redistribution and create new redistribution criteria to allow the T0 and T1 sources on each of the Tier-0 routers, for both Tenant A and Tenant B.

Step 8: Create downlink ports on each of the Tier-1 routers, using the logical switches created earlier; these will be used as the gateways for the Tenant A and Tenant B virtual machines.

Step 9: From the vCloud Director tenant portal of each tenant, import the logical networks created in NSX-T for that tenant and add static IP pools in the corresponding subnet.

Step 10: Create a new vApp for Tenant A and add two virtual machines, as per the reference topology.

Step 11: Add the networks imported from NSX-T into the vApp.

Step 12: For each VM in the vApp, edit the network settings to select the newly added network and the static IP pool created earlier.

Step 13: Power on the vApp and repeat steps 9-12 for Tenant B.

Step 14: Now verify the connectivity between virtual machines in Tenant A. Results show a successful ping between VM-1 and VM-2 in Tenant A.

Step 15: Now verify the connectivity between virtual machines in Tenant A and Tenant B. Results show that pings between the VMs in Tenant A and the VM in Tenant B fail, confirming secure multi-tenancy between the tenants.

Detailed step by step demos can be found on the Telco YouTube channel:

 

Load Balancing vCloud Director with NSX

This blog post shows how to load balance vCloud Director cells with NSX-T. To use logical load balancers, you start by configuring a load balancer instance, which is deployed on the NSX-T Edge cluster. You can configure the load balancer in different sizes, which determine the number of virtual servers, server pools, and pool members the load balancer can support.

For more details refer to  Scaling Load Balancer Resources.

Create a Logical Switch

From the NSX-T Manager UI, create a VLAN logical switch and provide the VLAN ID (in this example, VLAN 0). Advanced Networking & Security > Networking > Switches > Add New Logical Switch

Create Tier-1 Router

In the NSX-T UI, deploy a new standalone Tier-1 router.

Add a new logical router port on the newly created Tier-1 router as a Centralized Service Port, connecting it to the logical switch created earlier.

Under Subnets, add an IP address and subnet; this will be used as the load balancer virtual IP address in later steps.

Add Load Balancer

Now we can create the load balancer instance and associate the virtual servers with it. Create the LB instance on the Tier-1 gateway that routes to your vCD cell network. Make sure the Tier-1 gateway runs on an Edge node of the proper size (see the documentation link above).

Advanced Networking & Security > Networking > Load Balancers > Add

In this example we use the following:

  • Name: VCD_LB
  • Size: small

First, we need to attach the Tier-1 router created in the previous step.

Load Balancers > VCD_LB Overview > Attachment > Edit

Add Active Monitor

Next, we configure an active health monitor, which performs health checks on the load balancer pool members according to the monitor parameters.

Create a new monitor in Advanced Networking & Security > Networking > Load Balancers > Monitors > Add New Active Health Monitor

  • Health Check Protocol: LbTcpMonitor
  • Monitoring Port: 443
  • Default Interval, Fall Count, Rise Count, and Timeout Period

Add Server Pools

Create a server pool with the vCloud Director cells as the pool members. NSX-T server pools handle the traffic directed to them by the virtual server.

Create a new server pool in Advanced Networking & Security > Networking > Load Balancers > Server Pools > Add New Server Pool

  • Load Balancing Algorithm: Round Robin
  • TCP Monitoring: Disabled (default)
  • SNAT Translation: Auto Map
  • Pool members: Add the vCloud Director cell IPs
  • Health Monitor: The monitor created in the step above

Add Virtual Servers

Create a new virtual server in Advanced Networking & Security > Networking > Load Balancers > Virtual Servers > Add New Virtual Server (an API sketch of the monitor, pool, and virtual server follows below)

  • Application Type: Layer 4 TCP
  • Application Profile: nsx-tcp profile
  • Virtual Server Identifier: IP address of the Tier-1 logical port defined above
  • Port: 443, 80
  • Protocol: TCP
  • Server Pool: The pool created above
  • Load Balancing Profile: nsx-default source-ip persistence profile
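The monitor, server pool, and virtual server configured above through the UI can also be sketched against the NSX-T Manager load balancer API. The manager address, credentials, cell IPs, and virtual IP below are assumptions; the resource types mirror the UI names (LbTcpMonitor, LbFastTcpProfile, and so on).

```python
# Sketch: create the LB monitor, pool, and virtual server via the NSX-T MP API.
import requests

NSX = "https://nsxmgr.example.com"   # assumed NSX-T Manager address
AUTH = ("admin", "password")         # assumed credentials

def post(path, body):
    """POST a load balancer object and return the created resource."""
    r = requests.post(f"{NSX}{path}", json=body, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

monitor = post("/api/v1/loadbalancer/monitors", {
    "resource_type": "LbTcpMonitor",
    "display_name": "vcd-tcp-443",
    "monitor_port": 443,
})

profile = post("/api/v1/loadbalancer/application-profiles", {
    "resource_type": "LbFastTcpProfile",
    "display_name": "vcd-tcp-profile",
})

pool = post("/api/v1/loadbalancer/pools", {
    "display_name": "vcd-cells",
    "algorithm": "ROUND_ROBIN",
    "snat_translation": {"type": "LbSnatAutoMap"},
    "active_monitor_ids": [monitor["id"]],
    "members": [
        {"display_name": "vcd-cell-01", "ip_address": "10.0.0.11"},  # assumed cell IPs
        {"display_name": "vcd-cell-02", "ip_address": "10.0.0.12"},
    ],
})

vs = post("/api/v1/loadbalancer/virtual-servers", {
    "display_name": "vcd-vip",
    "ip_protocol": "TCP",
    "ip_address": "10.0.0.100",   # the Tier-1 CSP port IP defined earlier (assumed)
    "ports": ["443", "80"],
    "application_profile_id": profile["id"],
    "pool_id": pool["id"],
})
# The virtual server is then referenced from the LB service attached to the
# Tier-1 router, which the UI steps above handle for you.
print(vs["id"])
```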

Attach the load balancer created above to this virtual server: Advanced Networking & Security > Networking > Load Balancers > Virtual Servers > LB-VirtualServer > Load Balancers > Attach

Verify that the Operational Status is Up.

In this network topology, the load balancer virtual IP and the vCloud Director cell IPs are in the same subnet and reachable from the outside world. If you use an internal IP, you need to set up NAT so that the load balancer virtual servers are reachable both from outside (via the Tier-0 gateway) and from internal networks.

To ensure that clients reach the vCloud Director cells through the public load balancer virtual IP, the URL needs to be configured in the vCloud Director public addresses.

Now you can enter the load-balanced URL to access the vCloud Director provider admin portal and verify the configuration.

You can view this demo on our VMware Telco YouTube channel:

 


VMware Integrated OpenStack 6.0: What’s New

VMware recently announced VMware Integrated OpenStack (VIO) 6.0. We are truly excited about our latest OpenStack distribution as this release enables customers to take advantage of advancements in the upstream Stein release including support for Cinder generic volume groups, improved admin panels, and security improvements throughout the stack.

For our Telco customers, VIO 6.0 delivers scale and availability for hybrid applications across VM and container-based workloads using a single VIM (Virtual Infrastructure Manager). VIO 6.0 is built with a Kubernetes-managed high availability control plane running on top of the VMware SDDC, providing resilience in addition to availability. Upgrading from VIO 5.1 to VIO 6.0 is seamless, with zero data plane downtime. We are excited to bring these features to you in VIO 6.0!

 

VIO 6.0 Feature Details:

Advanced Kubernetes Support:

Kubernetes-powered OpenStack control plane: VIO 6.0 is now intent-based, running on top of a dedicated Kubernetes cluster. The intent-based design allows the VIO control plane to self-heal from failures. Since OpenStack services are now deployed as pods in Kubernetes, the new intent-based control plane also allows VIO components to be horizontally scaled up or down seamlessly and independently of one another. The new control plane architecture achieves a lower out-of-the-box footprint while allowing cloud admins to easily expand capacity.

Tenant-facing Kubernetes: In addition to providing a Kubernetes-managed control plane, VIO provides tenant-facing clusters powered by Essential PKS. A reference implementation using Heat is available for download from GitHub. The Heat stacks are intended to accelerate Essential PKS on VIO, are open source software, and provide support for either native integration with NSX-T using NCP (NSX Container Plugin) or Calico (v3.7) networking (a sketch of launching such a stack follows below).
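As a hedged sketch, the reference Heat stack could be launched from Python with openstacksdk as shown below. The template file name and parameter names are placeholders; take the real ones from the Heat templates published on GitHub.

```python
# Sketch: launching a Heat stack with openstacksdk. Template file name and
# parameter names below are placeholders, not the actual reference template.
import openstack

conn = openstack.connect(cloud="vio")  # assumed cloud name

stack = conn.create_stack(
    name="k8s-cluster-1",
    template_file="k8s-cluster.yaml",   # placeholder template from the repo
    wait=True,
    # stack parameters are passed as keyword arguments; names are placeholders
    keypair="tenant-key",
    external_network="ext-net",
)
print(stack.id)
```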

Feature Enhancements:

Cinder:

  • New multi-attach feature allows a volume to be attached to multiple instances at once (see the sketch after this list)
  • vSphere First Class Disk (FCD) support:
    • FCD does away with the need for shadow VMs to house unattached Cinder volumes
    • Faster than the VMDK driver for most operations
    • Complements the existing VMDK driver: FCD can be enabled as an optional secondary backend
    • Users can create traditional VMDK or FCD volumes using volume types
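A minimal sketch of the multi-attach workflow with openstacksdk follows. It assumes a volume type named "multiattach" whose extra specs enable multi-attach (multiattach="<is> True"); the cloud and server names are also assumptions.

```python
# Sketch: create a volume of a multi-attach-capable type and attach it to two
# instances. Volume type and server names are assumptions.
import openstack

conn = openstack.connect(cloud="vio")

vol = conn.block_storage.create_volume(
    size=10, name="shared-data", volume_type="multiattach")
conn.block_storage.wait_for_status(vol, status="available")

# Attach the same volume to two different instances.
for server_name in ("app-vm-1", "app-vm-2"):
    server = conn.compute.find_server(server_name)
    conn.compute.create_volume_attachment(server, volume_id=vol.id)
```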

IPv4 / IPv6 and Dual Stack Support (see the sketch after this list):

  • Dual-stack IPv4/IPv6 for Nova instances, Neutron security groups, and routers
  • IPv6 support with NSX-T 2.5 and the NSX-P Neutron plugin
  • IPv6 addressing modes: static, SLAAC
  • Static IPv6 routing on Neutron routers
  • IPv6 support for FWaaS
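As a small sketch of dual-stack networking, the following adds an IPv6 SLAAC subnet alongside an existing IPv4 subnet on the same Neutron network using openstacksdk. The network name and prefix are assumptions.

```python
# Sketch: add an IPv6 (SLAAC) subnet to an existing network for dual-stack.
import openstack

conn = openstack.connect(cloud="vio")
net = conn.network.find_network("tenant-net")  # assumed network name

v6 = conn.network.create_subnet(
    network_id=net.id,
    name="tenant-net-v6",
    ip_version=6,
    cidr="2001:db8:100::/64",       # assumed prefix
    ipv6_address_mode="slaac",
    ipv6_ra_mode="slaac",
)
# Instances booted on "tenant-net" now receive both IPv4 and IPv6 addresses.
```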

Keystone:

  • Federation support using JSON Web Tokens (JWT)

OpenStack at Scale:

VIO 6.0 features seamless scaling of OpenStack services to meet changes in demand and load. VIO 6.0 supports horizontally scaling the controllers as well as the pods that run on those controllers. An out-of-the-box compact deployment uses only one controller and an out-of-the-box HA deployment uses three controllers, but users can scale up to a maximum of 10 controllers with a few clicks or CLI commands. This provides increased flexibility for cloud admins to right-size their deployments according to their needs. The ability to scale controller nodes provides for simple expansion of capacity for higher-load environments. VIO 6.0 supports scale-out of individual OpenStack services by increasing pod replica counts. OpenStack services can be scaled out with just a few clicks from the UI or a command from the CLI without affecting other services or causing data plane downtime (an illustrative sketch follows below).
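For illustration only, scaling one service's pod replica count on the management cluster can look like the following with the Kubernetes Python client; the namespace and deployment name are placeholders, not the actual VIO object names.

```python
# Illustrative only: scale an OpenStack service's replica count on the
# management cluster. Namespace and deployment name are placeholders.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig for the VIO management cluster
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="nova-api",          # placeholder deployment name
    namespace="openstack",    # assumed namespace
    body={"spec": {"replicas": 3}},
)
```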

Essential PKS on OpenStack:

  • Provides a hybrid VM and container platform that combines best of breed components.
  • OpenStack and Kubernetes APIs for workloads, cluster and resource lifecycle management
  • Ability to deploy Essential PKS with OpenStack Heat for a native OpenStack experience and repeatable cluster creation
  • OpenStack multi-tenancy for more secure separation of container workloads
  • VMware Certified Kubernetes distribution, support and reference architecture with Essential PKS

Enhanced Management Tools:

  • viocli rewritten in Golang, new enhancements added
  • Bash completion and CLI shortcuts added to Life Cycle Manager
  • HTML5 WebUI:
    • No dependency on vCenter Web Plugin
    • Native Clarity theme provides a congruent user experience for VMware admins

Photon 3:

  • VIO control plane VMs now use VMware Photon OS 3, a lightweight, secure, container-optimized Linux distribution backed by VMware
  • Containers are also built on Photon OS 3 base images

Industry-standard APIs:

  • Proprietary OMS APIs replaced with standard Kubernetes APIs and extensions
  • Many parts of VIO can optionally be managed with kubectl commands in addition to viocli
  • Cluster API is responsible for additional VM bringup/management

 

Automated VIO Backups: Cloud administrators can schedule backups of the VIO control plane to a vSphere Content Library

Lifecycle Management: VIO provides lifecycle management for OpenStack components including deployment, patching, upgrade (with NO data plane downtime) and rich day-2 operations via Kubernetes deployment rollouts.

Versioning: VIO 6.0 comes with built-in version control for control plane configuration changes.

Clarity Theme: VIO 6.0 Horizon now ships with a Clarity theme. The VIO 6.0 life cycle manager web interface also uses Clarity to provide a familiar look and feel for vSphere administrators.

OpenStack Helm:  VIO 6.0 uses OpenStack-Helm for deploying OpenStack components. OpenStack-Helm is an OpenStack community project that provides deployment tooling for OpenStack based on the Helm package manager, which is a CNCF project.

OpenStack-Helm provides benefits such as:

  • Better management of loosely coupled OpenStack services
  • Better handling of service dependencies
  • Declarative and easy to operate
  • Enhanced rolling update workflow and rollback

Monitoring:

Assurance & Intelligent Operations:

  • Service impact and root-cause analysis with Smart Assurance
  • Operational monitoring and intelligence with vROps OpenStack dashboards and container monitoring
  • vRealize Log Insight integration to get insight into day-to-day OpenStack operations
  • Visibility across physical and virtual OpenStack networks
  • An automated approach to operational intelligence to reduce service impact and OpEx

VIO 6.0 Demos:
Below is a list of videos that provide a step-by-step walkthrough of deployment, upgrade, 360-degree visibility, and deployment of VMware Essential PKS on top of VIO 6.0.

1. VMware Integrated OpenStack Deployment: This demo video shows the step-by-step deployment of the virtual appliance on your vCenter Server instance and the deployment of OpenStack by using the Integrated OpenStack Manager.

2. VMware Integrated OpenStack Upgrade 5.1 to 6.0: Upgrading a VIO 5.1 deployment to VIO 6.0 allows you to take advantage of new features and functions while ensuring zero data plane downtime.

3. VMware Essential PKS on top of VMware Integrated OpenStack 6.0: VIO provides pre-built open-source Heat stacks to help deploy Essential PKS on top of VIO for individual tenants of the cloud. Using the Heat orchestration engine simplifies and speeds up both the deployment of Kubernetes and the management of its lifecycle (e.g., scaling out and tearing down clusters), and does so using an orchestrator that OpenStack users are already familiar with. This demo video shows the step-by-step process to deploy Essential PKS on top of VIO.

 

4. 360 Degree Visibility: This demo video shows the integration of VIO 6.0 with vRealize Operations Manager and vRealize Log Insight. vRealize Operations provides a comprehensive dashboard for monitoring the health, risk, and efficiency of your entire SDDC infrastructure. The vRealize Operations OpenStack Management Pack offers the ability to monitor and troubleshoot VMware Integrated OpenStack or other OpenStack distributions. vRealize Log Insight extends analytics capabilities to unstructured data and log management, giving you operational intelligence and deep enterprise-wide visibility across all tiers of your IT infrastructure and applications.

Accelerate the move to 5G and Edge on VMware Integrated OpenStack

 

Last week at VMworld, we announced in a press release the launch of VMware Integrated OpenStack v6.0. In this PR, we highlighted the expansion of VMware’s Telco and Edge Cloud portfolio to drive real-time intelligence for telco networks, as well as improved automation and security for Telco, Edge and IoT applications.  A key element of our telco portfolio includes the availability of VMware Integrated OpenStack and how VMware continues to invest in OpenStack-managed virtualized telco clouds enabling Communications Service Providers to deploy networks and services across multiple clouds (Private, Telco, Edge and Public) and have consistent operations and management across clouds.

Today, we are excited to announce that VIO 6.0 is now officially available for our Communications Service Provider and Enterprise customers to download and install.  We have added several new capabilities in the latest release, giving CSPs the fastest path to deploy services on OpenStack.

VIO based on Stein

VMware continues its leadership as one of the first commercial OpenStack distributions to support Stein; VIO 6.0 is fully tested and validated with OpenStack 2018.11. With the latest Stein release, VMware continues to deliver core functionality with strengthened support for container and networking capabilities to support key CSP use cases, including NFV, Edge, and the network evolution to 5G.

Cloud Native brings greater efficiency and higher resiliency for 5G networks

5G networks are being built with the premise that they will be cloud native. A cloud native architecture accelerates CSPs’ time to deploy and scale services, and provides greater resiliency and flexibility in the network, with the ability to rapidly instantiate new services and applications based on real-time customer demand.

The latest release of VMware Integrated OpenStack 6.0 includes support for VMware Essential PKS, which provides CSPs access to the latest release of upstream Kubernetes, supported by VMware, giving CSPs a flexible secure cloud platform that will allow them to build, run, and manage next generation container based applications and services on any cloud.

As CSPs evolve their network architectures from 4G to 5G, maintaining a hybrid network with both VM and Container-based workloads will be required.  With VIO 6.0, CSPs will be able to deploy and manage both environments using a common platform.

As part of VMware Essential PKS, CSPs have access to VMware’s Kubernetes architect team, who can guide CSPs through every step of their cloud native journey and ensure they build a platform that supports network operations at massive scale.

Virtual Cloud Networking Scale and Performance with VIO support of NSX-T Data Center

As CSPs make the transformation from 4G to 5G networks to support the massive number of mobile and IoT devices, they require an NFVI platform that is flexible and can seamlessly scale to meet the demands of 5G and multi-cloud environments.

VIO 6.0 is natively integrated with the latest release of VMware NSX-T 2.4, supporting greater scale, resiliency, and performance, with near line-rate speed using a DPDK-based accelerated data plane.

With the depletion of IPv4 addresses and the massive number of IoT and mobile devices, the adoption of IPv6 will continue to grow.  VIO 6.0 with NSX-T 2.4 introduces support for IPv6 to meet the critical requirement of cloud-scale networks for CSPs.  With dual-stack support, CSPs can continue to manage dual IPv4 and IPv6 stacks across their control and data plane.

Service Assurance across OpenStack environments with VMware Smart Assurance 10.0

CSPs deploying OpenStack for their 4G and 5G networks require robust service assurance capabilities with monitoring and management tools that will allow CSPs to deliver highly reliable networks and ensure high QoS and stringent SLAs are met.  The latest release of VMware Smart Assurance with VIO 6.0, provides assurance capabilities that will deliver service impact and root-cause analysis with visibility across physical and virtual OpenStack networks, as well as multi-cloud networks.  With VMware Smart Assurance and VIO 6.0, CSPs will gain an automated approach to operational intelligence to reduce service impact and operational expenses.

Additional Resources

  • Latest VMware Integrated OpenStack information can be found here
  • Read the latest VIO 6.0 release notes and get technical documentation here
  • Learn more at vmware.com

 

 

VMware & Nokia at MWC Americas 2018

 

Nokia and VMware continue to collaborate to deliver integrated and proven end-to-end NFV solutions that are widely deployed at communication service providers (CSPs) globally. Together we bring joint value, with decades of telecom innovation combined with virtualization and cloud expertise. Powered by continued investment in its networking and cloud technology portfolio, and supported by the research and creativity of Bell Labs, Nokia has the industry’s most complete end-to-end telecom portfolio of products and services. Likewise, VMware brings more than two decades of innovation and has delivered the most trusted and widely deployed virtualization and cloud solution to the market. The end result is a solid, unified solution that helps our customers address today’s rapid technology shifts and capitalize on new opportunities as they arise, ultimately positioning CSPs for rapid growth by accelerating their digital transformation journey to the new era of enhanced mobile broadband, IoT, and 5G.

Leveraging the VMware vCloud NFV platform, Nokia and VMware have delivered joint Virtual Network Functions (VNFs).

Additional details, such as the solution overview, solution white papers, and Ready for NFV certification details, can be found on the VMware Solution Exchange: https://marketplace.vmware.com/vsx/company/nokia

Nokia and VMware have put their heads together to simplify the day-to-day operational requirements of efficiently running NFV services. To learn more about our joint capabilities and how we intelligently automate network functions on a virtualized telco cloud, visit us at Mobile World Congress Americas in Los Angeles, South Hall Booth 1714, and see a live demonstration. Here is a sneak peek of what you will see:

In one demo, we will show an instance of a Nokia Cloud Packet Core network function that is exceeding its defined capacity thresholds. Our joint solution will show the automated scaling of a Cloud Packet Core network function to address dynamic VNF demands.

As more and more CSP networks transition from bare metal to NFV, the upgrade of the NFV infrastructure has quickly become a significant operational challenge. Over the past 20 years, VMware has evolved to become the industry standard for seamless and effortless upgrades. This fundamental capability is also readily available in VMware’s vCloud NFV platform. Nokia and VMware have partnered to demonstrate how to upgrade the NFV infrastructure without disrupting traffic through Nokia’s Cloud Packet Core functions. Join us to see how effortless an NFVI upgrade can be with Nokia and VMware.

 

A Transformation Towards a 5G-Ready NFV Infrastructure

Mobile wireless technologies have gone through systematic generational refreshes over the last three decades. The next such refresh, called 5G, isn’t just an incremental upgrade from 4G, but represents a significant leap forward.  5G promises to improve efficiency, enhance security, reduce latency times, increase network capacity, and accelerate current data connections by 10, or even 100 times.  And for the first time, 5G brings wireline technologies into greater prominence and convergence with wireless infrastructure, enabling telecom carriers to extract efficiency from their full network.

However, before the benefits of 5G can be widely exploited, Communications Service Providers need to invest in virtualized and cloud-based infrastructure. 5G requires both a software-driven architecture and Network Functions Virtualization, or NFV, to be successful.

At VMware, we see 5G as a big opportunity and challenge for Communications Service Providers to leverage the benefits of 5G with new business models and to reassert their influence in the cloud economy. While NFV is the foundation for delivering 5G, CSPs are going to require a telecom transformation that represents a new approach to delivering agile services and enables these services to move between clouds, including private, edge, and public clouds.

A key aspect of 5G networks is network slicing.  For CSPs, network slicing provides the ability to divide and scale the network on an as-a-service and ‘on-demand’ basis.  This requires an advanced, software-defined infrastructure that allows multiple virtual networks to be created atop a shared physical infrastructure. Virtual networks can then be customized to meet the needs of applications, services, devices, customers or other global CSPs.

For CSPs, network slicing enables new business opportunities. Services such as remote health and surgery, smart metering, smart drones, and connected cars all require connectivity, but with vastly different network characteristics.  New technologies, such as virtualization, network programmability, and network slicing will enable networks to be customized and meet the needs for each application.  As a result, new products and services can be introduced to market quickly and can be easily customized, deployed, and adapted to fast-changing demands.

VMware and Telia have jointly collaborated on a 5G Technical Whitepaper: 5G-Ready NFV Infrastructure- A Transformation Journey Towards Full Automation.  In this paper, we have outlined the characterization of the network driven by 5G and the requirements that CSPs should take into consideration to support new services and SLAs.  In addition, the paper highlights the requirement to provide an efficient network infrastructure that can deliver services at the best price point, along with creating an on-demand network to support new business consumption models and a frequent service rollout cadence.

We believe 5G will deliver massive changes, as it will create a larger, more efficient network that offers new possibilities for developers and CSPs. The collaborative nature of 5G may also prompt more partnering across the ecosystem and alter the competitive landscape of the industry. Hybrid environments that leverage the best of both worlds, coupling hyperscale public cloud infrastructures like those from Amazon Web Services or IBM with the NFV-enhanced clouds from CSPs, could become the basis for new marketplaces for consumer and enterprise applications and tools.