
Author Archives: mmahmoodi

Running CNFs on Bare Metal — Merit or Mirage? The Abstraction of Virtualization Yields Concrete Benefits

As CSPs turn to containers to help roll out 5G services and pursue new use cases, engineers and architects at CSPs are trying to gauge the benefits of running containers on virtual machines or bare metal.

A container wraps a network function in a consistent, portable package that can be independently distributed and modified with little effort and few dependencies. Containers then run on a host operating system and share its kernel. The host operating system resides on either a virtual machine or a physical server.

If you’re part of a 5G effort at a CSP, you’re probably weighing the merits of running containers on virtual machines against running them on bare metal. Containerized network functions (CNFs) help CSPs streamline the development and deployment of 5G services and functions, giving you the flexibility, speed, and agility to address 5G use cases while maintaining or exceeding your existing levels of security, performance, and reliability.

Embodied in the term cloud-native technologies, this containerization trend is advanced by using a microservices architecture and a container orchestration system—typically Kubernetes. Containers, in general, can ease the path to being able to independently deploy, modify, and maintain network functions. Kubernetes comes into the picture to automate the deployment and management of containerized functions and services at scale.
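As a minimal sketch of what that automation looks like in practice (the image and names below are placeholders, not an actual network function), a containerized function is described declaratively and Kubernetes keeps it running at the requested scale:

```shell
# Hypothetical example: deploy a packaged function as a Kubernetes
# Deployment with three replicas; the image name is a placeholder.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-cnf
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-cnf
  template:
    metadata:
      labels:
        app: example-cnf
    spec:
      containers:
      - name: example-cnf
        image: registry.example.com/example-cnf:1.0
EOF

# Kubernetes then maintains the declared state; scaling is one command:
kubectl scale deployment example-cnf --replicas=5
```

The point of the declarative form is that the platform, not the operator, reconciles the running state with the requested state.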

When it comes to containers, some people tend to cast the choice between virtual machines and bare metal as a binary one, but that’s not the case. Containers are a form of operating system virtualization; virtual machines are, of course, hardware virtualization, which was originally developed to eliminate the many pain points of working with physical hardware and to reduce costs.

As such, VMs solve infrastructure-related problems by better utilizing servers, improving infrastructure management, streamlining IT operations, and isolating resources for security. These are some of the reasons why the major public cloud providers use hypervisors and VMs to run containers. Containers solve application-related problems by, among other things, streamlining DevOps, fostering a microservices architecture, improving portability, and further improving resource utilization.

Containers complement the many benefits of hardware virtualization, and security is a case in point. Because containers alone are inadequate security boundaries, the strong isolation provided by VMs improves security for containerized functions and services, and the mature, proven ecosystem of virtualization technology enables you to build security into the infrastructure with such measures as micro-segmentation.

This passage from the NIST Application Container Security Guide (NIST Special Publication 800-190) sums up this synergy nicely:

“Although containers are sometimes thought of as the next phase of virtualization, surpassing hardware virtualization, the reality for most organizations is less about revolution than evolution. Containers and hardware virtualization not only can, but very frequently do, coexist well and actually enhance each other’s capabilities. VMs provide many benefits, such as strong isolation, OS automation, and a wide and deep ecosystem of solutions. Organizations do not need to make a choice between containers and VMs. Instead, organizations can continue to use VMs to deploy, partition, and manage their hardware, while using containers to package their apps and utilize each VM more efficiently.”

Because of this synergistic problem-solving relationship, running containers on virtual machines helps CSPs speed up the transition from 4G to 5G and ease the management of CNFs and 5G services. At the center of this combination is VMware Telco Cloud Platform, which uses a telco-grade Kubernetes distribution to orchestrate containers on virtual machines in a telco cloud.

A new white paper and an executive-level solution brief from VMware explain how running containers on VMs establishes the perfect catalyst for efficiently and securely operating CNFs at scale.

Visit telco.vmware.com for more information on VMware’s Telco Cloud.

Announcing VMware Telco Cloud Platform — A Cloud-Native Architecture to Propel CSPs Toward 5G

The rollout of 5G networks is driving a monumental shift among communications service providers. As a CSP, you’re likely envisioning a multi-cloud strategy that lets you deploy both virtual network functions and cloud-native network functions from various vendors side by side on hybrid infrastructure so you can rapidly launch new services, explore new opportunities, and improve your competitive position.

How can you modernize your network and your infrastructure in a way that gives you the agility and efficiency to be able to pursue your 5G objectives while maintaining carrier-grade performance, quality, and reliability?

The following elements are critical to establishing a modern cloud with the power to innovate quickly, scale with elasticity, and manage functions and services efficiently:

  • Cloud-native technology such as containers and Kubernetes that lets you build, manage, and run cloud-native network functions (CNFs) across distributed sites.
  • Hybrid infrastructure that spans across multiple clouds and sites, from the core to the edge and from private to public clouds.
  • Multi-layer, cloud-first automation that unites your infrastructure and multi-cloud resources in a centralized orchestration system, which uses intent-based placement with late binding for optimization.

The trick, of course, is combining all of these elements into a consistent, horizontal platform that eliminates silos, simplifies operations, and manages your networks and infrastructure efficiently so you can keep costs low while maintaining carrier-grade quality.

VMware Telco Cloud Platform does just that, and we are excited to announce the platform. Powered by field-proven telco infrastructure and cloud-first automation, VMware Telco Cloud Platform is a multi-cloud platform that enables you to rapidly deploy and efficiently operate multi-vendor CNFs and VNFs with agility and scalability across distributed 5G networks, from the core and the edge to the radio access network (RAN).

By solving the problems that undermine the architecture of existing telecommunications networks — vertical monolithic stacks marred by complexity, silos, and vendor lock-in — VMware Telco Cloud Platform empowers you to launch innovative services on consistent horizontal infrastructure that reduces operational complexity and radically improves agility. The two fundamental elements of this architecture are VMware Telco Cloud Infrastructure and VMware Telco Cloud Automation. As such, the platform’s architecture includes not only compute, storage, and networking but also containers as a service (CaaS) and multi-layer automation.

VMware Telco Cloud Platform establishes an open, disaggregated, and vendor-agnostic ecosystem to streamline 5G service delivery from design to lifecycle management automation while creating a unified, developer-friendly architecture with key capabilities for resource optimization, operational consistency, and multi-layer automation.

Here’s a summary of the platform’s critical capabilities and some of the associated business benefits.

Consistent Horizontal Platform

  • Consistent and horizontal platform: The platform’s hybrid IaaS and CaaS modernizes existing clouds to run both VNFs and CNFs across unified, consistent infrastructure. This architecture fosters low-latency performance in the data plane and improves scalability through virtualized networking with VMware NSX.
  • Multiple clouds with centralized management: The platform enables you to manage and automate functions, services, and resources across multiple clouds and sites. From a centralized location, you can seamlessly accelerate service delivery across your network.

Carrier-Grade Cloud-Native Capabilities

  • Cloud-native architecture: You can deploy, orchestrate, and optimize cloud resources and processes with intent-based placement. This cloud-native architecture establishes network resiliency, seamless cross-cloud application continuity, and multi-tenant service isolation to address business requirements and compliance regulations, such as high availability and service-level agreements.
  • Containers as a service (CaaS): The platform provides containers as a service (CaaS), which includes telco-specific enhancements that operationalize Kubernetes and containers specifically for telco networks.
  • Carrier-grade Kubernetes: The platform lets you capitalize on the advantages of a microservices architecture. You can use microservices with a resource-optimized Kubernetes runtime for device attachment, NUMA alignment, resource reservation, and placement. This cloud-native architecture delivers the capability to roll out 5G networks with Multus, DPDK modules, an SR-IOV plugin, CPU/Topology Manager, and Kubernetes cluster automation tailored for telco use cases.
  • Kubernetes cluster management: The platform deploys and operates new Kubernetes versions and worker nodes, and it validates on-boarded network functions on the updated version of Kubernetes. Cluster management eases the shift to Kubernetes so the business can focus on deploying new services.
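To make the Multus and SR-IOV capabilities above concrete, here is a hedged sketch of how a secondary SR-IOV data-plane interface is typically attached to a pod on a Multus-enabled cluster; the resource name, subnet, and image are placeholders for values your platform would supply:

```shell
# Hypothetical sketch: define an SR-IOV secondary network with Multus,
# then request it from a pod via annotation. All names are placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net1
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/sriov_netdevice
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "sriov",
    "ipam": { "type": "host-local", "subnet": "10.10.0.0/24" }
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: dataplane-cnf
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net1
spec:
  containers:
  - name: dataplane-cnf
    image: registry.example.com/dataplane-cnf:1.0
    resources:
      limits:
        intel.com/sriov_netdevice: "1"
EOF
```

The pod keeps its default cluster interface for management traffic while the SR-IOV virtual function carries the data plane.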

Multi-Layer Automation 

  • Zero-touch provisioning: The platform enables you to automate the onboarding and upgrading of network functions and infrastructure components with zero-touch provisioning. Zero-touch provisioning simplifies operations by automatically provisioning new sites, services, and functions. Predefined templates for core, edge, and other sites let you rapidly set up and deploy infrastructure and services. By coupling this new capability with automated CaaS management, VNF management, and NFV operations, you can automatically roll out a complete telco cloud from infrastructure and CaaS to network functions and services.
  • Intelligent placement of functions through service-aware infrastructure: This capability optimizes resource utilization by analyzing infrastructure usage and service requirements. Based on holistic information gleaned from continuously synchronizing with registered clouds, VMware Telco Cloud Platform recommends where and when network functions should be deployed. This capability improves resource utilization and operational efficiency by dynamically adjusting the deployment schema. As a result, you can architect your 5G systems for optimal application response, scale, and service availability. Say goodbye to all those retries.
  • Dynamic resource allocation and late binding for optimization: A CNF is placed using late binding in Kubernetes clusters that were fine-tuned during instantiation to meet the CNF’s requirements. The container network interface (CNI) and the operating system for the container host are configured to fulfill the needs of the CNF. This automation improves the resource utilization of clusters. More specifically, during the workload instantiation process, if none of the available Kubernetes clusters is suitable, the system optimizes an existing cluster or creates a new one that matches the network function’s requirements, such as location, DPDK, and SR-IOV.
  • Multi-layer lifecycle management: The platform improves operational efficiency by automating the provisioning and management of all the layers of the telco cloud, from network services to infrastructure, reducing provisioning and maintenance costs.
  • CI/CD pipeline integration: The platform makes possible lean and agile DevOps practices across operational functions by integrating with your CI/CD pipeline to deploy, redeploy, and upgrade network functions quickly and reliably, which helps achieve telco-grade resiliency and always-on service availability. These capabilities help you connect your business objectives and organizational structures with technical solutions that address 5G use cases.

Business Benefits for the 5G Era

These capabilities come together to drive a unique combination of benefits for a multi-cloud 5G era. The platform empowers you to:

  • Innovate faster by modernizing your telco cloud with web-scale speed and agility while maintaining carrier-grade performance, resiliency, and quality.
  • Deploy network functions and services throughout 5G networks, from the core to edge sites.
  • Run and manage CNFs and VNFs side by side on consistent horizontal infrastructure.

The combination of these capabilities and their benefits gives you the foundation for digital transformation. In the face of fierce competition and a rapidly changing marketplace, you can stand out by bringing innovative services to the market faster and by establishing a cutting-edge position in the new landscape of 5G.

Learn more about VMware’s Telco Cloud.

Introducing VMware Integrated OpenStack 7.0

VMware recently announced VMware Integrated OpenStack 7.0. VMware Integrated OpenStack delivers out-of-the-box OpenStack functionality and an easy configuration workflow. We are truly excited about our latest OpenStack distribution, as this release enables customers to take advantage of advancements in the upstream Train release and is fully compatible with VMware vSphere 7.0 and NSX-T Data Center 3.0.

For our Telco customers, VMware Integrated OpenStack 7.0 offers a 5G-ready platform that meets the demands of their 5G core and other workloads. VMware Integrated OpenStack 7.0 is built with VMware Tanzu Kubernetes Grid as its control plane, providing resilience in addition to availability. Upgrading directly from VMware Integrated OpenStack 5.1 or VMware Integrated OpenStack 6.0 to VMware Integrated OpenStack 7.0 is seamless with zero data plane downtime. We are super excited to bring these features in VMware Integrated OpenStack 7.0!

VMware Integrated OpenStack 7.0 Feature Details:

Feature Enhancements:

OpenStack Release:

  • Alignment with upstream OpenStack Train release

Seamless Integration with VMware SDDC:

  • Interop with the latest vSphere 7.0, vSAN and NSX-T 3.0
  • vRealize Operations Management Pack
  • vRealize Log Insight


  • Selective vCPU Pinning: Previously, VMs set to high latency sensitivity required full CPU reservation, and all vCPUs were pinned to physical cores. In VMware Integrated OpenStack 7.0, you can specify which vCPUs need to be pinned to physical cores. This feature supports mixing high-performing applications and non-critical applications while providing better CPU resource utilization, improving consolidation ratios and virtualization ROI for Telco NFV workloads.
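As an illustrative sketch, latency sensitivity is requested through Nova flavor extra specs. The `vmware:latency_sensitivity_level` key shown below is the VMware-specific extra spec for marking a flavor latency-sensitive; the exact key for selecting which individual vCPUs to pin is not shown here and should be taken from the VMware Integrated OpenStack 7.0 documentation:

```shell
# Sketch only: flavor name and sizing are placeholders. Consult the
# VIO 7.0 docs for the extra-spec key that selects specific vCPUs to pin.
openstack flavor create --vcpus 4 --ram 8192 --disk 40 cnf.medium
openstack flavor set cnf.medium \
  --property vmware:latency_sensitivity_level=high
```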


  • Support for NSX-T 3.0 Policy API for L3: Enhances the intent-based API and policy UI to retrieve runtime information on the gateways.
  • Network Trunk services: The trunk service plugin is enabled by default with the Neutron NSX-T Policy Plugin. It enables VNFs to connect to many networks at once through a single vNIC and to dynamically connect to or disconnect from networks. It also provides containers with network isolation inside Nova instances and allows:
    • Multiple networks to be connected to an instance using a single virtual NIC
    • Multiple networks to be presented to an instance by connecting it to a single port
  • IPv6 Support:
    • Full IPv6 data plane support
    • FWaaS: Leverages the Tier-1 edge firewall, is fully compatible with the Neutron FWaaS API spec, and supports the FWaaS v2 API in VMware Integrated OpenStack 7.0
    • LBaaS: neutron-lbaas and neutron-lbaas-dashboard have been deprecated since the Queens OpenStack release cycle. To align with the OpenStack community, the LBaaS service in VMware Integrated OpenStack 7.0 has been migrated from Neutron LBaaS to Octavia while preserving the LBaaS v2 API.
  • SR-IOV Network Redundancy Support: Provides high availability for VM connectivity through SR-IOV networks by ensuring virtual functions are scheduled to separate physical functions
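The trunk workflow above can be sketched with the standard OpenStack client (network and port names are placeholders): a parent port carries the instance’s primary network, and additional networks ride on VLAN-tagged subports without extra vNICs.

```shell
# Create a parent port and a trunk on it (names are placeholders).
openstack port create --network mgmt-net parent-port
openstack network trunk create --parent-port parent-port vnf-trunk

# Present a second network to the instance as a VLAN subport
# instead of attaching another vNIC.
openstack port create --network data-net sub-port-1
openstack network trunk set vnf-trunk \
  --subport port=sub-port-1,segmentation-type=vlan,segmentation-id=101
```

Booting the instance on the parent port then exposes each subport as a tagged VLAN inside the guest.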


  • EVPN and L3 Multicast: Configured through NSX-T (transparent to VIO)
  • VRF Lite Support: Support for VRF Lite in NSX-T 3.0 provides multi-tenant data plane isolation through Virtual Routing and Forwarding (VRF) in the Tier-0 gateway, with a separate routing table, NAT, and firewall within each VRF, eliminating the need for separate Tier-0 deployments for multiple tenants.

VMware Integrated OpenStack Management Cluster update:

Management API: VMware Integrated OpenStack 7.0 introduces a public API to automate VMware Integrated OpenStack management. These fully open, Kubernetes-based lifecycle management APIs with CRD extensions cover both day-1 deployment and day-2 management of VMware Integrated OpenStack services.


  • All OpenStack components run on Python 3

VMware Integrated OpenStack Control Plane Update: The dedicated Kubernetes cluster underpinning the VMware Integrated OpenStack control plane has been upgraded to Tanzu Kubernetes Grid (Kubernetes 1.17.2).

Platform Lifecycle Automation:

  • Skip-release upgrades:
    • Support upgrade from 6.0 to 7.0
    • Support upgrade from 5.1 to 7.0 directly
  • Patch management:
    • Patches are delivered as container images rolled out via Kubernetes
    • viocli command set for patch management
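A hedged sketch of the patch workflow follows; the viocli subcommand names and arguments here are assumptions, so verify them against the VMware Integrated OpenStack 7.0 CLI reference before use:

```shell
# Assumed workflow: list known patches, then roll one out as container
# images via the Kubernetes-based control plane. The subcommands and
# <patch-name> placeholder must be checked against the VIO 7.0 docs.
viocli patch list
viocli patch apply <patch-name>
```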

VMware Integrated OpenStack LCM UI Enhancement:

  • Admins can see each service’s desired vs. observed state
  • Easier to create Neutron availability zones directly from UI

VMware Integrated OpenStack CLI updates: The enhanced CLI includes:

  • Ability to start and stop each individual service, automatically propagating changes with a single command on the Custom Resource (CR)
  • More secure backups
  • Bash completion and CLI shortcuts added to LCM

VMware Integrated OpenStack 7.0 Max Configuration Data: VMware configuration maximums are published to https://configmax.vmware.com/


VMware vCloud Director 9.7 Appliance Installation

Starting with version 9.7, the vCloud Director appliance includes an embedded PostgreSQL database with a high availability (HA) function. By contrast, vCloud Director on Linux uses an external database that must be installed and configured before you install vCloud Director on Linux.

You can create a vCloud Director server group by deploying one or more instances of the vCloud Director appliance, with the first member as a primary cell and each subsequent member as a standby or vCD application cell. You deploy the vCloud Director appliance by using the vSphere Client (HTML5), the vSphere Web Client (Flex), or VMware OVF Tool.

This blog post describes deploying a vCloud Director server group using VMware OVF Tool. You can also use the Deploy OVF Template workflow in vCenter to deploy the appliances from a single OVA file, which offers five deployment configurations to choose from: primary node (small and large), standby node (small and large), and vCD cell application node. The large vCloud Director primary appliance size is suitable for production systems, while the small is suitable for lab or test systems.
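A command-line deployment of the primary cell can be sketched with OVF Tool as follows. The flags are standard ovftool options, but the deployment-option identifier, network names, datastore, and target locator are all placeholders, and the appliance’s OVF properties (IP addresses, NTP server, passwords) should be taken from the Installation, Configuration, and Upgrade Guide:

```shell
# Sketch only: every value here is a placeholder for your environment.
ovftool \
  --name=vcd-cell-01 \
  --deploymentOption=primary-small \
  --datastore=vsanDatastore \
  --net:"eth0 Network"="vcd-http-pg" \
  --net:"eth1 Network"="vcd-db-pg" \
  --acceptAllEulas --powerOn \
  VMware_vCloud_Director.ova \
  'vi://administrator@vsphere.local@vcenter.example.com/DC1/host/Cluster1'
```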

For more details, see the vCloud Director Installation, Configuration, and Upgrade Guide.

Note:

  • Mixed vCloud Director installations on Linux and vCloud Director appliance deployments in one server group are unsupported.
  • The vCloud Director appliance does not support external databases.

To create a deployment with a database HA cluster, you deploy one instance of the vCloud Director appliance as a primary cell and two instances as standby cells (in this example, we deploy only one standby cell). For a production environment, however, we recommend deploying two standby cells. vCD application cells connect to the database in the primary cell.


The recommended datastore for the appliance deployment is a vSAN datastore with a predefined storage policy.

Next, regarding network configuration: starting with version 9.7, the vCloud Director appliance is deployed with two networks, eth0 and eth1, so that you can isolate the HTTP traffic from the database traffic. Different services listen on one or both of the corresponding network interfaces. The first interface, eth0, is primarily used for services such as HTTP (ports 80 and 443), the console proxy (port 8443), and JMX (ports 61611 and 61616). The second interface, eth1, is used for database communication (port 5432). Both interfaces are used for services such as SSH and the management UI. Ensure you define these interface subnets based on your testbed requirements. Here we use IP addresses in the same subnet for both interfaces.

Note:

After you deploy the vCloud Director appliance, you cannot change the eth0 and eth1 network IP addresses or the hostname of the appliance. If you want the vCloud Director appliance to have different addresses or hostname, you must deploy a new appliance.

To see whether the deployments succeeded, you can use the eth0 IP address of the primary cell to log in to the provider admin portal, or you can check the status of the cell by logging in to the appliance management user interface at https://vcd_ip_address:5480.

To deploy standby cells, use the same steps to deploy an appliance with the deployment configuration set to Standby small. Once deployed, the embedded database is configured in replication mode with the primary database.


After the initial standby appliance deployment, the replication manager begins synchronizing its database with the primary appliance database. During this time, the vCloud Director database and therefore the vCloud Director UI are unavailable.

Once deployment is successful, you can verify the database high availability cluster status by logging in to either cell’s admin portal or the appliance management user interface.

What to do next:

If a standby cell is not in a running state, deploy a new standby cell.

If the primary cell is not in a running state, Recover from a Primary Database Failure in a High Availability Cluster.

Detailed step-by-step demo videos can be found on our Telco YouTube channel.


VMware vCloud Director Integration with vCenter and NSX

Once the vCloud Director appliance is deployed and configured, register vCenter with vCloud Director from the vCloud provider admin portal.

Step 1: First, log in to the vCloud Director service provider admin portal at https://VCD_FQDN_or_IP_address/provider with administrator credentials >> vSphere Resources >> vCenters >> Add vCenter Server

Step 2: Based on your environment, you can skip or add the NSX-V Manager instance that is associated with the vCenter Server and complete the registration

Step 3: Once vCenter is registered, you can register NSX-T separately, as it is independent of vCenter. You may register NSX-T from the admin portal >> vSphere Resources >> NSX-T Managers >> Register NSX-T Manager

Once registration is complete, you can perform various operations to utilize the NSX-T features. For example, network pool creation is the same as with VXLAN. You can also import all the networks defined in NSX-T.


Now you can create a Provider VDC (PVDC), which is a collection of compute, memory, and storage resources from a vCenter Server instance that relies on NSX Data Center for vSphere or NSX-T Data Center for network resources. In addition, a Provider VDC provides resources to organization VDCs. A Provider VDC can be created using the vCloud Director API, a powerful and easy-to-use solution for getting information about organizations, VDCs, networking, and vApps. In this blog post, the Postman REST API client for Chrome is used for making the vCloud Director API calls.


Step 1: Get the “x-vcloud-authorization” token value which is required for further REST API calls
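For example, with curl instead of Postman (the host, credentials, and API version header are placeholders for your environment), the token is returned in the x-vcloud-authorization response header of a POST to /api/sessions:

```shell
# Log in; the token appears in the x-vcloud-authorization response header.
curl -k -i -X POST \
  -u 'administrator@System:password' \
  -H 'Accept: application/*+xml;version=31.0' \
  https://vcd.example.com/api/sessions

# Reuse the token on subsequent calls, for example to list NSX-T Managers:
curl -k \
  -H 'Accept: application/*+xml;version=31.0' \
  -H 'x-vcloud-authorization: <token-from-previous-response>' \
  https://vcd.example.com/api/admin/extension/nsxtManagers
```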


Step 2: Retrieve the NSX-T Manager registered to this cloud.

Step 3: Retrieve the list of vCenter servers registered to this cloud.

Step 4: Retrieve the resources on a vCenter Server

Step 5: Retrieve a list of resource pools from a vCenter Server

Step 6: Create a Provider VDC backed by NSX-T Data Center.

Step 7: Finally, verify the Provider VDC from the vCloud Director Service Provider Admin portal.

Next, you can create organization VDCs and external networks based on your environment.

Detailed step-by-step demos can be found on our Telco YouTube channel.

Multi-Tenancy with vCloud Director and NSX-T

This blog post walks through the steps to achieve secure multi-tenancy with vCloud Director and NSX-T. The reference topology below is used to show the network resource isolation. For example, as shown below, we create two tenants: Tenant A with two VMs and Tenant B with one VM.

Network isolation is achieved with the advanced networking capabilities of NSX-T Data Center, which provides fully isolated and secure traffic paths across the workload and tenant switching and routing fabric. As described in Multi-Tenancy Design Objectives, NSX-T Data Center introduces a two-tiered routing architecture enabling the management of networks at the provider (Tier-0) and tenant (Tier-1) tiers. As shown in the reference topology above, a provider routing tier is attached to the physical network for north-south traffic, while the tenant routing context connects to the provider Tier-0 and manages east-west communications. In vCloud Director, each organization VDC has a single Tier-1 distributed router that provides the intra-tenant routing capabilities.


Step 1: From the vCloud Director Admin Portal, create two organizations, one for each tenant: Tenant A and Tenant B.

Step 2: Create two organization VDCs, one for each tenant, Tenant A and Tenant B, using the wizard as follows:

Step 3: Create two logical switches using overlay networks and two uplink logical switches using VLAN on NSX-T, one for each tenant, Tenant A and Tenant B.

Step 4: Create two Tier-0 routers on NSX-T, one for each tenant: Tenant A (high-availability mode Active-Active) and Tenant B (high-availability mode Active-Standby).


Step 5: Create two Tier-1 routers on NSX-T, one for each tenant, Tenant A and Tenant B.

Step 6: Create uplink router ports on NSX-T for each of the Tier-0 routers, for both tenants’ virtual machines to connect, using the uplink logical switches created earlier.

Step 7: Enable route redistribution and create a new redistribution criterion to allow the T0 and T1 sources for each of the Tier-0 routers, for both Tenant A and Tenant B.

Step 8: Create downlink ports for each of the Tier-1 routers, which will be used as gateways for both tenants’ virtual machines, using the logical switches created earlier.

Step 9: From the vCloud Director tenant portal of each tenant, import the logical networks created in NSX-T for that tenant and add static IP pools in that subnet.

Step 10: Create a new vApp for Tenant A by adding two virtual machines as per the reference topology.

Step 11: Add the networks imported from NSX-T into vApp.

Step 12: For each VM in the vApp, edit the network settings (shown here for VM-1 in Tenant A) to select the newly added network and the static IP pool created earlier.

Step 13: Power on the vApp and repeat steps 9-12 for Tenant B.

Step 14: Now verify the connectivity between virtual machines in Tenant-A. Results show a successful ping between VM-1 and VM-2 in Tenant-A.

Step 15: Now verify the connectivity between virtual machines in Tenant-A and Tenant-B. Results show that the ping between the VMs in Tenant-A and the VM in Tenant-B fails, confirming secure multi-tenancy between the tenants.

Detailed step-by-step demos can be found on the Telco YouTube channel.


Load Balancing vCloud Director with NSX

This blog post shows how to load balance vCloud Director cells with NSX-T. To use logical load balancers, you must start by configuring a load balancer instance, which is deployed into the NSX-T Edge cluster. You can configure the load balancer in different sizes; the size determines the number of virtual servers, server pools, and pool members the load balancer can support.

For more details, refer to Scaling Load Balancer Resources.

Create a Logical Switch

From the NSX-T Manager UI, create a VLAN logical switch and provide the VLAN ID (in this example, VLAN 0). Advanced Networking & Security > Networking > Switches > Add New Logical Switch

Create Tier-1 Router

On the NSX-T UI, deploy a new standalone Tier-1 router.

Add a new logical router port on the newly created Tier-1 router as a Centralized Service Port, connecting it to the logical switch created earlier.

Under Subnets, add an IP address and subnet that will be used as the load balancer virtual IP address in later steps.

Add Load Balancer

Now we can create the load balancer instance and associate the virtual servers with it. Create the LB instance on the Tier-1 gateway that routes to your VCD cell network. Make sure the Tier-1 gateway runs on an Edge node of the proper size (see the documentation link above).

Advanced Networking & Security > Networking > Load Balancers > Add

In this example, we use the following:

  • Name: VCD_LB
  • Size: small

First, we need to attach the Tier-1 router created in the previous step.

Load Balancers > VCD_LB Overview > Attachment > Edit

Add Active Monitor

Next, we configure an active health monitor, which performs health checks on load balancer pool members according to the active health monitor parameters.

Create a new monitor in Advanced Networking & Security > Networking > Load Balancers > Monitors > Add New Active Health Monitor

  • Health Check Protocol: LbTcpMonitor
  • Monitoring Port: 443
  • Default Interval, Fall Count, Rise Count, and Timeout Period

Add Server Pools

Create a server pool with the vCloud Director cells as the pool members. NSX-T server pools handle traffic for use by the virtual server.

Create a new server pool in Advanced Networking & Security > Networking > Load Balancers > Server Pools > Add New Server Pool

  • Load Balancing Algorithm: Round Robin
  • TCP Monitoring: Disabled (default)
  • SNAT Translation: Auto Map
  • Pool members: Add vCloud Director Cells IPs
  • Health Monitor: Created in above step

Add Virtual Servers

Create a new virtual server in Advanced Networking & Security > Networking > Load Balancers > Virtual Servers > Add New Virtual Server

  • Application Type: Layer 4 TCP
  • Application Profile: nsx-tcp profile
  • Virtual Server Identifier: IP address of Tier 1 logical port defined above
  • Port: 443, 80
  • Protocol: TCP
  • Server Pool: Created above
  • Load Balancing Profile: nsx-default source-ip persistence profile

Attach the load balancer created above to this virtual server: Advanced Networking & Security > Networking > Load Balancers > Virtual Servers > LB-VirtualServer > Load Balancers > Attach

Verify that the Operational Status is Up.

In this network topology, the load balancer virtual IP and the vCloud Director cell IPs are in the same subnet and reachable from the outside world. If you use internal IPs, you need to set up NAT so that the load balancer virtual servers are reachable from both outside (Tier-0 gateway) and internal networks.

To ensure that clients reach the vCloud Director cells through the public load balancer virtual IP, the load-balanced URL needs to be configured in the vCloud Director public addresses.

Now you can enter the load-balanced URL to access the vCloud Director provider admin portal and verify the configuration.

You can view this demo on our VMware Telco YouTube channel.



VMware Integrated OpenStack 6.0: What’s New




VMware recently announced VMware Integrated OpenStack (VIO) 6.0. We are truly excited about our latest OpenStack distribution as this release enables customers to take advantage of advancements in the upstream Stein release including support for Cinder generic volume groups, improved admin panels, and security improvements throughout the stack.

For our Telco customers, VIO 6.0 delivers scale and availability for hybrid applications across VM-based and container-based workloads using a single VIM (Virtual Infrastructure Manager). VIO 6.0 is built with a Kubernetes-managed high availability control plane on top of the VMware SDDC, providing resilience in addition to availability. Upgrades from VIO 5.1 to VIO 6.0 are seamless with zero data plane downtime. We are super excited to bring these features in VIO 6.0!


VIO 6.0 Feature Details:

Advanced Kubernetes Support:

Kubernetes-powered OpenStack control plane: VIO 6.0 is now intent-based, running on top of a dedicated Kubernetes cluster. The intent-based design allows the VIO control plane to self-heal from failures. Since OpenStack services are now deployed as pods in Kubernetes, the new intent-based control plane also allows VIO components to be horizontally scaled up or down seamlessly and independently of one another. The new architecture achieves a lower out-of-the-box footprint while allowing cloud admins to easily expand capacity.
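The intent-based pattern can be sketched in a few lines: a controller compares declared intent with observed state and emits whatever actions are needed to converge. This is an illustration of the concept only, not the actual VIO controller:

```python
# Minimal sketch of a declarative reconcile loop, the pattern behind
# intent-based control planes like the one in VIO 6.0. Names are
# illustrative, not real VIO components.

def reconcile(desired_replicas: int, running: list[str]) -> list[str]:
    """Return the actions needed to converge running pods to the intent."""
    actions = []
    while len(running) < desired_replicas:        # self-heal / scale out
        running = running + [f"pod-{len(running)}"]
        actions.append(f"start {running[-1]}")
    while len(running) > desired_replicas:        # scale in
        actions.append(f"stop {running[-1]}")
        running = running[:-1]
    return actions

# A pod has failed: the controller restores the declared replica count.
print(reconcile(3, ["pod-0", "pod-1"]))   # prints ['start pod-2']
```

Because the controller acts on drift from the declared state rather than on imperative commands, the same loop handles failure recovery and scaling with no special cases.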

Tenant-facing Kubernetes: In addition to the Kubernetes-managed control plane, VIO provides tenant-facing clusters powered by Essential PKS. A reference implementation using Heat is available for download from GitHub. The Heat stacks are open-source software intended to accelerate Essential PKS on VIO, and support either native integration with NSX-T using NCP (the NSX Container Plugin) or Calico (v3.7) networking.
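For orientation, a Heat template declares such a cluster as a stack of resources. The skeleton below is a hypothetical sketch, not the downloadable Essential PKS stacks; Heat accepts templates in YAML or, as here, JSON built from a Python dict:

```python
# Illustrative skeleton of a Heat template in the spirit of the
# Essential PKS stacks described above. Resource names, flavors, and
# images are hypothetical.
import json

template = {
    "heat_template_version": "2018-08-31",
    "parameters": {
        "node_count": {"type": "number", "default": 3},
    },
    "resources": {
        # A single control-plane server (hypothetical).
        "k8s_master": {
            "type": "OS::Nova::Server",
            "properties": {"flavor": "m1.large", "image": "ubuntu-18.04"},
        },
        # A scalable group of worker nodes driven by a parameter.
        "k8s_workers": {
            "type": "OS::Heat::ResourceGroup",
            "properties": {
                "count": {"get_param": "node_count"},
                "resource_def": {
                    "type": "OS::Nova::Server",
                    "properties": {"flavor": "m1.large",
                                   "image": "ubuntu-18.04"},
                },
            },
        },
    },
}

print(json.dumps(template, indent=2))
```

Because the cluster is expressed declaratively, scaling out or tearing down is a stack update or delete rather than a sequence of manual steps.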

Feature Enhancements:


  • New multi-attach feature allows a volume to be attached to multiple instances at once
  • vSphere First Class Disk (FCD) support:
    • FCD does away with the need for shadow VMs to house unattached Cinder volumes
    • Faster than VMDK driver for most operations
    • Complements the existing VMDK driver: FCD can be enabled as an optional secondary backend
    • Users can create traditional VMDK or FCD volumes using volume types

IPv4 / IPv6 and Dual Stack Support:

  • Dual stack IPv4/IPv6 for Nova instances, Neutron security groups & routers
  • IPv6 support with NSX-T 2.5 and the NSX-P Neutron plugin
  • IPv6 addressing modes: static, SLAAC
  • Static IPv6 routing on Neutron routers
  • IPv6 support for FWaaS

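For example, a SLAAC address is formed by combining the router-advertised /64 prefix with a modified EUI-64 interface identifier derived from the instance's MAC address. A small standard-library sketch (the prefix and MAC are illustrative):

```python
# Sketch of SLAAC address derivation via modified EUI-64: flip the
# universal/local bit of the MAC, insert FFFE in the middle, and
# append the result to the advertised /64 prefix. Values below are
# illustrative documentation addresses.
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    parts = [int(b, 16) for b in mac.split(":")]
    parts[0] ^= 0x02                                  # flip universal/local bit
    eui64 = parts[:3] + [0xFF, 0xFE] + parts[3:]      # insert FFFE in the middle
    iface_id = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.IPv6Network(prefix)
    return net[iface_id]                              # prefix + interface ID

print(slaac_address("2001:db8:1::/64", "52:54:00:12:34:56"))
```

Static addressing, by contrast, simply assigns a fixed address from the subnet; both modes can coexist with IPv4 on the same port in a dual-stack deployment.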

  • Federation support using JSON Web Tokens (JWT)
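A JSON Web Token is three base64url-encoded segments (header.payload.signature). The sketch below hand-builds a token and decodes its claims, the part a federated identity provider asserts about the user; in a real deployment Keystone would also verify the signature against the provider's key. All names are illustrative:

```python
# Sketch of JWT structure: header.payload.signature, each segment
# base64url-encoded without padding. The token is hand-built for
# illustration and the signature is NOT verified here.
import base64
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_segment(seg: str) -> dict:
    pad = "=" * (-len(seg) % 4)           # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(seg + pad))

header  = b64url(json.dumps({"alg": "RS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "demo-user", "iss": "idp.example.com"}).encode())
token = f"{header}.{payload}.signature-goes-here"

claims = decode_segment(token.split(".")[1])
print(claims["sub"])    # prints demo-user
```

The claims (subject, issuer, expiry, and so on) are what the federation mapping rules consume to map an external identity onto a Keystone user and project.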

OpenStack at Scale:

VIO 6.0 seamlessly scales OpenStack services to meet changes in demand and load, supporting horizontal scaling of both the controllers and the pods that run on them. An out-of-the-box compact deployment uses one controller and an out-of-the-box HA deployment uses three, but users can scale up to a maximum of 10 controllers with a few clicks or CLI commands. This gives cloud admins the flexibility to right-size their deployments, and the ability to scale controller nodes provides simple capacity expansion for higher-load environments. VIO 6.0 also supports scaling out individual OpenStack services by increasing pod replica counts, with just a few clicks in the UI or a command from the CLI, without affecting other services or causing data plane downtime.

Essential PKS on OpenStack:

  • Provides a hybrid VM and container platform that combines best-of-breed components
  • OpenStack and Kubernetes APIs for workloads, cluster and resource lifecycle management
  • Ability to deploy Essential PKS with OpenStack Heat for a native OpenStack experience and repeatable cluster creation
  • OpenStack multi-tenancy for more secure separation of container workloads
  • VMware Certified Kubernetes distribution, support and reference architecture with Essential PKS

Enhanced Management Tools:

  • viocli rewritten in Golang, new enhancements added
  • Bash completion and CLI shortcuts added to Life Cycle Manager
  • HTML5 WebUI:
    • No dependency on vCenter Web Plugin
    • Native Clarity theme provides a congruent user experience for VMware admins

Photon 3:

  • VIO control plane VMs now use VMware Photon OS 3, a lightweight, secure, container-optimized Linux distribution backed by VMware
  • Containers are also built on Photon OS 3 base images

Industry-standard APIs:

  • Proprietary OMS APIs replaced with standard Kubernetes APIs and extensions
  • Many parts of VIO can optionally be managed with kubectl commands in addition to viocli
  • Cluster API responsible for additional VM bringup/management


Automated VIO Backups: Cloud administrators can schedule backups of the VIO control plane to a vSphere Content Library

Lifecycle Management: VIO provides lifecycle management for OpenStack components including deployment, patching, upgrade (with NO data plane downtime) and rich day-2 operations via Kubernetes deployment rollouts.

Versioning:  VIO 6.0 comes with built-in versioning and version control of control plane configuration changes.

Clarity Theme:  VIO 6.0 Horizon now ships with a Clarity theme. The VIO 6.0 life cycle manager web interface also uses Clarity, providing a familiar look and feel for vSphere administrators.

OpenStack Helm:  VIO 6.0 uses OpenStack-Helm for deploying OpenStack components. OpenStack-Helm is an OpenStack community project that provides deployment tooling for OpenStack based on the Helm package manager, which is a CNCF project.

OpenStack-Helm provides benefits such as:

  • Better management of loosely coupled OpenStack services
  • Better handling of service dependencies
  • Declarative and easy to operate
  • Enhanced rolling update workflow and rollback


Assurance & Intelligent Operations:

  • Service impact and root-cause analysis with Smart Assurance
  • Operational monitoring and intelligence with vROps OpenStack dashboards and container monitoring
  • vRealize Log Insight integration for insight into day-to-day OpenStack operations
  • Visibility across physical and virtual OpenStack networks
  • An automated approach to operational intelligence to reduce service impact and OpEx

VIO 6.0 Demos:
Below is a list of videos that provide a step-by-step walkthrough of deployment, upgrade, 360-degree visibility, and deployment of VMware Essential PKS on top of VIO 6.0.

1. VMware Integrated OpenStack Deployment: This demo video shows the step-by-step deployment of the VIO virtual appliance on your vCenter Server instance and the deployment of OpenStack using Integrated OpenStack Manager.

2. VMware Integrated OpenStack Upgrade 5.1 to 6.0: Upgrading a VIO 5.1 deployment to VIO 6.0 lets you take advantage of new features and functions while ensuring zero data plane downtime.

3. VMware Essential PKS on top of VMware Integrated OpenStack 6.0: VIO provides pre-built open-source Heat stacks to help deploy Essential PKS on top of VIO for individual cloud tenants. The Heat orchestration engine simplifies and speeds up both deploying Kubernetes and managing its lifecycle (e.g., scaling out and tearing down clusters), using an orchestrator that OpenStack users are already familiar with. This demo video shows the step-by-step process to deploy Essential PKS on top of VIO.


4. 360 Degree Visibility: This demo video shows the integration of VIO 6.0 with vRealize Operations Manager and vRealize Log Insight. vRealize Operations provides a comprehensive dashboard for monitoring the Health, Risk, and Efficiency of your entire SDDC infrastructure. The vRealize Operations OpenStack Management Pack adds the ability to monitor and troubleshoot VMware Integrated OpenStack or other OpenStack distributions. vRealize Log Insight extends analytics to unstructured data and log management, giving you operational intelligence and deep enterprise-wide visibility across all tiers of your IT infrastructure and applications.

Accelerate the move to 5G and Edge on VMware Integrated OpenStack


Last week at VMworld, we announced the launch of VMware Integrated OpenStack v6.0 in a press release. In it, we highlighted the expansion of VMware's Telco and Edge Cloud portfolio to drive real-time intelligence for telco networks, as well as improved automation and security for Telco, Edge, and IoT applications.  A key element of our telco portfolio is VMware Integrated OpenStack: VMware continues to invest in OpenStack-managed virtualized telco clouds, enabling Communications Service Providers to deploy networks and services across multiple clouds (Private, Telco, Edge, and Public) with consistent operations and management across them.

Today, we are excited to announce that VIO 6.0 is now officially available for our Communications Service Provider and Enterprise customers to download and install.  We have added several new capabilities in the latest release, giving CSPs the fastest path to deploy services on OpenStack.

VIO based on Stein

VMware continues its leadership as one of the first commercial OpenStack distributions to support Stein, fully tested and validated with OpenStack 2018.11. With the latest Stein release, VMware continues to deliver core functionality with strengthened container and networking capabilities to support key CSP use cases, including NFV, Edge, and the network evolution to 5G.

Cloud Native brings greater efficiency and higher resiliency for 5G networks

5G networks are being built on the premise that they will be cloud native.  A cloud-native architecture accelerates CSPs' time to deploy and scale services, and provides greater resiliency and flexibility in the network, with the ability to rapidly instantiate new services and applications based on real-time customer demand.

The latest release of VMware Integrated OpenStack 6.0 includes support for VMware Essential PKS, which provides CSPs access to the latest release of upstream Kubernetes, supported by VMware, giving CSPs a flexible, secure cloud platform on which to build, run, and manage next-generation container-based applications and services on any cloud.

As CSPs evolve their network architectures from 4G to 5G, maintaining a hybrid network with both VM and Container-based workloads will be required.  With VIO 6.0, CSPs will be able to deploy and manage both environments using a common platform.

As part of VMware Essential PKS, CSPs have access to VMware's Kubernetes architect team, who can guide them through every step of their cloud native journey and ensure they build a platform that supports network operations at massive scale.

Virtual Cloud Networking Scale and Performance with VIO support of NSX-T Data Center

As CSPs make the transformation from 4G to 5G networks to support the massive number of mobile and IoT devices, they require an NFVI platform that is flexible and can seamlessly scale to meet the demands of 5G and multi-cloud environments.

VIO 6.0 natively integrates with the latest release of VMware NSX-T 2.4, supporting greater scale, resiliency, and performance, with near line-rate speed using a DPDK-based accelerated data plane.

With the depletion of IPv4 addresses and the massive number of IoT and mobile devices, the adoption of IPv6 will continue to grow.  VIO 6.0 with NSX-T 2.4 introduces support for IPv6 to meet the critical requirement of cloud-scale networks for CSPs.  With dual-stack support, CSPs can continue to manage dual IPv4 and IPv6 stacks across their control and data plane.

Service Assurance across OpenStack environments with VMware Smart Assurance 10.0

CSPs deploying OpenStack for their 4G and 5G networks require robust service assurance capabilities with monitoring and management tools that will allow CSPs to deliver highly reliable networks and ensure high QoS and stringent SLAs are met.  The latest release of VMware Smart Assurance with VIO 6.0, provides assurance capabilities that will deliver service impact and root-cause analysis with visibility across physical and virtual OpenStack networks, as well as multi-cloud networks.  With VMware Smart Assurance and VIO 6.0, CSPs will gain an automated approach to operational intelligence to reduce service impact and operational expenses.

Additional Resources

  • Latest VMware Integrated OpenStack information can be found here
  • Read the latest VIO 6.0 release notes and get technical documentation here
  • Learn more at vmware.com



VMware & Nokia at MWC Americas 2018


Nokia and VMware continue to collaborate to deliver integrated and proven end-to-end NFV solutions, widely deployed at communications service providers (CSPs) globally.  Together we bring joint value: decades of telecom innovation combined with virtualization and cloud expertise. Powered by continued investment in its networking and cloud technology portfolio, and supported by the research and creativity of Bell Labs, Nokia has the industry's most complete end-to-end telecom portfolio of products and services. Likewise, VMware brings more than two decades of innovation and has delivered the most trusted and widely deployed virtualization and cloud solutions to the market. The end result is a solid, unified solution that helps our customers address today's rapid technology shifts and capitalize on new opportunities as they arise, ultimately positioning CSPs for rapid growth by accelerating their digital transformation journey to the new era of enhanced mobile broadband, IoT, and 5G.

Leveraging the VMware vCloud NFV platform, Nokia and VMware have delivered the following joint Virtual Network Functions (VNFs).

Additional details, such as solution overviews, solution white papers, and Ready for NFV certification details, can be found on the solution exchange: https://marketplace.vmware.com/vsx/company/nokia

Nokia and VMware have put their heads together to simplify the day-to-day operational requirements of efficiently running NFV services.  To learn more about our joint capabilities for intelligently automating network functions on a virtualized telco cloud, visit us at Mobile World Congress USA in Los Angeles at South Hall Booth 1714 and see a live demonstration.  Here is a sneak peek of what you will see:

In one demo, we will show an instance of a Nokia Cloud Packet Core network function exceeding its defined capacity thresholds.  Our joint solution will demonstrate the automated scaling of the Cloud Packet Core network function to address dynamic VNF demands.

As more and more CSP networks transition from bare metal to NFV, upgrading the NFV infrastructure has quickly become a significant operational challenge. Over the past 20 years, VMware has set the industry standard for seamless, effortless upgrades, and this fundamental capability is also readily available in VMware's vCloud NFV platform. Nokia and VMware have partnered to demonstrate how to upgrade the NFV infrastructure without disrupting traffic through Nokia's Cloud Packet Core functions.  Join us to see how effortless NFVI upgrades can be with Nokia and VMware.