
Telco Bits & Bytes – 9 July 2020

Our regular roundup of the technology news that matters

Welcome to the next edition of our ‘Telco Bits & Bytes’ news blog. Here we share news and insights from across VMware and the technology industry that caught our attention, so you don’t miss a beat. Let us know in the comments below how we can improve this service and enjoy!

VMware Bits

Technology Bytes

For daily updates, follow us on LinkedIn and our website

Adapting to a Changing Landscape and Shifting Requirements with Built-in Security

Adapting to Emerging Security Requirements

It’s easy to forget the role of security and compliance in delivering an excellent customer experience — consumers rightfully dread the thought of interrupted communications, breached personal data, or hacked credit card numbers. A highly secure network contributes to a differentiated and distinguished service that attracts and retains customers, but sometimes it’s hard to remember that fact because the value of security lies in the absence of attention: For CSPs and customers alike, no news is good news.

With the shift toward 5G, however, some security standards for CSPs have gone out of date. In the U.K., for instance, the NCSC’s previous telecoms assurance standard, CAS(T), has been retired. The NCSC formally closed CAS(T) on Jan. 31, 2020, saying that the “technical aspects of the standard do not align to the evolving telecommunications landscape and will quickly become out-of-date, without NCSC maintenance. Therefore, whilst it will remain available on the NCSC website for historic purposes, the NCSC does not recommend its continued use.”

CAS(T) is being replaced in part by the NCSC’s new telecommunications security requirements, or TSRs, which are focused on improving network security. Based on a framework of contemporary security principles, the requirements provide extensive implementation guidance for technology that is critically important as CSPs shift their networks, equipment, operations, services, and business models to 5G. Software-defined networking, cloud native network functions, containerized applications, orchestration, and the virtualization plane take center stage.

“The potential economic and social benefits of 5G and full-fibre digital connectivity,” the NCSC’s report says, “can only be realized if we have confidence in the security and resilience of the underpinning infrastructure.”

The Benefits of Built-in Security

When security is an intrinsic part of the technology from start to finish — that is, when security is built into the software and infrastructure from the beginning instead of bolted on as an afterthought — it empowers you to quickly, effectively, and economically capitalize on the new market opportunities of 5G without undermining the security of the virtualized network or its management.

Why? Because intrinsic security improves your ability to adapt to change. The VMware model, for example, helps you more easily and quickly modify security settings, network policies, and even the network topology itself to meet emerging telecommunications security requirements, such as the NCSC’s TSRs.


The Shifting Security Landscape

Here in the United States, NIST has also shelved at least one of its old telecommunications guidelines, and a replacement hasn’t been forthcoming yet. The previous guideline, Telecommunications Security Guidelines for Telecommunications Management Network (SP 800-13), was withdrawn as outdated on August 1, 2018. Meanwhile, NIST and the National Cybersecurity Center of Excellence are working on a 5G security project titled Preparing a Secure Evolution to 5G; so far, however, only the project description has been published, which makes taking concrete action difficult.

VMware has published two new white papers to discuss the security challenges that CSPs are facing as they evolve their network architectures to 5G and how VMware is addressing these security challenges with our existing products and solutions:

Intrinsic Security for Telco Clouds at the Dawn of 5G

This technical white paper summarizes the security risks and requirements that CSPs face as they transition to 5G networks and increasingly rely on virtualization, containers, and cloud computing. The paper illustrates how VMware technology protects telecom networks with an array of built-in security measures, many of which can be automated.


Intrinsic Security for Telco Clouds: Protect infrastructure with built-in measures

This short paper explains how the VMware Telco Cloud emphasizes intrinsic security—integrated with the software and infrastructure so that security is programmable, automated, adaptive, and context-aware.

With the VMware Telco Cloud, security is built into the software and infrastructure, which improves visibility, reduces complexity, and enables CSPs to focus their defenses by applying automated security measures like micro-segmentation in the right place.

Micro-segmentation is a pertinent example. It divides a virtual data center and its workloads into logical segments, each of which contains a single workload. You can then apply security controls to each segment, restricting an attacker’s ability to move to another segment or workload. This approach reduces the risk of attack, limits the possible damage from an attack, and improves your overall security posture.
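As a conceptual illustration, the default-deny model behind micro-segmentation can be sketched in a few lines. The segment names and rules below are hypothetical, not NSX configuration:

```python
# Illustrative sketch of micro-segmentation: each workload lives in its own
# segment, and traffic between segments is denied unless a rule allows it.
# Segment names and rules are hypothetical, not VMware NSX API calls.

ALLOW_RULES = {
    # (source segment, destination segment, destination port)
    ("web", "app", 8443),
    ("app", "db", 5432),
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny: permit only explicitly allowed flows."""
    return (src, dst, port) in ALLOW_RULES

# A compromised web workload cannot reach the database directly:
assert is_allowed("web", "app", 8443)
assert not is_allowed("web", "db", 5432)
```

The point of the sketch is the default: lateral movement fails unless an operator has written an explicit allow rule for that exact flow.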

Isolating and Automating Security with the VMware Telco Cloud

The NCSC’s TSRs, then, seem to be prescient — they furnish an early government-driven perspective on security and compliance for CSPs as they roll out 5G networks and services.

The security measures that are built into the VMware Telco Cloud help you readily adapt to the NCSC’s key high-level security imperatives for virtualized networks, such as isolating the management network, segmenting traffic, and automating administration.



Telco Bits & Bytes – 25 June 2020



Getting ready for the 5G transformation with VMware Ready for Telco Cloud

Today, we are announcing another milestone in our support for communications service providers (CSPs) as they transition to a software-defined telco cloud. We are enhancing our award-winning VMware Ready for NFV program by including interoperability and readiness for VMware Telco Cloud Automation.

The addition of VMware Telco Cloud Automation to the program further accelerates the ability of CSPs to deploy software-based workloads on the VMware Telco Cloud platform. To reflect this expanded scope, we are renaming the program VMware Ready for Telco Cloud. The original Ready for NFV certification becomes Ready for Telco Cloud Infrastructure, while the new certification level for VMware Telco Cloud Automation is called Ready for Telco Cloud.

As the telecommunications industry continues its migration to 5G, two critical shifts are occurring that will shape the industry for years to come: a need for improved automation and the migration of network functions to a cloud native architecture. The expanded scope of the VMware Ready for Telco Cloud program reflects VMware’s commitment to supporting our CSP customers as they engage with these shifts. The new certification level, VMware Ready for Telco Cloud, introduces our telco partner ecosystem to VMware’s multi-cloud, standards-based automation and orchestration product, aptly called VMware Telco Cloud Automation. A partner network function that successfully completes this level of certification has demonstrated its support for more automated deployment and operations.

The updated program offers two paths for partners by introducing two levels of certifications:

The Ready for Telco Cloud Infrastructure certification identifies virtual network functions that have been proven to interoperate with the core infrastructure layers of the VMware Telco Cloud, as referenced by the ETSI-compliant VMware vCloud NFV Reference Architecture. The focus of this certification level is compatibility with the virtualized infrastructure manager (VIM) as well as with the other core components: VMware vSphere for virtualized compute, NSX for virtualized networking, and vSAN for virtualized storage. The fully automated program is available at no cost to VMware partners, both in the VMware on-premises certification lab and in the VMware cloud as self-certification.

The second and new level of certification is called Ready for Telco Cloud. This certification additionally ensures that network functions are ready for deployment and lifecycle operations through VMware Telco Cloud Automation. At this level, the VMware team collaborates with partners to create an ETSI-compliant descriptor as well as workflow, resource, and commissioning artifacts for a validated and tested Cloud Service Archive (CSAR). The built-in generic VNF Manager (gVNFM) function within VMware Telco Cloud Automation and the network function designer are central elements of this tier. This higher-level certification is available in the Ready for Telco Cloud lab on VMware premises and includes the VMware Ready for Telco Cloud Infrastructure certification as a prerequisite.

The following diagram illustrates the path from the Ready for Telco Cloud Infrastructure certification to the Ready for Telco Cloud certification level:


The telecommunication industry has been on a journey toward software-based network functions that will leverage cloud-native architecture and design. Eventually, software development and delivery models will evolve to become more collaborative, significantly faster, and more automated.

We are committed to supporting our customers in their journey and are more than happy to share the extensive experience we have in the area with our partners. We see a logical progression in the way we engage with our telco partners:

  1. Ensure that the network function is interoperable with the fundamental cloud platform
  2. Automate the deployment and lifecycle operations

As containerized network functions become available, we will continue supporting our ecosystem. We see full alignment between suppliers and customers: we all want to accelerate the adoption of software-based network functions, increase component integration, and elevate innovation. This is why the VMware Ready for Telco Cloud program has embraced automation, cloud labs, and continued evolution.

Connect with us: To certify a network function through the VMware Ready for Telco Cloud program, reach out to us at TelcoCloudCertification@vmware.com.

Telco Bits & Bytes – 11 June 2020


Introducing VMware Integrated OpenStack 7.0

VMware recently announced VMware Integrated OpenStack 7.0. VMware Integrated OpenStack delivers out-of-the-box OpenStack functionality and an easy configuration workflow. We are truly excited about our latest OpenStack distribution, as this release enables customers to take advantage of advancements in the upstream Train release and is fully compatible with VMware vSphere 7.0 and NSX-T Data Center 3.0.

For our telco customers, VMware Integrated OpenStack 7.0 offers a 5G-ready platform that meets the demands of their 5G core and other workloads. VMware Integrated OpenStack 7.0 is built with VMware Tanzu Kubernetes Grid as its control plane, providing resilience in addition to availability. Upgrading directly from VMware Integrated OpenStack 5.1 or 6.0 to 7.0 is seamless, with zero data plane downtime. We are excited to bring these features to VMware Integrated OpenStack 7.0!

VMware Integrated OpenStack 7.0 Feature Details:

Feature Enhancements:

OpenStack Release:

  • Alignment with upstream OpenStack Train release

Seamless Integration with VMware SDDC:

  • Interop with the latest vSphere 7.0, vSAN and NSX-T 3.0
  • vRealize Operations Management Pack
  • vRealize Log Insight


  • Selective vCPU Pinning: Previously, VMs set to high latency sensitivity required full CPU reservation, and all vCPUs were pinned to physical cores. In VMware Integrated OpenStack 7.0, you can specify which vCPUs need to be pinned to physical cores. This feature supports mixing high-performance applications and non-critical applications while providing better CPU resource utilization, and it improves consolidation ratios and virtualization ROI for telco NFV workloads.
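The scheduling idea behind selective pinning can be sketched as follows. The helper is purely illustrative; VMware Integrated OpenStack exposes this capability through flavor metadata, and the exact keys should be taken from the 7.0 documentation:

```python
# Illustrative only: pin a chosen subset of a VM's vCPUs to dedicated
# physical cores, leaving the rest floating (and therefore consolidable).
# This mimics the concept, not the VIO or Nova implementation.

def plan_pinning(num_vcpus: int, pinned: set[int], free_cores: list[int]) -> dict[int, int]:
    """Map each pinned vCPU to its own physical core; others keep floating."""
    if pinned and max(pinned) >= num_vcpus:
        raise ValueError("pinned vCPU index out of range")
    if len(pinned) > len(free_cores):
        raise ValueError("not enough free physical cores to pin")
    return dict(zip(sorted(pinned), free_cores))

# A 4-vCPU VM where only vCPUs 0 and 1 run the latency-sensitive workload:
assert plan_pinning(4, {0, 1}, [8, 9, 10]) == {0: 8, 1: 9}
```

Because vCPUs 2 and 3 stay unpinned, they can share cores with non-critical workloads, which is where the improved consolidation ratio comes from.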


  • Support for the NSX-T 3.0 Policy API for L3: Enhances the intent-based API and policy UI to retrieve runtime information on the gateways.
  • Network Trunk services: The trunk service plugin is enabled by default with the Neutron NSX-T Policy Plugin. It enables VNFs to connect to many networks at once through a single vNIC and to dynamically connect to or disconnect from networks, and it provides containers with network isolation inside Nova instances. It allows:
    • Multiple networks to be connected to an instance using a single virtual NIC
    • Multiple networks to be presented to an instance by connecting it to a single port
  • IPv6 Support: Full IPv6 data plane support
  • FWaaS: Leverages the Tier-1 edge firewall; fully compatible with the Neutron FWaaS API spec, and supports the FWaaS v2 API in VMware Integrated OpenStack 7.0
  • LBaaS: neutron-lbaas and neutron-lbaas-dashboard have been deprecated since the Queens release cycle. To align with the OpenStack community, the LBaaS service in VMware Integrated OpenStack 7.0 has been migrated from Neutron LBaaS to Octavia while preserving the LBaaS v2 API.
  • SR-IOV Network Redundancy Support: Provides high availability for VM connectivity through SR-IOV networks by ensuring virtual functions are scheduled to separate physical functions
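The trunk concept above, many networks multiplexed over one vNIC, can be pictured as a small data model. This is purely conceptual, not the Neutron trunk API:

```python
# Toy model of a trunk: one parent port (the VM's single vNIC) carrying
# several subports, each mapped to a network by VLAN segmentation ID.
# Conceptual illustration only; not the Neutron trunk API.

class Trunk:
    def __init__(self, parent_port: str):
        self.parent_port = parent_port
        self.subports: dict[int, str] = {}  # VLAN ID -> network name

    def add_subport(self, vlan_id: int, network: str) -> None:
        """Dynamically connect the instance to another network."""
        if vlan_id in self.subports:
            raise ValueError(f"VLAN {vlan_id} already in use on this trunk")
        self.subports[vlan_id] = network

    def remove_subport(self, vlan_id: int) -> None:
        """Dynamically disconnect without touching the vNIC itself."""
        del self.subports[vlan_id]

trunk = Trunk(parent_port="vnic0")
trunk.add_subport(100, "mgmt-net")
trunk.add_subport(200, "data-net")
assert len(trunk.subports) == 2  # two networks over a single vNIC
```

The per-VLAN isolation is what lets containers inside a Nova instance keep separate network identities without additional vNICs.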


  • EVPN and L3 Multicast: Configured through NSX-T (transparent to VIO)
  • VRF Lite Support: VRF Lite in NSX-T 3.0 provides per-tenant data plane isolation through Virtual Routing and Forwarding (VRF) on the Tier-0 gateway, with a separate routing table, NAT, and firewall within each VRF, eliminating the need for separate Tier-0 deployments for multiple tenants.

VMware Integrated OpenStack Management Cluster update:

Management API: VMware Integrated OpenStack 7.0 introduces a public API for automating management of the platform itself. The fully open, Kubernetes-based lifecycle management APIs use CRD extensions and cover day-1 deployment and day-2 management of VMware Integrated OpenStack services.


  • All OpenStack components run on Python 3

VMware Integrated OpenStack Control Plane Update: The dedicated Kubernetes cluster underpinning the VMware Integrated OpenStack control plane has been upgraded to Tanzu Kubernetes Grid (Kubernetes 1.17.2).

Platform Lifecycle Automation:

  • Skip-release upgrades:
    • Support upgrade from 6.0 to 7.0
    • Support upgrade from 5.1 to 7.0 directly
  • Patch management:
    • Patches are delivered as container images rolled out via Kubernetes
    • viocli command set for patch management

VMware Integrated OpenStack LCM UI Enhancement:

  • Admins can see each service’s desired vs. observed state
  • Easier to create Neutron availability zones directly from UI

VMware Integrated OpenStack CLI updates: The enhanced CLI includes:

  • Ability to start/stop each individual service, automatically propagating changes with a single command on the Custom Resource (CR)
  • More secure backups
  • Bash completion and CLI shortcuts added to LCM

VMware Integrated OpenStack 7.0 Max Configuration Data: VMware configuration maximums are published at https://configmax.vmware.com/


Telco Bits & Bytes – 28 May 2020


VMware vCloud Director 9.7 Appliance Installation

Starting with version 9.7, the vCloud Director appliance includes an embedded PostgreSQL database with a high availability (HA) function, whereas vCloud Director on Linux uses an external database that must be installed and configured before you install vCloud Director.

You can create a vCloud Director server group by deploying one or more instances of the vCloud Director appliance, with the first member as a primary cell and subsequent members as standby or vCD application cells. You deploy the vCloud Director appliance by using the vSphere Client (HTML5), the vSphere Web Client (Flex), or VMware OVF Tool.

This blog post describes deploying a vCloud Director server group using VMware OVF Tool. A single OVA file offers five deployment configurations to choose from: Primary node (small and large), Standby node (small and large), and vCD Cell Application node. The large vCloud Director primary appliance size is suitable for production systems, while the small is suitable for lab or test systems.
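A scripted deployment can be assembled along the following lines. The flags shown (--acceptAllEulas, --powerOn, --deploymentOption, --name, --datastore, --net:) are standard OVF Tool options, but the deployment-option value, cell name, and port groups are placeholders; running ovftool against the OVA with no target lists the valid deployment options and OVF properties for your build:

```python
# Sketch: drive an OVF Tool deployment of the primary appliance from Python.
# The deployment-option value and network/port-group names are assumptions;
# probe the OVA first to discover the real ones.

def build_ovftool_cmd(ova: str, vi_target: str) -> list[str]:
    return [
        "ovftool",
        "--acceptAllEulas",
        "--powerOn",
        "--deploymentOption=primary-small",   # assumption: "Primary node (small)"
        "--name=vcd-cell-01",
        "--datastore=vsanDatastore",
        "--net:eth0 Network=http-portgroup",  # HTTP traffic network
        "--net:eth1 Network=db-portgroup",    # database traffic network
        ova,
        vi_target,
    ]

cmd = build_ovftool_cmd(
    "VMware_vCloud_Director.ova",
    "vi://administrator@vcenter.example.com/DC/host/Cluster",
)
# import subprocess; subprocess.run(cmd, check=True)  # uncomment to deploy
```

Mapping eth0 and eth1 to separate port groups at deploy time is what keeps the HTTP and database traffic isolated, as described below.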

For more details, see the vCloud Director Installation, Configuration, and Upgrade Guide.

Note:

  • Mixed vCloud Director installations on Linux and vCloud Director appliance deployments in one server group are unsupported.
  • The vCloud Director appliance does not support external databases.

To create a deployment with a database HA cluster, deploy one instance of the vCloud Director appliance as a primary cell and two instances as standby cells (in this example we deploy only one standby cell; for a production environment we recommend two standby cells). vCD application cells connect to the database in the primary cell.


The recommended datastore for the appliance deployment is a vSAN datastore with a predefined storage policy.

Next, regarding network configuration: starting with version 9.7, the vCloud Director appliance is deployed with two networks, eth0 and eth1, so that you can isolate the HTTP traffic from the database traffic. Different services listen on one or both of the corresponding network interfaces. The first interface, eth0, is primarily used for services such as HTTP (ports 80 and 443), the console proxy (port 8443), and JMX (ports 61611 and 61616). The second interface, eth1, is used for database communication (port 5432). Both interfaces are used for services such as SSH and the management UI. Ensure you define these interface subnets based on your testbed requirements; here we use IP addresses in the same subnet for both interfaces.
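The interface-to-port mapping above can be captured in a small lookup, which is handy when writing firewall rules for the two appliance subnets:

```python
# Port layout of the vCloud Director 9.7 appliance, per the description above:
# eth0 carries HTTP/S, the console proxy, and JMX; eth1 carries PostgreSQL;
# shared services (SSH, management UI) listen on both interfaces.

ETH0_PORTS = {80, 443, 8443, 61611, 61616}  # http/https, console proxy, JMX
ETH1_PORTS = {5432}                         # embedded PostgreSQL

def interface_for(port: int) -> str:
    """Return which appliance interface a service port listens on."""
    if port in ETH1_PORTS:
        return "eth1"
    if port in ETH0_PORTS:
        return "eth0"
    return "eth0/eth1"  # shared services such as SSH and the management UI

assert interface_for(5432) == "eth1"  # database traffic stays on eth1
```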

Note:

After you deploy the vCloud Director appliance, you cannot change the eth0 and eth1 network IP addresses or the hostname of the appliance. If you want the vCloud Director appliance to have different addresses or hostname, you must deploy a new appliance.

To see if the deployments succeeded, you can use the eth0 IP address of the primary cell to log in to the provider admin portal, or you can check the status of the cell by logging in to the appliance management user interface at https://vcd_ip_address:5480.

To deploy standby cells, follow the same steps to deploy an appliance with the deployment configuration set to Standby small. Once deployed, the embedded databases are configured in replication mode with the primary database.


After the initial standby appliance deployment, the replication manager begins synchronizing its database with the primary appliance database. During this time, the vCloud Director database and therefore the vCloud Director UI are unavailable.

Once deployment is successful, you can verify the database high availability cluster status by logging in to either cell’s admin portal or the appliance management user interface.

What to do next:

If a standby cell is not in a running state, deploy a new standby cell.

If the primary cell is not in a running state, follow Recover from a Primary Database Failure in a High Availability Cluster.

Detailed step-by-step demo videos can be found on our Telco YouTube Channel.


VMware vCloud Director Integration with vCenter and NSX

Once the vCloud Director appliance is deployed and configured, register vCenter with vCloud Director from the vCloud Director provider admin portal.

Step 1: Log in to the vCloud Director service provider admin portal at https://VCD_FQDN_or_IP_address/provider with administrator credentials, then go to vSphere Resources >> vCenters >> Add vCenter Server.

Step 2: Based on your environment, skip or add the NSX-V Manager instance that is associated with vCenter Server, and complete the registration.

Step 3: Once vCenter is registered, you can register NSX-T separately, as it is independent of vCenter, from the admin portal >> vSphere Resources >> NSX-T Managers >> Register NSX-T Manager.

Once registration is complete, you can perform various operations to utilize the NSX-T features. For example, network pool creation is the same as with VXLAN. You can also import all the networks defined in NSX-T.


Now you can create a Provider VDC (PVDC), which is a collection of compute, memory, and storage resources from a vCenter Server instance that relies on NSX Data Center for vSphere or NSX-T Data Center for network resources. A Provider VDC provides resources to organization VDCs. A Provider VDC can be created using the vCloud Director API, a powerful and easy-to-use way to get information about organizations, VDCs, networking, and vApps. In this blog post, the Postman REST API client for Chrome is used to make the vCloud Director API calls.


Step 1: Get the “x-vcloud-authorization” token value, which is required for further REST API calls.
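As a sketch of Step 1 without Postman, the session login request can be built with Python’s standard library: POST to /api/sessions with Basic credentials in user@org form, then read the token from the response header. The host, credentials, and API version below are placeholders; GET /api/versions on a live cell lists the versions it actually supports:

```python
# Sketch of the vCloud Director session login. Host, user, org, and password
# are placeholders; the Accept version (32.0 assumed here for vCD 9.7) should
# be confirmed against GET /api/versions on your cell.
import base64
import urllib.request

API_VERSION = "32.0"  # assumption; check /api/versions

def build_session_request(host: str, user: str, org: str, password: str) -> urllib.request.Request:
    creds = base64.b64encode(f"{user}@{org}:{password}".encode()).decode()
    return urllib.request.Request(
        url=f"https://{host}/api/sessions",
        method="POST",
        headers={
            "Accept": f"application/*+xml;version={API_VERSION}",
            "Authorization": f"Basic {creds}",
        },
    )

req = build_session_request("vcd.example.com", "administrator", "System", "secret")
# resp = urllib.request.urlopen(req)               # run against a live cell
# token = resp.headers["x-vcloud-authorization"]   # reuse in Steps 2 through 7
```

Every subsequent call in Steps 2 through 7 then carries that token in an x-vcloud-authorization request header.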


Step 2: Retrieve the NSX-T Manager registered to this cloud.

Step 3: Retrieve the list of vCenter servers registered to this cloud.

Step 4: Retrieve the resources on a vCenter Server.

Step 5: Retrieve a list of resource pools from a vCenter Server.

Step 6: Create a Provider VDC backed by NSX-T Data Center.

Step 7: Finally, verify the Provider VDC from the vCloud Director Service Provider Admin portal.

Next, you can create organization VDCs and external networks based on your environment.

Detailed step-by-step demos can be found on our Telco YouTube Channel.

Multi-Tenancy with vCloud Director and NSX-T

This blog post walks through how to achieve secure multi-tenancy with vCloud Director and NSX-T. The reference topology below is used to show the network resource isolation: we will create two tenants, Tenant A with two VMs and Tenant B with one VM.

Network isolation is achieved with the advanced networking capabilities of NSX-T Data Center, which provide fully isolated and secure traffic paths across workloads and the tenant switching and routing fabric. As described in Multi-Tenancy Design Objectives, NSX-T Data Center introduces a two-tiered routing architecture enabling the management of networks at the provider (Tier-0) and tenant (Tier-1) tiers. As shown in the reference topology above, a provider routing tier is attached to the physical network for North-South traffic, while the tenant routing context can connect to the provider Tier-0 and manage East-West communications. In vCloud Director, each Organization VDC has a single Tier-1 distributed router that provides the intra-tenant routing capabilities.
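The two-tier model can be sketched as a toy route lookup: east-west traffic stays on the tenant’s Tier-1 router, north-south traffic adds the shared Tier-0 hop, and there is simply no path between tenants. Tenant, network, and router names are illustrative only:

```python
# Toy route lookup for NSX-T two-tier routing; not an NSX API.
# Each Org VDC owns one Tier-1 router; the provider Tier-0 is shared.

TIER1_OF = {
    "tenant-a-net": "T1-tenant-a",
    "tenant-b-net": "T1-tenant-b",
}

def route(src_net: str, dst_net: str) -> list[str]:
    if src_net == dst_net:
        return [TIER1_OF[src_net]]                     # east-west, same tenant
    if dst_net == "external":
        return [TIER1_OF[src_net], "T0-provider"]      # north-south via Tier-0
    raise PermissionError("no route between tenants")  # isolation by design

assert route("tenant-a-net", "external") == ["T1-tenant-a", "T0-provider"]
```

This is exactly what Steps 14 and 15 below verify empirically: pings succeed within a tenant but fail between tenants.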


Step 1: From the vCloud Director admin portal, create two Organizations, one for each tenant: Tenant A and Tenant B.

Step 2: Create two Organization VDCs, one for each tenant, using the wizard.

Step 3: In NSX-T, create two logical switches using overlay networks and two uplink logical switches using VLANs, one of each for each tenant.

Step 4: In NSX-T, create two Tier-0 routers, one for each tenant: Tenant A (high-availability mode Active-Active) and Tenant B (high-availability mode Active-Standby).

Step 5: In NSX-T, create two Tier-1 routers, one for each tenant.

Step 6: In NSX-T, create uplink router ports on each of the Tier-0 routers, for both tenants, so that virtual machines can connect using the uplink logical switches created earlier.

Step 7: Enable route redistribution and create a new redistribution criterion that allows the T0 and T1 sources on each of the Tier-0 routers, for both tenants.

Step 8: Create downlink ports on each of the Tier-1 routers, using the logical switches created earlier; these serve as the gateways for the tenants’ virtual machines.

Step 9: From each tenant’s vCloud Director tenant portal, import the logical networks created for that tenant in NSX-T and add static IP pools in the corresponding subnet.

Step 10: Create a new vApp for Tenant A, adding two virtual machines as per the reference topology.

Step 11: Add the networks imported from NSX-T to the vApp.

Step 12: For each VM in the vApp, edit the network settings (for example, for VM-1 in Tenant A) to select the newly added network and the static IP pool created earlier.

Step 13: Power on the vApp and repeat steps 9-12 for Tenant B.

Step 14: Verify the connectivity between the virtual machines in Tenant A. The results show a successful ping between VM-1 and VM-2 in Tenant A.

Step 15: Verify the connectivity between the virtual machines in Tenant A and Tenant B. The ping between the VMs in Tenant A and the VM in Tenant B fails, confirming secure multi-tenancy between the tenants.

Detailed step-by-step demos can be found on the Telco YouTube channel.