
Tag Archives: network virtualization

Cross vCenter Networking & Security with VMware NSX

NSX 6.2 was released on August 20, 2015. One of the key features in NSX 6.2 is Cross vCenter Networking and Security. This new capability scales NSX vSphere across vCenter boundaries. Now, one can span logical networking and security constructs across vCenter boundaries irrespective of whether the vCenters are in adjacent racks or across datacenters (up to 150ms apart). This enables us to solve a variety of use cases including:

  • Capacity pooling across vCenters
  • Simplifying data center migrations
  • Cross vCenter and long distance vMotion
  • Disaster recovery

With Cross vCenter Networking & Security one can extend logical switches (VXLAN networks) across vCenter boundaries enabling a layer 2 segment to span across VCs even when the underlying network is a pure IP / L3 network. However, the big innovation here is that with NSX we can also extend distributed routing and distributed firewalling seamlessly across VCs to provide a comprehensive solution as seen in the figure below. Continue reading

VMware NSX – It’s About the Platform Ecosystem

The basis of competition has shifted from individual products and technologies to platforms, but with everyone aspiring to be a platform the bar is set high. A platform must be a value-creation entity, underpinned by a robust architecture that includes a set of well-integrated software artifacts and programming interfaces to enable reuse and extensibility by third parties. Platforms must support an ecosystem that can function in a unified way, foster interactions among its members and orchestrate its network of partners. And finally, platforms must adhere to the network effect theory which asserts that the value of a platform to a user increases as more users subscribe to it, in effect, creating a positive feedback loop.

The VMware NSX network virtualization platform meets these criteria resoundingly. NSX is specifically designed to provide a foundation for a high-value, differentiated ecosystem of partners that includes some of the networking industry’s most significant players. The NSX platform leverages multi-layered network abstractions, an extensible and distributed service framework with multiple entry points, and transparent insertion and orchestration of partner services. What distinguishes NSX from other platforms is its inherent security constructs, which partner solutions inherit, and a context-sharing and synchronization capability that allows partners to fine-tune the delivery of their services on the NSX platform inside the data center in a closed feedback loop. Continue reading

VCDX-NV Interview: Nemtallah Daher Discusses VMware NSX Certification

Nemtallah Daher is a Senior Network Delivery Consultant at the consulting firm AdvizeX Technology. Recently he took some time out of his day to talk with us about why, as a networking guy, he thinks learning about network virtualization is critical to furthering one’s career.


I’ve been at AdvizeX for about a year now. I do Cisco, HP, data center stuff, and all sorts of general networking things: routing, switching, data center, UCS. That kind of stuff. Before coming to AdvizeX, I was a senior network specialist at Cleveland State University for about 20 years.

I started at Cleveland State in 1988 as a systems programmer, working on IBM mainframe doing CICS, COBOL and assembler. About 2 years after I started at Cleveland State, networking was becoming prevalent, and the project I was working on was coming to an end, so they asked me if I would help start a networking group. So from a small lab here, a building here, a floor there, I built the network at Cleveland State. We applied for a grant to get some hardware, applied for an IP address, domain name, all these things. There was nothing at the time, so we did everything. We incorporated wireless about 10 years in. Over time it became a ubiquitous, campus-wide network. So that’s my brief history. Continue reading

VCDX-NV Interview: Ron Flax On The Importance Of Network Virtualization

Ron Flax is the Vice President of August Schell, a reseller of VMware products and an IT services company that specializes in delivering services to commercial accounts and the federal government, particularly the intelligence community and the U.S. Department of Defense. Ron is a VCDX-NV certified network virtualization professional and a VMware vExpert. We spoke with Ron about network virtualization and the NSX career path.


The most exciting thing about network virtualization, I think, is the transformative nature of this technology. Networks have been built the same way for the last 20 to 25 years. Nothing has really changed. A lot of new features have been built, a lot of different technologies have come around networks, but the fundamental nature of how networks are built has not changed. But VMware NSX, because it’s a software-based product, has completely altered everything. It enables a much more agile approach to networks: the ability to automate the stand-up and tear-down of networks; the ability to produce firewalling literally at the virtual network interface. And because things are done at software speed, you can now make changes to the features and functions of networking products at software speed. You no longer have to deal with silicon speed. It’s very, very exciting. With a software-based approach, you can just do so much more in such a small amount of time.

What we’re hearing from customers, at this point, is that they’re very interested to learn more. They’re at a phase where they’re ready to get their hands dirty, and they really want to understand it better. What’s driving a lot of adoption today is security; it’s our foot in the door. When you speak with customers about the security aspects, the micro-segmentation capabilities, you may not even have to get to a virtual network discussion. Once you get the security aspect deployed, customers will see it in action and then a few weeks later will say, ‘Hey, you know, can you show me how the new router works?’ or ‘Can you show me how other features of NSX work?’ That’s when you can start to broaden your approach. So these compelling security stories like micro-segmentation or distributed firewalling get you in and get the deployment started, but ultimately it’s the flexibility of being able to deliver networks at speed, in an agile way, through software, through automation, that’s the home run. Continue reading

3 Ways To Get Started With VMware NSX

Over the past 12 months, VMware NSX momentum has continued to grow, as we’ve added new platform capabilities, expanded our partner ecosystem, and of course, had more than 250 customers purchase NSX for deployment. And as interest in VMware NSX has grown with both customers and IT professionals looking to evolve their careers by adding certification in network virtualization, one of the most common questions that we get is “How can I get started with NSX?”

We understand that there is a strong demand for individuals and organizations to get their hands on the NSX technology. Many of you are working towards your initial VCP-NV certification. Others of you are exploring NSX as a way to improve your organization’s agility and security while reducing overall costs.

Here are three ways individuals and companies can get started with NSX. Continue reading

OVS Fall 2014 Conference: Observations and Takeaways

Last week we hosted the Open vSwitch 2014 Fall Conference, which was another great opportunity to demonstrate our continued investment in leading open source technologies. To get a sense of the energy and enthusiasm at the event, take a quick look at this video we captured with attendees.

I’ve been thinking about the key takeaways from everything I saw and everyone I spoke with.

First, there’s huge interest in Open vSwitch performance, both in terms of measurement and improvement. The talks from Rackspace and Noiro Networks/Cisco led me to believe that we’ve reached the point where Open vSwitch performance is good enough on hypervisors for most applications, and often faster than competing software solutions such as the Linux bridge.

Talks from Intel and one from Luigi Rizzo at the University of Pisa demonstrated that by bypassing the kernel entirely through DPDK or netmap, respectively, we haven’t reached the limits of software forwarding performance. Based on a conversation I had with Chris Wright from Red Hat, this work is helping the Linux kernel community look into reducing the overhead of the kernel, so that we can see improved performance without losing the functionality provided by the kernel.

Johann Tönsing from Netronome also presented a talk describing all the ways that Netronome’s NPU hardware can accelerate OpenFlow and Open vSwitch; I’ve talked to Johann many times before, but I had never realized how many different configurations their hardware supports, so this was an eye-opening talk for me.

Next, enhancing Open vSwitch capabilities at L4 through L7 is another exciting area. Our own Justin Pettit was joined by Thomas Graf from Noiro to talk about the ongoing project to add support for NAT and tracking L4 connections, which is key to making Open vSwitch capable of implementing high-quality firewalls. A later talk by Franck Baudin from Qosmos presented L7 enhancements to this capability.

The final area that I saw highlighted at the conference is existing applications for Open vSwitch today. Peter Phaal from InMon, for example, demonstrated applications for sFlow in Open vSwitch. I found his talk interesting because although I knew about sFlow and had talked to Peter before, I hadn’t realized all of the varied uses for sFlow monitoring data. Vikram Dham also showed his uses for MPLS in Open vSwitch and Radhika Hirannaiah her use case for OpenFlow and Open vSwitch in traffic engineering.

I want to thank all of our participants and the organizing committee for helping to put together such an amazing event.


Free Seminar – Advancing Security with the Software-Defined Data Center

We’re excited to take to the road for another edition of our VMware Software-Defined Data Center Seminar Series. Only this time, we’ll be joined by some great company.

VMware & Palo Alto Networks invite you along for a complimentary, half-day educational event for IT professionals interested in learning about how Palo Alto Networks and VMware are transforming data center security.

Thousands of IT professionals attended our first SDDC seminar series earlier this year in more than 20 cities around the globe. Visit VirtualizeYourNetwork.com to browse the presentations, videos, and other content we gathered.

This free seminar will highlight:

  • The Software-Defined Data Center approach
  • Lessons learned from real production customers
  • Using VMware NSX to deliver never-before-possible data center security and micro-segmentation

Who should attend?

People who will benefit from attending this session include:

  • IT, Infrastructure and Data Center Managers
  • Network professionals, including CCIEs
  • Security & Compliance professionals
  • IT Architects
  • Networking Managers and Administrators
  • Security Managers and Administrators


  • 8:30 a.m. Registration & Breakfast
  • 9:00 a.m. VMware: Better Security with Micro-segmentation
  • 10:00 a.m. Palo Alto Networks: Next Generation Security Services for the SDDC
  • 11:00 a.m. NSX & Palo Alto Networks Integrated Solution Demo
  • 11:45 a.m. Seminar Wrap-up
  • 12:00 p.m. Hands-on Workshop
  • 1:30 p.m. Workshop Wrap-up

Check out the schedule and register. Space is limited.

Learn more at http://info.vmware.com/content/26338_nsx_series


Using Differentiated Services to Tame Elephants

This post was co-authored by Justin Pettit, Staff Engineer, Networking & Security Business Unit at VMware, and Ravi Shekhar, Distinguished Engineer, S3BU at Juniper Networks.


As discussed in other blog posts and presentations, long-lived, high-bandwidth flows (elephants) can negatively affect short-lived flows (mice). Elephant flows send more data, which can lead to queuing delays for latency-sensitive mice.

VMware demonstrated the ability to use a central controller to manage all the forwarding elements in the underlay when elephant flows are detected.  In environments that do not have an SDN-controlled fabric, an alternate approach is needed.  Ideally, the edge can identify elephants in such a way that the fabric can use existing mechanisms to treat mice and elephants differently.

Differentiated services (diffserv) were introduced to bring scalable service discrimination to IP traffic. This is done using Differentiated Services Code Point (DSCP) bits in the IP header to signal different classes of service (CoS). There is wide support in network fabrics to treat traffic differently based on the DSCP value.
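To make the marking concrete, here is a small sketch (ours, not code from Open vSwitch or any RFC) of how a 6-bit DSCP value sits in the upper bits of the IP header's DS byte, with the lower two bits reserved for ECN:

```python
# Sketch: how a DSCP value maps into the IP header's DS field.
# The DS field reuses the old 8-bit TOS byte: the upper 6 bits carry
# the DSCP, the lower 2 bits carry ECN. The example value is illustrative.

ECN_MASK = 0x03  # lower two bits of the DS byte

def set_dscp(ds_byte: int, dscp: int) -> int:
    """Return a new DS byte with the 6-bit DSCP set and ECN bits preserved."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit value (0-63)")
    return (dscp << 2) | (ds_byte & ECN_MASK)

# Mark a packet with a hypothetical "elephant" code point of 12,
# keeping whatever ECN bits were already present.
print(set_dscp(0x01, 12))  # 49 (0x31)
```

Because the fabric classifies on the outer header, rewriting only this byte is enough for every underlay hop to recognize the flow's class of service.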

A modified version of Open vSwitch allows us to identify elephant flows and mark the DSCP value of the outer IP header.  The fabric is then configured to handle packets with the “elephant” DSCP value differently from the mice.

Figure 1: Elephants are detected at the edge of the network and signaled to the fabric through DSCP. Based on these code points, the fabric can treat elephant traffic differently from mice.

Detecting and Marking Elephants with Open vSwitch

Open vSwitch’s location at the edge of the network gives it visibility into every packet in and out of each guest.  As such, the vSwitch is in the ideal location to make per-flow decisions such as elephant flow detection. Because environments are different, our approach provides multiple detection mechanisms and actions so that they can be used and evolve independently.

An obvious approach to detection is to just keep track of how many bytes each flow has generated.  By this definition, if a flow has sent a large amount of data, it is an elephant. In Open vSwitch, the number of bytes and an optional duration can be configured. By using a duration, we can ensure that we don’t classify very short-lived flows as elephants. We can also avoid identifying low-bandwidth but long-lived flows as elephants.
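A minimal sketch of this byte-count-plus-duration heuristic (the names and thresholds below are illustrative, not the actual Open vSwitch configuration keys):

```python
# Illustrative elephant detection: a flow is an elephant once it has
# sent at least `byte_threshold` bytes AND has lived at least
# `min_duration` seconds, so short bursts are not misclassified.

class FlowStats:
    def __init__(self, start_time: float):
        self.start_time = start_time
        self.bytes_sent = 0

def is_elephant(flow: FlowStats, now: float,
                byte_threshold: int = 10 * 1024 * 1024,
                min_duration: float = 1.0) -> bool:
    return (flow.bytes_sent >= byte_threshold and
            now - flow.start_time >= min_duration)

flow = FlowStats(start_time=0.0)
flow.bytes_sent = 20 * 1024 * 1024
print(is_elephant(flow, now=0.5))  # False: big burst, but too short-lived
print(is_elephant(flow, now=5.0))  # True: high volume sustained over time
```

A low-bandwidth flow never accumulates enough bytes within a meaningful window to trip the byte threshold, which is how the combination avoids tagging long-lived trickles.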

An alternate approach looks at the size of the packet that is being given to the NIC.  Most NICs today support TCP Segmentation Offload (TSO), which allows the transmitter (e.g., the guest) to give the NIC TCP segments up to 64KB, which the NIC chops into MSS-sized packets to be placed on the wire.
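The arithmetic behind that offload is simple (a worked example, using the common 1460-byte MSS for a 1500-byte MTU):

```python
import math

# A TSO segment handed to the NIC is chopped into MSS-sized packets on
# the wire. With a 1500-byte MTU and 40 bytes of TCP/IP headers, the
# MSS is 1460 bytes.

def wire_packets(tso_bytes: int, mss: int = 1460) -> int:
    """Number of on-the-wire packets produced from one TSO segment."""
    return math.ceil(tso_bytes / mss)

print(wire_packets(65536))  # 45 packets from a single 64KB segment
```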

Because of TCP’s slow start, the transmitter does not immediately begin sending maximum-sized packets to the NIC.  Due to our unique location, we can see the TCP window as it opens, and tag elephants earlier and more definitively. This is not possible at the top-of-rack (TOR) or anywhere else in the fabric, since they only see the segmented version of the traffic.

Open vSwitch may be configured to track all flows with packets of a specified size. For example, by looking for only packets larger than 32KB (which is much larger than jumbo frames), we know the transmitter is out of slow-start and making use of TSO. There is also an optional count, which will trigger when the configured number of packets with the specified size is seen.
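The size-plus-count heuristic can be sketched as follows (parameter names are ours, not the Open vSwitch configuration keys):

```python
from collections import defaultdict

# Illustrative detector: a flow is tagged as an elephant once it has
# produced `count` packets of at least `size` bytes. Seeing, say,
# 32KB+ segments means the sender is past slow start and using TSO.

class ElephantDetector:
    def __init__(self, size: int = 32 * 1024, count: int = 1):
        self.size = size
        self.count = count
        self.big_packets = defaultdict(int)  # flow key -> large-packet count

    def observe(self, flow_key, packet_len: int) -> bool:
        """Record one packet; return True once the flow crosses the threshold."""
        if packet_len >= self.size:
            self.big_packets[flow_key] += 1
        return self.big_packets[flow_key] >= self.count

det = ElephantDetector(size=32 * 1024, count=3)
flow = ("10.0.0.1", "10.0.0.2", 6, 49152, 80)  # hypothetical 5-tuple
print(any(det.observe(flow, 40000) for _ in range(3)))  # True on the 3rd segment
```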

Some new networking hardware provides some elephant flow mitigation by giving higher priority to small flows. This is achieved by tracking all flows and placing new flows in a special high-priority queue. When the number of packets in the flow has crossed a threshold, the flow’s packets from then on are placed into the standard priority queue.

This same effect can be achieved using the modified Open vSwitch and a standard fabric.  For example, by choosing a packet size of zero and threshold of ten packets, each flow will be tracked in a hash table in the kernel and tagged with the configured DSCP value when that flow has generated ten packets.  Whether mice are given a high priority or elephants are given a low priority, the same effect is achieved without the need to replace the entire fabric.
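The zero-size, ten-packet configuration amounts to the following behavior (a sketch of the effect, with an assumed DSCP value; not the in-kernel implementation):

```python
from collections import defaultdict

# Emulating "new flows get priority": with packet size 0 every packet
# counts, and once a flow has generated `threshold` packets its
# subsequent traffic is marked with the configured DSCP value.

ELEPHANT_DSCP = 12  # assumed code point for demoted flows
DEFAULT_DSCP = 0

class FlowTagger:
    def __init__(self, threshold: int = 10):
        self.threshold = threshold
        self.pkts = defaultdict(int)  # per-flow packet count (kernel hash table)

    def dscp_for(self, flow_key) -> int:
        self.pkts[flow_key] += 1
        if self.pkts[flow_key] > self.threshold:
            return ELEPHANT_DSCP
        return DEFAULT_DSCP

tagger = FlowTagger(threshold=10)
marks = [tagger.dscp_for("flowA") for _ in range(12)]
print(marks[:10], marks[10:])  # first ten packets untagged, the rest tagged
```

The fabric then only needs one queueing policy keyed on that code point, rather than per-flow state of its own.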

Handling Elephants with Juniper Devices

Juniper TOR devices (such as the QFX5100) and aggregation devices (such as the MX and EX9200) provide a rich diffserv-based CoS model to achieve these goals in the underlay. Its capabilities include:

  • Elaborate controls for packet admittance with dedicated and shared limits. Dedicated limits provide a minimum service guarantee, and shared limits allow statistical sharing of buffers across different ports and priorities.
  • A large number of flexibly assigned queues; up to 2960 unicast queues at the TOR and 512K at the aggregation device.
  • Enhanced and varied scheduling methods to drain these queues: strict and round-robin scheduling with up to 4-levels of hierarchical schedulers.
  • Shaping and metering to control the rate of injection of traffic from different queues of a TOR in the underlay network. By doing this, bursty traffic at the edge of the physical network can be leveled out before it reaches the more centrally shared aggregation devices.
  • Sophisticated controls to detect and notify congestion, and set drop thresholds. These mechanisms detect possible congestion in the network sooner and notify the source to slow down (e.g. using ECN).

With this level of flexibility, it is possible to configure these devices to:

  • Enforce minimum bandwidth allocation for mice flows and/or maximum bandwidth allocation for elephant flows on a shared link.
  • When experiencing congestion, drop (or ECN-mark) packets of elephant flows more aggressively than those of mice flows. This causes the TCP connections of elephant flows to back off sooner, which alleviates congestion in the network.
  • Take a different forwarding path for elephant flows from that of mice flows.  For example, a TOR can forward elephant flows towards aggregation switches with big buffers and spread mice flows towards multiple aggregation switches that support low-latency forwarding.


By inserting some intelligence at the edge and using diffserv, network operators can use their existing fabric to differentiate between elephant flows and mice. Most networking gear provides some capabilities, and Juniper, in particular, provides a rich set of operations that can be used based on the DSCP.  Thus, it is possible to reduce the impact of heavy hitters without the need to replace hardware. Decoupling detection from mitigation allows each to evolve independently without requiring wholesale hardware upgrades.


Physical Networks in the Virtualized Networking World

[This post was co-authored by VMware’s Bruce Davie and Ken Duda from Arista Networks, and originally appeared on Network Heresy]

Almost a year ago, we wrote a first post about our efforts to build virtual networks that span both virtual and physical resources. As we’ve moved beyond the first proofs of concept to customer trials for our combined solution, this post serves to provide an update on where we see the interaction between virtual and physical worlds heading.

Our overall approach to connecting physical and virtual resources can be viewed in two main categories:

  • terminating the overlay on physical devices, such as top-of-rack switches, routers, appliances, etc.
  • managing interactions between the overlay and the physical devices that provide the underlay.

The latter topic is something we’ve addressed in some other recent posts (here, here and here) — in this blog we’ll focus more on how we deal with physical devices at the edge of the overlay. Continue reading

Geneve, VXLAN, and Network Virtualization Encapsulations

In this post, Bruce Davie and T. Sridhar of VMware’s Networking and Security Business Unit take a look at a proposed new encapsulation protocol that would standardize how traffic is tunneled over the physical infrastructure by network overlay software.


For as long as we’ve been doing Network Virtualization, there has been debate about how best to encapsulate the data. As we pointed out in an earlier post, it’s entirely reasonable for multiple encapsulations (e.g. VXLAN and STT) to co-exist in a single network. With the recent publication of “Geneve”, a new proposed encapsulation co-authored by VMware, Microsoft, Red Hat and Intel, we thought it would be helpful to clarify a few points regarding encapsulation for network virtualization. First, with all the investment made by us and our partners in developing support for VXLAN (described here), we very much intend to continue supporting VXLAN — indeed, we’ll be enhancing our VXLAN capabilities. Second, we want to explain why we believe Geneve is a necessary and useful addition to the network virtualization landscape.

Read the rest of Bruce’s blog on the Office of the CTO blog here.