
Tag Archives: network virtualization

3 Ways To Get Started With VMware NSX

Over the past 12 months, VMware NSX momentum has continued to grow, as we’ve added new platform capabilities, expanded our partner ecosystem, and of course, had more than 250 customers purchase NSX for deployment. And as interest in VMware NSX has grown, both among customers and among IT professionals looking to evolve their careers by adding certification in network virtualization, one of the most common questions we get is “How can I get started with NSX?”

We understand that there is a strong demand for individuals and organizations to get their hands on the NSX technology. Many of you are working towards your initial VCP-NV certification. Others of you are exploring NSX as a way to improve your organization’s agility and security while reducing overall costs.

Here are three ways individuals and companies can get started with NSX.

Complete NSX: Install, Configure, Manage Training – For individuals on the NSX career path, we offer “NSX: Install, Configure, Manage” (ICM) training. ICM training is available as part of our On-Demand Curriculum, or you can take the 5-day instructor-led course. Here is the detailed course description and class schedule. ICM training is a prerequisite for VMware NSX certification. Once ICM training is complete, individuals are submitted for approval and receive access to the NSX bits for their own lab two weeks later. Please note that VMware export restrictions apply. We’re also running a promotion right now: attend this class before December 19 and you’ll receive a voucher for 50 percent off your VCP exam.

Attend VMware NSX Exploration – If you are a customer interested in learning more about how VMware NSX can help your business, VMware and select reseller partners are hosting NSX Exploration sessions. This formal 2-day training will help you get prepared for network virtualization. These are invitation-only sessions for organizations considering an NSX proof-of-concept. Space is limited and attendance is on a first-come, first-served basis. To inquire about NSX Exploration, speak with your VMware SE or account manager to see if there is one in your area. Once you complete NSX Exploration, you will be ready to work with your VMware SE to get NSX installed for your proof of concept.

Purchase a VMware NSX Starter Kit – If you’ve already decided that virtualizing your network with VMware NSX is the best way for you to improve your business agility and security while reducing costs, and you want to get started with a small-scale installation to prove out the technology, VMware can offer you a 50 VM starter kit on a term-based license. Again, speak with your VMware account manager or SE; they can work with you to purchase and install the starter kit, and you’ll be on your way.

If you are not ready to make VMware NSX part of your career path, or are not sure how VMware NSX might benefit your business, the Hands-On Labs (HOL) are still the best way to get familiar with the technology. VMware NSX is by far the most popular HOL right now: more than 73,000 NSX Hands-On Labs have been accessed in 2014, with more than 20,000 of those taken since VMworld 2014 began the week of August 24, 2014. We currently offer three VMware NSX Hands-On Labs:

  • HOL-SDC-1403: VMware NSX Introduction
  • HOL-SDC-1425: VMware NSX Advanced Lab
  • HOL-SDC-1424: VMware NSX in the SDDC

2015 promises to be another great year for NSX, and we hope that you will make it a keystone of the strategic conversations you are having with IT leadership about adopting a software-defined data center approach.

Chris

OVS Fall 2014 Conference: Observations and Takeaways

Last week we hosted the Open vSwitch 2014 Fall Conference, which was another great opportunity to demonstrate our continued investment in leading open source technologies. To get a sense of the energy and enthusiasm at the event, take a quick look at this video we captured with attendees.

I’ve been thinking about the key takeaways from everything I saw and everyone I spoke with.

First, there’s huge interest in Open vSwitch performance, both in terms of measurement and improvement. The talks from Rackspace and Noiro Networks/Cisco led me to believe that we’ve reached the point where Open vSwitch performance is good enough on hypervisors for most applications, and often faster than competing software solutions such as the Linux bridge.

A talk from Intel and another from Luigi Rizzo of the University of Pisa demonstrated that, by bypassing the kernel entirely through DPDK or netmap, respectively, we have not yet reached the limits of software forwarding performance. Based on a conversation I had with Chris Wright from Red Hat, this work is helping the Linux kernel community look into reducing kernel overhead, so that we can see improved performance without losing the functionality the kernel provides.

Johann Tönsing from Netronome also presented a talk describing all the ways that Netronome’s NPU hardware can accelerate OpenFlow and Open vSwitch; I’ve talked to Johann many times before, but I had never realized how many different configurations their hardware supports, so this was an eye-opening talk for me.

Next, enhancing Open vSwitch capabilities at L4 through L7 is another exciting area. Our own Justin Pettit was joined by Thomas Graf from Noiro to talk about the ongoing project to add support for NAT and tracking L4 connections, which is key to making Open vSwitch capable of implementing high-quality firewalls. A later talk by Franck Baudin from Qosmos presented L7 enhancements to this capability.

The final area that I saw highlighted at the conference is existing applications for Open vSwitch today. Peter Phaal from InMon, for example, demonstrated applications for sFlow in Open vSwitch. I found his talk interesting because although I knew about sFlow and had talked to Peter before, I hadn’t realized all of the varied uses for sFlow monitoring data. Vikram Dham also showed his uses for MPLS in Open vSwitch, and Radhika Hirannaiah presented her use case for OpenFlow and Open vSwitch in traffic engineering.

I want to thank all of our participants and the organizing committee for helping to put together such an amazing event.

Ben

Free Seminar – Advancing Security with the Software-Defined Data Center

We’re excited to take to the road for another edition of our VMware Software-Defined Data Center Seminar Series. Only this time, we’ll be joined by some great company.

VMware & Palo Alto Networks invite you to a complimentary, half-day educational event for IT professionals interested in learning about how Palo Alto Networks and VMware are transforming data center security.

Thousands of IT professionals attended our first SDDC seminar series earlier this year in more than 20 cities around the globe. Visit #VirtualizeYourNetwork.com to browse the presentations, videos, and other content we gathered.

This free seminar will highlight:

  • The Software-Defined Data Center approach
  • Lessons learned from real production customers
  • Using VMware NSX to deliver never-before-possible data center security and micro-segmentation

Who should attend?

People who will benefit from attending this session include:

  • IT, Infrastructure and Data Center Managers
  • Network professionals, including CCIEs
  • Security & Compliance professionals
  • IT Architects
  • Networking Managers and Administrators
  • Security Managers and Administrators

Agenda

  • 8:30 a.m. Registration & Breakfast
  • 9:00 a.m. VMware: Better Security with Micro-segmentation
  • 10:00 a.m. Palo Alto Networks: Next Generation Security Services for the SDDC
  • 11:00 a.m. NSX & Palo Alto Networks Integrated Solution Demo
  • 11:45 a.m. Seminar Wrap-up
  • 12:00 p.m. Hands-on Workshop
  • 1:30 p.m. Workshop Wrap-up

Check out the schedule and register. Space is limited.

Learn more at http://info.vmware.com/content/26338_nsx_series

Roger

Using Differentiated Services to Tame Elephants

This post was co-authored by Justin Pettit, Staff Engineer, Networking & Security Business Unit at VMware, and Ravi Shekhar, Distinguished Engineer, S3BU at Juniper Networks.

********************

As discussed in other blog posts and presentations, long-lived, high-bandwidth flows (elephants) can negatively affect short-lived flows (mice). Elephant flows send more data, which can lead to queuing delays for latency-sensitive mice.

VMware demonstrated the ability to use a central controller to manage all the forwarding elements in the underlay when elephant flows are detected.  In environments that do not have an SDN-controlled fabric, an alternate approach is needed.  Ideally, the edge can identify elephants in such a way that the fabric can use existing mechanisms to treat mice and elephants differently.

Differentiated services (diffserv) were introduced to bring scalable service discrimination to IP traffic. This is done using Differentiated Services Code Point (DSCP) bits in the IP header to signal different classes of service (CoS). There is wide support in network fabrics to treat traffic differently based on the DSCP value.
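For a concrete sense of how DSCP is carried, the six DSCP bits occupy the upper part of the IP header’s ToS/Traffic Class byte. The minimal Python sketch below shows one way a sender can set a DSCP value on outgoing packets via the standard IP_TOS socket option; the specific code points are illustrative assumptions, not markings prescribed by this approach.

```python
import socket

# Hypothetical DSCP values for illustration; the actual code points used for
# "elephant" vs. "mice" traffic are a deployment choice.
DSCP_ELEPHANT = 10
DSCP_MICE = 46

def open_marked_socket(dscp):
    """Open a UDP socket whose outgoing packets carry the given DSCP value."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # DSCP occupies the upper 6 bits of the IP ToS byte; the low 2 bits are ECN.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

sock = open_marked_socket(DSCP_ELEPHANT)
sock.sendto(b"payload", ("192.0.2.1", 9000))
```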

A modified version of Open vSwitch allows us to identify elephant flows and mark the DSCP value of the outer IP header.  The fabric is then configured to handle packets with the “elephant” DSCP value differently from the mice.

Figure 1: Elephants are detected at the edge of the network and signaled to the fabric through DSCP. Based on these code points, the fabric can treat elephant traffic differently from mice.

Detecting and Marking Elephants with Open vSwitch

Open vSwitch’s location at the edge of the network gives it visibility into every packet in and out of each guest.  As such, the vSwitch is in the ideal location to make per-flow decisions such as elephant flow detection. Because environments are different, our approach provides multiple detection mechanisms and actions so that they can be used and evolve independently.

An obvious approach to detection is to just keep track of how many bytes each flow has generated.  By this definition, if a flow has sent a large amount of data, it is an elephant. In Open vSwitch, the number of bytes and an optional duration can be configured. By using a duration, we can ensure that we don’t classify very short-lived flows as elephants. We can also avoid identifying low-bandwidth but long-lived flows as elephants.
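As a rough illustration of that byte-count heuristic (a minimal sketch with hypothetical threshold names and values, not the actual Open vSwitch implementation), the per-flow bookkeeping is small:

```python
import time

# Hypothetical thresholds for illustration; real deployments would tune these.
BYTE_THRESHOLD = 10 * 1024 * 1024   # flow must have sent at least 10 MB...
MIN_DURATION_S = 1.0                # ...and have lived at least this long

class FlowStats:
    def __init__(self):
        self.start_time = time.monotonic()
        self.bytes_sent = 0

    def record_packet(self, length):
        self.bytes_sent += length

    def is_elephant(self):
        # A flow is an elephant only if it has sent a lot of data AND persisted
        # for a while: the duration check filters out very short-lived bursts,
        # and the byte check filters out long-lived but low-bandwidth flows.
        duration = time.monotonic() - self.start_time
        return self.bytes_sent >= BYTE_THRESHOLD and duration >= MIN_DURATION_S
```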

An alternate approach looks at the size of the packet that is being given to the NIC.  Most NICs today support TCP Segmentation Offload (TSO), which allows the transmitter (e.g., the guest) to give the NIC TCP segments up to 64KB, which the NIC chops into MSS-sized packets to be placed on the wire.

Because of TCP’s slow start, the transmitter does not immediately begin sending maximum-sized packets to the NIC.  Due to our unique location, we can see the TCP window as it opens, and tag elephants earlier and more definitively. This is not possible at the top-of-rack (TOR) or anywhere else in the fabric, since they only see the segmented version of the traffic.

Open vSwitch may be configured to track all flows with packets of a specified size. For example, by looking only for packets larger than 32KB (much larger than even jumbo frames), we know the transmitter is out of slow start and making use of TSO. There is also an optional count, which triggers when the configured number of packets of the specified size has been seen.
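A simplified sketch of that packet-size mechanism might look like the following; the size and count values, and the detector itself, are illustrative assumptions rather than the modified Open vSwitch code:

```python
# Hypothetical configuration: any flow that hands the NIC a given number of
# segments larger than this size is tagged as an elephant.
PKT_SIZE_THRESHOLD = 32 * 1024   # 32 KB: far larger than any on-the-wire frame
PKT_COUNT_TRIGGER = 3            # require a few such segments before tagging

class TsoElephantDetector:
    def __init__(self, size=PKT_SIZE_THRESHOLD, count=PKT_COUNT_TRIGGER):
        self.size = size
        self.count = count
        self.large_segments = 0

    def observe(self, segment_len):
        """Return True once the flow should be tagged as an elephant."""
        if segment_len > self.size:
            # Only a sender that is out of slow start and using TSO can hand
            # the NIC segments this large, so this is an early elephant signal.
            self.large_segments += 1
        return self.large_segments >= self.count
```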

Some new networking hardware provides some elephant flow mitigation by giving higher priority to small flows. This is achieved by tracking all flows and placing new flows in a special high-priority queue. When the number of packets in the flow has crossed a threshold, the flow’s packets from then on are placed into the standard priority queue.

This same effect can be achieved using the modified Open vSwitch and a standard fabric.  For example, by choosing a packet size of zero and threshold of ten packets, each flow will be tracked in a hash table in the kernel and tagged with the configured DSCP value when that flow has generated ten packets.  Whether mice are given a high priority or elephants are given a low priority, the same effect is achieved without the need to replace the entire fabric.
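In simplified form, the effect of that configuration (packet size of zero, threshold of ten packets) is just per-flow packet counting with a DSCP decision; the flow key, table, and code point below are illustrative assumptions, not the kernel datapath implementation:

```python
from collections import defaultdict

PACKET_THRESHOLD = 10   # the first ten packets of a flow are treated as "mice"
ELEPHANT_DSCP = 10      # hypothetical code point the fabric deprioritizes

flow_packet_counts = defaultdict(int)

def classify(flow_key):
    """Count packets per flow; return the DSCP to set for this packet, or None.

    flow_key would typically be the 5-tuple (src IP, dst IP, protocol,
    src port, dst port). Once a flow exceeds the threshold, every subsequent
    packet is marked with the elephant DSCP so the fabric can queue it
    separately from mice.
    """
    flow_packet_counts[flow_key] += 1
    if flow_packet_counts[flow_key] > PACKET_THRESHOLD:
        return ELEPHANT_DSCP
    return None   # leave the packet's DSCP untouched (treated as mice)
```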

Handling Elephants with Juniper Devices

Juniper TOR devices (such as the QFX5100) and aggregation devices (such as the MX and EX9200) provide a rich, diffserv-based CoS model to achieve these goals in the underlay. These capabilities include:

  • Elaborate controls for packet admittance with dedicated and shared limits. Dedicated limits provide a minimum service guarantee, and shared limits allow statistical sharing of buffers across different ports and priorities.
  • A large number of flexibly assigned queues; up to 2960 unicast queues at the TOR and 512K at the aggregation device.
  • Enhanced and varied scheduling methods to drain these queues: strict and round-robin scheduling with up to four levels of hierarchical schedulers.
  • Shaping and metering to control the rate of injection of traffic from different queues of a TOR in the underlay network. By doing this, bursty traffic at the edge of the physical network can be leveled out before it reaches the more centrally shared aggregation devices.
  • Sophisticated controls to detect and notify congestion, and set drop thresholds. These mechanisms detect possible congestion in the network sooner and notify the source to slow down (e.g. using ECN).

With this level of flexibility, it is possible to configure these devices to:

  • Enforce minimum bandwidth allocation for mice flows and/or maximum bandwidth allocation for elephant flows on a shared link.
  • When experiencing congestion, drop (or ECN-mark) packets of elephant flows more aggressively than those of mice flows. This causes the TCP connections of elephant flows to back off sooner, which alleviates congestion in the network.
  • Forward elephant flows along a different path from mice flows. For example, a TOR can forward elephant flows toward aggregation switches with big buffers and spread mice flows across multiple aggregation switches that support low-latency forwarding.

Conclusion

By inserting some intelligence at the edge and using diffserv, network operators can use their existing fabric to differentiate between elephant flows and mice. Most networking gear provides some capabilities, and Juniper, in particular, provides a rich set of operations that can be used based on the DSCP.  Thus, it is possible to reduce the impact of heavy hitters without the need to replace hardware. Decoupling detection from mitigation allows each to evolve independently without requiring wholesale hardware upgrades.

 

Physical Networks in the Virtualized Networking World

[This post was co-authored by VMware's Bruce Davie and Ken Duda from Arista Networks, and originally appeared on Network Heresy]

Almost a year ago, we wrote a first post about our efforts to build virtual networks that span both virtual and physical resources. As we’ve moved beyond the first proofs of concept to customer trials for our combined solution, this post serves to provide an update on where we see the interaction between virtual and physical worlds heading.

Our overall approach to connecting physical and virtual resources can be viewed in two main categories:

  • terminating the overlay on physical devices, such as top-of-rack switches, routers, appliances, etc.
  • managing interactions between the overlay and the physical devices that provide the underlay.

The latter topic is something we’ve addressed in some other recent posts (here, here, and here); in this blog we’ll focus more on how we deal with physical devices at the edge of the overlay. Continue reading

Geneve, VXLAN, and Network Virtualization Encapsulations

In this post, Bruce Davie and T. Sridhar of VMware’s Networking and Security Business Unit take a look at a newly proposed encapsulation protocol that would standardize how traffic is tunneled over the physical infrastructure by network overlay software.

++++

For as long as we’ve been doing Network Virtualization, there has been debate about how best to encapsulate the data. As we pointed out in an earlier post, it’s entirely reasonable for multiple encapsulations (e.g. VXLAN and STT) to co-exist in a single network. With the recent publication of “Geneve”, a new proposed encapsulation co-authored by VMware, Microsoft, Red Hat and Intel, we thought it would be helpful to clarify a few points regarding encapsulation for network virtualization. First, with all the investment made by us and our partners in developing support for VXLAN (described here), we very much intend to continue supporting VXLAN — indeed, we’ll be enhancing our VXLAN capabilities. Second, we want to explain why we believe Geneve is a necessary and useful addition to the network virtualization landscape.

Read the rest of Bruce’s blog on the Office of the CTO blog here.

Juniper and VMware: Collaborating to Enable The Software-Defined Data Center

The need for businesses to enhance the efficiency of IT and increase application agility is overwhelming. Embracing operational models such as cloud computing helps, but in order to fully leverage these new models companies must explore new ways of handling network connectivity. Network virtualization solutions such as VMware NSX provide an answer for the new cloud-centric networking models. As with any technology, though, network virtualization doesn’t solve some existing challenges by itself: consistent, efficient performance for business-critical applications that span virtual and physical worlds; correlated and integrated management; and enhancing data sharing between the network virtualization solution and the underlying physical network are all critical elements to successful cloud deployments. To address these challenges, we are pleased to announce that Juniper and VMware are expanding our partnership to help our joint customers achieve better application agility for their cloud environments. Continue reading

The Goldilocks Zone: Security In The Software-Defined Data Center Era

Last week, we spoke at the RSA Conference about a new concept in security – the Goldilocks zone.  With the help of Art Coviello, Executive Chairman of RSA, Chris Young, senior vice president and GM of Cisco’s Security business unit, and Lee Klarich, senior vice president of product management from Palo Alto Networks, we departed from the typical discussions about new controls or the latest threats.  We took the opportunity to lay out what we believe is a fundamental architectural issue holding back substantial progress in cyber security, and how virtualization may just provide the answer. The growing use of virtualization and the move towards software-defined data centers enable huge benefits in speed, scalability and agility; those benefits are undeniable. It may turn out, however, that one of virtualization’s biggest benefits is security. Continue reading

VMware at RSA Conference 2014 (#RSAC)

Summary:

  • Company outlines vision for security in the Software-Defined Data Center
  • Product and partner demonstrations in Booth #1615 to showcase growing security portfolio
  • New PCI-DSS 3.0 and FedRAMP reference architectures to be presented

Throughout its history, RSA Conference has consistently attracted the world’s best and brightest in the security field, creating opportunities for attendees to learn about IT security’s most important issues through first-hand interactions with peers, luminaries and emerging and established companies. Continue reading

Deploying VMware NSX with Cisco UCS and Nexus 7000

VMware NSX network virtualization software makes it possible for IT organizations to achieve new levels of business agility. It allows you to deploy advanced, policy-based network services for your applications as quickly as a virtual machine, today, leveraging your existing physical network infrastructure or next-generation network hardware from any vendor.

Back in September, I wrote about the value of deploying the VMware NSX network virtualization platform on Cisco UCS and Nexus infrastructure. We had an overwhelming customer response and many requests for more content about this deployment scenario. As such, we’ve created a new design guide, which you can download here, describing a simple and straightforward example of how to deploy VMware NSX for vSphere on an infrastructure consisting of Cisco UCS and Nexus 7000 series switches. This basic guide should serve as a starting point from which to tailor a design for your specific environment.

We want to offer our thanks to the content creation team here at VMware for collaborating on this effort:

• Dmitri Kalintsev
• Bruce Davie
• Ray Budavari
• Venky Deshpande
• Nikhil Kelshikar
• Rod Stuhlmuller
• Scott Lowe
• Marcos Hernandez
• Chris King

Cheers,
Brad