
Tag Archives: network virtualization

VCDX-NV Interview: Ron Flax On The Importance Of Network Virtualization

Ron Flax is the Vice President of August Schell, a reseller of VMware products and an IT services company that specializes in delivering services to commercial accounts and the federal government, particularly the intelligence community and the U.S. Department of Defense. Ron is a VCDX-NV certified network virtualization professional and a VMware vExpert. We spoke with Ron about network virtualization and the NSX career path.

***

The most exciting thing about network virtualization, I think, is the transformative nature of this technology. Networks have been built the same way for the last 20 to 25 years. Nothing has really changed. A lot of new features have been built, a lot of different technologies have come around networks, but the fundamental nature of how networks are built has not changed. But VMware NSX, because it’s a software-based product, has completely altered everything. It enables a much more agile approach to networks: the ability to automate the stand-up and tear-down of networks; the ability to produce firewalling literally at the virtual network interface. And because things are done at software speed, you can now make changes to the features and functions of networking products at software speed. You no longer have to deal with silicon speed. It’s very, very exciting. With a software-based approach, you can just do so much more in such a small amount of time.

What we’re hearing from customers, at this point, is that they’re very interested to learn more. They’re at a phase where they’re ready to get their hands dirty, and they really want to understand it better. What’s driving a lot of adoption today is security; it is our foot in the door. When you speak with customers about the security aspects, the micro-segmentation capabilities, you may not even have to get to a virtual network discussion. Once you get the security aspect deployed, customers will see it in action and then a few weeks later will say, ‘Hey, you know, can you show me how the new router works?’ or ‘Can you show me how other features of NSX work?’ That’s when you can start to broaden your approach. So these compelling security stories like micro-segmentation or distributed firewalling get you in and get the deployment started, but ultimately it’s the flexibility of being able to deliver networks at speed, in an agile way, through software, through automation, that’s the home run.

I also think clients are excited about being able to deliver services more quickly to their business units. In the space I work in, the U.S. Federal Government, the workforce is typically segmented into a server team, storage team, network team, maybe a virtualization team. They haven’t gotten yet to the point where they have a cloud team, so it’s all kind of meshed together. What tends to happen in these siloed environments is the business, or the end user, is waiting on one of these factions to get their job done before they can deliver services. In a lot of cases it’s become the network team that acts as the long pole in the tent and gets things organized for getting a solution built. If they are the log jam, well…

With network virtualization it’s possible—it’s quite easy, in fact—to bring that capability to the virtualization guy, the server guy, the storage guy, or even the end user if you deliver this as a full Software-Defined Data Center or SDDC. Essentially you create a self-service interface, where the end user can actually build and create their networks for themselves. They no longer have to wait for the storage team to have enough storage, the network team to create the networks, and so on. They can do it themselves. So that’s a big “aha” moment for a lot of customers. They realize: “We actually can deliver something secure, that works, and that’s isolated to the business, in a reasonable amount of time.”

Seeing this transition made me realize that getting my VCDX-NV was a great opportunity. I just felt like if we were going to be in this market space, if we were going to be considered NSX experts, we had to have at least one person, if not many people, who were officially qualified by VMware. The experience was great. VMware went out of their way to really make a strong impression on us, and to invest in every candidate, to make it so that as many of us as possible would succeed and get through the process. I’m not going to say it wasn’t hard! The process is what it should be. It definitely will test you. But if you’re a network engineer, you’re going to want to learn as much as you can about networks. That’s certainly true if you’re a CCIE: you already have those skills, and you’ve passed certification for the physical network and all of the related design concepts. I would strongly advise you to get some form of NSX certification with VMware, even if it’s not the full VCDX-NV. The more you know, the more it’s going to help you. You still need to understand the underpinnings, the physical network, but you have that already, so take advantage. Learning about the software aspects of network virtualization can be instrumental in your job growth, your advancement. It’s going to help you in your career.

At the end of the day, this is technology. Technology changes very rapidly. Anybody who’s been around the technology world knows things change at a very, very quick pace. You can’t rest on your laurels. You have to retool yourself. You have to always retool yourself.

3 Ways To Get Started With VMware NSX

Over the past 12 months, VMware NSX momentum has continued to grow, as we’ve added new platform capabilities, expanded our partner ecosystem, and of course, had more than 250 customers purchase NSX for deployment. And as interest in VMware NSX has grown with both customers and IT professionals looking to evolve their careers by adding certification in network virtualization, one of the most common questions we get is “How can I get started with NSX?”

We understand that there is a strong demand for individuals and organizations to get their hands on the NSX technology. Many of you are working towards your initial VCP-NV certification. Others of you are exploring NSX as a way to improve your organization’s agility and security while reducing overall costs.

Here are three ways individuals and companies can get started with NSX. Continue reading

OVS Fall 2014 Conference: Observations and Takeaways

Last week we hosted the Open vSwitch 2014 Fall Conference, which was another great opportunity to demonstrate our continued investment in leading open source technologies. To get a sense of the energy and enthusiasm at the event, take a quick look at this video we captured with attendees.

I’ve been thinking about the key takeaways from everything I saw and everyone I spoke with.

First, there’s huge interest in Open vSwitch performance, both in terms of measurement and improvement. The talks from Rackspace and Noiro Networks/Cisco led me to believe that we’ve reached the point where Open vSwitch performance is good enough on hypervisors for most applications, and often faster than competing software solutions such as the Linux bridge.

Talks from Intel and one from Luigi Rizzo at the University of Pisa demonstrated that by bypassing the kernel entirely through DPDK or netmap, respectively, we haven’t reached the limits of software forwarding performance. Based on a conversation I had with Chris Wright from Red Hat, this work is helping the Linux kernel community look into reducing the overhead of the kernel, so that we can see improved performance without losing the functionality provided by the kernel.

Johann Tönsing from Netronome also presented a talk describing all the ways that Netronome’s NPU hardware can accelerate OpenFlow and Open vSwitch; I’ve talked to Johann many times before, but I had never realized how many different configurations their hardware supports, so this was an eye-opening talk for me.

Next, enhancing Open vSwitch capabilities at L4 through L7 is another exciting area. Our own Justin Pettit was joined by Thomas Graf from Noiro to talk about the ongoing project to add support for NAT and tracking L4 connections, which is key to making Open vSwitch capable of implementing high-quality firewalls. A later talk by Franck Baudin from Qosmos presented L7 enhancements to this capability.

The final area that I saw highlighted at the conference is existing applications for Open vSwitch today. Peter Phaal from InMon, for example, demonstrated applications for sFlow in Open vSwitch. I found his talk interesting because although I knew about sFlow and had talked to Peter before, I hadn’t realized all of the varied uses for sFlow monitoring data. Vikram Dham also showed his uses for MPLS in Open vSwitch, and Radhika Hirannaiah presented her use case for OpenFlow and Open vSwitch in traffic engineering.

I want to thank all of our participants and the organizing committee for helping to put together such an amazing event.

Ben

Free Seminar – Advancing Security with the Software-Defined Data Center

We’re excited to take to the road for another edition of our VMware Software-Defined Data Center Seminar Series. Only this time, we’ll be joined by some great company.

VMware & Palo Alto Networks invite you to a complimentary, half-day educational event for IT professionals interested in learning how Palo Alto Networks and VMware are transforming data center security.

Thousands of IT professionals attended our first SDDC seminar series earlier this year in more than 20 cities around the globe. Visit #VirtualizeYourNetwork.com to browse the presentations, videos, and other content we gathered.

This free seminar will highlight:

  • The Software-Defined Data Center approach
  • Lessons learned from real production customers
  • Using VMware NSX to deliver never-before-possible data center security and micro-segmentation

Who should attend?

People who will benefit from attending this session include:

  • IT, Infrastructure and Data Center Managers
  • Network professionals, including CCIEs
  • Security & Compliance professionals
  • IT Architects
  • Networking Managers and Administrators
  • Security Managers and Administrators

Agenda

  • 8:30 a.m. Registration & Breakfast
  • 9:00 a.m. VMware: Better Security with Micro-segmentation
  • 10:00 a.m. Palo Alto Networks: Next Generation Security Services for the SDDC
  • 11:00 a.m. NSX & Palo Alto Networks Integrated Solution Demo
  • 11:45 a.m. Seminar Wrap-up
  • 12:00 p.m. Hands-on Workshop
  • 1:30 p.m. Workshop Wrap-up

Check out the schedule and register. Space is limited.

Learn more at http://info.vmware.com/content/26338_nsx_series

Roger

Using Differentiated Services to Tame Elephants

This post was co-authored by Justin Pettit, Staff Engineer, Networking & Security Business Unit at VMware, and Ravi Shekhar, Distinguished Engineer, S3BU at Juniper Networks.

********************

As discussed in other blog posts and presentations, long-lived, high-bandwidth flows (elephants) can negatively affect short-lived flows (mice). Elephant flows send more data, which can lead to queuing delays for latency-sensitive mice.

VMware demonstrated the ability to use a central controller to manage all the forwarding elements in the underlay when elephant flows are detected.  In environments that do not have an SDN-controlled fabric, an alternate approach is needed.  Ideally, the edge can identify elephants in such a way that the fabric can use existing mechanisms to treat mice and elephants differently.

Differentiated services (diffserv) were introduced to bring scalable service discrimination to IP traffic. This is done using Differentiated Services Code Point (DSCP) bits in the IP header to signal different classes of service (CoS). There is wide support in network fabrics to treat traffic differently based on the DSCP value.

A modified version of Open vSwitch allows us to identify elephant flows and mark the DSCP value of the outer IP header.  The fabric is then configured to handle packets with the “elephant” DSCP value differently from the mice.
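
To make the marking concrete, here is a minimal Python sketch of how a DSCP value is carried in the 8-bit TOS/traffic-class field of the IP header: DSCP occupies the upper six bits and ECN the lower two. This is only an illustration of the field layout; the elephant code point chosen below is a hypothetical example, not a value defined by the modified Open vSwitch.

```python
# Illustrative only: how a DSCP value maps into the IP header's 8-bit
# TOS / traffic-class byte (DSCP = upper 6 bits, ECN = lower 2 bits).
# The elephant code point is a hypothetical example.

ELEPHANT_DSCP = 0x12  # hypothetical code point reserved for elephant flows

def tos_from_dscp(dscp: int, ecn: int = 0) -> int:
    """Build the TOS byte the fabric will see from a 6-bit DSCP and 2-bit ECN."""
    assert 0 <= dscp < 64 and 0 <= ecn < 4
    return (dscp << 2) | ecn

def dscp_from_tos(tos: int) -> int:
    """Recover the DSCP value a switch would match on."""
    return (tos >> 2) & 0x3F

outer_tos = tos_from_dscp(ELEPHANT_DSCP)
print(hex(outer_tos))            # 0x48 -- value written into the outer IP header
print(dscp_from_tos(outer_tos))  # 18   -- the elephant code point, recovered
```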

Figure 1: Elephants are detected at the edge of the network and signaled to the fabric through DSCP. Based on these code points, the fabric can treat elephant traffic differently from mice.

Detecting and Marking Elephants with Open vSwitch

Open vSwitch’s location at the edge of the network gives it visibility into every packet in and out of each guest.  As such, the vSwitch is in the ideal location to make per-flow decisions such as elephant flow detection. Because environments are different, our approach provides multiple detection mechanisms and actions so that they can be used and evolve independently.

An obvious approach to detection is to just keep track of how many bytes each flow has generated.  By this definition, if a flow has sent a large amount of data, it is an elephant. In Open vSwitch, the number of bytes and an optional duration can be configured. By using a duration, we can ensure that we don’t classify very short-lived flows as elephants. We can also avoid identifying low-bandwidth but long-lived flows as elephants.
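
The following Python sketch illustrates one plausible reading of that byte-count-plus-duration check, treating it as “more than N bytes within a D-second window.” It is not the modified Open vSwitch datapath code, and the thresholds and flow-key shape are assumptions made for the example.

```python
# Illustrative sketch of byte-count-plus-duration elephant detection,
# read here as "more than N bytes within a D-second window". The real
# check lives in the modified Open vSwitch datapath; the thresholds
# below are invented for the example.
import time

BYTE_THRESHOLD = 10 * 1024 * 1024   # hypothetical: 10 MB ...
DURATION_S = 1.0                    # ... within a one-second window

class ByteCountDetector:
    def __init__(self):
        self._window_start = {}   # flow key -> start of current window
        self._window_bytes = {}   # flow key -> bytes seen in that window

    def on_packet(self, flow_key, payload_len, now=None):
        """Account one packet; return True if the flow looks like an elephant."""
        now = time.monotonic() if now is None else now
        start = self._window_start.get(flow_key)
        if start is None or now - start > DURATION_S:
            # New flow, or the window expired below the threshold: short-lived
            # bursts and low-bandwidth long-lived flows land here and reset.
            self._window_start[flow_key] = now
            self._window_bytes[flow_key] = 0
        self._window_bytes[flow_key] += payload_len
        return self._window_bytes[flow_key] > BYTE_THRESHOLD
```

Once the check returns True for a flow, the caller would mark that flow’s outer headers with the elephant DSCP value as described above.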

An alternate approach looks at the size of the packet that is being given to the NIC.  Most NICs today support TCP Segmentation Offload (TSO), which allows the transmitter (e.g., the guest) to give the NIC TCP segments up to 64KB, which the NIC chops into MSS-sized packets to be placed on the wire.

Because of TCP’s slow start, the transmitter does not immediately begin sending maximum-sized packets to the NIC.  Due to our unique location, we can see the TCP window as it opens, and tag elephants earlier and more definitively. This is not possible at the top-of-rack (TOR) or anywhere else in the fabric, since they only see the segmented version of the traffic.

Open vSwitch may be configured to track all flows with packets of a specified size. For example, by looking for only packets larger than 32KB (which is much larger than jumbo frames), we know the transmitter is out of slow-start and making use of TSO. There is also an optional count, which will trigger when the configured number of packets with the specified size is seen.
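
Here is a comparable sketch of the packet-size-plus-count variant, again hypothetical rather than the actual datapath code: a flow is flagged once it has handed the vSwitch a configured number of packets at or above a configured size, and a ~32KB size threshold can only be met by TSO segments from a sender that is already out of slow start.

```python
# Illustrative sketch of packet-size-plus-count elephant detection; not
# the actual Open vSwitch datapath code. A size threshold of ~32 KB can
# only be met by TSO segments, i.e. senders already out of slow start.
PKT_SIZE_THRESHOLD = 32 * 1024   # bytes handed to the NIC before segmentation
PKT_COUNT_THRESHOLD = 1          # qualifying packets required before tagging

class PacketSizeDetector:
    def __init__(self, size=PKT_SIZE_THRESHOLD, count=PKT_COUNT_THRESHOLD):
        self.size = size
        self.count = count
        self._hits = {}          # flow key -> number of qualifying packets

    def on_packet(self, flow_key, pkt_len):
        """Return True once the flow has sent `count` packets of at least `size` bytes."""
        hits = self._hits.get(flow_key, 0)
        if pkt_len >= self.size:
            hits += 1
            self._hits[flow_key] = hits
        return hits >= self.count

# With size=0 every packet qualifies, so count=10 reproduces the
# "track every flow, tag after ten packets" behaviour discussed next.
mice_style = PacketSizeDetector(size=0, count=10)
```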

Some new networking hardware provides some elephant flow mitigation by giving higher priority to small flows. This is achieved by tracking all flows and placing new flows in a special high-priority queue. When the number of packets in the flow has crossed a threshold, the flow’s packets from then on are placed into the standard priority queue.

This same effect can be achieved using the modified Open vSwitch and a standard fabric.  For example, by choosing a packet size of zero and threshold of ten packets, each flow will be tracked in a hash table in the kernel and tagged with the configured DSCP value when that flow has generated ten packets.  Whether mice are given a high priority or elephants are given a low priority, the same effect is achieved without the need to replace the entire fabric.

Handling Elephants with Juniper Devices

Juniper TOR devices (such as the QFX5100) and aggregation devices (such as the MX and EX9200) provide a rich diffserv-based CoS model to achieve these goals in the underlay. These capabilities include:

  • Elaborate controls for packet admittance with dedicated and shared limits. Dedicated limits provide a minimum service guarantee, and shared limits allow statistical sharing of buffers across different ports and priorities.
  • A large number of flexibly assigned queues; up to 2960 unicast queues at the TOR and 512K at the aggregation device.
  • Enhanced and varied scheduling methods to drain these queues: strict and round-robin scheduling with up to 4-levels of hierarchical schedulers.
  • Shaping and metering to control the rate of injection of traffic from different queues of a TOR in the underlay network. By doing this, bursty traffic at the edge of the physical network can be leveled out before it reaches the more centrally shared aggregation devices.
  • Sophisticated controls to detect and notify congestion, and set drop thresholds. These mechanisms detect possible congestion in the network sooner and notify the source to slow down (e.g. using ECN).

With this level of flexibility, it is possible to configure these devices to:

  • Enforce minimum bandwidth allocation for mice flows and/or maximum bandwidth allocation for elephant flows on a shared link.
  • When experiencing congestion, drop (or ECN-mark) packets of elephant flows more aggressively than packets of mice flows. This causes the TCP connections of elephant flows to back off sooner, which alleviates congestion in the network (a simple sketch of this idea follows the list).
  • Take a different forwarding path for elephant flows from that of mice flows.  For example, a TOR can forward elephant flows towards aggregation switches with big buffers and spread mice flows towards multiple aggregation switches that support low-latency forwarding.
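
To illustrate the second bullet above, the sketch below applies a WRED-style drop (or ECN-mark) curve that responds to queue occupancy far more aggressively for the elephant code point than for mice. Real devices express this in their own CoS policy language; the curve shapes and thresholds here are invented for the example.

```python
# Rough illustration of DSCP-differentiated early drop / ECN marking.
# Real switches implement this in hardware CoS policy; the profiles and
# thresholds below are invented for the example.
import random

ELEPHANT_DSCP = 0x12   # the same hypothetical code point used earlier

# (fill level where dropping starts, fill level of max drop, max drop probability)
DROP_PROFILE = {
    "elephant": (0.40, 0.80, 0.50),  # start dropping early and aggressively
    "mice":     (0.80, 0.95, 0.05),  # protect small flows until nearly full
}

def drop_probability(queue_fill: float, dscp: int) -> float:
    start, full, max_p = DROP_PROFILE["elephant" if dscp == ELEPHANT_DSCP else "mice"]
    if queue_fill <= start:
        return 0.0
    if queue_fill >= full:
        return max_p
    return max_p * (queue_fill - start) / (full - start)

def should_drop_or_mark(queue_fill: float, dscp: int) -> bool:
    """Drop (or ECN-mark) with a probability that rises with congestion."""
    return random.random() < drop_probability(queue_fill, dscp)
```

Because elephant packets see early drops or ECN marks first, their TCP senders back off while mice flows continue largely untouched.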

Conclusion

By inserting some intelligence at the edge and using diffserv, network operators can use their existing fabric to differentiate between elephant flows and mice. Most networking gear provides some capabilities, and Juniper, in particular, provides a rich set of operations that can be used based on the DSCP.  Thus, it is possible to reduce the impact of heavy hitters without the need to replace hardware. Decoupling detection from mitigation allows each to evolve independently without requiring wholesale hardware upgrades.

 

Physical Networks in the Virtualized Networking World

[This post was co-authored by VMware's Bruce Davie and Ken Duda from Arista Networks, and originally appeared on Network Heresy]

Almost a year ago, we wrote a first post about our efforts to build virtual networks that span both virtual and physical resources. As we’ve moved beyond the first proofs of concept to customer trials for our combined solution, this post serves to provide an update on where we see the interaction between virtual and physical worlds heading.

Our overall approach to connecting physical and virtual resources can be viewed in two main categories:

  • terminating the overlay on physical devices, such as top-of-rack switches, routers, appliances, etc.
  • managing interactions between the overlay and the physical devices that provide the underlay.

The latter topic is something we’ve addressed in some other recent posts (here, here and here) — in this blog we’ll focus more on how we deal with physical devices at the edge of the overlay. Continue reading

Geneve, VXLAN, and Network Virtualization Encapsulations

In this post, Bruce Davie and T. Sridhar of VMware’s Networking and Security Business Unit take a look at a newly proposed encapsulation protocol that would standardize how traffic is tunneled over the physical infrastructure by network overlay software.

++++

For as long as we’ve been doing Network Virtualization, there has been debate about how best to encapsulate the data. As we pointed out in an earlier post, it’s entirely reasonable for multiple encapsulations (e.g. VXLAN and STT) to co-exist in a single network. With the recent publication of “Geneve”, a new proposed encapsulation co-authored by VMware, Microsoft, Red Hat and Intel, we thought it would be helpful to clarify a few points regarding encapsulation for network virtualization. First, with all the investment made by us and our partners in developing support for VXLAN (described here), we very much intend to continue supporting VXLAN — indeed, we’ll be enhancing our VXLAN capabilities. Second, we want to explain why we believe Geneve is a necessary and useful addition to the network virtualization landscape.
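
For readers less familiar with the two formats, here is a minimal Python sketch of the fixed headers as defined in the VXLAN RFC and the Geneve draft. It packs only the 8-byte base headers; the variable-length option TLVs that make Geneve extensible are omitted, and none of this is code from a VMware product.

```python
# Minimal sketch of the fixed headers, per RFC 7348 (VXLAN) and the
# Geneve draft. Only the 8-byte base headers are packed; Geneve's
# variable-length option TLVs (its main extensibility mechanism) are
# omitted for brevity.
import struct

def vxlan_header(vni: int) -> bytes:
    # Flags byte with the I bit set, 24 reserved bits, 24-bit VNI, 8 reserved bits.
    return struct.pack("!II", 0x08 << 24, (vni & 0xFFFFFF) << 8)

def geneve_base_header(vni: int, protocol: int = 0x6558, opt_len_words: int = 0) -> bytes:
    # Ver(2)=0 | OptLen(6) in 4-byte words; flag byte (O/C bits clear); protocol
    # type (0x6558 = bridged Ethernet); 24-bit VNI; 8 reserved bits.
    first = (0 << 6) | (opt_len_words & 0x3F)
    return struct.pack("!BBHI", first, 0, protocol, (vni & 0xFFFFFF) << 8)

print(vxlan_header(5001).hex())        # 0800000000138900
print(geneve_base_header(5001).hex())  # 0000655800138900
```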

Read the rest of Bruce’s blog on the Office of the CTO blog here.

Juniper and VMware: Collaborating to Enable The Software-Defined Data Center

The need for businesses to enhance the efficiency of IT and increase application agility is overwhelming. Embracing operational models such as cloud computing helps, but in order to fully leverage these new models companies must explore new ways of handling network connectivity. Network virtualization solutions such as VMware NSX provide an answer for the new cloud-centric networking models. As with any technology, though, network virtualization doesn’t solve every existing challenge by itself: consistent, efficient performance for business-critical applications that span virtual and physical worlds, correlated and integrated management, and richer data sharing between the network virtualization solution and the underlying physical network are all critical elements of successful cloud deployments. To address these challenges, we are pleased to announce that Juniper and VMware are expanding our partnership to help our joint customers achieve better application agility for their cloud environments. Continue reading

The Goldilocks Zone: Security In The Software-Defined Data Center Era

Last week, we spoke at the RSA Conference about a new concept in security – the Goldilocks zone.  With the help of Art Coviello, Executive Chairman of RSA, Chris Young, senior vice president and GM of Cisco’s Security business unit, and Lee Klarich, senior vice president of product management from Palo Alto Networks, we departed from the typical discussions about new controls or the latest threats.  We took the opportunity to lay out what we believe is a fundamental architectural issue holding back substantial progress in cyber security, and how virtualization may just provide the answer. The growing use of virtualization and the move towards software-defined data centers enable huge benefits in speed, scalability and agility; those benefits are undeniable. It may turn out, however, that one of virtualization’s biggest benefits is security. Continue reading

VMware at RSA Conference 2014 (#RSAC)

Summary:

  • Company outlines vision for security in the Software-Defined Data Center
  • Product and partner demonstrations in Booth #1615 to showcase growing security portfolio
  • New PCI-DSS 3.0 and FedRAMP reference architectures to be presented

Throughout its history, RSA Conference has consistently attracted the world’s best and brightest in the security field, creating opportunities for attendees to learn about IT security’s most important issues through first-hand interactions with peers, luminaries and emerging and established companies. Continue reading