

Seven reasons VMware NSX, Cisco UCS and Nexus are orders of magnitude more awesome together

VMware NSX, Cisco UCS, and Cisco Nexus together solve many of the most pressing issues at the intersection of networking and virtualization.

Executive Summary

VMware NSX brings industry-leading network virtualization capabilities to Cisco UCS and Cisco Nexus infrastructures, on any hypervisor, for any application, with any cloud management platform. Adding state-of-the-art virtual networking (VMware NSX) to best-in-class physical networking (Cisco UCS & Nexus) produces significant optimizations in these key areas:

  • Provision services-rich virtual networks in seconds
  • Orders of magnitude more scalability for virtualization
  • The most efficient application traffic forwarding possible
  • Orders of magnitude more firewall performance
  • Sophisticated application-centric security policy
  • More intelligent automation for network services
  • Best-of-breed synergies for multi data center
  • Simpler network configurations

Cisco UCS and Nexus 7000 infrastructure awesomeness

A well-engineered physical network always has been and will continue to be a very important part of the infrastructure. The Cisco Unified Computing System (UCS) is an innovative architecture that simplifies and automates the deployment of stateless servers on a converged 10GE network. Cisco UCS Manager simultaneously deploys both the server and its connection to the network through service profiles and templates, changing what was once many manual touch points across disparate platforms into one automated provisioning system. That's why it works so well. I'm not just saying this; I'm speaking from experience.

Cisco UCS is commonly integrated with the Cisco Nexus 7000 series, a high-performance modular data center switch platform with many features highly relevant to virtualization, such as converged networking (FCoE), data center interconnect (OTV), Layer 2 fabrics (FabricPath, vPC), and location-independent routing with LISP. This typically represents best-in-class data center physical networking.

With Cisco UCS and Nexus 7000 platforms laying the foundation for convergence and automation in the physical infrastructure, the focus now turns to the virtual infrastructure. VMware NSX, when deployed with Cisco UCS and Cisco Nexus, elegantly solves many of the most pressing issues at the intersection of networking and virtualization. VMware NSX represents the state of the art for virtual networking.

1) Virtualization-centric operational model for networking

VMware NSX adds network virtualization capabilities to existing Cisco UCS and Cisco Nexus 7000-based infrastructures, through the abstraction of the virtual network, complete with services such as logical switching, routing, load balancing, security, and more. Virtual networks are deployed programmatically with a similar speed and operational model as the virtual machine — create, start, stop, template, clone, snapshot, introspect, delete, etc. in seconds.

The virtual network allows the application architecture (including the virtual network and virtual compute) to be deployed together from policy-based templates, consolidating what was once many manual touch points across disparate platforms into one automated provisioning system. In a nutshell, VMware NSX is to virtual servers and the virtual network what Cisco UCS is to physical servers and the physical network.
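
To make "provisioned programmatically in seconds" concrete, here is a minimal sketch of driving that workflow from a script. The manager address, credentials, transport zone ID, and payload fields are placeholders, and the endpoint path and XML schema follow the NSX-v style API, so treat them as assumptions to verify against the API guide for your release:

```python
# Minimal sketch (not a definitive implementation): create a logical Layer 2
# network through the NSX manager's REST API. Endpoint path, XML schema, and
# all identifiers below are illustrative assumptions.
import requests

NSX_MANAGER    = "https://nsx-manager.example.com"  # placeholder address
TRANSPORT_ZONE = "vdnscope-1"                       # placeholder transport zone ID

payload = """
<virtualWireCreateSpec>
  <name>app1-web-tier</name>
  <description>Logical switch for the app1 web tier</description>
  <tenantId>app1</tenantId>
</virtualWireCreateSpec>
"""

resp = requests.post(
    f"{NSX_MANAGER}/api/2.0/vdn/scopes/{TRANSPORT_ZONE}/virtualwires",
    data=payload,
    headers={"Content-Type": "application/xml"},
    auth=("admin", "password"),  # use proper credential handling in practice
    verify=False,                # lab only; validate certificates in production
)
resp.raise_for_status()
print("Created logical switch:", resp.text)  # returns the new logical switch identifier
```

Deleting, cloning, or templating the same object is an equally small call, which is what makes the operational model feel like the virtual machine's.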

2) More headroom for virtualization, by orders of magnitude (P*V)

VMware NSX provides the capability to dynamically provision logical Layer 2 networks for application virtual machines across multiple hypervisor hosts, without any requisite VLAN or IP Multicast configuration in the Cisco UCS and Cisco Nexus 7000 infrastructure. For example, thousands of VXLAN logical Layer 2 networks can be added or removed programmatically through the NSX API, with only a few static infrastructure VLANs, compared to what was once thousands of manually provisioned VLANs across hundreds of switches and interfaces.

Figure: NSX dynamic logical Layer 2 networks

Two of the most common breaking points when scaling a network for virtualization are:

  1. Limited number of STP logical port instances the switch control plane CPUs can support, placing a ceiling on VLAN density.
  2. Limited MAC & IP forwarding table resources available in switch hardware, placing a ceiling on virtual machine density.

VLANs and virtual machines: two things you don't want a visible ceiling on. Fortunately, VMware NSX provides significant headroom for both, by orders of magnitude, for the simple reason that VLAN and STP instances are dramatically reduced and hardware forwarding tables are utilized much more efficiently.

Consider (P1 * V1) = T, where P1 is the number of switch ports, V1 the number of active VLANs, and T the resulting number of STP logical ports.

A thousand fewer infrastructure VLANs with VMware NSX means a thousand fewer STP logical port instances on every trunked interface loading the Cisco UCS and Nexus 7000 control plane CPUs. This can only help ongoing operational stability, along with the obvious scaling headroom.

Consider (P2 * V2) = D, where P2 is the number of physical hosts, V2 the number of VMs per host, and D the total virtual machine density.

Normally, the size of the MAC & IP forwarding tables in a switch roughly determines the ceiling of total virtual machines you can scale to (D), as each virtual machine requires one or more entries. With VMware NSX, however, virtual machines attached to logical Layer 2 networks do not consume MAC & IP forwarding table entries in the Cisco UCS and Nexus 7000 switch hardware. Only the physical hosts require entries. In other words, with VMware NSX, the ceiling is placed on the multiplier (P2), not the total (D).
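
To put rough numbers on both ceilings, here is a back-of-the-envelope calculation. Every input below (port counts, VLAN counts, table sizes, VM density) is an illustrative assumption, not a measurement:

```python
# Back-of-the-envelope math for the two scaling ceilings discussed above.
# All input numbers are illustrative assumptions.

# Ceiling 1: STP logical ports  (T = P1 * V1)
trunk_ports       = 500     # switch ports carrying the VLAN trunks
vlans_traditional = 1000    # manually provisioned VLANs, one per tenant/app segment
vlans_with_nsx    = 4       # a few static infrastructure/transport VLANs

stp_logical_ports_traditional = trunk_ports * vlans_traditional   # 500,000
stp_logical_ports_with_nsx    = trunk_ports * vlans_with_nsx      # 2,000

# Ceiling 2: virtual machine density  (D = P2 * V2)
mac_table_size = 16_000     # MAC/IP entries per linecard (small-table example)
vms_per_host   = 50

# Traditional: every VM consumes a table entry, so the table caps total VMs.
max_vms_traditional = mac_table_size                      # ~16,000 VMs
# With NSX: only hypervisor hosts consume entries; VMs ride inside VXLAN.
max_hosts_with_nsx = mac_table_size                       # ceiling moves to the hosts
max_vms_with_nsx   = max_hosts_with_nsx * vms_per_host    # ~800,000 VMs

print(stp_logical_ports_traditional, stp_logical_ports_with_nsx)
print(max_vms_traditional, max_vms_with_nsx)
```

With the same hardware tables, the ceiling moves from the total VM count to the host count, which is exactly the multiplier effect described above.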

Reduced VLAN sprawl and logical Layer 2 networks compound to both simplify the Cisco UCS and Nexus configurations and significantly extend the virtualization scalability and virtual life of these platforms.

3) Most efficient application traffic forwarding possible

Have you ever noticed the paradox that good virtualization is bad networking? For example, the network design that works best for virtualization (Layer 2 fabric) isn’t the best design for Layer 3 traffic forwarding, and vice versa. That is, until now.

VMware NSX provides distributed logical Layer 3 routing capabilities for the virtual network subnets at the hypervisor kernel. Each hypervisor provides the Layer 3 default gateway, ARP resolver, and first routing hop for its hosted virtual machines.  The result is the most efficient forwarding possible for east-west application traffic on any existing Layer 2 fabric design, most notably Cisco UCS.

Figure: NSX Distributed Layer 3 routing — intra host

In the diagram above, VMware NSX distributed logical routing provides east-west Layer 3 forwarding directly between virtual machines on the same Cisco UCS host, without any hairpin hops to the Cisco Nexus 7000 — the most efficient path possible.

VMware NSX spans multiple Cisco UCS hosts acting as one distributed logical router at the edge. Each hypervisor provides high performance routing only for its hosted virtual machines in the kernel I/O path, without impact on system CPU. Layer 3 traffic between virtual machines travels directly from source to destination hosts inside the non-blocking Cisco UCS fabric — the most efficient path possible.

Figure: NSX Distributed Layer 3 routing — inter host

This efficient Layer 3 forwarding works with the existing Cisco UCS Layer 2 fabric, keeping more east-west application traffic within the non-blocking server ports, minimizing traffic on the fewer uplink ports facing the Cisco Nexus 7000 switches.

With Layer 3 forwarding for the virtual network handled by the hypervisors on Cisco UCS, the Cisco Nexus 7000 switch configurations are simpler, because VMware NSX distributed routing obviates the need for numerous configurations of virtual machine-adjacent Layer 3 VLAN interfaces (SVIs) and their associated HSRP settings.

Note: HSRP is no longer necessary with the VMware NSX distributed router, for the simple reason that virtual machines are directly attached to one logical router that hasn’t failed until the last remaining hypervisor has failed.

The Cisco Nexus 7000 switches are also made more scalable and robust as the supervisor engine CPUs are no longer burdened with ARP and HSRP state management for numerous VLAN interfaces and virtual machines.  Instead, VMware NSX decouples and distributes this function across the plethora of x86 CPUs at the edge.

4) More awesome firewall, by orders of magnitude (H*B)

Similar to the aforementioned distributed logical routing, VMware NSX for vSphere also includes a powerful distributed stateful firewall in the hypervisor kernel, which is ideal for securing east-west application traffic directly at the virtual machine network interface (inspecting every packet) with scale-out data plane performance. Each hypervisor provides transparent stateful firewall inspection for its hosted virtual machines, in the kernel, as a service – and yet all under centralized control.

The theoretical aggregate throughput of the VMware NSX distributed firewall is roughly (H * B): the number of hypervisors multiplied by the network bandwidth per hypervisor. For example, 500 hypervisors, each with two 10G NICs (20 Gbps per host), approximate a 10 Terabit east-west firewall.
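
The same arithmetic, with the inputs spelled out (again, illustrative numbers only):

```python
# Theoretical aggregate throughput of the distributed firewall: H * B.
hypervisors   = 500   # H: number of hypervisor hosts
nics_per_host = 2
gbps_per_nic  = 10

bandwidth_per_host_gbps = nics_per_host * gbps_per_nic           # B = 20 Gbps
aggregate_tbps = hypervisors * bandwidth_per_host_gbps / 1000.0  # ~10 Tbps
print(f"~{aggregate_tbps:.0f} Terabits/s of east-west firewall capacity")
```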

Figure: NSX Distributed Firewall — intra host

As we see in the diagram above, the distributed firewall provides stateful east-west application security directly between virtual machines on the same Cisco UCS host, without any hairpin traffic steering through a traditional firewall choke point. Zero hops. The most efficient path possible.

The VMware NSX distributed firewall spans multiple Cisco UCS hosts, like one massive firewall connected directly to every virtual machine. Each hypervisor kernel provides the stateful traffic inspection for its hosted virtual machines. In other words, traffic leaving a Cisco UCS host and hitting the fabric has already been permitted by a stateful firewall, and is therefore free to travel directly to its destination (where it's inspected again).

Figure: NSX Distributed Firewall — inter host

Given that the VMware NSX distributed firewall is directly adjacent to the virtual machines, sophisticated security policies can be created that leverage the enormous amount of application-centric metadata present in the virtual compute layer (user identity, application groupings, logical objects, workload characteristics, and so on), far beyond basic IP packet header inspection.

As a simple example, a security policy might say that protocol X is permitted from the logical network "Web" to "App", no matter the IP addresses. If this application is later moved to a different data center, with different IP address assignments for the "Web" and "App" networks, the security policy is unaffected. There is no need to change or update firewall rules.
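
Expressed as data rather than prose, such a policy keys on logical objects instead of addresses. The rule below is purely illustrative (it is not the NSX rule schema); the point is that nothing in it changes when the application's IP addressing does:

```python
# Illustrative only (not the NSX rule schema): an application-centric rule
# expressed against logical objects rather than IP addresses.
rule = {
    "name":        "web-to-app",
    "source":      {"logical_switch": "Web"},          # logical object, not a subnet
    "destination": {"logical_switch": "App"},
    "service":     {"protocol": "TCP", "port": 8443},  # stand-in for "protocol X"
    "action":      "allow",
}

# Re-IP the application (for example, after moving it to another data center):
# the rule references no addresses, so no firewall change is required.
new_subnets = {"Web": "172.16.10.0/24", "App": "172.16.20.0/24"}  # irrelevant to the rule
print(rule)
```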

Finally, we can see again that more east-west application traffic stays within the low latency non-blocking Cisco UCS domain — right where we want it.  This can only help application performance while freeing more ports on the Cisco Nexus 7000 previously needed for bandwidth to a physical firewall.

5) More awesome network services

One of the more pressing challenges in a virtualized data center is efficient network service provisioning (firewall, load balancing) in a multi-tenant environment. Of particular importance are the services establishing the perimeter edge — the demarcation point establishing the application's point of presence (NAT, VIP, VPN, IP routing). Typical frustrations often include:

  • Limited multi-tenancy contexts on hardware appliances
  • Static service placement
  • Manually provisioned static routing
  • Limited deployment automation
  • Service resiliency

To address this, VMware NSX includes performance-optimized multi-service virtual machines (NSX Edge Services), auto-deployed with the NSX API into a vSphere HA & DRS edge cluster. Multi-tenancy contexts become virtually unlimited by shifting perimeter services from hardware appliances to NSX Edge virtual machines on Cisco UCS.
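
As a rough sketch of what "auto-deployed with the NSX API" can look like, the loop below stamps out one perimeter edge per tenant instead of carving contexts out of a shared appliance. The endpoint and XML body follow the NSX-v style Edge API and the object IDs are placeholders, so treat all of it as an assumption to check against the API guide for your release:

```python
# Sketch only: deploy one NSX Edge services gateway per tenant from a
# provisioning loop. Endpoint, XML schema, and object IDs are assumptions.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder

def deploy_edge(tenant: str) -> None:
    body = f"""
    <edge>
      <name>edge-{tenant}</name>
      <datacenterMoid>datacenter-2</datacenterMoid>  <!-- placeholder vCenter object -->
      <tenant>{tenant}</tenant>
      <appliances>
        <applianceSize>compact</applianceSize>
      </appliances>
    </edge>
    """
    resp = requests.post(
        f"{NSX_MANAGER}/api/4.0/edges",
        data=body,
        headers={"Content-Type": "application/xml"},
        auth=("admin", "password"),
        verify=False,  # lab only
    )
    resp.raise_for_status()

# Each tenant gets its own perimeter edge on Cisco UCS, so tenancy scales
# with software instances rather than hardware contexts.
for tenant in ("tenant-a", "tenant-b", "tenant-c"):
    deploy_edge(tenant)
```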

Figure: Sample VMware NSX logical topology on Cisco UCS

Dynamic IP routing protocols on the NSX Edge (BGP, OSPF, IS-IS) allow the Cisco Nexus 7000 switches to learn about new (or moved) virtual network IP prefixes automatically — doing away with stale and error-prone static routes.

VMware NSX Edge instances leverage HA & DRS clustering technology to provide dynamic service placement and perpetual N+1 redundancy (automatic re-birth of failed instances), while Cisco UCS stateless computing provides the simplified and expedient restoration of service capacity (re-birth of failed hosts).

Figure: Application traffic flow. Before & After

With VMware NSX, traffic enters the Cisco UCS domain where all required network services for both north-south and east-west flows are applied using high performance servers within the non-blocking converged fabric, resulting in the most efficient application flows possible.

Note: VMware NSX is also capable of bridging virtual networks to physical through the NSX Edge, where specific VXLAN segments can be mapped to physical VLANs connecting physical workloads, or extended to other sites.

6) Divide and Conquer multi data center

Solving the multi data center challenge involves tackling a few very different problem areas related to networking. Rarely does one platform have all the tools to solve all of the different problems in the most elegant way. It’s usually best to divide and conquer each problem area with the best tool for the job. In moving an application from one data center to another, the networking challenges generally boil down to three problem areas:

  1. Recreate the application’s network topology and services
  2. Optimize egress routing
  3. Optimize ingress routing

In abstracting the virtual network, complete with logical Layer 2 segments, distributed logical routing, distributed firewall, perimeter firewall, and load balancing, all entirely provisioned by API and software, VMware NSX is the ideal tool for quickly and faithfully recreating the application's network topology and services in another data center. At that point the NSX Edge provides the application a consolidated point of presence for the optimized routing solutions to work against.

Figure: Multi data center with VMware NSX, Cisco OTV and LISP

The next problem area — optimized egress routing — is ideal for a tool like OTV on the Cisco Nexus 7000 series, where the virtual network’s NSX Edge is given a consistent egress gateway network at either data center, with localized egress forwarding. Cisco OTV services are focused on the DMZ VLAN and the NSX Edge, and not burdened with handling every individual network segment, every virtual machine, and every default gateway within the application. With this simplicity the OTV solution becomes more scalable to handle larger sets of applications, and easier to configure and deploy.

With the Cisco Nexus 7000 and OTV keying on the NSX Edge (via VIPs and IP routing) for the application's point of presence, this serves as an ideal layering point for the next problem area of optimized ingress routing. This challenge is ideal for tools such as BGP routing, or LISP on the Cisco Nexus 7000 switches and LISP-capable routers, delivering inbound client traffic immediately and directly to the data center hosting the application.

7) A superior track record of integration and operational tools

It's hard to think of two technology leaders with a better track record of doing more operationally focused engineering work together than Cisco and VMware. Examples are both recent and plentiful, such as the Cisco Nexus 1000V, Cisco UCS VM-FEX, the Cisco UCS Plugin for VMware vCenter, the Cisco UCS Plugin for VMware vCenter Orchestrator, and so on.

Operational visibility is all about providing good data and making it easily accessible. A comprehensive API is the basis on which two industry leaders can engineer tools together, exchanging data to provide superior operational visibility. Cisco UCS and VMware NSX are two platforms with a rich API engineered at their core (not a bolted-on afterthought). Looking at both the track record and the capabilities of VMware and Cisco, working together to serve their mutual customers better, we're excited about what lies ahead.

In closing

VMware NSX represents best-in-class virtual networking, for any hypervisor, any application, any cloud platform, and any physical network.  A well-engineered physical network is, and always will be, an important part of the infrastructure. Network virtualization makes it even better by simplifying the configuration, making it more scalable, enabling rapid deployment of networking services, and providing centralized operational visibility and monitoring into the state of the virtual and physical network.

The point of this post is not so much to help you decide what your data center infrastructure should be, but to show you how adding VMware NSX to Cisco UCS & Nexus will allow you to get much more out of those best-in-class platforms.

Brad Hedlund
Engineering Architect
VMware NSBU

32 thoughts on “Seven reasons VMware NSX, Cisco UCS and Nexus are orders of magnitude more awesome together”

  1. David Zhang

    Hi Brad,

    Great post!

    Thank you very much for sharing!

    Could you please let me know where I can find more technical details about NSX?

    Best Regards,

    David

      1. Kelly McGrew

        Ivan’s seminars are always top-notch! I highly recommend them…and look forward to this one myself.

        Kelly

  2. Juan Tarrío (BROCADE)

    There isn't much here that cannot be achieved with any other vendor's networking infrastructure. In fact, isn't the whole point and marketing message of VMware NSX that you can build these virtual networks regardless of the underlying physical infrastructure, and that it provides all these benefits to any existing network from any vendor? Isn't the whole point of SDN to "commoditize" the physical network infrastructure?

    I think it’s unprofessional of VMware to publish in their official blogs a post that sides so much with one of their many networking partners and shamelessly promotes Cisco Nexus and UCS infrastructure over other vendors in this manner. Of course, Brad, you can have your personal opinion and this post doesn’t surprise me given your past, but you should keep that to bradhedlund.com. VMware should be neutral. It should be up to Cisco (and the rest of the networking vendors) to convince their customers why VMware NSX is better running on their own networking infrastructure.

    DISCLAIMER: I work for Brocade. This is my personal opinion.

  3. Mark Berly

    The concept of virtual overlay topologies that NSX enables is truly intriguing and exciting technology. Unfortunately, there really is nothing in the above post that discusses any differentiators that you get when using NSX with a Cisco infrastructure. Alternatively, there is one vendor that has products ready today with deep integrations with NSX – these come from Arista Networks.

    Arista believes in an open ecosystem in which the customer can choose the vendors that best meet their needs; to this end there are many direct integrations between Arista EOS and other vendors. In the case of NSX, here are a few truly differentiating features / functions:

    1) Shipping VXLAN VTEP
    2) Tight integration with NSX / OVSDB
    3) Dynamic, just-in-time provisioning of network resources for VM placement or during DRS, including VLANs and VTEPs
    4) Complete visibility to both physical and virtual topologies via the switch's CLI
    5) Works with native hypervisor, no need for rip-n-replace

    All of the above was demonstrated at VMworld 2013 by Arista and VMware; with Arista it's not a roadmap item or marchitecture, it's a reality…

    Disclaimer: I work at Arista Networks, opinions expressed are my own

    1. ted

      Mark,
      Can you please explain more about this "Complete visibility to both physical and virtual topologies via the switch's CLI"?

      Thanks

      1. Mark Berly

        Ted – From the switch's CLI you can see the physical servers attached, the virtual machines associated with those servers, and the status of the virtual machines, as well as dvuplink and vNIC information. This is all done with the native hypervisor from VMware and does not require a rip-n-replace.

        1. Kanat

          Hi Mark,
          Is this information embedded into the NSX management tools, or do you need to jump to the Arista CLI to access it?
          Can you share this info across the physical topology to, say, track a VM's traffic path?

    2. Brad Hedlund (Post author)

      Hi Mark, Hi Juan,

      This post was written to answer questions from customers about how NSX can be used on their existing infrastructure, and what the benefits are.

      A large number of our enterprise and service provider customers have a significant Cisco installed base of physical network infrastructure. This post was intended to make sure that those customers have the information they need to understand how and why they should consider looking at VMware NSX today.

      We look forward to working with all of our partners, including Arista and Brocade, to promote how customers can benefit from deploying NSX across those infrastructure choices as well.

      1. Juan Tarrío

        Hi Brad, thanks for taking the time to respond. While I certainly acknowledge Cisco's dominance in the networking industry, there are thousands of Brocade, Arista and many other vendors' customers out there reading this post and wondering why VMware NSX is "better together" with Cisco Nexus and UCS and not with any other vendor's infrastructure. I still think this post would have made a better public service if it had stayed more "neutral" with regard to the underlying hardware vendor and had highlighted how important the underlying physical infrastructure continues to be when you deploy network virtualization, in line with your recent tweets…

    1. MZ

      N7k F3 supports VXLAN in hardware. UCS supports it via the N1k on ESX & Hyper-V, with both multicast & unicast VXLAN modes.

  4. Eli Ben-Shoshan

    I think you missed one important point: troubleshooting.

    Where and how can a network engineer or systems or infrastructure engineer troubleshoot a reported network problem? Will we have to touch a lot of different hosts to accomplish what was once a span of a physical switch port? While I think NSX adds a lot of value especially when it comes to network provisioning for a VM, I would like to know how I am going to troubleshoot this infrastructure when something hits the virtual fan.

    1. Mark Berly

      Providing linkages between infrastructure and applications is critical in any highly virtualized data center. These linkages should allow visibility for all of the administrators of the various components of the data center ecosystem.

      As you point out, having a SPAN session is critical to getting the appropriate information about what is going on in the network. While there are different ways to accomplish this goal, the implementation of a tap aggregation switch can help solve many of these issues, as it will allow the network monitoring tools to stay in one place, aggregating your data traffic and allowing you to select which flows go to which tools. In addition, having hooks in the network operating system which allow intelligent interaction with the virtualization platform, so that SPAN sessions can follow a VM as it moves, is very useful.

      The issues you bring up are good ones and are being solved by the networking vendors that look toward an open ecosystem, instead of one that is closed. By working together best of breed vendors can provide both network and application teams the tools and visibility so they can work together in a positive manner.

      Looking into the future, the merger of all of the data center disciplines will happen, as it has with so many other technologies, but looking nearer term I 100% agree with you that tools are needed to help not only deploy but also manage these highly virtualized, overlay-based networks.

    2. David Klebanov

      Hi Eli,

      You are absolutely right. The network virtualization approach advocated by VMware, in the form of the NSX product, creates an operational, administrative and maintenance silo of network, security and application delivery principles encapsulated in a software-only form. If you want to know how VMware suggests you troubleshoot this silo, I advise you to take a look at session "NET5790 – Operational Best Practices for NSX in VMware Environments" from the recent VMworld 2013 event. In that session you will clearly see the deep networking expertise required for this task. You will have two disparate environments to deploy, manage and troubleshoot: the physical network and the virtual overlay.

      The only correlation between physical and virtual occurs at the edges of an overlay network, on either x86 hypervisors or one of the third-party partner switches supporting VXLAN VTEP functionality. This is a "troubleshooting by rumor" approach, which is analogous to using traceroute to determine network problems. Sure, you can look at counters or perform packet capture at the overlay tunnel endpoints, and you can also send probe packets to determine end-to-end reachability, but it's like trying to diagnose and solve a power grid problem in your neighborhood by looking at the power outlet in your home… A comprehensive solution should treat virtual and physical environments as one cohesive domain, where provisioning enhancements are coupled with full visibility and operational transparency. Organizations are striving to eliminate siloed approaches to increase efficiencies, and NSX is not helping much on this front.

      Disclaimer: I work for Cisco, but this comment represents my own views only.

      Thank you for reading.
      David
      @DavidKlebanov

      1. Kanat

        hey David,
        ex Cisco myself, cheers for the tip on that session.
        It’s interesting, and I see how it’s not exactly easy to tshoot that. Actually it kinda looks like Cisco :) same sort of CLI kung-fu.

        I agree with you that the operational side of NSX is… clunky and will create some tension between the server/network/security guys.

        That said – NSX ain't perfect, but it's out there and it's been deployed (as Nicira) by some rather big names. It offers very attractive benefits – mainly around speeding up network provisioning/alteration in a highly mobile DC/SP environment. It's vendor-agnostic. And it's a software solution, meaning more rapid development cycles.

        Question to you – can you comment on how Cisco ACI will be better?

  5. ITnuts

    Sounds like a similar argument to source-based dedupe, inline dedupe and post dedupe in the storage world.

    Referring to the post above, there are arguments targeting the UCS Fabric Interconnect, which does not support L3 traffic forwarding, and now NSX will perform the L3 traffic forwarding over the L2 physical links.
    Most data centers do not enable L3 on every switch just to reduce uplink and routing traffic. There are risk and operational concerns with enabling L3 on every switch in the data center simply to reduce latency by cutting the number of hops. Throughput should not be the major challenge, as 10Gbps networking is mature and 40Gbps is on the way.

    Will this really be practical in every environment? It may be useful for public cloud, but may not be the best fit for every enterprise network. With NSX, the total packet forwarding speeds and limits will still depend on the physical switches; network performance will not be determined by NSX alone.

    A virtual firewall is not a new concept, and most users will buy into a multi-vendor, multi-tier firewall strategy, which doesn't mean removing all physical firewalls, but rather introducing an extra layer of security at the virtual layer.

    I agree NSX is a brand-new concept to be considered for virtualized environments, but it may not easily fit into existing infrastructure without major changes. It may be a good use case if users are targeting a brand-new, fully virtualized infrastructure.

    1. Brad Hedlund (Post author)

      Definitely agree that packet forwarding throughput in the physical network plays an important role in performance. That's true with or without network virtualization. NSX provides the best possible forwarding path on that network. And as a software solution, you can add NSX in an existing environment, in a walled garden, without any changes to the physical network. You can start small with just a few hosts, running just a few Dev/Test apps. Once you get a feel for how well that NSX garden works, you can choose to grow it from there, or not.

      Cheers,
      Brad

  6. Jake

    WOW. Why does this look like HP's Virtual Connect? You finally admit that UCS must move packets out of the enclosure and return to the enclosure to communicate with a server in the same enclosure? Could it be Cisco has it wrong? Cisco has a closed, proprietary solution design - meant to sell more network devices. A design that sends the management packets down the same pipe as the data! Nobody else in the network market does this. Shrinking switch market = the birth of UCS. Come on Brad, have some intellectual honesty and admit that this is Virtual Connect for UCS. Kind of. Cisco gets to keep all of the useless iron (Fabric Interconnects) and bill people for ports! However, HP has always attempted to eliminate layers and complexity with VC. For full disclosure, I work for a reseller that spends on average two fewer days per solution to implement VC versus UCS. UCS is a dinosaur meant to fuel the Cisco machine with cash only. And the maintenance and headaches with UCS are tremendous compared to the VC implementations I have done!

  7. Dan Robinson

    So I have to agree with Juan here and say this is a sad attempt at shilling for Cisco.
    Full disclosure, I work for HP. These opinions are my own.

    Let's break it down further.
    1) You say NSX adds Virtual Networking to UCS, but doesn't it add this Virtual Networking to almost any vendor the same way? There is ZERO mention of ACTUAL integration between the two products. This bullet basically says, "they are compatible." And as Jake pointed out, Virtual Connect has been doing this since around 2007.

    2) This one is very similar in that it's so generic. Use of virtual VLANs reduces the use of physical VLANs. Groundbreaking stuff here. Then you go on to say that UCS is better here because it's no longer congested by traffic it might not have otherwise been able to handle. That's not saying UCS/Nexus is better with NSX; it's saying Nexus sucks LESS when NSX is handling that workload. But again, there is nothing that points to actual integration or specific advantages for UCS/Nexus here.

    3) I feel like a parrot here. You say yourself in paragraph 2, "on any existing Layer 2," but still feel the need to call out UCS. The pictures here could have the UCS blade, UCS Fabric X and Nexus 7000 swapped out with virtually ANY vendor's blade and network solution and would look almost identical. Again you point out the 7000 doesn't scale high enough to handle this workload without NSX.

    4) Ugh, do I even have to say it? Again, nothing specific to UCS or Nexus.
    In fact, the East/West traffic in other solutions (Virtual Connect, HPN on the c7000, hell, even Dell or IBM blades) doesn't have to be sent up to the distribution layer to allow two blades to talk to each other INSIDE the same enclosure. HPN switches even allow "vPC" (called IRF on the HP side) right in the back of the blade enclosure, and it scales to more than just two switches.

    5) Once again, nothing special here. Even the protocols mentioned like BGP and OSPF are industry standards and not unique to Nexus.
    And re-birth of failed hosts? Why would you bother setting up "spares" in a VMware environment? Wouldn't it be better to have that spare node running and servicing VMs, and simply spread its VMs back out via HA during a failure? The only advantage I can see here is maybe a license cost saving on the VMware side. But if you can afford UCS, I am sure you can afford a few more vSphere licenses.

    6) Here is the only one where I might award you any points at all. Sure, OTV can handle this type of work, but it's not the only one in the industry that can, I'm sure. And again you point out that by making the Nexus 7000 work less, it gets faster.

    7) Really? Superior track record of integration? The vCenter plugin is still in beta. The link you provide says version 0.9.2. At least the vCenter Orchestrator link is (barely) out of beta. I especially like this "integration" here:
    The following caveats were resolved in the 0.9(2) release
    -CSCue57514 – ESX servers are shown as non-ESX servers in vCenter plugin
    So the plugin doesn't know how to handle ESX (as opposed to ESXi). I can see those many years of integration are paying off.

    This entire blog post reads as if written by Cisco Marketing.
    Quite honestly, I expected better.

    BTW, can you tell me which Network Vendor is missing from this picture?
    http://img853.imageshack.us/img853/1692/d8to.jpg

  8. Marc Edwards

    There has been much hype in recent weeks about NSX positioning. Reading through blogs and looking at the marketing (most notably the man with the hammer ready to thwart the dragon in the city), it appears that VMware has aspirations of commoditizing the networking industry and bringing Cisco to its knees. Most of the marketing so far has been rather pretentious, and I would at least say this post is a modest improvement in understanding the realities that exist in service provider and data center environments throughout the world. You cannot simply rip out Cisco, especially when its gear can run for over 10 years without a hitch. Cisco also provides world-class support for its products in development, pre-sales, and post-sales. Who has not been thankful for that TAC engineer who was able to save the day at 2 AM, minimizing downtime, lost revenue, and resume-writing events? You simply can't avoid Cisco, and this article is what I see as a first attempt to also display recent innovations at Cisco in relation to hardware abstraction at the server level, drastically reducing the time it takes to upgrade/service the underlying metal VMs are hosted on.

    There are a few things that I believe do need clarification in this article.
    - 'In a nutshell, VMware NSX is to virtual servers and the virtual network what Cisco UCS is to physical servers and the physical network.' This isn't all that true. UCS service profiles are essentially a shim between the metal and the operating system. Unique characteristics of the server (UUID, MACs, FW updates, BIOS rev, boot order, vNICs, vHBAs, etc.) are stored in files, abstracting these characteristics from the metal and automating the processes involved with prepping a server for an OS. As stated, it can reduce the time to prep bare metal to minutes as opposed to hours (or more, depending on the sysadmin). That is how it was able to gain 2nd position worldwide in an industry it did not compete in 4 years ago. You love UCS, I love UCS, and I would bet that anybody who has racked/stacked servers would love UCS just as much. NSX isn't a shim so much as a tunneling protocol that creates a lack of visibility into the physical characteristics of the network. This is a critical oversight by VMware. By not marrying up both the physical and virtual networks, it adds additional troubleshooting for both network and systems admins = more finger pointing and less productivity.

    "Limited number of STP logical port instances the switch control plane CPUs can support, placing a ceiling on VLAN density." – Have you heard of Multiple Spanning Tree protocol? It bundles VLANs into the same instance and is how savvy engineers run data center networks today. Speaking of spanning tree, why do you see the need for spanning tree when there is now support for Multi-Chassis EtherChannels (vPC & VSS), FabricPath, and TRILL, already positioned to solve this issue and shipped in the Nexus 7000s?

    "Limited MAC & IP forwarding table resources available in switch hardware, placing a ceiling on virtual machine density." I don't see this as a problem in the Nexus 7000, which utilizes switch-on-chip (SoC) technology, decoupling all forwarding from the supervisors and scaling up to 1 million entries per line card.

    "Normally, the size of the MAC & IP forwarding tables in a switch roughly determines the ceiling of total virtual machines you can scale to." In my experience, it has been the physical limitations of the servers deployed that determine how many VMs can run in a cluster. Do you have any test results to back your claim?

    In conclusion: NSX has possibilities, but really most of its capabilities already exist virtually using the Cisco 1000v, VSG, ASA 1000v, and Citrix 1000v. If a customer has invested in Cisco, who has gained their trust through proven performance, I believe it worthwhile for them to see what capabilities exist with said products and do a true apples-to-apples comparison on both features and price before making any hasty decisions on a rev 0 product that has generated plenty of hype and not much revenue.

    1. Brad Hedlund (Post author)

      Hi Marc,

      You described how UCS abstracts the characteristics of a server into a profile stored as a file that can be copied and templated, and how that reduces the time to deploy a server. NSX does exactly the same thing for the network. NSX abstracts network services such as Layer 2 switching, Layer 3 routing, firewalling, load balancing, VPN, etc. and stores them as a data object that can be copied and templated, dramatically reducing the time to deploy the network for virtual machines. Tunneling is just an implementation detail of how NSX accomplishes some of that, through decoupling.

      “Have you ever heard of multiple spanning tree protocol?” Indeed I have. Making the migration to MST is anything but trivial. Tell a network admin that all problems will be solved by just completely re-configuring the spanning tree in his/her production network and you’ll be shown the door. By the way, STP instances still count on VLANs in Multi-Chassis Etherchannel deployments.

      “1 million entries per line card” Depends on which line card, and depends on which entries you’re talking about. Yes, some linecards have 1 million IP route entries — now take a look at the port density and cost of that linecard, and the MAC table size of that linecard. What you’ll often find is that linecards with the best port density and cost are the ones with the smallest table sizes (16K in some cases).

      "Do you have any test results to back your claim?" This is really more of an obvious reality than it is a theory. Consider a core switch with linecards that have 16K MAC/IP table sizes: at a 50:1 VM density per server, that amounts to 320 servers. At 40 servers per rack, your deployment is only 8 racks. Your awesome core switch can probably handle a lot more than 8 racks, so you're not getting the most potential out of that investment.

      "More finger pointing and less productivity." I disagree, because with NSX and network virtualization in general you'll have a central view into the health and state of the complete virtual network(s), including L2, L3, FW, LB, and the health of the physical network. This allows you to get a lot more information about where a problem exists, be it in the virtual network (a bad ACL on a virtual port somewhere blocking traffic) or in the physical network (a bad port dropping packets somewhere). NSX will be able to help you begin your troubleshooting exercise with more actionable data.

      Cheers,
      Brad

      1. Marc Edwards

        Brad,

        Thanks for the reply. It is worth getting the MAC entry numbers straight for the Nexus 7000:

        M1: 128,000
        F2: 16,384 per SoC, and up to 196,608 per module (depending on VLAN allocation)
        F3 40G: 64K

        To your point, routes would be higher, but from a raw Layer 2 perspective it scales much higher than 16K, mostly due to the custom ASICs and integrated Switch On Chip (SoC) capabilities of the line cards. That might make the 'obvious' a bit more fuzzy, and perhaps that is why I didn't understand the logic behind the stated numbers and claims. I find it good practice to state proven validations as opposed to marketing. I have seen that get a company in trouble on a few levels and occasions.

        I have been personally thanked by network admins for upgrading per-VLAN STP to MST. I set up a proof of concept displaying faster convergence times, and it usually sells itself. No need to fear when the benefits are in plain sight. Typically, I am shown the console as opposed to the door.

        Again, glad to see acceptance of Cisco innovation and architecture. I think it is a positive step forward for the SDN movement. On that note, Cisco does offer the 1000v, Cloud Services Router, ASA 1000v, and VSG, essentially already solving the problems that have been identified in this article. It also does it with the same look and feel network engineers are used to.

        In conclusion, very soon Cisco will shed light on an application-centric infrastructure (http://blogs.cisco.com/datacenter/limitations-of-a-software-only-approach-to-data-center-networking/) that moves SDN past the data center into all aspects of the network: a marriage of both physical and virtual that helps ease deployment time and reacts to the whole network in an application-centric manner.

        Regards,

        Marc

        1. Brad Hedlund (Post author)

          Hey Marc,

          “depending on VLAN allocation”
          It’s worth explaining that because it’s highly relevant. Meaning, if you forward the same set of VLANs on all ports, which is pretty typical in a server virtualization environment, the F2 module supports 16K.

          At any rate, the point of the post was to show that NSX helps to extend the scalability of the existing Nexus hardware you have, without any necessary change to its configuration. For example, no need to make a change from STP to MST.

          Cheers,
          Brad

          1. Marc Edwards

            Brad,

            Thanks again for the response. My final thought on this: the article does a great job pointing out recent innovations at Cisco in compute, data center switching, and data center interconnect technologies.

            The Nexus 1000v soft-switch, with thousands of installs, has proved to solve many of the traffic flow issues pointed out in this article.

            Cisco is continuing to innovate both in the virtual switching space and in application-centric architectures that will ease implementation, troubleshooting, and support by providing visibility into traffic, physical to virtual, in a uniform manner.

            Things are surely changing. This blog came as a surprise to me, but it was well worth the read and I appreciate your prompt and candid feedback.

            Regards,

            Marc

  9. Jake

    Innovate the virtual switch? That is laughable. The same and MORE features are in VMware distributed switch technology without vendor lock-in. Cisco only tries to modify any standard enough to make it proprietary on their switches. And then if connecting to a competitor's product you have to dumb everything down to talk to Cisco. If the virtual switch from Cisco is so fantastic, Cisco should be selling millions of them. Want to post the numbers on those? Or does Cisco even separate that from switches? Faster convergence times on Cisco versus Cisco. WOW, that's great! How about Cisco versus the competition? This Cisco blather just makes me. Have you even looked at IRF and the capabilities of IRF? How many consoles and command lines do you need to even troubleshoot and maintain Cisco switches. 20? I am done here. Cannot even admit that Cisco needs NSX to help them perform better by doing the hairpin turn that is VEPA…

    1. Marc Edwards

      'Want to post the numbers on those?'

      The CTO states there are over 6000 instances of the 1000v in production. With respect to lock-in, it is hypervisor-agnostic and officially supported on VMware, Hyper-V, and KVM. How many instances of NSX are in production?

      With respect to innovations: Cisco typically innovates technologies that are released to standards bodies. They become standards due to high adoption levels. Where to start on this one: HSRP (VRRP), CDP (LLDP), FabricPath (TRILL), FCoE… It is a large and growing list.

      ‘How many consoles and command lines do you need to even troubleshoot and maintain Cisco switches. 20?’

      Well, if one adopts a Nexus/UCS/1000v architecture, it would be one for Nexus and supporting FEX, one for UCS (mostly GUI-based, but also RESTful and programmatic with open APIs, or console access if needed), and one for virtual. That totals three. On that note, in coming months this will be further simplified with ACI.

      Why admit Cisco needs NSX when they have innovated technologies that already solve these traffic flow challenges?

      Regards,

      Marc

  10. Kanat

    Wow… Nice article, but I'd expect it to come from a Cisco partner engineer trying to bundle a VMware/Cisco solution…
    And it kinda goes in the opposite direction from VMware's marketing message – NSX will run on any HW and liberate you from vendor shackles.

    First off, I'd like to thank you for including some technical depth in your points; it's kind of refreshing, given these kinds of blogs are usually very fluffy and vague.

    Question Brad – how are we supposed to take this without a grain (although I'd say spoonful) of salt, in light of the fact that Cisco is not listed as an NSX HW partner and is instead going with an in-house competitive solution (ACI)?
    I understand your attempt to reassure the customer base that invested in Cisco, but I don't see any killer reasons to go for the Cisco+NSX pair (apart from the UCS platform's distinct simplified deployment features, plus perhaps OTV, if you can live with multicast).
    Can't you achieve all the above-mentioned points with other vendors' gear? Isn't that the point of NSX?

    Also, the Cisco PR machine is pretty persistent in pointing at NSX shortcomings – lack of visibility and multiple management silos. Can you refer me to any material that describes NSX functionality in those areas?

    Thank you.

    p.s. I’m ex Cisco.
