
Tag Archives: NSX

The Future For Network Engineers: Where We Are Today

A little over 7 years ago, shortly after joining VMware as a Network Virtualization Engineer, I published a blog post where I speculated on the possible evolution of the Network and Security Engineer roles, both key positions within IT staffs around the world tasked with designing, deploying and maintaining datacenter networks and firewalls.

 

The post generated some lighthearted controversy, as evidenced by the passionate comments you can read below that article. As a side note, you should know that I am super friendly with most of the folks who opined on my thoughts, as we keep meeting in the field, for various reasons.

 

Given that a lot has happened since that post, I thought it would be a good idea to reflect on the statements I made back then, check if they were right, or not, and take a guess again in terms of what the future holds. Let’s start with the concept of Network Virtualization itself and then explore some of the other predictions:

 

Is Network Virtualization a reality today?

 

Network Virtualization, this notion that one can simulate a network topology different from the physical one in order to provision connectivity for applications a lot faster, WAS a reality even at the time of my post. For years, networking vendors had been providing solutions that leveraged various encapsulation techniques, in order to propagate L2 or L3 traffic over trusted or untrusted transit networks (think “datacenter” vs. “the Internet”). IPsec, MPLS, VXLAN, Nicira’s Stateless TCP Transport (STT), etc., were all means used to achieve this goal, covering a wide array of use cases and justifications.

 

What we did at VMware was different though, and very revolutionary. We decided to decouple these encapsulation techniques from the network hardware and offer a virtual fabric that would work on top of literally any physical fabric, irrespective of the vendor who provided it. It is because of this decision that today you can use NSX to extend consistent application connectivity across clouds (private, hybrid and public), form factors (VMs running on multiple hypervisors, bare metal, containers and cloud native) and locations (datacenter, remote office or edge). The industry momentum behind SD-WAN, with VMware as one of the leaders in this space, is also proof that Network Virtualization goes beyond the datacenter.

 

What about security? What is the current state of affairs?

 

In my post, I imagined a world in which a particular security posture, let’s call it “a micro-segmentation policy”, could be defined and applied to target workloads regardless of cloud, location or format. Has that promise materialized in 2020?

 

Fortunately, the answer is yes.

 

Today, you can use NSX to define a security policy that is aligned with your business intent or compliance requirements, and then enforce it without worrying about where the application lives. I routinely demonstrate this multi-cloud support by showing a top-level security ruleset that looks and works the same for on-prem, AWS, Azure and some of the other public cloud offerings. What is more important, we now see technology that leverages the power of Machine Learning (ML) and Artificial Intelligence (AI) to help automate the creation and dissemination of such policies, while providing additional capabilities like malware and anomaly detection, next-gen antivirus and compliance attestation. Examples of these offerings are: vRealize Network Insight (holistic network and security visibility), NSX Intelligence (distributed analytics for providing granular network security policy recommendations), VMware Carbon Black Cloud (cloud-based analytics for providing endpoint security and protection) and VMware Secure State (cloud-native compliance engine).

 

Even more significant is the fact that this security is intrinsic. This means security is built-in and not bolted-on. These policies are embedded in the infrastructure (they are agentless), and they live, evolve and are decommissioned following the same lifecycle of the application. If an application is created, so is its security posture. If the application changes, or moves, so does its security posture. And finally, when an app is destroyed, the policy is automatically removed. From an operations perspective, this model has proven to be more efficient, less error-prone and obviously, more consistent than the alternatives.

 

What about the role of the Network Engineer itself? How has that changed?

 

In terms of how the role of a Network Engineer has evolved, I am going to go ahead and say that I was spot-on. This might sound like a boast, but bear with me.

 

Network Engineers, in particular Network Virtualization Engineers, have adopted operational models aligned with the core principles of DevOps and have acquired skills that leverage modern instrumentation, which allows them to create, manage and troubleshoot connectivity, security and elasticity policies in a consistent and repeatable manner. Current Network Engineers understand the application geometry and treat the network infrastructure that supports it as code. Furthermore, the rapid adoption of microservices has catapulted the importance of the Network Engineer role. The distributed nature of a microservices architecture means that a network is required to efficiently connect all these disparate services. Who better than an expert in networking to help design and operate the fabric that ties them all together?

 

VMware has open source and commercial solutions for all of the above: providers for the most popular DevOps frameworks (Terraform, Ansible, PowerShell, vRealize Automation, OpenStack Neutron, Public Cloud IaaS, Kubernetes and several others), and Service Mesh solutions, like Tanzu Service Mesh for automatic service discovery, service-to-service encryption, multi-cloud federation, observability and Service Level Objective (SLO) tracking, all leveraging the revolutionary concept of Global Namespaces.

 

So, was I right? If so, what’s next?

 

I think that my predictions were pretty accurate. The reason is very simple: my predictions were not mine alone. I rely on a fantastic team of thought leaders, amazing engineers and sales staff who have all helped forge our own path. When you have access to this talent and this passion for an industry, your hard work pays off. This is why I believe that we have influenced our own destiny. This outcome is, in a way, a self-fulfilling prophecy.

 

In terms of what’s next, I will leave you with a teaser. Come join me and Dr. Bruce Davie, at VMworld 2020. For several years now, I have helped build and present the demos that accompany his daring predictions and thoughts with regards to our industry. The name of the breakout is, very apropos, “The Future of Networking with VMware NSX” and you can find it on the VMworld 2020 Content Catalog.

 

So maybe in another 7 years I will be checking in with you again to see if what we anticipated today becomes a reality then. In the meantime, keep investing in your network expertise, and keep innovating.

 

Marcos Hernandez

Chief Technologist, Network and Security

VMware

OpenStack and Kubernetes Better Together

Virtual machines and containers are two of my favorite technologies.  In today’s DevOps-driven environment, delivering applications as microservices allows an organization to ship features faster.   Splitting a monolithic application into multiple portable, container-based fragments is at the top of most organizations’ digital transformation strategies.   Virtual machines, delivered as IaaS, have been around since the late 90s; they abstract hardware to offer enhanced capabilities in fault tolerance, programmability, and workload scalability.  While enterprise IT shops large and small are scrambling to refactor applications into microservices, the reality is that IaaS is proven and often used to complement container-based workloads:

1). We’ve always viewed the IaaS layer as an abstraction of the infrastructure that provides a standard way of managing and consolidating disparate physical resources. Resource abstraction is one of the many reasons most containers today run inside virtual machines.

2). Today’s distributed applications consist of both cattle and pets.  Without overly generalizing, pet workloads tend to be “hand fed” and often have significant dependencies on legacy operating systems that aren’t container compatible.  As a result, for most organizations, pet workloads will continue to run as VMs.

3). While there are considerable benefits to containerizing NFV workloads, current container implementations are not sufficient to meet 100% of NFV workload needs.  See the IETF report for additional details.

4). The ability to “right size” the container host for dev/test workloads, where multiple environments are required to perform different kinds of testing.

Rather than being mutually exclusive, the two technologies have proven over time to complement each other.   As long as there are legacy workloads and better ways to manage and consolidate sets of diverse physical resources, virtual machines (IaaS) will co-exist with and complement containers.

OpenStack IaaS and Kubernetes Container Orchestration:

It’s a multi-cloud world, and OpenStack is an important part of the mix. From the datacenter to NFV, thanks to the richness of its vendor-neutral API, OpenStack clouds are being deployed to meet organizations’ needs for public-cloud-like IaaS consumption in a private datacenter.   OpenStack is also a perfect complement to K8S, providing underlying services that are outside the scope of K8S.  Kubernetes deployments can in most cases leverage the same OpenStack components to simplify the deployment and developer experience:


1). Multi-tenancy:  Create K8S cluster separation leveraging OpenStack Projects. Development teams have complete control over cluster resources in their own project and zero visibility into other development teams’ projects.

2). Infrastructure usage based on hardware separation:  IT departments often act as the central broker for development teams across the entire organization. If development team A funded X servers and team B funded Y, the OpenStack scheduler can ensure K8S cluster resources always map to the hardware allocated to the respective teams.

3).  Infrastructure allocation based on quota:  Deciding how much of your infrastructure to assign to different use cases can be tricky.  Organizations can leverage the OpenStack quota system to control infrastructure usage.

4). Integrated user management:  Since most K8S developers are also IaaS consumers, leveraging the Keystone backend simplifies user authentication for K8S clusters and namespace sharing.

5). Container storage persistence:  Since K8S pods are not durable, storage persistence is a requirement for most stateful workloads.   When leveraging the OpenStack Cinder backend, storage volumes are re-attached automatically after a pod restarts (on the same or a different node).

6). Security:  VMs and containers will continue to co-exist for the majority of enterprise and NFV applications, so providing uniform security enforcement is critical.   Leveraging Neutron’s integration with industry-leading SDN controllers such as VMware NSX-T can simplify container security insertion and implementation.

7). Container control plane flexibility: K8S high availability requires load-balanced multi-master and scalable worker nodes.  When integrated with OpenStack, this is as simple as leveraging LBaaSv2 for master node load balancing.  Worker nodes can scale up and down using tools native to OpenStack.  With VMware Integrated OpenStack, K8S worker nodes can also scale vertically using the VM live-resize feature.
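Several of the items above map directly onto standard OpenStack CLI operations. A minimal sketch, assuming an authenticated admin session; the project and user names are hypothetical, and the exact role name varies by release:

```shell
# Hypothetical names throughout; run with admin credentials sourced.
openstack project create k8s-team-a                                     # (1) tenant separation
openstack quota set --instances 20 --cores 80 --ram 163840 k8s-team-a   # (3) quota control
openstack user create --project k8s-team-a --password changeme k8s-dev  # (4) Keystone identity
openstack role add --project k8s-team-a --user k8s-dev member           # role name varies by release
```

A K8S cluster deployed into `k8s-team-a` then inherits the project’s quota and Keystone identity boundaries automatically.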

Next Steps:

I will leverage the VMware Integrated OpenStack (VIO) implementation to provide examples of this perfect match made in heaven. This blog is part 1 of a 4-part series:

1). OpenStack and Containers Better Together (This Post)

2). How to Integrate your K8S  with your OpenStack deployment

3). Treat Containers and VMs as “equal class citizens” in networking

4). Integrate common IaaS and CI / CD tools with K8S

Making OpenStack Neutron Better for Everyone

This blog post was created by Scott Lowe, VMware Engineering Architect in the Office of the CTO. Scott is an SDN expert and a published author. You can find more information about him at http://blog.scottlowe.org/

Additional comments and reviews: Xiao Gao, Gary Kotton and Marcos Hernandez.


In any open source project, there’s often a lot of work that has to happen “in the background,” so to speak, out of the view of the users that consume that open source project. This work often involves improvements in the performance, modularity, or supportability of the project without the addition of new features or new functionality. Sometimes this work is intended to help “pay down technical debt” that has accumulated over the life of the project. As a result, users of the project may remain blissfully unaware of the significant work involved in such efforts. However, the importance of these “invisible” efforts cannot be overstated.

One such effort within the OpenStack community is called neutron-lib (more information is available here). In a nutshell, neutron-lib is about two things:

  1. It aims to build a common networking library that Neutron and all Neutron sub-projects can leverage, with the eventual goal of breaking all dependencies between sub-projects.
  2. It aims to pay down accumulated technical debt in the Neutron project by refactoring and enhancing code as it is moved into this common library.
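To make the refactoring concrete: a Neutron sub-project that once imported private Neutron internals would, after this effort, pull shared definitions from the common library instead. An illustrative sketch — module paths and availability vary by release:

```python
# Before (coupled to Neutron's internal modules -- illustrative):
#   from neutron.common import exceptions
# After: shared definitions come from the common library.
from neutron_lib import constants
from neutron_lib import exceptions


class SamplePluginError(exceptions.NeutronException):
    """A sub-project exception built on the shared base class."""
    message = "Something went wrong in the sample plugin."
```

Because the sub-project no longer reaches into `neutron.*` directly, Neutron’s internals can be refactored without breaking it.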

To a user—using that term in this instance to refer to anyone using the OpenStack Neutron code—this doesn’t result in visible new features or functionality. However, this is high-priority work that benefits the entire OpenStack community, and benefits OpenStack overall by enhancing the supportability and stability of the code base over the long term.

Why do we bring this up? Well, it’s recently come to my attention that people may be questioning VMware’s commitment to the OpenStack projects. Since they don’t see new features and new functionality emerging, users may think that VMware has simply moved away from OpenStack.

Nothing could be further from the truth. VMware is deeply committed to OpenStack, often in ways, like the neutron-lib effort, that are invisible to users of OpenStack. It can be easy at times to overlook a vendor’s contributions to an open source project when those contributions don’t directly result in new features or new functionality. Nevertheless, these contributions are critically important for the long-term success and viability of the project. It’s not glorious work, but it’s important work that benefits the OpenStack community and OpenStack users.

Being a responsible member of an open source community means not only doing the work that garners lots of attention, but also doing the work that needs to be done. Here at VMware, we’re striving to be responsible members of the OpenStack community, tackling efforts, in conjunction and close cooperation with the community, that not only benefit VMware but that benefit the OpenStack community, the ecosystem, and the users.

In a future post, I’ll focus on some of the contributions VMware is making that will result in new functionality or new features. Until then, if you’d like more information, please visit http://www.vmware.com/products/openstack.html or contact us and follow us on Twitter @VMware_OS

Finally, don’t forget to visit our booth at the OpenStack Summit in Boston, May 8-12 2017.

VMware Integrated OpenStack 3.1 GA. What’s New!

VMware announced general availability (GA) of VMware Integrated OpenStack 3.1 on Feb 21 2017. We are truly excited about our latest OpenStack distribution, which gives our customers enhanced stability on top of the Mitaka release and a streamlined user experience with Single Sign-On support through VMware Identity Manager.   For OpenStack cloud admins, the 3.1 release also delivers enhanced integrations that let them take further advantage of battle-tested vSphere infrastructure and operations tooling, providing enhanced security, OpenStack API performance monitoring, brownfield workload migration, and seamless upgrade between central and distributed OpenStack management control planes.


VIO 3.1 is available for download here.  New features include:

  • Support for the latest versions of VMware products. VMware Integrated OpenStack 3.1 supports and is fully compatible with VMware vSphere 6.5, VMware NSX for vSphere 6.3, and VMware NSX-T 1.1.   To learn more about vSphere 6.5, visit here; for NSX for vSphere 6.3 and NSX-T, visit here.
  • NSX Policy Support in Neutron. NSX administrators can define security policies that the OpenStack Cloud Admin shares with cloud users. Users can either create their own rules, bounded by the predefined ones that can’t be overridden, or use only the predefined rules, depending on the policy set by the OpenStack Cloud Admin.  The NSX provider policy feature allows infrastructure admins to enable enhanced security insertion and ensure all workloads are developed and deployed based on standard IT security policies.
  • New NFV features. Further expanding on VIO 3.0’s ability to leverage existing workloads in your OpenStack cloud, you can now import vSphere VMs with NSX network backing into VMware Integrated OpenStack.  The ability to import vSphere VM workloads into OpenStack and run critical Day 2 operations against them via OpenStack APIs enables you to quickly move existing development projects or production workloads to the OpenStack framework.  VM import steps can be found here.  In addition, full passthrough support using VMware DirectPath I/O is available.
  • Seamless update from compact mode to HA mode. If you are updating from VMware Integrated OpenStack 3.0 that is deployed in compact mode to 3.1, you can seamlessly transition to an HA deployment during the update. Upgrade docs can be found here.
  • Single Sign-On integration with VMware Identity Manager. You can now streamline authentication for your OpenStack deployment by integrating it with VMware Identity Manager.  SSO integration steps can be found here.
  • Profiling enhancements.  Instead of writing data into Ceilometer, OpenStack OSprofiler can now leverage vRealize Log Insight to store profile data. This approach provides enhanced scalability for OpenStack API performance monitoring. Detailed steps on enabling OpenStack Profiling can be found here.

Try VMware Integrated OpenStack Today


How To Efficiently Derive Value From VMware Integrated OpenStack

One of our recent posts on this blog covered how VMware Integrated OpenStack (VIO) can be deployed in less than 15 minutes thanks to an easy and friendly wizard-driven deployment.

That post also mentioned recent updates to VIO that focus on its ease of use and consumption, including integration with your existing vSphere environment.

 

This article will explore in greater detail the latter topic and will focus on two features that are designed to help customers start deriving value from VIO quickly.


vSphere Templates as OpenStack Images

 

An OpenStack cloud without any image is like a physical server without an operating system – not so useful!

 

One of the first elements you want to seed your cloud with is images so users/developers can start building applications. In a private cloud environment, cloud admins will want to expose a list of standard OS (Operating System, not OpenStack…) images to be used to that end, in other words OS master images.

 

When VIO is deployed on top of an existing vSphere environment, these OS master images are generally already present in the virtualization layer as vSphere templates, and a great deal of engineering hours has gone into creating and configuring those images to reflect the specific needs of a given organization in terms of security, compliance or regulatory requirements – OS hardening, customization, agent installation, etc.

 

What if you were able to reuse those vSphere templates and turn them into OpenStack images and hence preserve all of your master OS configurations across all of your cloud deployments?

VIO supports this capability out of the box (see diagram below) and enables users to leverage their existing vSphere templates by adding them to their OpenStack deployment as Glance images, which can then be booted as OpenStack instances or used to create bootable Cinder volumes.

 

The beauty of this feature is that it works without copying the template into the Glance datastore. The media exists in only one place (the original datastore where the template is stored); VIO creates a “pointer” from the OpenStack image object to the vSphere template, saving us from the tedious and possibly lengthy process of copying media from one location to another (OS images tend to be pretty large in corporate environments).

 

This feature is available through the Glance CLI only. Here are the high-level steps that need to be performed to create an image:

– First: create an OpenStack image

– Second: note the image ID and specify a location pointing to the vSphere template

– Third: the new image, called “corporate-windows-2012-r2” in this example, will show up in the Images section of the Horizon dashboard, from which instances can be launched.

Note: cloud admins will have to make sure those OS images have the cloud-init package installed before they can be fully used in the OpenStack environment. If cloud-init needs to be installed, this can be done either before or after the import into Glance.
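As a sketch of the steps above using the Glance CLI: the image name, the ID placeholder and the vi:// location URL below are all illustrative — consult the official configuration guide for the exact syntax in your release.

```shell
# Step 1: create an (empty) OpenStack image object.
glance image-create --name corporate-windows-2012-r2 \
    --disk-format vmdk --container-format bare
# Step 2: point the image at the existing vSphere template instead of
# uploading media (<image-id> comes from the previous command's output).
glance location-add <image-id> \
    --url "vi://<vcenter-server>/<datacenter>/vm/<template-name>"
```

After that, the image appears in Horizon like any other Glance image, but no media was copied.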

Run the video below for a detailed tutorial on the configuration steps, including CLI commands:

Finally, here’s the section in the official configuration guide: http://tinyurl.com/hx4z4jt


Importing vSphere VMs into OpenStack

 

A frequent request from customers deploying VIO on their existing vSphere implementation is “Can I import my existing VMs into my OpenStack environment?”

 

The business rationale for this request is that IT wants to be consistent and offer a similar level of service and user experience to both the new applications deployed through the OpenStack framework as well as the existing workloads currently running under a vSphere management plane “only”. They basically want users in charge of existing applications to enjoy capabilities such as self-service, lifecycle management, automation, etc…and hence avoid creating a two-tier IT offering.

 

VIO supports this capability by allowing users to quickly import vSphere VMs into VIO and start managing them as instances through standard OpenStack APIs. This feature is also available through CLI only and leverages the newly released VMware DCLI toolset.

 

Here are the high-level steps for importing an existing VM under OpenStack:

– First, list the “Unmanaged” VMs in vCenter (i.e., unmanaged by VIO)

– Import one of those VMs into a specific project/tenant in OpenStack

– The system will then generate a UUID for the newly created instance and the instance will show up in Horizon where it can be managed like any other running one.
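As a rough sketch, the flow above might look like this in DCLI — note that the command namespaces and options below are illustrative placeholders, not verified syntax; the authoritative commands are in the VIO documentation:

```shell
# Illustrative only -- check the VIO docs for the exact DCLI namespaces.
dcli com vmware vio vm unmanaged list           # list VMs not managed by VIO
dcli com vmware vio vm unmanaged importvm \
    --vm <vm-identifier> --tenant-mapping <openstack-project>
```

Once the import completes, the UUID returned by the system is the Nova instance ID you will see in Horizon.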


We hope you enjoyed reading this article and that those features will make you want to go ahead and discover VIO!

If you’re ready to deploy OpenStack today, download it now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-On-Lab, no installation required.


This article was written by Hassan Hamade, a Cloud Solution Architect at VMware in the EMEA SDDC technology practice team.

VMware Integrated OpenStack (VIO) and NSX – Free webinar

On November 2nd, at 9 am PST/12 pm EST I will be hosting a free webinar, as part of our Getting More Out Of series, focused on OpenStack integration with VMware NSX. We will also cover how easy it is to deploy VMware’s OpenStack distribution, VIO, and the Day-2 operational tools in the VMware SDDC portfolio that make it possible for cloud admins to properly monitor and optimize their OpenStack deployments.

As I mentioned in my 3-part blog series on Neutron-NSX integration, as more features are added to OpenStack (especially to Neutron), its architecture becomes more complex (a universal perception amongst OpenStack users). NSX mitigates these issues and provides a scalable and robust platform that incorporates Enterprise-grade network services into your OpenStack architecture.

Come and learn more about the benefits and participate in this unapologetically technical webinar, with live Q&A with our expert panel.

 

REGISTER NOW!


Also, don’t forget to visit us at the OpenStack Summit in Barcelona, where we will be talking about this and many other relevant topics.

¡Nos vemos en España! (See you in Spain!)

Marcos

Advanced Security Services with Neutron, NSX and Palo Alto Next Generation Firewall

Building on the concepts and implementation that I have been working on for the past few weeks around service chaining in Neutron, this post will now focus on how to onboard the Palo Alto Next Generation Firewall platform onto OpenStack.

Palo Alto Networks has one of the most mature and robust integrations with VMware NSX and we also share many joint customers in production. Together, we have seen tremendous success in the market, and that success can now extend to those prospects wanting to do OpenStack, while augmenting their security strategy with the added visibility and protection that Palo Alto offers.

The basic tenets for this integration between Palo Alto Networks and VMware NSX, in the context of an OpenStack deployment, remain the same:

  • The Security/Firewall Team is in complete control of the security lifecycle of the tenant apps.
  • Although not mandatory, Provider Networks are preferred in this context over Tenant Networks.
  • Tenants use the OpenStack API to consume Compute and Storage Services, while Networking and Security remain under the control of the Cloud Admins or Central IT.
  • This model is relatively common in the Enterprise, but not common in the DevOps use case where Tenants control their own network and security workflows.

If the above prerequisites are met, one can safely implement the VMware NSX + Palo Alto integration and overlay OpenStack Neutron on top, offering a complete private Cloud deployment that incorporates advanced security controls for East-West traffic. VMware NetX is the glue holding everything together.

Here is the high level workflow:

  • Integrate VMware NSX and Palo Alto Networks following best practices and recommended software versions for NSX, Panorama and the PAN VM Series. The instructions to do this can be found here.
  • Deploy VMware Integrated OpenStack 3.0 (if it hasn’t been done already) or any OpenStack distribution compatible with the Mitaka release, using VMware vSphere and VMware NSX as the underlying infrastructure components.
  • Identify the Compute clusters that will host your OpenStack workloads and deploy Palo Alto network introspection to those clusters:

screenshot1

screenshot2

  • Ensure the Service VMs (local firewalls) are properly registered and licensed in Panorama:

screenshot3

  • Create an NSX Security Group with classification criteria that meet your needs. In this example we are using the proverbial criterion based on VM name (Name Contains Web in this case):

screenshot5

screenshot6

  • In Panorama, create a Dynamic Address Group for the OpenStack Instances, that corresponds to the NSX Security Group created in the previous step:

screenshot5-1

  • Then, in Panorama, create the Policy you want to apply to the redirected traffic:

screenshot6-8

  • Back on NSX, create a redirection policy, or Partner Security Rule, for the interesting traffic that will be subject to inspection (Network Introspection). In this example we are redirecting inbound HTTP/HTTPS traffic for additional security controls:

Note 1: You will also need to create DFW rules to allow the traffic that will be redirected, as these rules are applied prior to the redirection for outbound traffic (VM >> World) and are applied after redirection for inbound traffic (World >> VM). More details on how these flows move through the Hypervisor can be found on the NSX Design Guide.

Note 2: You may need to use the “ApplyTo” Field in NSX to limit the redirection policy to the specific VMs in question.

  • Finally, you can use OpenStack Nova to boot Instances (VMs) that satisfy the membership criteria of the appropriate NSX Security Group. It is extremely important that you DO NOT attach a Neutron Security Group to these Instances. We are bypassing self-service security provisioning in OpenStack and delegating all security controls to the Firewall Team.

Note 3: If you are using Horizon (OpenStack GUI), you may need to detach the default Neutron Security Group after you launch your Instance(s).

screenshot7

Note 4: Another approach, not covered in this document, has to do with the manipulation of the policy.json file for Neutron, in order to restrict Security Group changes or additions by anyone other than the Admin. In this case, launching Instances without a Neutron Security Group attachment is not required, as the Neutron Security Group that is used would only be modified by said Admin.
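For reference, the policy.json approach from Note 4 might use a fragment like the one below; the rule names mirror Neutron’s stock policy file, but the exact entries vary by release, so treat this as illustrative:

```json
{
  "create_security_group": "rule:admin_only",
  "update_security_group": "rule:admin_only",
  "delete_security_group": "rule:admin_only",
  "create_security_group_rule": "rule:admin_only",
  "delete_security_group_rule": "rule:admin_only"
}
```

With these overrides in place, tenants can still launch instances normally, but only the admin can shape the security groups attached to them.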

  • Verify your configuration and security policies.

As we can see, the above approach safely integrates value-add security and visibility services into OpenStack today, and showcases the power of NSX as a platform for Private Cloud deployments based on OpenStack.

Follow these links for the previous two articles in our 3-part blog series:

Part 1: Next Generation Security Services in OpenStack

Part 2: Advanced Security Services in OpenStack and Fortinet

VMware and Palo Alto Networks will be discussing this and many other interesting topics at VMworld Europe 2016 in Barcelona, Spain. Don’t forget to swing by the Palo Alto Networks booth in the Solutions Exchange if you need more information.

Next Generation Security Services in OpenStack – Part 2

In a previous post, I described the high level steps a security admin would follow to onboard NetX redirection services onto an existing OpenStack deployment. If your OpenStack implementation is based on Mitaka and you are running VMware NSX, it is possible to launch instances with no Neutron Security Group association, which under normal circumstances would either blackhole all traffic or allow all traffic, depending on the way you configured the default section of the NSX Distributed Firewall. Prior to Mitaka, the same operation is possible by attaching the instance to a pre-existing Neutron port, but this post focuses on the Mitaka capabilities.

The example below assumes that the following conditions are met which, based on customer discussions, apply to a considerable number of IT organizations looking at OpenStack:

  1. The security team has taken all security control away from the tenant.
  2. Tenants are not able to provision Neutron Security Groups or modify them (this is possible via policy.json manipulation, but outside of the scope of this article).
  3. All Firewall/security operations are owned by the security team.

The example below is based on Fortinet’s FortiGate-VMX Next Generation Firewall, a virtual firewall solution built from the ground up that seamlessly integrates with VMware NSX for vSphere.

Step 1: Integrate FortiGate-VMX with NSX vSphere

The instructions on how to do this can be found here. Once the solution is deployed and operational, you will see it under Service Definitions and in the NSX Service Deployments tab:

screenshot0

screenshot1

Step 2: Create an NSX Security Group and select a classification/membership criteria for the OpenStack VMs

Follow the standard process to create Security Groups in NSX. In this example, we created a Security Group called Virtual_DMZ and we are classifying VMs with the dynamic membership criterion VM Name Contains the word “Web”:

[Screenshot: Virtual_DMZ Security Group with its dynamic membership criteria]
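Conceptually, this dynamic membership evaluation is a substring match over the VM inventory. The sketch below only illustrates the behavior; NSX evaluates these criteria internally, and the names and matching semantics here are assumptions:

```python
def security_group_members(vm_names, keyword="Web"):
    """Return the VMs whose name contains the keyword, mimicking the
    NSX 'VM Name Contains' dynamic membership criterion."""
    return [name for name in vm_names if keyword in name]

# Example inventory: only the "Web" VMs land in Virtual_DMZ.
inventory = ["Web-01", "Web-02", "db-01", "app-01"]
members = security_group_members(inventory)
print(members)  # ['Web-01', 'Web-02']
```

New instances that satisfy the criterion are picked up automatically, with no per-VM configuration.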

It is important to note that Security Groups created post-integration will appear on the FortiGate Services Management console, under Addresses, with the exact names defined in NSX. These Address Groups automatically capture the IP addresses of the corresponding VMs. Notice that OpenStack Security Groups are not populated in the Fortinet SVM:

[Screenshot: NSX Security Groups synchronized as Address Groups in the FortiGate console]

This two-way communication between the solutions makes it extremely convenient to keep naming conventions and visibility fully synchronized, leading to a consistent security policy.

Step 3: Create a Security Policy with the appropriate redirection rules and apply it to the NSX Security Group for the OpenStack VMs

An NSX Security Policy combines L3/L4 East-West firewall rules enforced by the NSX Distributed Firewall with Network Introspection rules controlled by Fortinet. In this example, our policy redirects everything to Fortinet’s VMX appliance. In a real-life deployment you would see a combination of DFW rules and NGFW rules, where NSX handles the bulk of the traffic using in-kernel protection, while the partner solution handles the high-risk, “interesting” traffic that needs to be inspected at L5-L7:

[Screenshot: NSX Security Policy with Network Introspection redirection rules]

Step 4: Use the partner management console to manipulate your application-level security policies and deep traffic inspection

Fortinet’s SVM can be used to create and enforce rich security services for the redirected traffic (antivirus policy shown below):

[Screenshot: Antivirus policy configured in the FortiGate SVM]

Step 5: Launch the VIO/OpenStack instances without a Neutron Security Group

The OpenStack instances need to satisfy the membership criteria of the NSX Security Group (VM name contains “Web” in this example) and must be launched with no Neutron Security Group attachments. This last step is crucial to maintaining the integrity of your security posture:
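One way to picture this step is the Nova server-create request body: the desired posture depends on the security_groups key being absent, so membership in the NSX Security Group is driven purely by the VM name. The exact behavior of an omitted key can vary by release and plugin, so treat this shape as an assumption and verify it against your deployment:

```python
import json

def build_server_request(name, image_ref, flavor_ref, network_id):
    """Build a minimal Nova 'create server' body with NO security_groups key.
    The instance name must satisfy the NSX membership criteria instead."""
    return {
        "server": {
            "name": name,                      # must contain "Web" in this example
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
            "networks": [{"uuid": network_id}],
            # Note: no "security_groups" entry on purpose.
        }
    }

body = build_server_request("Web-03", "image-uuid", "flavor-uuid", "net-uuid")
assert "security_groups" not in body["server"]
print(json.dumps(body, indent=2))
```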

[Screenshot: launching an instance in Horizon with no Security Group selected]

[Screenshot: the instance running with no Neutron Security Group association]

Step 6: Verify your security policy is working as expected

Take some time to confirm that the redirection rules are punting traffic correctly and that your deep traffic inspection is working as intended.

In a future post we will examine other partner solutions working in concert with NSX and VMware Integrated OpenStack.

Conclusion

The aforementioned approach combines the benefits of the OpenStack framework and its appealing API consumption model with security services that provide deeper insight and visibility into application traffic, using the advanced service integration in NSX. For all of this to work, the security team needs to own all controls of the application security policies, which is a very common Enterprise model.

For more information, you can watch a recorded version of this content (including a demo) co-presented with Fortinet at VMworld 2016 in Las Vegas.

Marcos Hernandez – CCIE #8283, VCIX, VCP-NV 

Twitter: @netvirt

 

Next Generation Security Services in OpenStack

OpenStack is quickly and steadily positioning itself as a great Infrastructure-as-a-Service solution for the Enterprise. Originally conceived for the proverbial DevOps Cloud use case (and as a private alternative to AWS), the OpenStack framework has evolved to add rich Compute, Network and Storage services that fit several enterprise use cases. This evolution is evidenced by the following initiatives:

1) A higher number of commercial distributions are available today, in addition to Managed Services and/or DIY OpenStack.
2) Diverse and expanded application and OS support, beyond just Cloud-Native apps (a.k.a. “pets vs. cattle”).
3) Advanced network connectivity options (routable Neutron topologies, dynamic routing support, etc.).
4) More storage options from traditional Enterprise storage vendors.

This is definitely great news, but one area where OpenStack has lagged behind is security. As of today, the only robust option for application security offered in OpenStack is Neutron Security Groups. The basic idea is that OpenStack Tenants can be in control of their own firewall rules, which are then applied and enforced in the dataplane by technologies like Linux iptables, OVS conntrack or, as is the case with NSX vSphere, a stateful and scalable Distributed Firewall with vNIC-level resolution operating on each and every ESXi hypervisor.

Neutron Security Groups were designed for intra and inter-tier L3/L4 protection within the same application environment (the so-called “East-West” traffic).
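As a rough mental model (not the actual enforcement code of any of these dataplanes), a Security Group boils down to a default-deny list of L3/L4 match rules applied per vNIC:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class SGRule:
    protocol: str     # "tcp" or "udp"
    port: int         # destination port
    remote_cidr: str  # allowed source prefix

# Illustrative Security Group: the web tier accepts HTTP from anywhere,
# MySQL only from the app tier subnet (East-West).
web_sg = [
    SGRule("tcp", 80, "0.0.0.0/0"),
    SGRule("tcp", 3306, "10.0.2.0/24"),
]

def allowed(protocol, dst_port, src_ip, rules):
    """Default-deny: traffic passes only if some rule matches."""
    return any(
        r.protocol == protocol
        and r.port == dst_port
        and ip_address(src_ip) in ip_network(r.remote_cidr)
        for r in rules
    )

print(allowed("tcp", 80, "198.51.100.7", web_sg))   # True
print(allowed("tcp", 3306, "198.51.100.7", web_sg)) # False (only the app tier is allowed)
```

Note that everything here is IP/port arithmetic; nothing in this model understands the application protocol, which is precisely the gap next generation services fill.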

In addition to Neutron Security Groups, projects like Firewall-as-a-Service (FWaaS) are also trying to onboard next generation security services onto these OpenStack Clouds, and there is an interesting roadmap taking shape on the horizon. The future looks great, but while OpenStack gets there, what are the implementation alternatives available today? How can Cloud Architects combine the benefits of the OpenStack framework and its appealing API consumption model with security services that provide more insight and visibility into the application traffic? In other words, how can OpenStack Cloud admins offer next generation security right now, beyond the basic IP/TCP/UDP inspection offered in Neutron?

The answer is: With VMware NSX.

NSX natively supports and embeds an in-kernel redirection technology called Network Extensibility, or NetX. Third-party ecosystem vendors write solutions against this extensibility model, following a rigorous validation process, to deliver elegant and seamless integrations. Once the solution is implemented, the notion is simply beautiful: leverage the NSX policy language, the same language that made NSX the de facto solution for micro-segmentation, to “punt” interesting traffic toward the partner solution in question. This makes it possible to have protocol-level visibility into East-West traffic. This approach also allows you to create a firewall rule-set that looks like your business and not like your network. Application attributes such as VM name, OS type or any arbitrary vCenter object can be used to define said policies, irrespective of location, IP address or network topology. Once the partner solution receives the traffic, the security admins can apply deep traffic inspection, visibility and monitoring techniques to it.

[Diagram: NSX NetX in-kernel redirection to the partner solution]

How does all of the above relate to OpenStack, you may be wondering? Well, the process is extremely simple:

1) First, integrate OpenStack and NSX using the various upstreamed Neutron plugins, or better yet, get out-of-the-box integration by deploying VMware’s OpenStack distro, VMware Integrated OpenStack (VIO), which is free for existing VMware customers.
2) Next, integrate NSX and the Partner Solution in question following documented configuration best practices. The list of active ecosystem partners can be found here.
3) Proceed to create an NSX Security policy to classify the application traffic by using the policy language mentioned above. This approach follows a wizard-based provisioning process to select which VMs will be subject to deep level inspection with Service Composer.
4) Use the Security Partner management console to create protocol-level security policies, such as application level firewalling, web reputation filtering, malware protection, antivirus protection and many more.
5) Launch Nova instances from OpenStack without a Neutron Security Group attached to them. This step is critical. Remember that we are delegating security management to the Security Admin, not the Tenant. Neutron Security Groups do not apply in this context.
6) Test and verify that your security policy is applied as designed.

[Screenshot]

This all assumes that the security admin has taken over control of the firewall from the Tenant and that all security operations are controlled by the firewall team, which is a very common Enterprise model.

There are some Neutron enhancements in the works, such as Flow Classifier and Service Chaining, that are looking to “split” the security consumption between admins and tenants by promoting these redirection policies to the Neutron API layer, thus allowing a Tenant (or a Security admin) to selectively redirect traffic without bypassing Neutron itself. This implementation, however, is very basic compared to what NSX can do natively. We are actively monitoring this work and studying opportunities for future integration. In the meantime, the approach outlined above can be used to get the best of both worlds: the APIs you want (OpenStack) with the infrastructure you trust (vSphere and NSX).

In the next blog post we will show an actual working integration example with one of our Security Technology Partners, Fortinet, using VIO and NSX NetX technology.

Author: Marcos Hernandez
Principal Engineer, CCIE#8283, VCIX, VCP-NV
hernandezm@vmware.com
@netvirt

OpenStack Summit 2016 Re-Cap – Amadeus’ OpenStack Journey: Building a Private Cloud with VMware Integrated OpenStack and NSX.

How does a company build a private enterprise cloud using VMware Integrated OpenStack and NSX? You’ll find a great example in this 2016 OpenStack Summit presentation by VMware NSX product manager Sai Chaitanya and Arthur Knopper, associate director of the Amadeus IT Group.

The Amadeus IT Group is a multi-national IT service provider to the global travel industry with over 3 billion euros in revenue. Two years ago it embarked on a transformation project to modernize its infrastructure.

In their talk, Chaitanya and Knopper outline some of the business drivers for the project, which included readying their infrastructure to deploy next generation, cloud-native applications based on containers, and building an entirely new, highly reliable hotel guest reservation system using Red Hat OpenShift PaaS.

Those drivers established a set of business requirements, such as speeding service delivery, instigating end-to-end automation and ensuring 99.999% service uptime, along with technical requirements that included a fault-resilient application architecture based on OpenShift and Kubernetes, and fast and automatic provisioning using OpenStack Heat.

Knopper details the variety of options (public cloud, alternative service providers etc.) that Amadeus considered for meeting their requirements. But their best option, he explains, was to build a product architecture featuring an underlying VMware infrastructure running OpenStack loads via VIO and NSX.

VMware’s technical reliability and the support it offered were crucial factors, says Knopper, as was Amadeus’ ability to leverage its existing experience with vSphere to get the project moving quickly.

 
The results have been impressive. Where it used to take weeks to bring up an application, Knopper notes, “with the solution we have at hand, this has been reduced down to around 50 minutes.” The new approach delivers the fault tolerance required and lets Amadeus deliver more frequent updates to their end users.

The talk winds up with suggestions for best practices for building private OpenStack clouds with VIO based on Amadeus’ experience, and an outline of their plans for continued technical improvement in partnership with VMware.

“What’s really important for success with OpenStack is having a clear driver for what you are trying to do, and then translating that into clear requirements,” emphasizes Chaitanya in conclusion. “Then if you have a very clear execution plan and break it into phases, your chances of success are high.”

 

To try VMware Integrated OpenStack for yourself, check out our Hands-on Lab, or download and install VMware Integrated OpenStack direct.