
Tag Archives: DevOps

Barbican Consumption and Operational Maintenance

VMware Integrated OpenStack (VIO) announced official support for Barbican, the OpenStack secrets manager, in version 5.1. With Barbican, cloud operators can offer Key Management as a Service by leveraging the Barbican API and command line interface (CLI) to manage X.509 certificates, keys, and passwords. The basic Barbican workflow is relatively simple: the secret-store plugin encrypts a secret when it is stored and decrypts it on retrieval. In addition to generic secrets management, some OpenStack projects integrate with Barbican natively to provide enhanced security on top of its base offering. This blog introduces Barbican consumption and operational maintenance through the use of Neutron Load Balancer as a Service (LBaaS).

Understanding Policies

Barbican scopes the ownership of a secret at the OpenStack project level. For each API call, OpenStack checks that the project ID of the token matches the project ID stored as the secret owner. Further, Barbican uses roles and policies to determine access to secrets. The following roles are defined in Barbican:

  • Admin – Project administrator. This user has full access to all resources owned by the project for which the admin role is scoped.
  • Creator – Users with this role are allowed to create and delete resources, but cannot delete other users’ resources within the same project. They are also allowed full access to existing secrets owned by the project in scope.
  • Observer – Users with this role are allowed access to existing resources but are not allowed to upload new secrets or delete existing secrets.
  • Audit – Users with this role are only allowed access to resource metadata, so they are unable to decrypt secrets.

VIO 5.1 ships with the “admin” and “creator” roles out of the box. A project member must be assigned the creator role to consume Barbican. Based on the above roles, Barbican defines a set of rules, or policies, for access control. Only operations specified by the matching rule are permitted.
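
For illustration, these rules live in Barbican’s policy.json file, which maps each API operation to the roles above. A representative excerpt (abridged from a stock policy file; your deployment’s file may differ):

"secrets:post": "rule:admin_or_creator",
"secret:delete": "rule:secret_project_admin or rule:secret_project_creator",
"secret:get": "rule:secret_non_private_read or rule:secret_project_creator or rule:secret_project_admin or rule:secret_acl_read",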

While the policy framework works well, secrets management is never one size fits all, and the policy framework has limitations when fine-grained control is required. Scenarios such as granting a specific user access to a particular secret, or uploading a secret that only the uploader can access, require OpenStack ACLs. Please refer to the ACL API User Guide for full details.

Supported Plugins

The Barbican key manager service leverages secret-store plugins to allow authorized users to store secrets. VIO 5.1 supports two types of plugins: simple crypto and KMIP. Only a single plugin can be active in a VIO deployment. Secret stores can be software-based, such as a software token, or hardware devices such as a hardware security module (HSM).

Simple crypto plugin

The simple crypto plugin uses a single symmetric key, stored locally on the VIO controller in the /etc/barbican/barbican.conf file, to encrypt and decrypt secrets. The plugin stores user secrets as encrypted blobs in the local Barbican database. The reliance on a local text file and database for storage is considered insecure, so the upstream community considers the simple crypto plugin suitable for development and testing workloads only.
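
In VIO the plugin is configured through VIOCLI (see below), but under the covers the relevant barbican.conf sections look roughly like this sketch (the kek value is a placeholder base64-encoded 32-byte key, not a real one):

[secretstore]
enabled_secretstore_plugins = store_crypto

[crypto]
enabled_crypto_plugins = simple_crypto

[simple_crypto_plugin]
# Key encryption key (KEK): a base64-encoded 32-byte value; placeholder shown.
kek = 'YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY='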

Secret store KMIP plugin

The KMIP plugin stores secrets securely in an external KMIP-enabled device. Instead of storing encrypted secrets, the Barbican database maintains location references to the secrets for later retrieval. Client certificate-based authentication is the recommended approach for integrating the plugin with the KMIP-enabled device.
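
Again as an illustrative sketch, the corresponding barbican.conf section would look something like the following (host, port, paths, and credentials are all placeholders):

[secretstore]
enabled_secretstore_plugins = kmip_plugin

[kmip_plugin]
username = 'kmip-user'
password = 'kmip-password'
host = 'kmip-server.example.com'
port = 5696
keyfile = '/path/to/certs/cert.key'
certfile = '/path/to/certs/cert.crt'
ca_certs = '/path/to/certs/LocalCA.crt'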

A cloud operator must use the VIOCLI to specify a plugin:

KMIP:

sudo viocli barbican --secret-store-plugin KMIP \
--host kmip-server --port kmip-port \
--ca-certs ca-cert-file [--certfile local-cert-file --keyfile local-key-file --user kmip-user --password kmip-password]

Simple Crypto:

sudo viocli barbican --secret-store-plugin simple_crypto

Example Barbican Consumption:

One of the most commonly requested use cases specific to VIO is Barbican integration with Neutron LBaaS to offer HTTPS offload. This is a five-step process, and we will review each step in detail.

  1. Install KMIP server (Greenfield only)
  2. Integrate KMIP using VIOCLI
  3. ACL update
  4. Workflow to create secret
  5. Workflow to create LBaaSv2

Please note, you must leverage the OpenStack API or CLI for step #4. Horizon support for Barbican is not available.

Install KMIP server

A production Barbican deployment requires a KMIP server. In a greenfield deployment, Dell EMC CloudLink is a popular solution that VMware vSAN customers leverage to enable vSAN storage encryption. CloudLink includes both a key management server (KMS) and the ability to control, monitor, and encrypt secrets across a hybrid cloud environment. Additional details on CloudLink are available from the VMware Solution Exchange.

Integrate KMIP using VIOCLI

To integrate with CloudLink KMS or any other KMIP-based secret store, simply log in to the VIO OMS server and issue the following VIOCLI command:

Configure Barbican to use the KMIP plugin.

viocli barbican --secret-store-plugin KMIP \
--user viouser \
--password VMware**** \
--host <KMIP host IP> \
--ca-certs /home/viouser/viouser_key_cert/ca.pem \
--certfile /home/viouser/viouser_key_cert/cert.pem \
--keyfile /home/viouser/viouser_key_cert/key.pem --port 5696

Successful completion of the VIOCLI command performs the following actions:

  • Updates neutron.conf to include a Barbican-specific service_auth account.
  • Records the Barbican environment-specific information provided via VIOCLI.
  • Defines the Barbican service endpoints on HAProxy.

ACL updates based on consumption

Neutron LBaaS relies on a Barbican service account to read certificates and keys stored in Barbican containers and push them to a load balancer. The Barbican service user is an admin member of the service project, part of the OpenStack local domain. The default Barbican security policy does not allow an admin or member of one project to access secrets stored in a different project. For the Barbican service user to access and push certificates and keys, tenant users must grant access to the service account. There are two ways to allow access:

Option 1:

The tenant creator gives the Barbican service user access using the OpenStack ACL command. The cloud administrator needs to supply the UUID of the Barbican service account.

openstack acl user add -u <barbican_service_account_UUID> $(openstack secret list | awk '/ cert1 / {print $2}')

Repeat this command for each certificate, key, and container you want to give Neutron access to; a small loop makes this less tedious, as sketched below.
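
A minimal sketch, assuming the secrets are named certificate and private_key as in the workflow below (the names and the service account UUID are placeholders):

# Barbican service account UUID (placeholder).
SERVICE_USER=<barbican_service_account_UUID>

# Grant the service user read access to each named secret.
for name in certificate private_key; do
  ref=$(openstack secret list | awk -v n="$name" '$0 ~ (" " n " ") {print $2}')
  openstack acl user add -u "$SERVICE_USER" "$ref"
done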

Option 2:

If cloud administrators are comfortable providing Neutron with access to secrets without users granting access to individual objects, they may elect to modify the Barbican policy file instead. Implementing this policy change means that tenants won’t need to add the Neutron Barbican service_user to every object, which makes the process of creating TERMINATED_HTTPS listeners easier. Administrators should understand and be comfortable with the security implications of this action before implementing this approach. To perform the policy change, use a custom playbook to change the following line in the Barbican policy.json file:

From:

"secret:get": "rule:secret_non_private_read or rule:secret_project_creator or rule:secret_project_admin or rule:secret_acl_read",

To:

"secret:get": "rule:secret_non_private_read or rule:secret_project_creator or rule:secret_project_admin or rule:secret_acl_read or role:admin",

Please refer to my previous blog on custom playbooks.

Workflow to Create Secret:

This step assumes you have pre-created certificates and keys. If you have not created keys and certificates before, please refer to this blog for details. To follow the steps outlined below, make sure to name your output files accordingly (server.crt and server.key). To upload a certificate:

openstack secret store --name='certificate' \
--payload="$(cat server.crt)" \
--secret-type=passphrase

Most of the options are fairly self-explanatory; the passphrase secret type indicates a plain-text payload. Repeat the same command for the key:

openstack secret store --name='private_key' \
--payload="$(cat server.key)" \
--secret-type=passphrase

You can confirm by listing all secrets with the openstack secret list command.

Finally, create a TLS container pointing to both the private key and certificate secrets:

openstack secret container create --name='tls_container' --type='certificate' \
--secret="certificate=$(openstack secret list | awk '/ certificate / {print $2}')" \
--secret="private_key=$(openstack secret list | awk '/ private_key / {print $2}')"
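
To sanity-check the container before wiring it into a listener, the Barbican client can list and show containers; a quick sketch:

# List containers and fetch the tls_container reference.
openstack secret container list
openstack secret container get $(openstack secret container list | awk '/ tls_container / {print $2}')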

Workflow to create LBaaSv2

With the Barbican service up and running and ACLs configured to allow retrieval of secrets, let’s create a load balancer and have it pull the certificate and key from the KMS server. The load balancer creation workflow does not change with Barbican. When creating a listener, be sure to specify TERMINATED_HTTPS as the protocol and supply the URL of the TLS container stored in Barbican.

Please note:

  1. If you are testing Barbican against NSX-T, NSX-MGR must be running version 2.2 or higher.
  2. The example assumes pre-created test VMs, a T1 router, a logical switch, and subnets.

  • Create a TLS-enabled LB:

neutron lbaas-loadbalancer-create \
$(neutron subnet-list | awk '/ {subnet name} / {print $2}') \
--name lb1

  • Create a listener with TLS:

neutron lbaas-listener-create --loadbalancer lb1 \
--protocol-port 443 \
--protocol TERMINATED_HTTPS \
--name listener1 \
--default-tls-container=$(openstack secret list | awk '/ tls_container / {print $2}')

  • Create pool:

neutron lbaas-pool-create \
--name pool1 \
--protocol HTTP \
--listener listener1 \
--lb-algorithm ROUND_ROBIN

  • Add members:

neutron lbaas-member-create pool1 \
--address <address1> \
--protocol-port 80 \
--subnet $(neutron subnet-list | awk '/ test-sub / {print $2}')

neutron lbaas-member-create pool1 \
--address <address2> \
--protocol-port 80 \
--subnet $(neutron subnet-list | awk '/ test-sub / {print $2}')

You can associate a floating IP address with the load balancer VIP for services requiring external access.
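
A minimal sketch of that association, assuming an external network named ext-net and that you have looked up the load balancer’s VIP port UUID (both are placeholders):

# Allocate a floating IP from the external network.
neutron floatingip-create ext-net

# Associate the floating IP with the load balancer's VIP port.
neutron floatingip-associate <floating-ip-uuid> <vip-port-uuid>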

To test out the new LB service, simply curl the URL using the floating IP:

viouser@oms:~$ curl -k https://192.168.120.130

VIO Speed Challenge – Can a New Guy Get a Production Quality OpenStack Setup Up and Running in Under Three Hours?

This article describes an OpenStack setup. To put it into context, I recently joined VMware as a Technical Marketing Manager responsible for VMware Integrated OpenStack technical product marketing and field enablement. After a smooth onboarding process (accounts, benefits, etc.), my first task was to get the lab environments I inherited up and running again. I came from a big OpenStack shop where deployment automation was built in house using both Puppet and Ansible, so my first questions were: Where’s the site hiera data? What are the settings for the Cobbler environment? Which playbook do I run to spin up the controller VM nodes? Once a VM is up, which playbooks are responsible for deploying and configuring keystone, nova, cinder, neutron, and so on? How do I seed the environment once OpenStack is configured and running? It was always a multi-person, multi-week effort, and everyone maintained a cheat sheet loaded with deployment caveats and workarounds (a long list, since more features mean more complexity, more caveats, and slower time to market). Luckily, with VMware Integrated OpenStack all those decision points and complexity are abstracted away from the Cloud Admin. Really, the only decision to make is:

  • Restore from a database backup – the restore process can be found here.
  • Redeploy and import.

Redeploy seemed more interesting, so I decided to give it a shot (I will save restore for a different blog if there is interest).

Note: Redeploy will not clean up vSphere resources. You can re-import VM resources from vCenter once the deployment completes; refer to the instructions here.

Lab Topology:

The lab environment consists of three ESXi clusters:

  • Management (vCenter, VIO, vRealize Log Insight, vRealize Operations Manager, NSX Manager, NSX Controllers, etc.)
  • Edge (OpenStack neutron tenant resources – NSX Edge)
  • Production (where tenant VMs reside)

From a networking perspective, a single instance of NSX-v spans all three clusters.


VIO OpenStack Deployment:

VMware Integrated OpenStack can be deployed in two ways: Full HA or Compact. Compact mode was introduced in 3.0 and requires significantly fewer hardware resources and less memory than Full HA mode; all OpenStack components can be deployed in two VMs. Enterprise-grade OpenStack high availability can still be achieved by taking advantage of the HA capabilities of the vSphere infrastructure, which makes Compact mode useful for multiple small deployments. Full HA mode provides high availability at the application layer. The HA deployment consists of a dedicated management node (aka the OMS node), three database nodes, two HAProxy nodes, and two controller nodes. Ceilometer can be enabled after completion of the VIO base stack deployment. Full HA mode is what I chose to deploy.

Since this environment was new to me, I wanted to avoid playing detective and spending days trying to reverse engineer the entire setup. Luckily, VIO has a built-in export configuration capability. Simply navigate to the OpenStack deployment (Home -> VMware Integrated OpenStack -> OpenStack Deployments), and all OpenStack settings can be exported with a single click.


In some ways the exported configuration file is similar to traditional site hiera data, except much simpler to consume. The data is in JSON format, so it is easy to read, and only information specific to my OpenStack setup is included. I no longer need to worry about kernel/TCP settings, interfaces to configure for management vs. data, NIC bonding, driver settings, haproxy endpoints, etc. Even the Ansible inventory is automatically created and managed based on deployment data. This is because VIO is designed to work out of the box with the VMware suite of products, allowing Cloud Admins to focus on delivering SLAs instead of maintaining thousands of hard-to-remember key/value pairs. Advanced users can still look into inventory and configuration parameter details: the majority of deployment settings are maintained on the OMS node. (Please note: those settings are intended for viewing only and should not be modified without VMware support. The OMS node is primarily used as a starting point for troubleshooting, running viocli commands, and SSHing to the other nodes.)


With the configuration saved, I went ahead and deleted the old deployment and clicked the Deploy OpenStack link to redeploy.


The process to re-deploy a VIO OpenStack cluster is extremely simple: select an exported template to pre-fill the configuration settings.


The remaining deployment processes are well documented via other VMware blogs. References can be found here.

The entire Full HA mode deployment process took slightly over 50 minutes because of an unexpected NSX disk error that prevented neutron from starting; in a clean environment the deployment took 30 minutes. Compact mode users should expect deployment to take as little as 15 minutes.


Create VM and Test External Connectivity:

Once deployment is complete, simply create an L2 private network and test that VMs can boot successfully in the default tenant project. Note that in order for a VM to connect externally, an external network needs to be created, the private and external networks must be attached to a NAT router, and a floating IP must be requested and associated with the test VM. This is all extremely simple, as VIO is 100% based on DefCore-compliant OpenStack APIs on top of VMware’s best-of-breed SDDC infrastructure. Two neutron commands are all that’s needed to create an external network, roughly as sketched below.
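
A sketch of those two commands (the network name, CIDR, and addresses are placeholders for this lab):

# Create the external network and mark it as external.
neutron net-create ext-net --router:external

# Give it a subnet with a static allocation pool and no DHCP.
neutron subnet-create ext-net 192.168.120.0/24 --name ext-subnet \
--allocation-pool start=192.168.120.100,end=192.168.120.200 \
--gateway 192.168.120.1 --disable-dhcp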


A floating IP is a fancy OpenStack word for NAT. In most OpenStack implementations, NAT translation happens in the neutron tenant router. VIO neutron tenant routers can be deployed in three different modes: centralized exclusive, centralized shared, or distributed. Performance aside, the biggest difference between centralized and distributed mode is the number of control VMs deployed to support the logical routing function. Distributed mode requires two NSX control plane router VMs: a distributed logical router (dLR) instance for optimized East-West traffic between hypervisors, and a second instance to take care of North-South external traffic flow via NAT. A single NSX control VM is required for centralized mode, where all routed traffic (N-S and E-W) flows through a central NSX Edge Services Gateway (ESG). Performance and scale requirements will determine which mode to choose; centralized shared is the default behavior. Marcos Hernandez has written an excellent blog on NSX networking and VIO, which can be found here.

An enterprise-grade NSX NAT router can be created via three neutron commands: one command to create the router, and two commands to attach the corresponding neutron networks, as sketched below.
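
A sketch of the three commands (the router and subnet names are placeholders):

# Create the tenant router.
neutron router-create router1

# Set the external network as the router's gateway (enables NAT).
neutron router-gateway-set router1 ext-net

# Attach the private subnet to the router.
neutron router-interface-add router1 private-subnet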


Once the router is created, simply allocate a floating IP and associate it with the test VM instance, along the lines of the sketch below.
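
A sketch (the port UUID belongs to the test VM’s neutron port and is a placeholder; you can find it with neutron port-list):

# Allocate a floating IP from the external network.
neutron floatingip-create ext-net

# Map the floating IP to the test VM's neutron port.
neutron floatingip-associate <floating-ip-uuid> <vm-port-uuid>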


The entire process from deployment to external VM reachability took me less than 3 hours in total, not bad for a new guy!

Deploying and configuring a production-grade OpenStack environment traditionally takes weeks, even for highly skilled DevOps engineers specializing in deployment automation, with in-depth knowledge of repo and package management and strong Linux system administration skills. Coming up with the right CI/CD process to support new features and align with upstream within a release is extremely difficult, and it results in snowflake environments.

The VIO approach changes all that. I’m impressed with what VMware has done to abstract away the traditional complexities involved in deploying, supporting, and maintaining an enterprise-grade OpenStack environment. Leveraging VMware’s suite of products in the backend, an enterprise-grade OpenStack can be deployed and configured in hours rather than weeks. If you haven’t already, make sure to give VMware Integrated OpenStack a try: you will save a tremendous amount of time and meet the most demanding requests of application developers, while providing the highest SLA possible. Download now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-On-Lab, no installation required.

Tired of Waiting? Deploy OpenStack in 15 Minutes or Less

Watch this video to learn how to deploy OpenStack in Compact Management Mode in under 15 minutes.


If you’re ready to try VIO, take it for a spin with the Hands-on Labs, which provide a step-by-step walkthrough of how to deploy OpenStack in Compact Management Mode in under fifteen minutes.

Deploying OpenStack, with its integrations, configurations, testing, re-testing, stress testing, and more, challenges even the most seasoned, skilled IT organizations. For many, deploying OpenStack appears to be an IT ‘science project’, wherein the light at the end of the tunnel dims with each passing month.

VMware Integrated OpenStack takes a different approach, reducing the redundancies and confusion of deploying OpenStack with the new Compact Management Control Plane. In the Compact Mode UI, you wait minutes, not months. Enterprises seeking to evaluate OpenStack, or those ready to build OpenStack clouds in the most cost-efficient manner, now have the ability to deploy in as little as 15 minutes.

 

The VMware Integrated OpenStack architecture is optimized to support compact architecture mode, reducing the need for support, overall resource costs, and the operational complexity keeping an enterprise from completing its OpenStack adoption.

The most recent update to VMware Integrated OpenStack focuses on ease of use and an immense benefit to administrators – access to and integration with the VMware ecosystem. The seamless integration of the family of VMware products allows administrators to leverage their current VMware products to enhance their OpenStack, in combination with the ability to manage workloads through developer-friendly OpenStack APIs.

If you’re ready to deploy OpenStack today, download it now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-On-Lab, no installation required.


You’ll be surprised what you can accomplish in 15 minutes.

Next Generation Security Services in OpenStack

OpenStack is quickly and steadily positioning itself as a great Infrastructure-as-a-Service solution for the Enterprise. Originally conceived for the proverbial DevOps Cloud use case (and as a private alternative to AWS), the OpenStack framework has evolved to add rich Compute, Network, and Storage services to fit several enterprise use cases. This evolution is evidenced by the following initiatives:

1) A higher number of commercial distributions are available today, in addition to Managed Services and/or DIY OpenStack.
2) Diverse and expanded application and OS support vs. just Cloud-Native apps (a.k.a. “pets vs. cattle”).
3) Advanced network connectivity options (routable Neutron topologies, dynamic routing support, etc.).
4) More storage options from traditional Enterprise storage vendors.

This is definitely great news, but one area where OpenStack has lagged behind is security. As of today, the only robust option for application security offered in OpenStack is Neutron Security Groups. The basic idea is that OpenStack Tenants can be in control of their own firewall rules, which are then applied and enforced in the dataplane by technologies like Linux iptables, OVS conntrack or, as is the case with NSX for vSphere, a stateful and scalable Distributed Firewall with vNIC-level resolution operating on each and every ESXi hypervisor.

Neutron Security Groups were designed for intra- and inter-tier L3/L4 protection within the same application environment (the so-called “East-West” traffic).

In addition to Neutron Security Groups, projects like Firewall-as-a-Service (FWaaS) are also trying to onboard next generation security services onto these OpenStack Clouds, and there is an interesting roadmap taking shape on the horizon. The future looks great, but while OpenStack gets there, what are the implementation alternatives available today? How can Cloud Architects combine the benefits of the OpenStack framework and its appealing API consumption model with security services that provide more insight and visibility into the application traffic? In other words, how can OpenStack Cloud admins offer next generation security right now, beyond the basic IP/TCP/UDP inspection offered in Neutron?

The answer is: With VMware NSX.

NSX natively supports and embeds an in-kernel redirection technology called Network Extensibility, or NetX. Third party ecosystem vendors write solutions against this extensibility model, following a rigorous validation process, to deliver elegant and seamless integrations. Once the solution is implemented, the notion is simply beautiful: leverage the NSX policy language, the same language that made NSX into the de facto solution for micro-segmentation, to “punt” interesting traffic toward the partner solution in question. This makes it possible to have protocol-level visibility for East-West traffic. This approach also allows you to create a firewall rule-set that looks like your business and not like your network. Application attributes such as VM name, OS type or any arbitrary vCenter object can be used to define said policies, irrespective of location, IP address or network topology. Once the partner solution receives the traffic, then the security admins can apply deep traffic inspection, visibility and monitoring techniques to it.


How does all of the above relate to OpenStack, you may be wondering? Well, the process is extremely simple:

1) First, integrate OpenStack and NSX using the various up-streamed Neutron plugins, or better yet, get out-of-the-box integration by deploying VMware’s OpenStack distro, VMware Integrated OpenStack (VIO), which is free for existing VMware customers.
2) Next, integrate NSX and the Partner Solution in question following documented configuration best practices. The list of active ecosystem partners can be found here.
3) Proceed to create an NSX Security Policy that classifies the application traffic using the policy language mentioned above. This is a wizard-based provisioning process in Service Composer, where you select which VMs will be subject to deep-level inspection.
4) Use the Security Partner management console to create protocol-level security policies, such as application level firewalling, web reputation filtering, malware protection, antivirus protection and many more.
5) Launch Nova instances from OpenStack without a Neutron Security Group attached to them. This step is critical: remember that we are delegating security management to the Security Admin, not the Tenant, so Neutron Security Groups do not apply in this context.
6) Test and verify that your security policy is applied as designed.


This all assumes that the Tenant has relinquished control of the firewall to the security admin and that all security operations are controlled by the firewall team, which is a very common Enterprise model.

There are some Neutron enhancements in the works, such as Flow Classifier and Service Chaining, that are looking to “split” the security consumption between admins and tenants by promoting these redirection policies to the Neutron API layer, thus allowing a Tenant (or a Security admin) to selectively redirect traffic without bypassing Neutron itself. This implementation, however, is very basic compared to what NSX can do natively. We are actively monitoring this work and studying opportunities for future integration. In the meantime, the approach outlined above can be used to get the best of both worlds: the APIs you want (OpenStack) with the infrastructure you trust (vSphere and NSX).

In the next blog post we will show an actual working integration example with one of our Security Technology Partners, Fortinet, using VIO and NSX NetX technology.

Author: Marcos Hernandez
Principal Engineer, CCIE#8283, VCIX, VCP-NV
hernandezm@vmware.com
@netvirt

VMware Integrated OpenStack 3.0 Announced. See What’s In It

On 9/30/2016, VMware announced VMware Integrated OpenStack 3.0 at VMworld in Las Vegas. We are truly excited about our latest OpenStack distribution, which gives our customers the new features and enhancements included in the latest Mitaka release, an optimized management control plane architecture, and the ability to leverage existing workloads in your OpenStack cloud.

VIO 3.0 is available for download here (login may be required).

New features include:

  • OpenStack Mitaka Support
    VMware Integrated OpenStack 3.0 customers can leverage the great features and enhancements in the latest OpenStack release. Mitaka addresses manageability, scalability, and an improved user experience. To learn more about the Mitaka release, visit the OpenStack.org site at https://www.openstack.org/software/mitaka/
  • Easily Import Existing Workloads
    The ability to directly import vSphere VMs into OpenStack and run critical Day 2 operations against them via OpenStack APIs enables you to quickly move existing development projects or production workloads to the OpenStack framework.
  • Compact Management Control Plane
    Building on enhancements from previous releases, organizations looking to evaluate OpenStack, or to build OpenStack clouds for branch locations quickly and cost effectively, can easily deploy in as little as 15 minutes. The VMware Integrated OpenStack 3.0 architecture has been optimized to support a compact architecture mode that dramatically reduces the infrastructure footprint, saving resource costs and overall operational complexity.

If you are at VMworld 2016 in Las Vegas, we invite you to attend the following sessions to hear how our customers are using VMware Integrated OpenStack and to learn more details about this great release.

VMware Integrated OpenStack 3.0

VMworld 2016 VMware Integrated OpenStack Sessions:

  • MGT7752 – OpenStack in the Real World: VMware Integrated OpenStack 3.0 Customer Panel
  • MGT7671 – What’s New in VMware Integrated OpenStack Version 3.0!
  • NET8109 – Amadeus’s Journey Building a Software-Defined Data Center with VMware Integrated OpenStack and NSX
  • NET8343 – OpenStack Networking in the Enterprise: Real-Life Use Cases
  • NET8832 – The Role of VIO and NSX in Virtualizing the Telecoms Infrastructure
  • SEC9618-SPO – Deep Dive: Extending L4-L7 Security Controls for VMware NSX and VMware Integrated OpenStack (VIO) Environments with Fortinet Next Generation

Try VMware Integrated OpenStack Today

Sign up to be notified when VMware Integrated OpenStack 3.0 is available.

OpenStack Summit 2016 Re-Cap – Speeding Up Developer Productivity with OpenStack and Open Source Tools

Some developers may avoid utilizing an OpenStack cloud, despite the advantages they stand to gain, because they have already established automation workflows using popular open source tools with public cloud providers.

But in a joint presentation at the 2016 OpenStack Summit, VMware Senior Technical Marketing Manager Trevor Roberts Jr. and VMware NSX Engineering Architect Scott Lowe explain how developers can take advantage of OpenStack clouds using the same open source tools that they already know.

Their talk runs through a configuration sequence, including image management, dev/test, and production deployment, showing how standard open source tools that developers already use for non-OpenStack deployments can run in exactly the same way with an OpenStack cloud. In this case, they discuss using Packer for image building, Vagrant and Docker Machine for software development and testing, and Terraform for production grade deployments.

“This is a way of using an existing tool that you’re already comfortable and familiar with to start consuming your OpenStack cloud,” says Roberts, before demoing an OpenStack customized image build with Packer and then using that image to create and deploy an OpenStack-provisioned instance with Vagrant.

 

Lowe next profiles Docker Machine, which provisions instances of the Docker Engine for testing, and shows how you can use Docker Machine to spin up instances of Docker inside an OpenStack cloud.

Lastly, Lowe demos Terraform, which offers an infrastructure-as-code approach to production deployments on multiple platforms (it’s similar to Heat for OpenStack), creating an entire OpenStack infrastructure in a single step, including a new network and router, and launching multiple new instances with floating IP addresses for each, ready for pulling down containers as required.

 

“It’s very simple to use these development tools with OpenStack,” adds Roberts. “It’s just a matter of making sure your developers feel comfortable with the environment and letting them know there are plugins out there – you don’t have to keep using other infrastructures.”

As Roberts notes in the presentation, VMware itself is “full in on OpenStack – we contribute to projects that aren’t even in our own distribution just to help out the community.” Meanwhile, VMware’s own OpenStack solution – VMware Integrated OpenStack (VIO) – offers a DefCore-compliant OpenStack distribution specifically configured to use open source drivers to manage VMware infrastructure technologies, further aiding the adoption process for developers already familiar with vSphere, vRealize, and NSX.

 

For more information on VIO, check out the VMware Integrated OpenStack (VIO) product homepage, or the VIO Hands-on Lab. If you hold a current license for vSphere Enterprise Plus, vSphere Operations Management, or vSphere Standard with NSX Advanced, you can download VIO for free.

VMware Integrated OpenStack Video Series: Heat Orchestration

OpenStack includes an orchestration service (Heat) that allows users to define their application infrastructure via one or more template files. Users can either leverage the native OpenStack Heat Orchestration Template (HOT) format or the Amazon Web Services (AWS) CloudFormation format.

You may be wondering, “What’s the point of using Heat when I already have access to the OpenStack APIs/CLIs for automation purposes?” Well, a significant benefit of using Heat is infrastructure lifecycle management.

Let’s discuss what that means by examining the virtual infrastructure that could be used to host a multi-tier application that consists of a web server, an application server, and a database server.

Multi-Tier Application Infrastructure

It is reasonable to simply use the nova API directly to deploy the three instances in this infrastructure. However, there are other application components to consider. Most likely, these instances will be on private networks (perhaps one network per application tier). The application developer also needs to account for the router that connects to the outside world and the floating IP that will be assigned to the web server so that users can access the application.

So, with this simple application infrastructure, the number of components is already piling up:

  • Three instances
  • One router
  • Three tenant networks
  • One floating IP

Making one-off API/CLI calls to deploy these components is fine during development. However, what happens when you’re ready to go to production? What if performance tests show that our deployment requires multiple instances at each infrastructure tier?

It would be great to have a single deployment mechanism to provision the application infrastructure from detailed, static files that leave zero room for error. Due to the simplicity of the YAML format, your HOT files can also be used as a documentation source for IT operations runbooks. These are just a couple of the benefits that come from using Heat for your application infrastructure deployments.
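
To give a flavor of the format, here is a minimal HOT sketch for a single tier; the image and flavor defaults and the resource names are placeholders, not values from the video:

heat_template_version: 2016-04-08

description: Minimal single-tier sketch (names and defaults are placeholders)

parameters:
  image:
    type: string
    default: ubuntu-16.04
  flavor:
    type: string
    default: m1.small

resources:
  web_net:
    type: OS::Neutron::Net

  web_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: web_net }
      cidr: 10.0.1.0/24

  web_port:
    type: OS::Neutron::Port
    properties:
      network: { get_resource: web_net }

  web_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - port: { get_resource: web_port }

outputs:
  web_ip:
    description: Fixed IP address of the web server
    value: { get_attr: [web_server, first_address] }

The whole stack can then be created, updated, and deleted as one unit, for example with heat stack-create web-tier -f web-tier.yaml.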

The following video provides a detailed walkthrough of using the OpenStack orchestration service.

 

Stay tuned for the next installment covering OpenStack security groups! In the meantime, you can learn more on the VMware Product Walkthrough site and on the VMware Integrated OpenStack product page.