
Tag Archives: API

Barbican Consumption and Operational Maintenance

VMware Integrated OpenStack (VIO) announced official support for Barbican, the OpenStack secrets manager, in version 5.1. With Barbican, cloud operators can offer key management as a service by leveraging the Barbican API and command line (CLI) to manage X.509 certificates, keys, and passwords. The basic Barbican workflow is relatively simple: invoke the secrets-store plugin to encrypt a secret on store and decrypt it on retrieval. In addition to generic secrets management, some OpenStack projects integrate with Barbican natively to provide enhanced security on top of its base offering. This blog will introduce Barbican consumption and operational maintenance through the use of Neutron Load Balancer as a Service (LBaaS).

Understanding Policies

Barbican scopes the ownership of a secret at the OpenStack project level. For each API call, OpenStack checks that the project ID of the token matches the project ID stored as the secret owner. Further, Barbican uses roles and policies to determine access to secrets. The following roles are defined in Barbican:

  • Admin – Project administrator. This user has full access to all resources owned by the project for which the admin role is scoped.
  • Creator – Users with this role are allowed to create and delete resources. Users with this role cannot delete other users’ resources managed within the same project. They are also allowed full access to existing secrets owned by the project in scope.
  • Observer – Users with this role are allowed access to existing resources but are not allowed to upload new secrets or delete existing secrets.
  • Audit – Users with this role are only allowed access to resource metadata; they are unable to decrypt secrets.

VIO 5.1 ships with the “admin” and “creator” roles out of the box. A project member must be assigned the creator role to consume Barbican. Based on the above roles, Barbican defines a set of rules, or policies, for access control. Only operations specified by the matching rule are permitted.
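For example, a cloud administrator can grant the creator role with the standard identity CLI (the project and user names here are hypothetical):

openstack role add --project dev-project --user alice creator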

While the policy framework works well, secrets management is never one size fits all, and the framework has limitations when fine-grained control is required. Scenarios such as granting a specific user access to a particular secret, or uploading a secret that only the uploader can access, require OpenStack ACLs. Please refer to the ACL API User Guide for full details.

Supported Plugin

The Barbican key manager service leverages secret-store plugins to allow authorized users to store secrets. VIO 5.1 supports two types of plugins: simple crypto and KMIP. Only a single plugin can be active for a VIO deployment. Secret stores can be software-based, such as a software token, or hardware devices such as a hardware security module (HSM).

Simple crypto plugin

The simple crypto plugin uses a single symmetric key, stored locally on the VIO controller in the /etc/barbican/barbican.conf file, to encrypt and decrypt secrets. It stores user secrets as encrypted blobs in the local Barbican database. The reliance on a local text file and database for storage is considered insecure, so the upstream community considers the simple crypto plugin suitable for development and testing workloads only.
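For illustration, the plugin’s key lives in a barbican.conf section similar to the following sketch (the KEK shown is the well-known upstream sample value, not one to use in production):

[simple_crypto_plugin]
# Key encryption key (KEK): a base64-encoded 32-byte value
kek = 'YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY='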

Secret store KMIP plugins

The KMIP plugin stores secrets securely in an external KMIP-enabled device. Instead of storing encrypted secrets, the Barbican database maintains location references to secrets for later retrieval. Client certificate-based authentication is the recommended approach for integrating the plugin with the KMIP-enabled device.

A cloud operator must use the VIOCLI to specify a plugin:

KMIP:

sudo viocli barbican --secret-store-plugin KMIP \
  --host kmip-server --port kmip-port \
  --ca-certs ca-cert-file [--certfile local-cert-file --keyfile local-key-file --user kmip-user --password kmip-password]

Simple Crypto:

sudo viocli barbican --secret-store-plugin simple_crypto

Example Barbican Consumption:

One of the most commonly requested use cases specific to VIO is Barbican integration with Neutron LBaaS to offer HTTPS offload. This is a five-step process; we will review each step in detail.

  1. Install KMIP server (Greenfield only)
  2. Integrate KMIP using VIOCLI
  3. ACL update
  4. Workflow to create secret
  5. Workflow to create LBaaSv2

Please note: you must use the OpenStack API or CLI for step #4; Horizon support for Barbican is not available.

Install KMIP server

A production Barbican deployment requires a KMIP server. In a greenfield deployment, Dell EMC CloudLink is a popular solution that VMware vSAN customers leverage to enable vSAN storage encryption. CloudLink includes both a key management server (KMS) and the ability to control, monitor, and encrypt secrets across a hybrid cloud environment. Additional details on CloudLink are available from the VMware Solution Exchange.

Integrate KMIP using VIOCLI

To integrate with CloudLink KMS or any other KMIP-based secret store, simply log in to the VIO OMS server and issue the following VIOCLI command to configure Barbican to use the KMIP plugin:

viocli barbican --secret-store-plugin KMIP \
  --user viouser \
  --password VMware**** \
  --host <KMIP host IP> \
  --ca-certs /home/viouser/viouser_key_cert/ca.pem \
  --certfile /home/viouser/viouser_key_cert/cert.pem \
  --keyfile /home/viouser/viouser_key_cert/key.pem \
  --port 5696

Successful completion of the VIOCLI command performs the following actions:

  • Updates neutron.conf to include a Barbican-specific service_auth account.
  • Configures Barbican with the environment-specific information provided via VIOCLI.
  • Defines the Barbican service endpoints on HAProxy.
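One quick way to sanity-check the result, assuming admin credentials are loaded, is to confirm that the key-manager endpoints are registered:

openstack endpoint list --service key-manager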

ACL updates based on consumption

Neutron LBaaS relies on a Barbican service account to read the certificates and keys stored in Barbican containers and push them to a load balancer. The Barbican service user is an admin member of the service project, part of the OpenStack local domain. The default Barbican security policy does not allow an admin or member of one project to access secrets stored in a different project. In order for the Barbican service user to access and push certificates and keys, tenant users must grant access to the service account. There are two ways to allow access:

Option 1:

The tenant creator gives the Barbican service user access using the OpenStack ACL command. The cloud administrator needs to supply the UUID of the Barbican service account.

openstack acl user add -u <barbican_service_account_UUID> $(openstack secret list | awk '/ cert1 / {print $2}')

Repeat this command for each certificate, key, and container you want to provide Neutron access to.
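A sketch for finding the service account UUID, assuming the service user lives in the local domain as described above (the exact user name depends on your deployment):

openstack user list --domain local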

Option 2:

If cloud administrators are comfortable providing Neutron with access to secrets without users granting access to individual objects, they may elect to modify the Barbican policy file. With this policy change, tenants no longer need to add the Neutron Barbican service_user to every object, which makes creating TERMINATED_HTTPS listeners easier. Administrators should understand and be comfortable with the security implications of this action before implementing this approach. To perform the policy change, use a custom playbook to change the following line in the Barbican policy.json file:

From: "secret:get": "rule:secret_non_private_read or rule:secret_project_creator or rule:secret_project_admin or rule:secret_acl_read",

To: "secret:get": "rule:secret_non_private_read or rule:secret_project_creator or rule:secret_project_admin or rule:secret_acl_read or role:admin",

Please refer to my previous blog for details on custom playbooks.

Workflow to Create Secret:

This step assumes you have pre-created certificates and keys. If you have not created keys and certificates before, please refer to this blog for details, or use the self-signed sketch below. To follow the steps outlined below, make sure to name your output files accordingly (server.crt and server.key).
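If you just need a throwaway pair for testing, here is a minimal self-signed sketch (the CN is arbitrary):

openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=lb.example.com"

To upload the certificate: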

openstack secret store --name='certificate' \
  --payload="$(cat server.crt)" \
  --secret-type=passphrase

Most of the options are fairly self-explanatory; the passphrase secret type indicates plain text. Repeat the same command for the key:

openstack secret store --name='private_key' \
  --payload="$(cat server.key)" \
  --secret-type=passphrase

You can confirm by listing all secrets:
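openstack secret list

The second column of the output is the secret href, which the awk snippets below extract.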


Finally, create a TLS container pointing to both the private key and certificate secrets:

openstack secret container create --name='tls_container' --type='certificate' \
  --secret="certificate=$(openstack secret list | awk '/ certificate / {print $2}')" \
  --secret="private_key=$(openstack secret list | awk '/ private_key / {print $2}')"
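You can verify the container the same way:

openstack secret container list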

Workflow to create LBaaSv2

With the Barbican service up and running and ACLs configured to allow retrieval of secrets, let's create a load balancer that pulls the certificate and key from the KMS server. The load balancer creation workflow does not change with Barbican. When creating a listener, be sure to specify TERMINATED_HTTPS as the protocol and supply the URL of the TLS container stored in Barbican.

Please note:  

  1. If you are testing Barbican against NSX-T, NSX Manager must be running version 2.2 or higher.
  2. The example assumes pre-created test VMs, a T1 router, a logical switch, and subnets.
  • Create a TLS-enabled load balancer:

neutron lbaas-loadbalancer-create \
  $(neutron subnet-list | awk '/ {subnet name} / {print $2}') \
  --name lb1


  • Create a listener with TLS termination:

neutron lbaas-listener-create --loadbalancer lb1 \
  --protocol-port 443 \
  --protocol TERMINATED_HTTPS \
  --name listener1 \
  --default-tls-container=$(openstack secret container list | awk '/ tls_container / {print $2}')


  • Create a pool:

neutron lbaas-pool-create \
  --name pool1 \
  --protocol HTTP \
  --listener listener1 \
  --lb-algorithm ROUND_ROBIN

  • Add members:

neutron lbaas-member-create pool1 \
  --address <address1> \
  --protocol-port 80 \
  --subnet $(neutron subnet-list | awk '/ test-sub / {print $2}')

neutron lbaas-member-create pool1 \
  --address <address2> \
  --protocol-port 80 \
  --subnet $(neutron subnet-list | awk '/ test-sub / {print $2}')


You can associate a floating IP address with the load balancer VIP for services requiring external access.
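A sketch of that association, assuming an external network named ext-net (substitute the IDs reported by your own environment):

neutron lbaas-loadbalancer-show lb1          # note the vip_port_id
neutron floatingip-create ext-net            # allocate a floating IP, note its id
neutron floatingip-associate <floatingip-id> <vip-port-id>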


To test out the new LB service, simply curl the URL using the floating IP:

viouser@oms:~$  curl -k https://192.168.120.130
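To confirm the listener is actually serving the certificate uploaded to Barbican, you can also inspect the TLS handshake:

echo | openssl s_client -connect 192.168.120.130:443 2>/dev/null | openssl x509 -noout -subject -dates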


Tired of Waiting? Deploy OpenStack in 15 Minutes or Less

Watch this video to learn how to deploy OpenStack in Compact Management Mode in under 15 minutes.


If you’re ready to try VIO, take it for a spin with the Hands-on Lab, which provides a step-by-step walkthrough of deploying OpenStack in Compact Management Mode in under fifteen minutes.

Deploying OpenStack challenges even the most seasoned, skilled IT organizations, with integrations, configurations, testing, re-testing, stress testing, and more. For many, deploying OpenStack looks like an IT “science project,” in which the light at the end of the tunnel dims with each passing month.

VMware Integrated OpenStack takes a different approach, reducing the redundancy and confusion of deploying OpenStack with the new Compact Management Control Plane. With the Compact Mode UI, you wait minutes, not months. Enterprises seeking to evaluate OpenStack, or those ready to build OpenStack clouds in the most cost-efficient manner, can now deploy in as little as 15 minutes.

 

The architecture for VMware Integrated OpenStack is optimized to support compact architecture mode, reducing support needs, overall resource costs, and the operational complexity that keeps enterprises from completing their OpenStack adoption.

The most recent update to VMware Integrated OpenStack focuses on ease of use and an immense benefit to administrators: access and integration to the VMware ecosystem. The seamless integration of the VMware product family allows administrators to leverage their current VMware products to enhance their OpenStack deployment, combined with the ability to manage workloads through developer-friendly OpenStack APIs.


If you’re ready to deploy OpenStack today, download it now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-on Lab, no installation required.


You’ll be surprised what you can accomplish in 15 minutes.

Apples To Oranges: Why vSphere & VIO are Best Bets for OpenStack Adoption

OpenStack doesn’t mandate defaults for compute, network and storage, which frees you to select the best technology. For many VMware customers, the best choice will be vSphere to provide OpenStack Nova compute capabilities.

 

It is commonly asserted that KVM is the only hypervisor to use in an OpenStack deployment. Yet every significant commercial OpenStack distro supports vSphere. The reasons for this broad support are clear.

Costs for commercial KVM are comparable to vSphere. In addition, vSphere has tremendous added benefits: widely available and knowledgeable staff, vastly simplified operations, and proven lifecycle management that can keep up with OpenStack’s rapid release cadence.

 

Let’s talk first about cost. Traditional, commercial KVM has a yearly recurring support subscription price. Red Hat OpenStack Platform-Standard 2 sockets can be found online at $11,611/year, making the 3-year cost around $34,833 [i]. VMware vSphere with Operations Management Enterprise Plus (multiplied by 2 to match Red Hat’s socket-pair pricing) for 3 years, plus the $200/CPU/year VMware Integrated OpenStack SnS, is $14,863 [ii]. Even when a customer uses vCloud Suite Advanced, costs are on par with Red Hat. (Red Hat has often compared prices using VMware’s vCloud Suite Enterprise license to exaggerate cost differences.)


When 451 Research [iii] compared distro costs based on a “basket” of total costs in 2015, they found that commercial distros had a cost close to regular virtualization. And if VMware Integrated OpenStack (VIO) is the point of comparison, the costs would likely be even closer. The net-net is that cost turns out not to be a significant differentiator between commercial KVM and vSphere. This brings us to the significant technical and operational benefits vSphere brings to an OpenStack deployment.

 

In the beginning, it was assumed that OpenStack apps would build in the resiliency that used to be provided by a vSphere environment, thus allowing vSphere to be removed. As the OpenStack project has matured, capabilities such as VMware vMotion and DRS (Distributed Resource Scheduler) have risen in importance to end users. Regardless of the application, the stability and reliability of the underlying infrastructure matter.

 

There are two sets of reasons to adopt OpenStack on vSphere.

 

First, you can use VIO to quickly (minutes or hours instead of days or weeks) build a production-grade, operational OpenStack environment with the IT staff you already have, leveraging the battle-tested infrastructure your staff already knows and relies on. No other distro uses a rigorously tested combination of best-in-class compute (vSphere Ent+ for Nova), network (NSX for Neutron), and storage (VSAN for Cinder).

 

Second, only VMware, a long-time (since 2012), active (consistently a top 10 code contributor) OpenStack community member, provides BOTH the best underlying infrastructure components AND the ongoing automation and operational tools needed to successfully manage OpenStack in production.

 

In many cases, it all adds up to vSphere being the best choice for production OpenStack.

 


[i] http://www.kernelsoftware.com/products/catalog/red_hat.html
[ii] http://store.vmware.com/store/vmware/en_US/cat/ThemeID.2485600/categoryID.66071400
[iii] https://451research.com/images/Marketing/press_releases/CPI_PR_05.01.15_FINAL.pdf


This article was written by Cameron Sturdevant, Product Line Manager at VMware

Next Generation Security Services in OpenStack

OpenStack is quickly and steadily positioning itself as a great Infrastructure-as-a-Service solution for the Enterprise. Originally conceived for the proverbial DevOps cloud use case (and as a private alternative to AWS), the OpenStack framework has evolved to add rich compute, network, and storage services that fit several enterprise use cases. This evolution is evidenced by the following initiatives:

1) A higher number of commercial distributions available today, in addition to managed services and/or DIY OpenStack.
2) Diverse and expanded application and OS support beyond just cloud-native apps (a.k.a. “pets vs. cattle”).
3) Advanced network connectivity options (routable Neutron topologies, dynamic routing support, etc.).
4) More storage options from traditional Enterprise storage vendors.

This is definitely great news, but one area where OpenStack has lagged behind is security. As of today, the only robust option for application security in OpenStack is Neutron Security Groups. The basic idea is that OpenStack tenants control their own firewall rules, which are then applied and enforced in the dataplane by technologies like Linux iptables, OVS conntrack or, as is the case with NSX for vSphere, a stateful and scalable distributed firewall with vNIC-level resolution operating on each and every ESXi hypervisor.

Neutron Security Groups were designed for intra- and inter-tier L3/L4 protection within the same application environment (the so-called “East-West” traffic).
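For reference, this is the level of control Security Groups give a tenant today, an L3/L4 allow rule (the group name is hypothetical):

openstack security group rule create --ingress \
  --protocol tcp --dst-port 443 \
  --remote-ip 0.0.0.0/0 web-sg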

In addition to Neutron Security Groups, projects like Firewall-as-a-Service (FWaaS) are trying to bring next-generation security services to these OpenStack clouds, and an interesting roadmap is taking shape on the horizon. The future looks great, but until OpenStack gets there, what are the implementation alternatives available today? How can cloud architects combine the benefits of the OpenStack framework and its appealing API consumption model with security services that provide more insight and visibility into application traffic? In other words, how can OpenStack cloud admins offer next-generation security right now, beyond the basic IP/TCP/UDP inspection offered in Neutron?

The answer is: With VMware NSX.

NSX natively supports and embeds an in-kernel redirection technology called Network Extensibility, or NetX. Third party ecosystem vendors write solutions against this extensibility model, following a rigorous validation process, to deliver elegant and seamless integrations. Once the solution is implemented, the notion is simply beautiful: leverage the NSX policy language, the same language that made NSX into the de facto solution for micro-segmentation, to “punt” interesting traffic toward the partner solution in question. This makes it possible to have protocol-level visibility for East-West traffic. This approach also allows you to create a firewall rule-set that looks like your business and not like your network. Application attributes such as VM name, OS type or any arbitrary vCenter object can be used to define said policies, irrespective of location, IP address or network topology. Once the partner solution receives the traffic, then the security admins can apply deep traffic inspection, visibility and monitoring techniques to it.


How does all of the above relate to OpenStack, you may be wondering? Well, the process is extremely simple:

1) First, integrate OpenStack and NSX using the various up-streamed Neutron plugins, or better yet, get out-of-the-box integration by deploying VMware’s OpenStack distro, VMware Integrated OpenStack (VIO), which is free for existing VMware customers.
2) Next, integrate NSX and the Partner Solution in question following documented configuration best practices. The list of active ecosystem partners can be found here.
3) Proceed to create an NSX Security policy to classify the application traffic by using the policy language mentioned above. This approach follows a wizard-based provisioning process to select which VMs will be subject to deep level inspection with Service Composer.
4) Use the Security Partner management console to create protocol-level security policies, such as application level firewalling, web reputation filtering, malware protection, antivirus protection and many more.
5) Launch Nova instances from OpenStack without a Neutron Security Group attached to them. This step is critical. Remember that we are delegating security management to the Security Admin, not the Tenant. Neutron Security Groups do not apply in this context.
6) Test and verify that your security policy is applied as designed.


This all assumes that the security admin has relinquished control of the firewall from the Tenant and that all security operations are controlled by the firewall team, which is a very common Enterprise model.

There are some Neutron enhancements in the works, such as Flow Classifier and Service Chaining, that are looking to “split” security consumption between admins and tenants by promoting these redirection policies to the Neutron API layer, thus allowing a tenant (or a security admin) to selectively redirect traffic without bypassing Neutron itself. This implementation, however, is very basic compared to what NSX can do natively. We are actively monitoring this work and studying opportunities for future integration. In the meantime, the approach outlined above can be used to get the best of both worlds: the APIs you want (OpenStack) with the infrastructure you trust (vSphere and NSX).

In the next blog post we will show an actual working integration example with one of our Security Technology Partners, Fortinet, using VIO and NSX NetX technology.

Author: Marcos Hernandez
Principal Engineer, CCIE#8283, VCIX, VCP-NV
hernandezm@vmware.com
@netvirt

Issues With Interoperability in OpenStack & How DefCore is Addressing Them

Interoperability is built into the founding conception of OpenStack. But as the platform has gained popularity, it’s also become ever more of a challenge.

“There’s a lot of different ways to consume OpenStack and it’s increasingly important that we figure out ways to make things interoperable across all those different methods of consumption,” notes VMware’s Mark Voelker in a presentation to the most recent OpenStack Summit (view the slide set here).

 

Voelker, a VMware OpenStack architect and co-chair of the OpenStack Foundation’s DefCore Committee, shares the stage with OpenStack Foundation interoperability engineer Chris Hoge. Together they offer an overview of the integration challenges OpenStack faces today, and point to the work DefCore is doing to help deliver on the OpenStack vision. For anyone working, or planning to work, with VMware Integrated OpenStack (VIO), the talk is a great backgrounder on what’s being done to ensure that VIO integrates as well with non-VMware OpenStack technologies as it does with VMware’s own.

Hoge begins by outlining DefCore’s origins as a working group founded to fulfill the OpenStack Foundation mandate for a “faithful implementation test suite to ensure compatibility and interoperability for products.” DefCore has since issued five guidelines that products can be certified against, allowing them to carry the “OpenStack Powered” logo.

After explaining what it takes to meet the DefCore guidelines, Hoge reviews issues that remain unresolved. “The good news about OpenStack is that it’s incredibly flexible. There are any number of ways you can configure your OpenStack Cloud. You have your choice of hypervisors, storage drivers, network drivers – it’s a really powerful platform,” he observes. But that very richness and flexibility also makes it harder to ensure that two instances of OpenStack will work well together, he explains.

 

Among areas with issues are image operations, networking, policy and configuration discovery, API iteration, provability, and project documentation, reports Voelker. Discoverability and how to map capabilities to APIs are also a major concern, as is lack of awareness about DefCore’s guidelines. “There’s still some confusion about what kind of things people should be taking into account when they are making technical choices,” Hoge adds.

The OpenStack Foundation is therefore working to raise the profile of interoperability as a requirement and awareness of the meaning behind the “OpenStack Powered” logo. DefCore itself is interacting closely with developers and vendors in the community to address the integration challenges they’ve identified and enforce a measurable standard on new OpenStack contributions.

 

“Awareness is half the battle,” notes Voelker, before he and Hoge outline the conversations DefCore is currently leading, outcomes they’ve already achieved, and what DefCore is doing next – watch for a report on top interoperability issues soon, more work on testing, and a discussion on new guidelines for NFV-ready clouds.

 

If you are interested in how VMware Integrated OpenStack (VIO) conforms to DefCore standards, you can find more information and experts to contact on our product homepage. You can also check out our Hands-on Lab, or try VIO for yourself: download and install VMware Integrated OpenStack directly.

Simplified Certificate Management with VMware Integrated OpenStack

SSL certificates allow developers to interact with an OpenStack cloud with confidence that their communications are encrypted. In VMware Integrated OpenStack, we enable SSL encryption by default so that users can access the various endpoints securely. In addition, we make it easy to generate your certificate signing request (CSR) and to apply the certificate after it is received from your trusted Certificate Authority (CA).

When you first install VMware Integrated OpenStack, it runs with a self-signed certificate. To work with the CLI or API, you need to use the OS_CACERT parameter during authentication. In addition, your web browser will report that the identity of the site is not verified and will not trust the certificate that the OpenStack dashboard presents.
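For example, with the self-signed certificate still in place, CLI calls need to point at the CA bundle (the path here is hypothetical):

export OS_CACERT=/home/viouser/vio-ca.pem
openstack server list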

We strongly recommend that users obtain a certificate from a trusted CA for their production deployments. To that end, we make the CSR generation process easy. The user simply logs in to the VMware Integrated OpenStack management server VM via SSH and runs the following command:
sudo viocli deployment cert-req-create

The CSR will be generated and displayed on the screen. Copy the CSR output, including the “BEGIN” and “END” lines, and paste it into a file. Submit this file to your CA.

When the signed certificate is returned, use the following syntax to apply it:
sudo viocli deployment cert-update -p -f /Your_Certificate_Path/cert.crt

The VMware Integrated OpenStack automation code then proceeds to deploy the certificate for use in your environment. When the process is complete, you can use the OpenStack CLIs and APIs without the OS_CACERT attribute, and your web browsers will trust the OpenStack dashboard as shown in Figure 1.


Figure 1: SSL certificate applied to a VMware Integrated OpenStack deployment

You can learn more about VMware Integrated OpenStack on the VMware Product Walkthrough site and on the VMware Integrated OpenStack product page.