Author Archives: Adrian Roberts

VMware Cloud on AWS Base Reference Architecture for Managed Service Providers

Reference Architecture

VMware Cloud on AWS is an on-demand cloud service that enables you to run applications consistently across VMware vSphere-based cloud environments on AWS’s global infrastructure, with access to a broad range of native AWS services. Powered by VMware Cloud Foundation, this service integrates vSphere, vSAN and NSX along with VMware vCenter management, and is optimized to run on dedicated, elastic, bare-metal AWS infrastructure. With this service, IT teams can manage their cloud-based resources with familiar VMware tools and processes wherever they are running.

With the recent release of VMware Cloud on AWS through the Managed Service Provider program, MSPs can now add the cloud service to their portfolio and offer it to their end-customers, together with advanced consulting and managed services, to help accelerate successful adoption of the platform.

The reference architecture represents a base on which to offer VMware Cloud on AWS as part of your broader VMware Cloud Provider Platform services and to integrate with your end-customers’ on-premises datacenter environments. The solution leverages IPsec VPN connectivity for both the management and compute layers, but could easily be adapted to leverage L2 VPN connectivity or AWS Direct Connect where required.

Offering VMware Cloud on AWS through the MSP program gives both the cloud provider and their end-customers additional choice, flexibility, geographical coverage and elastic scalability in a pay-as-you-go model. This, coupled with the advanced services available from native AWS, makes VMware Cloud on AWS a fantastic choice to expand your managed services portfolio.

Integrating into cloud provider platform hosted vDCs

The Cloud Provider Platform is the core cloud services platform that the cloud provider offers to their end-customers to consume compute, containers, networking, security, storage, applications, disaster recovery, backup and recovery, and so on.

The CPP platform is based on the same technologies as VMware Cloud on AWS (vSphere, NSX and vSAN), with the addition of vCloud Director for multi-tenancy. With vCloud Director, each Virtual Datacenter is connected to an edge gateway for north/south network routing and advanced networking services. The cloud provider can connect the end-customer’s edge services gateway to the VMware Cloud on AWS compute gateway over either layer 2 (L2 VPN) or layer 3 (IPsec VPN). In the reference architecture we have leveraged layer 3 (IPsec VPN) for simplicity.

Integrating into managed on-premises environments

The on-premises environment simply needs to be running VMware vSphere and have access to a VPN termination point. The VPN termination point can either be something that exists in the end-customer’s environment or we can leverage the NSX standalone edge device to provide VPN services.

To support advanced features such as hybrid linked mode, the on-premises vSphere version needs to be vSphere 6.0 Update 3 patch c or later.

Building a professional services portfolio

Once the cloud provider has a reference topology for connecting customers into their newly provisioned VMware Cloud on AWS SDDCs, they can start to think about which professional services to deliver in order to accelerate customer on-boarding and success with the cloud service.

Here are a few examples:

  • Connectivity and readiness – helping your customers connect their networks into the target environment, leveraging their existing investments.
  • Architecture and design – supporting your customers in architecting their cloud deployments to maximize their business impact.
  • Develop, deploy and build – supporting your customers in enhancing their development lifecycles, environment management, build processes, application modernization and so on.
  • Plan and migrate – supporting your customers in on-boarding workloads to the new VMware Cloud on AWS SDDC environments.

Building a managed services portfolio

A key differentiator of working with a cloud provider is being able to take advantage of their advanced managed services portfolio, which can now be extended across VMware Cloud on AWS.

Here are a few examples:

  • Application support – as well as providing support for the VMware Cloud on AWS environment, the MSP can offer advanced support and SLAs across their customers’ applications.
  • Patching and lifecycle – supporting customers with patching and lifecycle management of their applications.
  • Proactive reporting – plugging the service into existing OSS and BSS systems to offer advanced capacity and performance reports.
  • Operate and optimize – operating the whole environment on the customer’s behalf and optimizing it for cost and performance.

Architecting for the core MSP use-cases

VMware Cloud on AWS is a unique cloud service that enables many use-cases addressing your customers’ business drivers, from existing applications through to new cloud-native applications.

Here are a few example use-cases that you can help your customers architect as part of their cloud adoption business drivers:

  • Application migrations
  • Geographic expansion
  • Vertical extension
  • Disaster recovery
  • Elastic scalability
  • Application development
  • Application modernization

Call to action

To get started with VMware Cloud on AWS, please contact your VMware partner business manager to discuss how you could add this managed service to your portfolio.

Service Provider Multi-Tenant vRealize Operations (Managed Service)

VMware vRealize Operations™ is a key component of a vCloud Air Network powered cloud service offering. It provides a simplified yet extensible approach to operations management of the cloud infrastructure. It helps service providers maximize profitability by optimizing efficiency, and differentiates their service offerings by increasing customer satisfaction and delivering on SLAs.
VMware vRealize Operations also enables service providers to generate new revenue streams by expanding their footprint to offer VMware vRealize Operations™ as a service, giving their tenants deeper insight into the health, capacity and performance of their hosted environments.
This can either be delivered on a dedicated per-tenant basis as part of a private cloud solution offering; alternatively, the vCAN Service Provider can offer a shared vRealize Operations™ platform as a managed service.
Conceptual Overview:

In this scenario, the service provider operates a centralized vRealize Operations Manager instance to collect all data generated by the resource cluster. Both service provider personnel and tenants will access the same instance of vRealize Operations, and data access will be controlled with RBAC. This scenario allows for easy management and deployment.

This approach is especially attractive for service providers who can operate their complete environment within one vRealize Operations Manager environment.

Advantages include the following:

  • Easy to deploy and manage
  • No additional data/configuration distribution for dashboards, policies, and so on is needed
  • Only one instance to maintain (software updates, management packs, and so on)

Disadvantages involve the following:

  • Role-based access control requires careful maintenance
  • Objects can only be operated under one policy, removing the ability to limit alert visibility for a customer/tenant
  • Sizing can get complex and larger environments could be limited by sizing parameters. A possible workaround could be to build instances per larger resource group.


This is just one way a vCloud Air Network provider can differentiate their service portfolio with vRealize Operations™, by extending consumption to their end-customers as a managed service.

For more information on common deployment models for vCloud Air Network Service Providers, please visit the vCloud Architecture Toolkit for Service Providers.


Live Workload Mobility to a vCloud Air Network IaaS Provider

Solution Introduction

VMware vCloud Air Network providers are uniquely positioned to become a seamless extension of their existing customers’ on-premises datacenters, offering a true unified hybrid cloud experience for applications and cloud infrastructure management.

With the introduction of NSX 6.2 and vSphere 6.0, VMware introduced the concept of cross-vCenter networking and security between vCenter Servers that are within 150 ms RTT of each other. This raises some excellent opportunities for vCloud Air Network providers to offer live workload mobility and business continuity services as an extension of their end-customers’ on-premises data centers.

This blog post will introduce a solution which can be offered by VMware’s vCloud Air Network partners to enable live workload mobility between an end-customer’s on-premises data center and a VMware vCloud Air Network provider. A follow-up blog post in the coming weeks will explain how vCloud Air Network providers can also very easily introduce business continuity services for their customers on top of this solution.

The full solution will be published as part of vCloud Architecture Toolkit for Service Providers during the first quarter of 2016.

Key Business Drivers

  • To provide a seamless extension to the end-customer’s data center, enabling ease of migration between customer and provider data centers.
  • To provide additional ‘burstable’ capacity to end-customers to support emerging projects, based on business demand.
  • To provide consistent security policies enforcement and micro-segmentation to all end-customer workloads, whether based on-premises or within the hosting provider’s data center.
  • To provide a managed mobility service to end-customers, where the provider executes mobility requests.
  • To offer a self-service workload mobility, disaster recovery and disaster avoidance solution to the end-customers.


Assumptions:

  • Network connectivity between datacenters is established and is out of scope for this blog post.
  • vMotion networks are configured at both the provider and customer data centers.

Architecture Overview

The design below highlights a vCloud Air Network provider managed solution, where an end-customer datacenter is connected to a vCloud Air Network provider data center via a federated vSphere and NSX management domain. This architecture introduces NSX “universal objects”, which are objects that span multiple vCenter Server instances. The following sections highlight the management components required and the NSX universal objects that have been configured, with basic configuration considerations.

[Figure: Workload mobility architecture]

Software Bill of Materials

Management Components

  • VMware vCenter Server at each site with mirrored release versions:
    • Both vCenter Server instances should be members of the same SSO domain for operations carried out through the UI. However, separate SSO domains can be supported if vMotion operations are executed through the API with appropriate authentication.
  • VMware NSX Manager at each site, paired with their local vCenter Server:
    • The primary NSX manager hosted in provider data center, and secondary NSX manager in end-customer’s data-center.

Control Plane Components

Data-Plane Components

  • Universal Transport Zone – controls the hosts that a universal object can span; this needs to be configured across both vCenter Servers (vCAN provider and on-premises).
  • Universal Distributed Logical Router – provides east-west routing between universal logical switches.
  • Universal Logical Switches – layer 2 segments which span the universal transport zone. This is where the provider and customer will attach the virtual machine networks.

Service Offerings

This solution has several potential service offerings that the vCloud Air Network provider can offer to their end-customers:

  • Hosted Virtual Infrastructure – the provider can offer their existing hosted virtual infrastructure portfolio as the foundation offering, with the scale and distribution the end-customer requires for their new initiatives or to support migration.
  • Network Connectivity between provider and end-customer – with support for higher levels of latency, up to 150ms, the options which the provider can offer their end-customers could range from direct connected networks, to VPN connectivity across the internet, leveraging NSX services such as L2, SSL or IPSec VPN.
  • Advanced Hybrid Networking Services – the provider can offer their end-customers additional hybrid software-defined networking services, including NAT, DHCP, firewall, routing (dynamic/static) and load-balancing services.
  • Portable Security Services – the provider, or end-customer, can build security policies and groups with dynamic membership, which work at a per-VM level across the provider and end-customer’s data centers.
  • Live Workload Mobility Services – with this architecture, the hosting provider can enable live workload mobility services between the end-customer and the provider data centers.
  • Disaster Avoidance Services – with this architecture, the provider can build true hybrid applications, maintaining Layer 2 network connectivity between application components hosted on-premises and with the provider.


As outlined above, by including VMware NSX 6.2 in a vCloud Air Network provider’s hosting portfolio, the service provider can offer a unified hybrid platform which enables the provider to become a strategic extension of their end-customer’s data center. By extending network and security services across these data centers, providers can enable numerous use-cases around workload mobility, disaster avoidance and disaster recovery, which will be covered in more detail in a follow-up blog post.

For more information on how a vCloud Air Network Provider can leverage long-distance vMotion to enhance their user experience, please refer to the vCAT-SP document: Architecting a Hybrid Mobility Strategy for vCloud Air Network.

Introducing VMware vCloud Architecture Toolkit for Service Providers

Just in time for VMworld Europe, we are pleased to announce the first release of the VMware vCloud® Architecture Toolkit for Service Providers (vCAT-SP). vCAT-SP is a set of reference documents and architectural notes designed to help VMware vCloud® Air™ Network partners construct cloud platforms and service offerings leveraging current technologies, recommended practices and innovative tools proven in real-world cloud service provider environments. Written by the vCloud Air Network Global Cloud Practice architecture team and experts across VMware, vCAT-SP provides cloud service provider IT managers and architects with recommended designs and support solutions that have been attested, validated and optimized; they represent the most efficient examples to help you make the right choices for your business.

Solution Stacks

VMware vCAT-SP is supported by the ‘VMware Cloud Service Provider Solution Stacks’: recommended solution stacks aligned to the common service provider delivery models of Hosting, Managed Private Cloud and Public Cloud. The solution stacks provide recommendations on which VMware products should be included as part of a VMware-powered Hosting, Managed Private Cloud or Public Cloud platform.

You can download the vCAT-SP – Public Cloud Solution Stack.

The Hosting and Private Cloud Solution Stacks will be released shortly.

Service Definitions

The service definition documents will help vCloud Air Network service providers define their cloud service requirements across compliance, SLAs and OLAs, recoverability and business continuity, integration requirements for OSS and BSS systems, and service offering use cases. The documents provide example service definitions that can be leveraged as a starting point.

The initial release of vCAT-SP will provide a service definition example document for Public Cloud. Additional service definition example documents for Hosting and Managed Private Cloud will be provided at a later date.

Architecture Domains

vCAT-SP has been broken down into seven architecture domains:

  • Virtualization Compute – Documents that detail specific design considerations for the virtualization platform across cloud service offerings for service providers.
  • Network and Security – Documents that detail network and security specific use-cases and design considerations for service providers.
  • Storage and Availability – Documents detailing design considerations for storage platforms and availability solutions for service providers.
  • Cloud Operations and Management – Documents detailing the design considerations and use-cases around operations management of cloud platforms and services.
  • Cloud Automation and Orchestration – Documents detailing the design considerations and use-cases for automation and orchestration within cloud platforms.
  • Unified Presentation – Documents detailing design considerations and use-cases around presentation of cloud services through UIs and APIs, and the available options for service providers.
  • Hybridity – Documents detailing design considerations and use-cases for hybridity, ranging from hybrid application architectures to hybrid network design considerations.

Each domain contains architecture documents that cover the core platform architecture design considerations, key service provider use cases and operational considerations.

Solution Architecture Examples

Solution Architecture Examples are reference solution designs that take the architecture domains into consideration to formulate a holistic cloud solution. They include design decisions driven by key requirements and constraints specific to a particular deployment scenario. We are aiming to provide solution architecture examples for Hosting, Managed Private Cloud and Public Cloud platforms initially.

Solutions and Services Examples

Solutions and Services Examples are reference architecture blueprints detailing how to implement a particular cross-functional service offering, such as DR as a Service, which requires a core cloud platform solution and configuration across the network and security, storage and availability, and compute virtualization domains. This is where we will publish additional value-add service offerings that cloud service providers can plug into their core architectures.

More Information

The initial release of vCAT-SP zip file will be available for download during VMworld Barcelona from:

For any feedback or requests for additional materials, please contact the team at:



vCloud Director for Service Providers (VCD-SP) and RabbitMQ Security

Let us start with what RabbitMQ is and how it fits into vCloud Director for Service Providers (VCD-SP).

RabbitMQ provides robust messaging for applications, in particular vCloud Director for Service Providers (VCD-SP). Messaging describes the sending and receiving of data (in the form of messages) between systems. Messages are exchanged between programs or applications, similar to the way people communicate by email, but with selectable guarantees on delivery, speed, security and the absence of spam.

A messaging infrastructure (also known as message-oriented middleware or an enterprise service bus) makes it easier for developers to create complex applications by decoupling individual program components. Rather than components communicating directly, the messaging infrastructure facilitates the exchange of data between them. The components need know nothing about each other’s status, availability or implementation, which allows them to be distributed over heterogeneous platforms and turned off and on as required.

In a vCloud Director for Service Provider deployment, VCD-SP uses the open standard AMQP protocol to publish messages associated with blocking tasks or notifications. AMQP is the wire protocol natively understood by RabbitMQ and many similar messaging systems; it defines the wire format of messages and specifies the operational details of how messages are published and consumed. VCD-SP also uses AMQP to communicate with extension services: vCloud Director for Service Provider API extensions are implemented as services that consume API requests from a RabbitMQ queue. The API request (an HTTP request) is serialized and published as an AMQP message. The API implementation consumes the message, performs the business logic and then replies with an AMQP message. In order to publish and consume messages, you need to configure your RabbitMQ exchange and queues.


A RabbitMQ server, or ‘broker’, runs within the vCloud Director for Service Provider network environment; for example, it can be deployed into the VCD-SP underlying vSphere installation as a virtual appliance, or vApp. Clients (in this case vCloud Director for Service Provider cells belonging to the VCD-SP infrastructure itself, as well as other applications interested in notifications) connect to the RabbitMQ broker. Such clients then publish messages to, or consume messages from, the broker. The RabbitMQ broker is written in the Erlang programming language and runs on the Erlang virtual machine. Notes on Erlang-related security and operational issues are presented later in this vCAT-SP blog.


The Base Operating System Hosting the RabbitMQ Broker

Securing the RabbitMQ broker in a vCloud Director for Service Provider environment begins with securing the base operating system of the computer (bare metal or virtualized) on which Rabbit runs. Rabbit runs on many platforms, including Windows and multiple versions of Linux. As of this writing, commercial versions of RabbitMQ are sold by VMware as part of the vFabric suite and supported on Windows and RPM-based Linux distributions in the Fedora/RHEL family, as well as in a tar.gz-packaged Generic Linux edition. Please see: for purchasing details.

It is generally recommended in a vCloud Director Service Provider (VCD-SP) deployment that a Linux distribution of RabbitMQ be used.  VMware expects to eventually provide a pre-packaged vApp with a Linux installation, the necessary Erlang runtime, and a RabbitMQ broker, although this form factor is not yet officially released. The VMware RabbitMQ virtual appliance undergoes, as part of its build process, a security hardening regime common to VMware-produced virtual appliances.

If a customer is deploying RabbitMQ on a Linux distribution of their own choosing, whether running on a bare-metal OS or as part of a virtual appliance they have created themselves, VMware’s security team recommends the following guidelines be adopted for securing the base operating system in question:

The hardening discipline applied to the VMware-produced RabbitMQ virtual appliance is based on the DISA STIG recommendations above.


General networking concerns

Exposing the AMQP traffic that flows between vCloud Director for Service Provider cells and other interested applications in one’s cloud infrastructure outside of the private networks meant for cloud management can expose a VCD-SP provider to security threats. Messages published on an AMQP broker like RabbitMQ are sent for events that happen when something in vCloud Director for Service Providers changes, and thus may include sensitive information. AMQP ports should therefore be blocked at the network firewall protecting the DMZ to which vCloud cells are connected. Code that consumes AMQP messages from the broker must also be connected to the same DMZ. Any such piece of code should be controlled, or at least audited to the point of trustworthiness, by the VCD-SP provider.

It is also worth mentioning that AMQP is not exposed to any Cloud tenants and is only used by the Service Provider.

The Erlang runtime

What is Erlang?

Erlang is a programming language developed and used by Ericsson in its high-end telephony and data routing products. The language and its associated virtual machine support several features leveraged by RabbitMQ, including:

  • support for highly concurrent applications like RabbitMQ
  • built-in support for distributed computing, thus enabling easier clustering of RabbitMQ systems
  • built-in process monitoring and control, for ensuring that a RabbitMQ broker’s subsystems remain running and healthy
  • Mnesia: a performant distributed database
  • high-performance execution.

That RabbitMQ is written in Erlang matters relatively little to a system administrator responsible for deploying, configuring and securing the broker, with only a few small exceptions:

  • Erlang distribution has certain open port constraints.
  • Erlang distribution requires a special “cookie” file to be shared between hosts participating in distributed Erlang communication; this cookie must be kept private.
  • Some RabbitMQ configuration files are represented with Erlang syntax, of which one must be mindful when placing delimiters (like ‘[’, ‘{’, ‘}’ and ‘]’) and certain punctuation marks (notably the comma and the period).


Running Erlang securely for RabbitMQ

When clustered, RabbitMQ is a distributed Erlang system, consisting of multiple Erlang virtual machines communicating with one another. Each such running virtual machine is called a ‘node’. In such a configuration, the administrator must be aware of two basic Erlang ideas: the Erlang port mapper daemon, and the Erlang node magic cookie.


epmd:  The Erlang port mapper daemon

The Erlang port mapper daemon is automatically started on every host where an Erlang node (such as a RabbitMQ broker) is started. The appearance of a process called ‘epmd’ is not to be viewed with alarm. The Erlang virtual machine itself is called ‘beam’ or ‘beam.smp’, and at least one of these will be seen on a machine running the RabbitMQ server. The Erlang port mapper daemon listens, by default, on TCP port 4369, so the host system’s firewall should leave this port open.


Node magic cookies

Each Erlang node (as defined above) has its own magic cookie, which is an Erlang atom contained in a text file. When an Erlang node tries to connect to another node (this could be a pair of RabbitMQ brokers connecting in a clustered RabbitMQ implementation, or the rabbitmqctl utility connecting to a broker to perform some administrative function upon it), the magic cookie values are compared. If the values of the cookies do not match, the connected node rejects the connection.

A node magic cookie on a system should be readable only by those users under whose id Erlang processes that need to communicate with one another are expected to run.  The Unix permissions of cookie files should typically be 400 (read-only by user).
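The permission discipline above can be demonstrated on a mock cookie file (the path here is a temporary stand-in; on a real broker host the file is /var/lib/rabbitmq/.erlang.cookie, owned by the rabbitmq user):

```shell
# Create a mock cookie file and lock it down to mode 400,
# mirroring the recommendation above. Paths are illustrative.
cookie_dir=$(mktemp -d)
cookie="$cookie_dir/.erlang.cookie"

echo "SOMESECRETCOOKIEVALUE" > "$cookie"
chmod 400 "$cookie"

# Report the resulting mode; should print 400 (read-only by owner).
stat -c '%a' "$cookie"
```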

For most versions of RabbitMQ, cookie creation and installation are handled automatically during installation. For an RPM-based Linux distribution of RabbitMQ such as that for RHEL/Fedora, the cookie will be created and deposited in /var/lib/rabbitmq, called ‘.erlang.cookie’, and given permissions 400 as described above.

Rabbit server concepts

Rabbit security: the OS-facing side

OS user accounts

RPM-based Linux

In an RPM-based Linux distribution such as the vFabric release of RabbitMQ or the RabbitMQ virtual appliance, the Rabbit server runs as a daemon, started by default at OS boot time.  On such a platform the server is set up to run as system user ‘rabbitmq’.  The Mnesia database and log files must be owned by this user.  More will be said about these files in subsequent sections.

To enable or disable starting the server at system boot time, use:

$ chkconfig rabbitmq-server on

or

$ chkconfig rabbitmq-server off

An administrator can start or stop the server with:

$ /sbin/service rabbitmq-server stop|start|restart


Network ports

Unless configured otherwise, the RabbitMQ broker will listen on the default AMQP port of 5672.  If the management plugin is installed to provide browser-based and HTTP API-based management services, it will listen on port 55672.

Any firewall configuration should be certain to open these two ports.

Strictly speaking, you only need port 5672 open for VCD-SP to work. You open port 55672 only if you want to expose the management interface to the outside world.

As noted above, the Erlang port mapper daemon port, TCP 4369, must also be open.
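Pulling the port requirements together, a host firewall needs to admit AMQP (5672), optionally the management plugin (55672), and epmd (4369). An illustrative fragment in iptables-save format, assuming a hypothetical management subnet of 10.0.0.0/24:

```
# Illustrative iptables rules (iptables-save format); the management
# subnet 10.0.0.0/24 is a placeholder for your own management network.
-A INPUT -p tcp -s 10.0.0.0/24 --dport 5672  -j ACCEPT  # AMQP
-A INPUT -p tcp -s 10.0.0.0/24 --dport 55672 -j ACCEPT  # management plugin (optional)
-A INPUT -p tcp -s 10.0.0.0/24 --dport 4369  -j ACCEPT  # Erlang port mapper daemon
```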


Rabbit security: The broker-facing side

When considering the security of the RabbitMQ broker itself, it is helpful to divide one’s thinking into two areas: the face Rabbit shows to the outside world, in terms of how communication with clients can optionally be authenticated and secured against eavesdropping; and the ways in which RabbitMQ’s internal structures (exchanges, queues, and the bindings between them that determine message routing) are governed.

For the former consideration, a RabbitMQ broker can be configured to communicate with clients using the SSL protocol.  This can provide channel security for client-broker communications and optionally the verification of the identities of communicating parties.


TLSv1.2 and RabbitMQ in vCloud Director for Service Providers (VCD-SP)

In the context of vCloud Director for Service Providers (VCD-SP), the administrator can configure VCD-SP to use secure communication based on TLSv1.2 when sending messages to the AMQP broker. TLSv1.2 can also be configured to verify the broker’s presented certificate to authenticate its identity. To enable secured communication:

  • Log in to VCD-SP as a system administrator.
  • In the ‘Administration’ section of the user interface, open the ‘Blocking Tasks’ page and select the ‘Settings’ tab.
  • In the ‘AMQP Broker Settings’ section, turn on the checkbox labelled ‘Use SSL.’
  • Choose whether to accept all certificates (turn on the ‘Accept All Certificates’ option) or to verify presented certificates.
  • To verify the broker’s presented certificates, either create a Java KeyStore in JCEKS format containing the trusted certificate(s) used to sign the broker’s certificate, or directly upload the certificate itself if it is in PEM format. In the same ‘AMQP Broker Settings’ section, use the ‘Browse’ button for either a single SSL Certificate or an SSL Key Store. If you upload a keystore, you must also provide the SSL Key Store Password.

If neither a keystore nor a certificate is uploaded, the default JRE truststore is used.


Securing RabbitMQ AMQP communication with SSL

Full documentation on setting up the RabbitMQ broker’s built-in SSL support can be found at:

The documentation at this site covers:

  • the creation of a certificate authority using OpenSSL and the generation of signed certificates for both the RabbitMQ server and its clients.
  • enabling SSL support in RabbitMQ by editing the broker’s config file (for its location on a specific Rabbit platform, see the RabbitMQ documentation).
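As a sketch of what that configuration edit produces, the relevant section of rabbitmq.config looks roughly like the following (an illustrative fragment; the certificate and key paths are placeholders, and the option names are RabbitMQ’s standard SSL options):

```erlang
%% Illustrative rabbitmq.config section enabling an SSL listener.
%% Certificate and key paths are placeholders.
[
  {rabbit, [
    {tcp_listeners, [5672]},
    {ssl_listeners, [5671]},
    {ssl_options, [
      {cacertfile, "/path/to/cacert.pem"},
      {certfile,   "/path/to/server-cert.pem"},
      {keyfile,    "/path/to/server-key.pem"},
      {verify,     verify_peer},
      {fail_if_no_peer_cert, false}
    ]}
  ]}
].
```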


Broker virtual hosts and RabbitMQ users

A RabbitMQ server internally defines a set of AMQP users (with passwords), which are stored in its Mnesia database. NOTE: A freshly installed RabbitMQ broker starts life with a user account called ‘guest’, endowed with the password ‘guest’. We recommend that this password be changed, or the account deleted, when RabbitMQ is first set up.

A RabbitMQ broker’s resources are logically partitioned into multiple “virtual hosts.”  Each virtual host provides a separate namespace for resources such as exchanges and queues.  When clients connect to a broker, they specify the virtual host with which they plan to interact at connection time.  A first level of access control is enforced at this point, with the server checking whether the user has sufficient permissions to access the virtual host.  If not, the connection is rejected.

RabbitMQ offers ‘configure’, ‘read’, and ‘write’ permissions on its resources. Configure operations create or destroy resources, or modify their behavior. Write operations inject messages into a resource, and read operations retrieve messages from a resource.

Note that VCD-SP requires all of these permissions to be granted to its AMQP user.
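Putting the vhost and permission concepts together, a dedicated virtual host for VCD-SP might be set up as follows; the names `vcd-vhost` and `vcd` are illustrative.

```shell
rabbitmqctl add_vhost vcd-vhost

# Grant configure/write/read on vcd-vhost only. The three trailing
# arguments are regular expressions matched against resource names;
# '.*' grants access to every resource in this vhost. VCD-SP needs all
# three permission types, but the patterns can be tightened to match
# only the exchanges and queues it actually uses.
rabbitmqctl set_permissions -p vcd-vhost vcd ".*" ".*" ".*"

# Verify what was granted.
rabbitmqctl list_permissions -p vcd-vhost
```

Because permissions are scoped per virtual host, this arrangement confines the VCD-SP user to its own namespace even on a broker shared with other applications.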

Details on RabbitMQ virtual hosts, users, access control and permissions can be found here:

The setting of permissions using the ‘rabbitmqctl’ utility is described in:

Adhere to a policy of least privilege when granting permissions on broker resources.


The rabbitmqctl utility

The rabbitmqctl (analogous to apachectl or tomcatctl) utility is one of the primary points of contact for administering RabbitMQ.  On Linux systems a man page for rabbitmqctl is typically available specifying its many options.  The contents of this page can also be found online at:


The Rabbit broker:  Where things are and how they should be protected

The following are true for a RabbitMQ server installed on an RPM-based Linux distribution such as RHEL/Fedora.  Permissions are given for the top-level directories where named; data files within them may have more liberal permissions, particularly group/other read/write access.


Erlang cookie

Ownership:    rabbitmq/rabbitmq

Permissions:  400

Location: /var/lib/rabbitmq/.erlang.cookie
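On a real host, tightening the cookie to the ownership and mode above would be done as root with `chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie` followed by `chmod 400 /var/lib/rabbitmq/.erlang.cookie`. The snippet below demonstrates the target mode on a scratch file so it can be run anywhere; the file name and contents are illustrative.

```shell
# Create a scratch stand-in for the Erlang cookie and lock it down to
# owner-read-only (mode 400), mirroring the recommendation above.
COOKIE=./erlang.cookie.demo
echo "EXAMPLECOOKIEVALUE" > "$COOKIE"
chmod 400 "$COOKIE"

# Show the resulting octal mode (GNU stat).
stat -c '%a' "$COOKIE"
```

The cookie is a shared secret that authorizes Erlang node-to-node communication (including rabbitmqctl), which is why anything looser than owner-read-only is a risk.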


RabbitMQ logs

Ownership:    rabbitmq/rabbitmq

Permissions:  755

Location: /var/log/rabbitmq/

|– rabbit@localhost-sasl.log

|– rabbit@localhost.log

|– startup_err

`– startup_log


Mnesia database location, plugins and message stores

Ownership:    rabbitmq/rabbitmq

Location: /var/lib/rabbitmq/mnesia

|– rabbit@localhost

|   |– msg_store_persistent

|   `– msg_store_transient

`– rabbit@localhost-plugins-expand


Configuration files location and permissions

RabbitMQ’s main configuration file, as well as the environment variables that influence its behavior, are documented here:

Note that the contents of the rabbitmq.config file are an Erlang term, and it is thus important to be mindful of delimiters and line ending symbols, so as not to produce a syntactically invalid file that will prevent RabbitMQ from starting up.
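To make the syntax point concrete, a minimal valid file looks like the following; the listener option shown is just a placeholder.

```erlang
%% rabbitmq.config is a single Erlang term: a list of {Application, [Options]}
%% tuples separated by commas and terminated by a full stop. A missing comma
%% or a missing final '.' will prevent the broker from starting.
[
  {rabbit, [
    {tcp_listeners, [5672]}
  ]}
].
```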


Privileges required to run broker process and rabbitmqctl

Ownership:    root/root

Permissions:  755

Location: /usr/sbin/rabbitmqctl

The rabbitmqctl utility must be run as root, and should retain the ownership and permissions shown above.

The broker can be started, stopped, restarted or status checked by an administrator running:

$ /sbin/service rabbitmq-server stop|start|restart|status



VMware vFabric Cloud Application Platform (with purchase links for commercial RabbitMQ):

NSA operating systems security guidelines:

US DoD Information Assurance Support Environment Security Technical Implementation Guides for operating systems:

RabbitMQ broker configuration:

RabbitMQ administration guide:

RabbitMQ broker/client SSL configuration guide:

RabbitMQ configuration file reference:

Configuring access control with rabbitmqctl:

Rabbitmqctl man page:


Authored by Michael Haines – Global Cloud Practice

Special thanks to Radoslav Gerganov and Jerry Kuch for their help and support.

VMware vCloud Architecture Toolkit (vCAT) is back!

Introducing VMware vCloud Architecture Toolkit for Service Providers (vCAT-SP)

The current VMware vCloud® Architecture Toolkit (3.1.2) is a set of reference documents that help our service provider partners and enterprises architect, operate and consume cloud services based on the VMware vCloud Suite® of products.

As the VMware product portfolio has diversified over the past few years with the introduction of new cloud automation, cloud operations and cloud business products, plus the launch of VMware’s own hybrid cloud service, VMware vCloud Air™, VMware service provider partners now have many more options for designing and building their VMware-powered cloud services.

VMware has decided to create a new version of vCAT specifically focused on helping guide our partners in defining, designing, implementing and operating VMware based cloud solutions across the breadth of our product suites. This new version of vCAT is called VMware vCloud Architecture Toolkit – Service Providers (or vCAT-SP).

What are we attempting to achieve?

Through the new vCAT-SP, VMware intends to provide prescriptive guidance to our partners on what is required to define, design, build and operate a VMware-based cloud service, aligned to the common service models that are typically deployed by our partners. This will include core architectures as well as value-add products and add-ons.

VMware vCAT-SP will be developed using the architecture methodology shown in the following graphic. This methodology draws on service models, use cases, functional and non-functional requirements, and implementation examples that have been validated in the real world.

Architecture Methodology


Which implementation models will be covered?

The new vCAT-SP initially focuses on two implementation models: Hybrid Cloud Powered and Infrastructure as a Service (IaaS) Powered. These in turn align to common cloud service models, such as Hosting, Managed Private Cloud, and Public/Hybrid Cloud.

Hybrid Cloud Powered

To become hybrid cloud powered, the service provider’s cloud infrastructure must meet the following criteria:

  • The cloud service must be built with VMware vSphere® and VMware vCloud Director for Service Providers.
  • The vCloud APIs must be exposed to the cloud tenants.
  • Cloud tenants must be able to upload and download virtual workloads packaged with Open Virtualization Format (OVF) version 1.0.
  • The cloud provider must have an active rental contract of 3600 points or more with an aggregator.

This implementation model is typically used to build large-scale multi-tenant public or hybrid cloud solutions offering a range of IaaS, PaaS or SaaS services to end-customers.

Infrastructure as a Service Powered

To design an Infrastructure as a Service powered cloud infrastructure, the solution must meet the following criteria:

  • The cloud service must be built with vSphere.
  • The cloud provider must have an active rental subscription with an aggregator.

This implementation model is typically used to build managed hosting and managed private cloud solutions with varying levels of dedication across the compute, storage and networking layers, again offering a range of IaaS, PaaS and SaaS services to end-customers.

The vCloud Architecture Toolkit provides all the required information to design and implement a hybrid powered or IaaS powered cloud service, and to implement value-added functionality for policy based operations management, software-defined network and security, hybridity, unified presentation, cloud business management, cloud automation and orchestration, software-defined storage, developer services integration etc.

For more information please visit:

Modular and iterative development framework

Modularity is one of the key principles behind the new vCAT-SP architecture framework. Our modular approach makes it easier to iterate: smaller building blocks can be checked out of the architecture, assessed for impact against other components, updated, and then re-inserted into the architecture with minimal impact on the larger solution landscape.

What will vCAT-SP contain?

VMware vCAT-SP provides the following core documents:

Introductory Documents

This section will contain a document map detailing all the available documents and document types within vCAT-SP, as well as an introduction document that provides partners with guidance on how to get the most out of vCAT-SP as consumers.

Service Definitions

The service definition documents provide the information needed to create an effective service definition: the use cases, SLAs, OLAs, business drivers, and the like, that are required to build a hybrid cloud powered or IaaS powered cloud service. The initial vCAT-SP efforts will focus on the hybrid cloud powered service definition, with IaaS powered following shortly after.

Architecture Documents

The vCAT-SP architecture documents detail the logical design specifics, the architecture options available to the designing architect, and design considerations for availability, manageability, performance, scalability, recoverability, security, and cost.

Implementation Examples

The implementation example documents detail an end-to-end specific implementation of a solution aligned to an implementation model and service definition. These documents highlight which design decisions were taken and how the solution meets the use cases and requirements identified in a service definition.

Additionally, there will be implementation examples for pluggable value-added services developed through the VMware vCloud Air Network, such as Disaster Recovery as a Service (DRaaS); these components can be plugged into the core architecture.

Emerging Tools, Solutions and Add-Ons

This area is not just for documentation; it also allows the team to capture and store useful software tools and utilities (scripts, plugins, workflows, and so on) that can be used to enhance a particular implementation model, for example, how a cloud platform can present cloud-native applications such as Project Photon. The development of these documents and add-ons will be iterative and not aligned to the core documentation releases.

The following figure shows the map of documentation currently planned. This is subject to change.

Document Map

When can I get a copy of vCAT-SP?

We are planning to launch the first PDF-based release of vCAT-SP around the VMworld EMEA time frame, and will be publishing in web format shortly afterwards… so watch this space!

VMware vCAT-SP will be developed iteratively, with a published roadmap. This will be in line with our major software releases where possible, to ensure that effective service- and solution-focused architectural guidance is available to VMware service provider partners as close to GA dates as possible.

Who is the vCAT-SP development team?

The Global Cloud Practice – vCloud Air Network team, led by Dan Gallivan, is a team of specialist service provider-focused cloud architects that work throughout the vCloud Air Network within the VMware Cloud Services Business Unit.

It is a global team with many years’ experience helping our service provider partners build world-class cloud products based on VMware software. The team also includes five certified VCDX architects and three members of the VMware CTO Ambassadors program.

Over the next couple of months we will be releasing frequent technical preview blogs across the technology domains as we approach VMworld EMEA.


Be sure to subscribe to the vCAT and vCloud blogs, follow @VMwareSP on Twitter, or ‘like’ us on Facebook for future updates.