Author Archives: vCO RD Team

[vCO PowerShell plugin] How to set up and use Kerberos authentication

The updated vCO PowerShell plugin, version 1.0.1, adds support for Kerberos authentication. Using Kerberos authentication allows domain users to be used when communicating with a PowerShell host over WinRM.


Configuring the WinRM Service for Kerberos Authentication

  • Make sure that Kerberos authentication is enabled on the WinRM service:
winrm g winrm/config/service/auth
  • Use the following command to enable it:
winrm s winrm/config/service/auth @{Kerberos="true"}
  • Verify the connection with the WinRM service using Kerberos:
winrm id -a:Kerberos -p:

Configuring vCO PowerShell Plugin for Kerberos Authentication

A krb5.conf file must be created and placed at {ORCHESTRATOR_INSTALLATION_FOLDER}/jre/lib/security/krb5.conf.

The krb5.conf file contains Kerberos configuration information, including the locations of KDCs and admin servers for the Kerberos realms of interest, defaults for the current realm and for Kerberos applications, and mappings of hostnames onto Kerberos realms. More details about the format of krb5.conf can be found in the MIT Kerberos documentation.

Sample krb5.conf content may look like the following:

   [libdefaults]
      default_realm = SOMEDOMAIN.COM
      udp_preference_limit = 1

   [realms]
      SOMEDOMAIN.COM = {
         kdc = <KDC address>
         default_domain = <domain name>
      }

Note: the kdc value is the address of the key distribution center for the provided Kerberos realm. Usually it is on the same machine as the domain controller.

PowerShell host configuration

  • Run the "Add a PowerShell host" workflow.
  • Provide a hostname for Host/IP. Kerberos authentication is not supported with an IP address.
  • Choose WinRM as the PowerShell remote host type.
  • A new field, "Authentication", will appear.
  • Choose Kerberos as the authentication mechanism.
  • Provide a domain user with the following syntax: user@DOMAIN.COM.




Troubleshooting guide

  • No valid credentials provided (Mechanism level: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7)))
    • The error can be caused by domain/realm mapping problems, or it can be the result of a DNS problem where the service principal name is not being built correctly. Server logs and network traces can be used to determine which service principal is actually being requested.
    • Kerberos authentication cannot be used when the destination is an IP address. Specify a DNS destination.
    • Invalid host name.
  • Pre-authentication information was invalid (24)
    • This indicates a failure to obtain a ticket, possibly due to the client providing the wrong password.
  • Clock Skew
    • Time differences are a common factor when dealing with Kerberos configuration. Kerberos requires that all the computers in the environment have system times within 5 minutes of one another. If computers that a client is attempting to use for either initial authentication (the Kerberos server) or resource access (including both the application server and, in a cross-realm environment, an alternate Kerberos server) have a delta greater than 5 minutes from the client computer or from one another, the Kerberos authentication will fail.
  • Cannot get kdc for realm SOMEREALM.COM
    • Check [libdefaults] and [realms] section of krb5.conf for typos.

Little big updates for our Swiss-army-knife plug-ins

Three important vCO plug-ins have seen their first update release recently. These plug-ins are:

  • AMQP plug-in
  • HTTP-REST plug-in
  • SOAP plug-in

Most of the work done for those updates comes from the feedback collected from the users, through both our public communities and our Socialcast group.


And these are the main new features included within the update releases:

AMQP plug-in

  • SSL support has been added for RabbitMQ brokers.
  • The current API has been extended with delete and unsubscribe operations.
  • Other minor improvements and bug fixes.


HTTP-REST plug-in

  • NTLM authentication support has been added to the existing BASIC and DIGEST.
  • Other minor improvements and bug fixes.


SOAP plug-in

  • NTLM authentication support has been added to the existing BASIC and DIGEST.
  • SSL support has been extended to the WSDL file download and parsing process.
  • A flexible SOAP request/response interception mechanism has been added and it’s available from scripting.
  • SOAP element attributes defined inside the WSDL file are now also available from the workflow presentations, both when invoking SOAP operations directly and when generating a workflow from them.
  • Access to root request/response element attributes has been enabled.
  • Other minor improvements and bug fixes.


Enjoy the new releases and don’t hesitate to provide your valuable feedback!

SNMP plug-in for vCenter Orchestrator

The SNMP plug-in allows vCenter Orchestrator to connect and receive information from SNMP enabled systems and devices.

These devices could include communication equipment (routers, switches, etc.), network printers, UPS devices and many others. Events from vCenter can also be received over the SNMP protocol.

The SNMP plug-in provides two ways of communicating with SNMP devices: querying the values of specific SNMP variables, and listening for events (SNMP traps) that are generated by the devices and pushed to the registered SNMP managers.


The SNMP plug-in adds inventory objects to vCO, consisting of a trap host and a set of SNMP devices.


The trap host node represents vCO listening for SNMP traps. It holds the basic configuration of vCO, acting as SNMP manager. It can be either online or offline, which is configurable with workflows.

The list of devices that follows the trap host holds the configuration information needed to access these devices.

Each device can have a set of specific queries, which can be run in order to obtain data from the device.

Device management

The list of SNMP devices is managed by the workflows in the Device Management section of the vCO workflow library.


They reflect the whole lifecycle of an SNMP device:

1. Register an SNMP device


With this workflow, SNMP devices can be added to the vCO inventory. The device address is the most important parameter of the workflow; all the others are optional or have default values. The address can be either an IP address or a DNS name, although using an IP address is recommended, because SNMP is often used as a diagnostic and problem-alerting protocol, and a dependency on DNS decreases its reliability.

The name parameter is used to define a user-friendly name. If skipped, the device address is used to generate a name automatically.

By default, devices are registered for SNMP v2c version, on port 161, with community name “public”. In advanced mode these settings can be changed.

Supported versions are v1, v2c, and v3. The support for v3 is limited to the AuthPriv security level, with MD5 authentication and DES privacy using a pass-phrase equal to the MD5 password.

2. Edit an SNMP device


The “Edit an SNMP device” workflow changes the properties of an already registered SNMP device. It has the same fields as the “Register an SNMP device” workflow, except for the advanced mode radio button.

3. Unregister an SNMP device


This is a very simple workflow with only one field – a chooser of the device to unregister. When a device is unregistered, all the queries attached to it are lost.

Query management

Each device can have a list of queries attached to it.


They hold settings of object identifiers, query types, etc. They can be used as building blocks in more complex workflows.

1. Add a query to an SNMP device


This workflow creates an SNMP query and attaches it to an SNMP device in the vCO inventory.

The allowed types are GET, GETNEXT and GETBULK. OID is the object identifier of the variable that we want to query. Only numeric OIDs are supported, with the single exception of OIDs that start with “iso”.

Supported OID formats include plain numeric OIDs, numeric OIDs with a leading dot, and OIDs starting with “iso”.

If the name parameter is skipped, a name is automatically generated from the type and the OID.

2. Copy an SNMP query


This is a convenience workflow that copies existing queries between registered devices.

3. Edit an SNMP query


This workflow modifies existing SNMP queries. It has the same parameters as the “Add a query to an SNMP device” workflow.

4. Remove a query from an SNMP device


This is a single-parameter workflow that deletes queries that are no longer necessary.

5. Run an SNMP query


With this workflow, an SNMP query can be run. The result is retrieved as an array of properties in the following format (which is also logged to the vCO system log):

Element 1:
   type: String
   snmp type: Octet String
   value: myhostname

The type of the result is a coarse-grained selection among String, Number, and Array. A more specific type can be retrieved from the snmpType property, where the original SNMP type of the result is stored.
If more detailed result information is needed, any custom workflow may run queries in the same manner as “Run an SNMP query” and work directly with the returned SnmpResult object, which has the following structure:
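As an illustration of the coarse typing described above, the mapping from SNMP value types to the plug-in's String/Number/Array result types can be sketched as follows. This is a hypothetical helper written in vCO-style JavaScript, not the plug-in's actual source; the SNMP type names used are assumptions.

```javascript
// Sketch: map an SNMP value and its SNMP type to the coarse result type,
// keeping the original SNMP type in a separate snmpType property.
// The list of numeric SNMP types below is an illustrative assumption.
function toResultProperty(snmpType, value) {
  var numericTypes = ["Integer32", "Counter32", "Counter64", "Gauge32", "TimeTicks"];
  var coarse;
  if (Array.isArray(value)) {
    coarse = "Array";                                // multi-valued results
  } else if (numericTypes.indexOf(snmpType) !== -1) {
    coarse = "Number";                               // numeric SNMP types
  } else {
    coarse = "String";                               // e.g. Octet String, OID
  }
  return { type: coarse, snmpType: snmpType, value: value };
}
```

For example, an Octet String value such as a hostname yields `{ type: "String", snmpType: "Octet String", value: "myhostname" }`, matching the logged format shown above.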


Trap host management

These workflows handle how vCO is listening for SNMP Traps.


1. Set the SNMP trap port


This workflow stops the trap host, sets the new port, and then starts the trap host again. It is important to note that the default port for SNMP traps is 162, but on Linux systems it is not possible to open ports below 1024 without super-user permissions. That’s why the default port for listening for SNMP traps in the SNMP plug-in is 4000. It can be changed to another one with this workflow, if 4000 is unavailable or 162 is accessible.

2. Start the trap host

A parameterless workflow that starts the trap host.
3. Stop the trap host

A parameterless workflow that stops the trap host.

Generic SNMP request workflows

These workflows perform the basic SNMP requests without the need to create a specific query.

1. Get SNMP value


Performs a basic SNMP GET request with the provided object identifier.

2. Get next SNMP value

Very similar to “Get SNMP value”, this workflow performs an SNMP GETNEXT request.

3. Get bulk SNMP values


Performs an SNMP GETBULK query. Specific to this workflow is the “Number of results” field, which specifies how many result elements will be retrieved in one GETBULK request. The default is 10.

SNMP traps

There are two ways to receive SNMP traps in the SNMP plug-in: with a workflow, which waits for a single trap message, or with a policy, which can handle traps continuously.

1. Wait for a trap on an SNMP device


This workflow features a trigger, which stops the execution of the workflow and waits for an SNMP trap to continue. When such a trap is received, the workflow is resumed. It can be used as part of more complex workflows, or as a sample that can be customized or extended for a specific need. The OID field identifies either the enterprise OID of the trap or any variable OID. If no OID is provided, the workflow resumes after receiving any trap from the specified SNMP device. Otherwise, it waits for a trap with the provided OID.

2. SNMP trap policy


A policy can be used if it is necessary to continuously listen for traps from an SNMP device. For that purpose, the “SNMP Trap” policy template must be applied. After this, a policy with the specified name appears in the Policies group. To start listening for traps, this policy must also be started. If necessary, its “Startup” option may be edited to allow starting the policy on server startup.


Then a specific workflow or scripting code may be associated with this policy for integration in a more complex scenario.

SNMP traps can be sent to other systems with the “Send an SNMP trap” workflow.


The manager address and port fields point to the receiving system. If the port field is left empty, it defaults to 162.

The enterprise OID is not mandatory. It identifies the type of the device that is sending the trap.

Type can be String, Number, or Array. String values are sent as the SNMP Octet String type. Number values are sent as Gauge32. Array values are sent as multiple variable-binding traps of the Octet String SNMP type, and are represented as a comma-separated list of oid:value pairs in the Value field of the workflow.
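The comma-separated oid:value representation of Array values can be pictured with a small parsing sketch. This is a hypothetical helper for illustration, not part of the plug-in; the example OIDs are arbitrary.

```javascript
// Sketch: parse the comma-separated "oid:value" list used for Array-typed
// trap values into individual variable bindings (hypothetical helper).
function parseTrapBindings(arrayValue) {
  return arrayValue.split(",").map(function (pair) {
    var idx = pair.indexOf(":");          // first colon separates OID and value
    return {
      oid: pair.slice(0, idx).trim(),
      value: pair.slice(idx + 1).trim(),
      snmpType: "Octet String"            // Array values are sent as Octet String bindings
    };
  });
}
```

So a Value field of `1.3.6.1.2.1.1.5.0:host1, 1.3.6.1.2.1.1.6.0:lab` would yield two variable bindings, one per oid:value pair.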


Configuration Elements revisited

What are Configuration Elements?

A configuration element is a list of attributes you can use to configure constants across a whole Orchestrator server deployment. That’s what the vCO documentation states. In other words, the configuration elements are the easiest way offered by vCO to organize and establish a set of constant values which will be accessible from any key element of vCO (workflows, policies and web views).


A configuration element is a vCO entity composed basically of a list of attributes which are defined by a name, a type, and (once it’s configured) a value. Moreover, configuration elements support versioning, user permissions and access rights like other vCO entities.

How to create Configuration Elements

The creation of configuration elements is detailed in the official documentation, so there’s no need to describe it here.

How to use Configuration Elements

As an example, let’s define a workflow with some inputs, attributes, and a scripting block that needs to access a configuration element to get values from its attributes. The sample workflow will be used to send e-mail notifications from vCO, for example when some external event occurs or some specific condition is satisfied.

So on the one hand the workflow “Send notification message” defines these inputs:

  • To: the addressee of the notification (an e-mail address)
  • Subject: the subject of the notification e-mail
  • Body: the notification message itself

Also it contains this attribute:

  • From: the sender of the notification (a default e-mail address)

And furthermore, before sending the notification the workflow attaches a predefined footer to the body of the e-mail.


On the other hand the configuration element “Email” defines these attributes:

  • default_sender: the sender’s e-mail address, which will be different for different environments (e.g. development, integration or production)
  • default_subject: the default subject of the notification e-mail
  • default_footer: the default footer of the notification message, which may contain for example a legal notice text


Now let’s match the workflow elements with the configuration element attributes.

To set the value of an attribute
In this case it is a workflow attribute, but it is exactly the same process for attributes of policies and web views.
You have to go to the General tab of the workflow in edit mode and select the option of linking the value of the attribute to a configuration element attribute. Then you choose the proper configuration element and the desired attribute.


Once you select the attribute it appears linked on the workflow’s attribute value.


In this way you linked the attribute “from” of the workflow to the value of the attribute “default_sender” of the configuration element “Email”. And after that you can use the attribute “from” like any other attribute inside the workflow.

To set the default value of an input parameter
The easiest way starts like setting the value of an attribute. You create an attribute called “default_subject” in the workflow and you link it to the value of the attribute “default_subject” of the configuration element “Email”. After that you go to the Presentation tab of the workflow in edit mode, select the input “subject” and add the property ”Default value”. Then you link the value of that property to the workflow attribute “default_subject” that you have just created.


Once you select the attribute it appears set on the “Default value” property value.


In this way you linked the input “subject” of the workflow to the value of the attribute “default_subject” of the configuration element “Email”.

To set the value of a variable inside a scripting block
The easiest way again starts like setting the value of an attribute. You create an attribute called “footer” in the workflow and you link it to the value of the attribute “default_footer” of the configuration element “Email”. After that you go to the “Schema” tab of the workflow in edit mode, select the Scriptable task element and add as a local input parameter the workflow attribute “footer”.


In this way you linked the scripting task input “footer” to the value of the attribute “default_footer” of the configuration element “Email”.

And once you have all the inputs of the Scriptable task element set properly you can actually write the code that will send the email (for example you could use the Mail plug-in and its EmailMessage object).


How to access Configuration Elements directly via scripting

The previous section describes how you can access configuration elements via workflow attributes. That’s the easiest way, but it has some minor drawbacks (or major ones if you use configuration elements a lot from your workflows). The two main drawbacks are:

  • You must define an extra workflow attribute for each configuration element attribute that you want to use inside that workflow.
  • You must pass those workflow attributes as input parameters to each Scriptable task that needs the attributes’ values.

Alternatively, to avoid using extra workflow attributes, you can make use of a custom action that implements the logic for accessing the proper configuration element attribute and getting its value. For example, you can define an action like this inside a module of your choice:


This action receives as parameters the path to find the configuration element, the name of the configuration element and the name of the attribute that you want to get from the configuration element. With that information the action tries to find the proper configuration element and return the value of the desired attribute.
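The lookup logic of such an action can be sketched as follows. The sketch operates on a plain-object model of a configuration element category, which stands in for what the real action would obtain from the vCO API given the path; the function and property names are illustrative assumptions, not the actual action from the post.

```javascript
// Sketch: given a category (stand-in for the object the vCO API returns for
// a configuration element path), find the named configuration element and
// return the value of the requested attribute, or null if not found.
function getConfigElementAttribute(category, elementName, attributeName) {
  for (var i = 0; i < category.configurationElements.length; i++) {
    var ce = category.configurationElements[i];
    if (ce.name === elementName) {
      var attr = ce.attributes[attributeName];
      return attr !== undefined ? attr : null;  // attribute missing -> null
    }
  }
  return null;  // configuration element not found in this category
}
```

Inside vCO, the same three inputs (path, element name, attribute name) drive the navigation; returning null for a missing element or attribute lets callers detect misconfiguration instead of failing silently.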

The best part is that you can invoke the action very easily from the presentation of a workflow (e.g. for the “Default value” property):


And you can invoke it also very easily from inside any Scriptable task without passing any extra input parameter to the Scriptable task itself:


The main benefits of that method are:

  • You avoid the extra workflow attributes.
  • You can invoke the action directly from workflow presentation elements.
  • You can invoke the action directly from any Scriptable task.
  • You could replace the logic of the action that gets the values from the configuration elements and use, for example, an external properties file or a database. Since you are writing the code here you have infinite possibilities.

And the only drawbacks are:

  • You have to make sure that the action is included inside your package.
  • You have to make sure that the proper configuration elements are included inside your package as well.

How to import/export Configuration Elements

You can import/export configuration elements in two ways:

  • Import/export a single configuration element
  • Import/export a set of configuration elements inside a package

The first way, import/export a single configuration element, is not very common. Here you probably want to import or export some specific configuration settings at development time to try them somewhere else. In that case, when you export the configuration element you get a file which contains the definition of the configuration element with both the list of attributes (names and types) and the values for those attributes. And when you import that file you create a new configuration element in vCO with again both the list of attributes and their values.



The second way, importing/exporting a set of configuration elements inside a package, is the most usual, because the configuration elements are used from other vCO entities. That’s why, if you create a package containing a workflow, action, policy, or web view that uses an attribute from a configuration element, vCO automatically includes the configuration element in the package. Nevertheless, there is a small difference from exporting a single configuration element: in this case the values of the attributes are not exported! In other words, if you import a package containing a configuration element into another vCO, the configuration element attribute values are not set. This is because the configuration elements are supposed to define vCO server-specific settings; if you set server-specific attributes directly in a workflow, the workflow probably won’t work with the same settings when imported into a different server or environment. That’s why, after importing a package that contains configuration elements, you have to set them with values appropriate to the new server; otherwise some elements (workflows, policies, etc.) could fail because they might not find the attribute values they require.
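The difference between the two export modes can be pictured with a small sketch over an illustrative data model (this is not the actual export format, just a way to visualize what survives each export):

```javascript
// Sketch: a standalone configuration element export keeps attribute values;
// a package export keeps only the attribute names and types.
function exportConfigElement(element, insidePackage) {
  return {
    name: element.name,
    attributes: element.attributes.map(function (a) {
      return insidePackage
        ? { name: a.name, type: a.type }                   // values dropped
        : { name: a.name, type: a.type, value: a.value };  // values kept
    })
  };
}
```

The package export deliberately strips the values, which is exactly why they must be re-entered after importing the package into a new server.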


The configuration elements are a powerful mechanism offered by vCO to define constant values across multiple vCO entities. They are easy to create and easy to use in many common scenarios. And the only thing to be aware of is that when exporting and importing them inside a package, their attributes need to be set to the proper values of the new environment.

SQL plug-in comes on the stage to leverage basic database operations


Are you still excited about the SOAP and REST plug-ins? Another powerful plug-in has already come on the stage! The VMware vCenter Orchestrator SQL plug-in provides a fast and straightforward way to perform basic database operations like insert, select, update, and delete of table records.

Let's learn more about its core features.

Packaged workflows

The SQL plug-in provides a complete set of workflows that allow you to:

  • Perform plug-in configuration
  • Generate basic create, read, update and delete record workflows for every table


Plug-in configuration

The SQL plug-in is configured with the "Add/Update/Remove a database" workflows. In order to add a database, we need to provide the database name, type, connection URL, and authentication credentials. We can also choose whether to provide a username and password or to use the current vCO user credentials.


After submitting all required information, the new database should appear on the Inventory.


The inventory tree consists of all databases that have been configured so far. Under each database it is possible to see all tables in the default database schema and all table columns. Apart from adding, updating, and deleting database configurations, we are able to manage the list of tables shown in the database inventory tree manually via the "Add tables to database" and "Remove a table from database" workflows.

Generation of basic CRUD workflows for a specified table

Having your database properly configured, you are able to generate basic create, read, update, and delete workflows for each table. Let's choose the "ip_list" table.


Choose the "Generate CRUD workflows for a table" workflow.


Choose a destination directory and, if any, the columns you will never populate with values (read-only columns).


The generated workflows should appear in the workflows view, in the "Generated" folder.

Perform database CRUD operations directly

Once we have generated the CRUD workflows for the tables we need, we are able to manage table records as simply as running workflows.

  • Creating an "ip_addresses" record

Run the Create active record for 'ip_addresses' workflow and fill in the necessary information.


If we want to be sure that there is no such record in the ip_addresses table the "Validate for record uniqueness" radio button should be selected.


  • Reading records from "ip_addresses" table

Run the Read active record for 'ip_addresses' workflow and fill in all fields to search by.


We need to fill in all fields we want to search by. There is also an option to guarantee a unique result; if more than one record matches the search criteria, the workflow execution will fail with an exception.

  • Updating a record from "ip_addresses" table

Run the Update active record for 'ip_addresses' workflow. First we have to fill in at least one field and then click the "Yes" load record button.


If a unique result is found, the record values are populated. We can modify some of the values and then click the Submit button.


  • Deleting "ip_addresses" records

Run the Delete active record for 'ip_addresses' workflow. It will delete all records that match the values filled in the input fields.


All generated workflows can be used in higher-level workflows when designing complete business scenarios. It is also possible to use the plug-in scripting API in order to gain the extra flexibility needed in some complex use cases.

For additional information on this plug-in, and to download it, please visit the official plug-in pages.


vCO Multi-Node plug-in


Because no vCO can be left behind…

In many cases we need more than one vCO to manage different infrastructures with similar means (for example, one vCO per datacenter). However, this brings additional overhead in using different vCOs and keeping them up to date. The answer to these problems is the newly released vCO Multi-Node plug-in. It covers the following use cases:

  • Remote vCO Management – to remotely deploy and delete packages or workflows.
  • Remote Workflow Execution – to execute workflows on the remote vCOs.

vCO Server Configuration

Add vCO Server

Before you start working with a vCO server, you need to add it to the local vCO. This is done with the Add a vCO server workflow. Start the workflow and you will see the following workflow presentation:


Here you need to provide the IP/host, the port (the standard port is selected by default), and optionally a user and password (if using the vCO server in shared mode).

The difference between shared and non-shared mode is which user credentials are used to connect to the other vCO:

  • Shared Mode – in this mode, all users use the same credentials to connect to the remote vCO
  • Session Per User – in this mode, the currently logged-in user's credentials are used to connect to the remote vCO

When you add a vCO server, the vCO Multi-Node plug-in generates proxy workflows for the entire set of workflows residing on the remote vCO. These workflows can be found under the folder named VCO@HOST:PORT. Note: because of the workflow generation, it can take up to one minute to add a vCO server.

Update vCO Server

If there is a need to reconfigure a vCO Server the Update a vCO server workflow can be used. Here is how it looks:


Delete vCO Server

To delete a vCO Server start Delete a vCO server workflow.


Here only the server to delete needs to be selected. Note: when deleting a vCO server, the vCO Multi-Node plug-in also deletes the generated remote workflows.

Remote vCO Management

vCO has functionality to import/export packages from one vCO server to another. This functionality is currently available through the vCO client, but it is limited to a single vCO server. There are certain scenarios in which multiple vCO servers need to be updated with the same package; an example of such a scenario is moving from a development to a production environment. Using the existing functionality, the user would need to repeat the package import step for each individual production server. The vCO Multi-Node plug-in provides a set of workflows to automate the process of deploying packages/workflows from one vCO to another. Those workflows can be found in Library/Orchestrator/Remote Management.


Deploy package on remote vCO server

The vCO Multi-Node plug-in provides the following workflows for automating package deployment:

  • Deploy package from local server – used to deploy a package from the master vCO server to a remote one;
  • Deploy packages from local server – used to deploy multiple packages at once.

What follows is an example of package deployment using Deploy package from local server. The parameters that must be provided are:

  • Package – package which will be deployed. The package must be available on the master vCO server;
  • Remote vCO servers – list of servers where the package will be deployed;
  • Override – if the package already exists on the target server and this parameter is set to “Yes”, the old content of the package will be deleted before the deployment starts.


The result of the deployment can be checked in the workflow's log.


After a successful deployment, the package will appear in the remote vCO server's inventory tree under the System/Packages node.


Delete a package

The easiest way to delete a package from a remote vCO server is to locate it in the inventory tree and execute the Delete a package workflow.



Delete package, installed on multiple remote vCO servers

The Delete a package by name workflow is used to delete a package that is installed on more than one remote vCO server. This workflow expects as parameters the name of the package to be deleted and the list of remote vCO servers to be processed.


Manage Remote Workflows

In addition, there are two workflows available for managing workflows separately from the package:

  • Deploy workflow from local server
  • Delete Remote Workflow

Remote Workflow Execution

The challenge in executing remote workflows is in dealing with their input and output parameters. These are, generally speaking, of types that the local vCO server does not know of and cannot handle. The vCO Multi-Node plug-in addresses this challenge by generating so-called “proxy workflows” locally for remote workflows. A proxy workflow takes input parameters from the inventory of the vCO Multi-Node plug-in and, when executed, converts them to the types required by the remote workflow and invokes the remote workflow.

Proxy Workflow Creation

A proxy for an individual remote workflow is created by the workflow Library/Orchestrator/Remote Execution/Create a proxy workflow. When this workflow is executed, it displays the following dialog:


When the workflow is executed, it creates a local proxy workflow with the same name as the selected remote workflow. The proxy is located under a local folder named VCO@HOST:PORT. The path of the generated proxy relative to this server-specific folder is the same as the path of the remote workflow relative to the root of the remote workflow tree.
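The folder mapping can be sketched as follows. This is a hypothetical helper for illustration only (the plug-in computes the location internally); the host names and workflow paths in the example are made up.

```javascript
// Sketch: compute the local folder path of a generated proxy workflow from
// the remote server coordinates and the remote workflow's folder path.
// The remote path, relative to the remote workflow tree root, is preserved
// under a server-specific top-level folder named VCO@HOST:PORT.
function proxyWorkflowPath(host, port, remoteWorkflowPath) {
  return "VCO@" + host + ":" + port + "/" + remoteWorkflowPath.replace(/^\/+/, "");
}
```

For example, a remote workflow at Library/Tagging/Tag VM on server vco2.example.com:8281 would get a local proxy under VCO@vco2.example.com:8281/Library/Tagging/.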

Creation of Proxies for a Remote Workflow Folder and Server

Generating proxy workflows for a large number of remote workflows with the procedure described above is doable but tedious. Therefore, the vCO Multi-Node plug-in provides means to generate proxies for a whole remote workflow folder and for all workflows on a remote vCO server. Generation of proxies for a remote folder is done by the Create Proxy Workflows from Folder workflow, as seen below:


The “Include subfolders” checkbox determines whether the selected folder will be processed recursively (default) or not.

Proxy Workflow Execution

When a proxy workflow is executed, its input parameter objects must be selected from the same server where the corresponding remote workflow resides. For example, a virtual machine parameter must be selected from the local representation of the inventory of a vSphere plug-in installed on that remote vCO server. Type checking of input parameters during selection is somewhat limited by the fact that all objects from the inventories of remote plug-ins have the same local type, so it is possible, for example, to select a cluster object instead of a virtual machine object. Types are, however, checked more rigorously when the proxy workflow is started, and if a mismatch is found, the proxy will fail before starting the remote workflow.

Remote and Proxy Workflows Maintenance

If and when remote workflows change, there may be a need to bring the local proxies up to date, or to discard them entirely when they are no longer needed. For such maintenance, the vCO Multi-Node plug-in provides some utility workflows in the already mentioned folder Library/Orchestrator/Remote Execution/Server Proxies, namely:

  • Refresh Proxy Workflows for VCO Server – Ensures that local proxy workflows for the selected server are up to date with the remote workflows that they represent.
  • Cleanup Proxy Workflows for VCO Server – Deletes all local proxies for workflows residing on the selected server.
  • Delete All Finished Workflow Runs – Deletes all finished workflow tokens for a remote workflow.

Multi Workflow Execution

The vCO Multi-Node plug-in also makes it possible to execute a workflow on many vCO servers. Due to its complexity, this task is separated into two steps.

Step 1 – Generate a multi proxy action

In order to execute a workflow on many vCO servers, first we need to generate a proxy action that can do this. Select the Create a multi-proxy action workflow and run it:


The parameters of this workflow are:

  • Action Name – the name of the action to be created. NOTE: The action name must contain only alphanumeric characters, without separators. NOTE: A new action is always generated, even if an action with the same name already exists.
  • Action Module – the module where the action should be put
  • Is remote workflow? – whether the workflow that is the source of the proxy action should be retrieved from the local vCO server or from a remote one
  • Remote workflow – the workflow for which the proxy will be generated

The generated action accepts the same parameters as the source workflow, but promoted to arrays (multi-selection). The values in these arrays are matched by index. (For example, in the case of Rename VM, the new name of the first selected VM is the first name in the list, and so on.) The vCO server on which the actual execution happens is deduced from the values of the parameters.
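The index-based matching can be sketched like this. The names below are hypothetical; the real generated action is vCO JavaScript produced by the plug-in:

```javascript
// Illustrative sketch: parameters promoted to arrays are matched by index.
// For a "Rename VM" source workflow, vms[i] is renamed to newNames[i], and
// the target server is deduced from the parameter value itself.
function planRenames(vms, newNames) {
    var plan = [];
    for (var i = 0; i < vms.length; i++) {
        plan.push({ server: vms[i].server, vm: vms[i].name, newName: newNames[i] });
    }
    return plan;
}

var plan = planRenames(
    [{ server: "vco-1", name: "vm-a" }, { server: "vco-2", name: "vm-b" }],
    ["web-01", "db-01"]);
// plan[0]: rename vm-a to "web-01" on vco-1; plan[1]: rename vm-b to "db-01" on vco-2
```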

Step 2 – Use the generated action as part of bigger block (workflow)

The generated action will look something like this:


This action can now be embedded directly into a local workflow.

For more info about VMware vCenter Orchestrator Multi-Node Plug-In: release notes, documentation and download


Auto Deploy plug-in


The Auto Deploy plug-in allows simplified and automated provisioning of physical hosts with ESXi software by interacting with the Auto Deploy server. The user is able to browse rules and rule sets defined within the Auto Deploy server, the configured public depots, and all available host profiles within the vCenter Server that can be used. The plug-in provides a set of predefined workflows for Auto Deploy host configuration, public depot configuration, rule management, answer file management, and reprovisioning of ESXi hosts.

Using the Auto Deploy plug-in together with the vCenter Server plug-in, users can benefit from predefined workflows, decreasing the effort and time of provisioning and reprovisioning stateless hosts with ESXi software.


  • A configured Auto Deploy server registered to a specific vCenter Server must be available.
  • Make sure that the network where vCO, the Auto Deploy server and the public depots reside allows enough bandwidth to transfer large amounts of data (ESXi software packages).
  • Some workflows, such as creating or modifying rules, or reprovisioning ESXi hosts with image profiles that have not been used at least once, may take some time to complete. The second time an image profile is used it will go much faster, as the software packages from the depots will have been cached by the Auto Deploy server.
  • Make sure that the Auto Deploy server is configured to reuse connections:


On the Linux version you need to change the following properties in the /etc/vmware-rbd/httpd/conf/httpd.conf file and restart the Auto Deploy server:

  1. KeepAlive On
  2. MaxKeepAliveRequests 0
  3. KeepAliveTimeout 300 (or more)


Depot configuration

Currently only online depots are supported, which means that depots must be accessible through a URL. The plug-in provides workflows for configuring access to such public depots so that they can later be used during the process of provisioning and reprovisioning stateless hosts.


After completion of the 'Add a depot' workflow the depot will be added to the Inventory.


Auto Deploy host configuration

The plug-in provides a way of configuring Auto Deploy hosts by simply pointing to the associated vCenter Server host. The plug-in will automatically discover the registered Auto Deploy server (if available) and it will appear in the plug-in's inventory.


After completion of the 'Add an Auto Deploy host' workflow the Auto Deploy host will be added to the Inventory.


Rule management

Auto Deploy works through rule management on the Auto Deploy server. The plug-in provides an easy way to create, modify, or activate rules within the rule engine of the Auto Deploy server. There are also predefined workflows for retrieving the rules that impact a specific host, and workflows for testing and repairing rule set compliance.


Host profiles and answer files

The plug-in provides a way to manage the answer file corresponding to a certain host profile using an XML format. The user should know the actual keys of the parameters in order to modify specific values in the answer file for a specific host.

If the host doesn't have an answer file, a simple XML template will be presented so that the user can fill the empty placeholders with real values and modify the XML content. If an answer file already exists for the particular host, it will be displayed and ready for editing. The user can also provide the XML content as an already prepared XML file. Answer files are saved and later used when a host profile with user interactions is applied to the specific host.


Provision an ESXi host

In order to provision a stateless ESXi host for the first time, the host must be configured to perform a PXE boot. Provided that the Auto Deploy server and the infrastructure environment have been configured properly, the following additional steps must be performed:

  1. Create a deploy rule that impacts the target ESXi host.
  2. Activate the newly created rule so that the rule engine evaluates it when receiving requests from the target ESXi host.
  3. Reboot the ESXi host to initiate the boot process.

Create rule

Use the 'Create a deploy rule' workflow to create a new rule. It won't be active yet.


Select an image profile from the already configured depots:


Optionally, select a datacenter, host folder or cluster within the vCenter Server inventory where the ESXi host will be registered:


After clicking the Submit button, the process of creating the new rule will start. If the image profile is used for the first time, the workflow may take some time to finish; if the same image profile is used again later in some other workflow, it will finish much faster. After successful completion of the workflow, the new rule will be shown in the plug-in's inventory.


The next step is to activate the rule using the 'Activate a deploy rule and a working set' workflow. The rule is first added to the working rule set, and then the whole working rule set is activated; it is not possible to activate just a single rule. This means that all rules, in the order they are placed in the working rule set, will be added to the active rule set.


After activation, the rule is added to both the working and the active rule set. It becomes non-editable, and from then on it is evaluated by the rule engine on requests from ESXi hosts. If you want to modify an already activated or non-editable rule, you must use the 'Copy a deploy rule' workflow, which hides the original rule and replaces it with a completely new one.


After rebooting, the ESXi host will perform a PXE boot and the Auto Deploy server will provision it with the software image defined by the created rule. Because we also set a vCenter Server location in the rule, the ESXi host will be registered in the specified host folder after booting.



Reprovision an ESXi host with a new image

This is a simple use case in which, by modifying a single rule, the ESXi host can be provisioned with new software images (vSphere installation bundles). There are predefined workflows for reprovisioning ESXi hosts with a new image, new answer files, or a new location; all of them assume that only a single rule needs modification. Use the 'Reprovision a host with a new image' workflow to reprovision an ESXi host with a new image.


If the image profile is used for the first time, the workflow will take more time to complete. The workflow will reboot the ESXi host automatically, and the host will load the newly specified image:



The Auto Deploy plug-in provides basic building blocks, as workflows and actions, for interacting with the Auto Deploy server and managing stateless ESXi hosts. The workflows can be used out of the box, or custom high-level workflows can be composed according to real use cases, automating and greatly decreasing the time and effort needed to provision and reprovision stateless ESXi hosts.


More info about VMware vCenter Orchestrator Plug-In for vSphere Auto Deploy: release notes, documentation and download


Seamless integration with PowerCLI and PowerShell plug-in

The initial setup and main use cases of the PowerShell plug-in for vCenter Orchestrator can be found in the previous post here.

Since the PowerShell plug-in can run any PowerShell script, nothing special is needed to work with PowerCLI scripts. The only requirement is to call “Add-PSSnapin” with “VMware.VimAutomation.Core”; after that, PowerCLI cmdlets and scripts can be used.



…But you probably do not want to throw away the work you have already done creating custom workflows with the VC plug-in. More likely, you want to extend it with PowerShell/PowerCLI scripts.

The good news is that you can mix both. This is made possible by a small module we call the “Converter”. It converts PowerCLI objects into VC plug-in objects and vice versa; almost every object that can be seen in the VC plug-in inventory can be converted.
You will find example workflows that demonstrate the conversion functionality in “Library/PowerShell/Samples/Converter”. There are a lot of building blocks if you look at the schema, but do not be put off: you only need to call a single action to perform the conversion.

The “convertToVcoObj” action converts the input to a vCO object, and the “convertToPsObj” action converts a VC:<Object> to a PowerShellRemotePSObject.

What does this mean in practice?

It means that if a RemotePSObject representing a VM is passed as an argument, “convertToVcoObj” will return an Array/Any of size 1 with a VC:VirtualMachine at index 0, referring to the same VM as the original RemotePSObject. You can then use this object as an argument to any workflow or action that accepts VC:VirtualMachine, or call methods on it directly.

The following diagram demonstrates this.    



The following table shows which types the Converter supports:

PowerCLI type vCO Object Type
VMware.VimAutomation.ViCore.Impl.V1.Inventory.VirtualMachineImpl VC:VirtualMachine
VMware.VimAutomation.ViCore.Impl.V1.Inventory.TemplateImpl VC:VirtualMachine
VMware.VimAutomation.ViCore.Impl.V1.Inventory.DatacenterImpl VC:Datacenter
VMware.VimAutomation.ViCore.Impl.V1.DatastoreManagement.DatastoreImpl VC:Datastore
VMware.VimAutomation.ViCore.Impl.V1.Inventory.ClusterImpl VC:ClusterComputeResource
VMware.VimAutomation.ViCore.Impl.V1.Inventory.VMHostImpl VC:HostSystem
VMware.VimAutomation.ViCore.Impl.V1.Inventory.ResourcePoolImpl VC:ResourcePool
VMware.VimAutomation.ViCore.Impl.V1.VM.SnapshotImpl VC:VirtualMachineSnapshot
VMware.VimAutomation.ViCore.Impl.V1.Inventory.FolderImpl VC:DatastoreFolder
VMware.VimAutomation.ViCore.Impl.V1.Inventory.FolderImpl VC:DatacenterFolder
VMware.VimAutomation.ViCore.Impl.V1.Inventory.FolderImpl VC:HostFolder
VMware.VimAutomation.ViCore.Impl.V1.Inventory.FolderImpl VC:VmFolder
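The table is essentially a lookup from PowerCLI implementation types to vCO inventory types. A minimal sketch of such a lookup (illustrative only, not the plug-in's internal code) might look like this:

```javascript
// Illustrative lookup based on the type table above. FolderImpl is omitted
// because it is ambiguous: it can map to several VC folder types, so the
// converter must also inspect what the folder actually contains.
var typeMap = {
    "VMware.VimAutomation.ViCore.Impl.V1.Inventory.VirtualMachineImpl": "VC:VirtualMachine",
    "VMware.VimAutomation.ViCore.Impl.V1.Inventory.TemplateImpl": "VC:VirtualMachine",
    "VMware.VimAutomation.ViCore.Impl.V1.Inventory.DatacenterImpl": "VC:Datacenter",
    "VMware.VimAutomation.ViCore.Impl.V1.DatastoreManagement.DatastoreImpl": "VC:Datastore",
    "VMware.VimAutomation.ViCore.Impl.V1.Inventory.ClusterImpl": "VC:ClusterComputeResource",
    "VMware.VimAutomation.ViCore.Impl.V1.Inventory.VMHostImpl": "VC:HostSystem",
    "VMware.VimAutomation.ViCore.Impl.V1.Inventory.ResourcePoolImpl": "VC:ResourcePool",
    "VMware.VimAutomation.ViCore.Impl.V1.VM.SnapshotImpl": "VC:VirtualMachineSnapshot"
};

function vcoTypeFor(powerCliType) {
    return typeMap[powerCliType] || null; // null: type not supported
}
```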

If you wonder what the converter actually does behind the scenes, look at the diagram below.



  • More info about VMware vCenter Orchestrator Plug-In for Microsoft Windows PowerShell: release notes, documentation, blog post and download

    vCO PowerShell plug-in

    vCenter Orchestrator + PowerShell plug-in = vCenter Orchestrator on steroids. Windows PowerShell is a command-line shell and scripting language designed especially for system administration, and as such it has widespread industry support. There are PowerShell scripts already written for most of the tasks you will ever need. Enabling vCO users to use and reuse those scripts is one of the most exciting features of vCO. In short, the vCO PowerShell plug-in is used to call PowerShell scripts and cmdlets from Orchestrator actions and workflows, and to work with the results.

    PowerShell host configuration

    One of the drawbacks of PowerShell is that it is Windows-dependent. That's why we need a Windows machine with PowerShell installed on it (the PowerShell host). The connection between the PowerShell plug-in and the PowerShell host machine is established using WinRM or OpenSSH. To configure the PowerShell plug-in, make sure that the WinRM service is installed on the PowerShell host, and run through the following configuration steps.


    • Run the following command to set the default WinRM configuration values.
      c:\> winrm quickconfig
    • (Optional) Run the following command on the WinRM service to check whether a listener is running, and verify the default ports.
      c:\> winrm e winrm/config/listener
      The default ports are 5985 for HTTP and 5986 for HTTPS.
    • Enable basic authentication on the WinRM service.
      • Run the following command to check whether basic authentication is allowed.
        c:\> winrm get winrm/config
      • Run the following command to enable basic authentication.
        c:\> winrm set winrm/config/service/auth @{Basic="true"}
    • Run the following command to allow transfer of unencrypted data on the WinRM service.
      c:\> winrm set winrm/config/service @{AllowUnencrypted="true"}
    • Enable basic authentication on the WinRM client.
      • Run the following command to check whether basic authentication is allowed.
        c:\> winrm get winrm/config
      • Run the following command to enable basic authentication.
        c:\> winrm set winrm/config/client/auth @{Basic="true"}
    • Run the following command to allow transfer of unencrypted data on the WinRM client.
      c:\> winrm set winrm/config/client @{AllowUnencrypted="true"}
    • [Updated] Run the following command to enable winrm connections from vCO host.
      c:\> winrm set winrm/config/client @{TrustedHosts="vco_host"}
    • Run the following command to test the connection to the WinRM service.
      c:\> winrm identify -r:http://winrm_server:5985 -auth:basic -u:user_name -p:password -encoding:utf-8

    Before you start working with a PowerShell host, you need to register it in vCO.

    The 'Add a PowerShell host' workflow validates the connection to the PowerShell host and registers it only if the connection is successful. The difference between shared and non-shared mode is which user credentials are used to connect to the PowerShell host:

    • Shared Mode – in this mode all users connect with the same credentials
    • Session Per User – in this mode the credentials of the currently logged-in user are used

    Invoke a PowerShell script

    If you have an existing PowerShell script, you can invoke it without any modifications. The 'Invoke a PowerShell script' workflow is suitable for a single invocation of a script; the result of the execution will be available in the vCO Log tab. This workflow requires you to specify the target host and the script to be executed. For example, we will invoke the following script through vCO:

    # Get the set of network adapters
    $adapters = [System.Net.NetworkInformation.NetworkInterface]::GetAllNetworkInterfaces()
    # For each adapter, print out the configured DNS server addresses
    foreach ($adapter in $adapters) {
        $adapterProperties = $adapter.GetIPProperties()
        $dnsServers        = $adapterProperties.DnsAddresses
        if ($dnsServers.Count -gt 0) {
            foreach ($ipAddress in $dnsServers) {
                "  DNS Servers ............................. : {0}" -f $ipAddress.IPAddressToString
            }
        }
    }

    This script first gets all the interface objects, then iterates through them to get the DNS address(es) configured for each one.


    Check script output in Log tab.


    Invoke an External PowerShell script

    The 'Invoke an external script' workflow is suitable for running external “.ps1” scripts available on the host machine (.ps1 being the file extension for Windows PowerShell scripts). The required parameters for this workflow are “Name” and “Arguments”. The “Name” parameter can be simply the name of the script, for example “test.ps1” (if it is available on the host machine's “Path”), or a full path such as “c:\SomeDirectory\test.ps1”. Script arguments are provided through the “Arguments” parameter, and the syntax is the same as that of the PowerShell.exe console.


    Generate an action from a PowerShell script

    The PowerShell plug-in allows you to preserve a PowerShell script as an action that can be used later in your custom workflows, and even executed on different PowerShell hosts. To achieve this, run the “Generate an action from a PowerShell script” workflow, providing the script. Scripts can be customized using placeholders. The syntax for defining a placeholder is {#ParamName#}. For each placeholder, a corresponding action parameter of type string is created in the generated action. During action invocation, the placeholder is replaced with the actual value provided as the action parameter.
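    The placeholder substitution can be sketched like this. This is an illustrative snippet only; the plug-in performs the equivalent replacement internally when the generated action runs:

```javascript
// Illustrative sketch: replace {#ParamName#} placeholders in a script
// template with the values supplied as action parameters.
function fillPlaceholders(template, params) {
    return template.replace(/\{#(\w+)#\}/g, function (match, name) {
        // Unknown placeholders are left untouched.
        return params.hasOwnProperty(name) ? params[name] : match;
    });
}

var script = fillPlaceholders(
    'Get-Process -Name {#processName#}',
    { processName: "powershell" });
// → 'Get-Process -Name powershell'
```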


    The generated action looks like this:


    A sample workflow for running the generated action will be created if the “Generate Workflow” option is set to “Yes”. The workflow will be generated in the provided folder, and its name will be “Invoke Script ” followed by the name of the generated action.


    Generate an action for a PowerShell cmdlet

    Another feature of the vCO PowerShell plug-in is the ability to generate an action based on a PowerShell cmdlet. This way, you can use functionality that is already available in PowerShell inside vCO. To generate an action for a given PowerShell cmdlet, select the cmdlet from the inventory tree and specify which parameter set will be used during action generation.



  • More info about VMware vCenter Orchestrator Plug-In for Microsoft Windows PowerShell: release notes, documentation and download


    vCenter Orchestrator videos!!!

    The Orchestrator Technical Publications team is happy to announce that the first training videos for vCO are now available on the VMwareTV YouTube channel.

    This first video series is intended for novice vCO workflow developers who already have basic knowledge about what a workflow is and what its components are. The goal of the series is to show you how to develop your own workflow step by step from scratch. The example in the videos is a workflow that powers on a virtual machine and sends an e-mail with the result to a specified address.

    In the first video of the series, you will learn the purpose of the simple workflow, how to create it, how to define its input parameters, and how to lay out the schema. In the second video, you will see how to bind the workflow elements to input parameters and attributes. The third video shows you how to add more detailed logic in the scriptable task elements, create the presentation, validate the workflow, and, of course, run it!

    Follow the links below to the “Develop your first vCenter Orchestrator workflow” training videos:
    Video 1:

    Video 2:

    Video 3:

    After watching these videos you will be able to create and run your first workflow!

    Note that more videos will be coming soon. We are also seeking feedback and ideas on additional “how to” videos, so please write to us at with your suggestions.
    PS. For additional sources of vCO videos, please be sure to also visit