In this blog we will learn about using vRO dynamic types to back vRA custom resource types (or CRTs, for short), and we will do so by implementing a real use case from scratch: AWS EKS clusters. This post is very detailed and includes all the code, scripts, flows, etc. used to complete it. It is also divided into several steps, each covering a specific concept, so you can read it at your own pace and come back to specific sections at any time.
This is the most common custom resource type scenario: the Dynamic Types plugin allows you to define your own custom business logic and wrap it in a vRO type, which is similar to a POJO (in the sense that you have an object which can be passed around and operated on). The exact use case implemented here will be managing AWS EKS clusters. We are going to create several dynamic types which create and manage EKS clusters, and then use those to create a custom resource type. From then on, you can add that CRT to your cloud templates and take advantage of other vRA features, such as placement, expiration policies and day-2 operations.
Throughout this post/tutorial, we will be referring to the EKS PDF documentation (found at https://docs.aws.amazon.com/eks/latest/APIReference/eks-api.pdf). We will model our dynamic types on the “Data Types” section of the PDF, and the “action” workflows will be based on its “Actions”. We will also take advantage of one of vRO’s newer features – Polyglot. It allows users to write scripts in vRO in languages other than plain JavaScript; in this blog post, part of the code will be written in Python 3. We’ll also demonstrate how you can create scripts which use external libraries, by using the boto3 AWS Python SDK: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html. Finally, we will demonstrate how you can work around one of the limitations of the vRO Polyglot feature – the lack of access to the vRO API within the scripting.
Note: The aforementioned limitation is due to the fact that non-JS actions are run in containers, so they don’t have any callback or access to the vRO SDK. They are standalone actions. If you need to do something with the vRO SDK in a script action, you will need 2 actions: one which uses Polyglot and does the real work, and another, written in JS, which has access to the SDK and calls your Polyglot action. This can be seen in practice within this post.
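For illustration, a minimal sketch of that pattern could look like the snippet below (the com.example module, the doRealWork action and the someInput variable are hypothetical placeholders; the real equivalents are built later in this post):
// JS action/scriptable task: has full access to the vRO SDK
const category = Server.getConfigurationElementCategoryWithPath("Library/Example"); // an SDK call the Polyglot action cannot make
// Delegate the actual work to the Polyglot (e.g. python3) action
const result = System.getModule("com.example").doRealWork(someInput);
System.log("Polyglot action returned: " + result);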
Dynamic Types: Foreword
Dynamic Types is one of vRO’s most (if not the most) popular and widely used plugins. Its purpose is to allow you to create a “plugin” without really developing one from scratch. Since native vRO plugins are Java-based, you would need to get acquainted with the vRO plugin SDK so that you know when and where to plug in your common operations. There’s also the fact that you need to recompile your plugin and deploy it to all your vRO instances every time you make a change in its logic. This also includes workflows – usually they are also compiled, from XML, and imported as part of the plugin. If you have it installed on several vRO instances, then you cannot escape compiling your workflows as well, or you risk having different versions of the same workflow across your orchestrator servers. Of course, having a plugin for larger integrations is preferable since it gives you much greater control over the lifecycle of objects. But for the simple proof of concept we are going to implement in this post, it would be overkill.
Dynamic Types alleviates most of those requirements. All you need to do is run a couple of built-in workflows and write the scripting code (in Javascript or any of the other languages supported by Polyglot) to implement your business logic, and you’re done! Any changes you make are instantly applied since you’re just writing scripts. All of the plumbing is done by the Dynamic Types plugin, which acts as a layer of abstraction over the vRO plugin SDK and is able to plug your content dynamically, without the need for restarting.
Step 1: Creating the Credentials Dynamic Type
We are going to create a simple DynamicTypes hierarchy that mostly does REST requests (via the boto3 python library), but also has a type which stores data in a ConfigurationElement just for the demo.
Creating the EKS Namespace
Before creating a Dynamic Type, we must first have a namespace. This is roughly the equivalent of a plugin in vRO. A namespace holds a collection of types which are related to one another. For example, let’s say you wanted to re-implement, for whatever reason, your own vCenter plugin with DynamicTypes. Your namespace would most likely be named vCenter and all the types there would be types of objects found in vCenter – Machine, Network, Host, Disk, etc. Even if you’re not sure of a good name for your namespace, don’t worry – you can change it later using the Update Namespace built-in workflow. But note that changing your namespace or type name after creating a custom resource type will break the latter. It is not possible to update the name of the type in the CRT definition, so make sure you’re happy with the names you’ve chosen before you proceed with creating the custom resource type. However, if you delete and re-create the custom resource type after a vRO data collection has passed, you should be able to see the new type/namespace names.
Since we’re going to create an EKS cluster, we are going to create the EKS namespace. In it we will create types relevant to EKS. To create the namespace, you must run the Define Namespace workflow.
Defining the Credentials Dynamic Type
Once the namespace has been created, types can be defined within it. The DynamicTypes’ types are the model. A type cannot exist by itself, in isolation, and must be part of a namespace. Although not relevant to vRA and custom resource types, Dynamic Types can be structured in a hierarchy of parent-child types. However, this is not an “inheritance” hierarchy, but rather a “has-a” or “contains” one (e.g. a vCenter endpoint has a host). Using this hierarchy is only useful in the vRO inventory view and does not improve the custom resources experience (as of the publishing of this blog post).
Before running the workflow, for better organization, you should create a directory where you want the auto-generated workflows for the Credentials type to be placed. In this post, we will use Library/AWS/EKS/Credentials. Library already exists and you will have to create the rest of the folders.
To define the Credentials type, run the Define Type workflow. This is the type which will be “persisted” in a vRO ConfigurationElement. Its properties will be:
- accessKey
- accessSecret
- region (optional – you might want to use some credentials for only a specific region)
Note: By default, every dynamic types object contains the id and name properties, so there’s no need to re-declare them when defining the type.
Ideas for Improvement: Besides the key and the secret, you could also add the region in which the credentials should be used. That way, you could achieve better isolation. This blog post assumes that clusters are in the same default region every time.
It is important that you add all of the property names you need before creating the custom resource type (by doing so while creating the dynamic type or updating it after it has been created). These property names are used to generate the schema of the custom resource type, which determines what is being shown about the resource once provisioned to a deployment.
Implementing the Credentials Dynamic Type
Now that we have defined the type, let’s go ahead and “implement” it. It is good practice to first implement the finder workflows.
Typically, you would implement the find by ID workflow first, and then the find All workflow. In some cases, your find-all might reuse the find-by ID logic (by calling DynamicTypesManager.getObject(), for instance). However, since searching for a specific ConfigurationElement instance is done by iterating through all of the currently available instances and matching the names, it would lead to n-squared complexity. Hence, for our Credentials type we are not going to do that. But for types for which the “find” operations will perform REST requests, reusing is the way to go (as you will see with the Cluster type).
Since we are going to be creating all of the entries for our Credentials objects in the Library/EKS/Credentials path, we will be getting the ConfigurationElementCategory with that path and creating dynamic type objects from its elements.
Create the ConfigurationElement category
In this post, we are going to store the credentials in ConfigurationElements. They reside in “categories”, which are roughly equivalent to directories in a file system. Before proceeding, you should make sure that all of the folders in the path Library/EKS/Credentials exist for ConfigurationElements. Otherwise, the finder workflows will not work properly, and neither will the Create workflow, which won’t be able to create a new ConfigurationElement instance in a category which doesn’t exist.
Implementing “Find all”
The general idea is to get the ConfigurationElementCategory where we store the Credentials and make one dynamic type object per entry. Since this is a “finder” workflow, we can use DynamicTypesManager.makeObject() because vRO will be able to properly process the request and return a wrapper containing a non-null internal object. The makeObject() method should only ever be called from finder workflows/actions. Any usage outside of those will result in null objects being created. This is a very common misconception with the dynamic types plugin – people using makeObject() and getObject() interchangeably or not even using the latter.
Find All EKS-Credentials
resultObjs = new Array();
const credentialsPropertyNames = ['accessKey', 'accessSecret'];
const credentialsCategory = Server.getConfigurationElementCategoryWithPath('Library/EKS/Credentials');
Server.log("Found ConfigurationElement category for Credentials: " + credentialsCategory);
if (!!credentialsCategory) {
const allCredentials = credentialsCategory.allConfigurationElements;
Server.log("Fond a total of " + allCredentials.length + " Credentials ConfigurationElements");
for (var i = 0; i < allCredentials.length; i++) {
const current = allCredentials[i];
/**
* NOTE: you can only use makeObject inside finder workflows. In any other workflow/action
* you should use the 'getObject' method. Finder workflows are handled differently
* by the dynamic types plugin, which allows you to "make" an object inside them.
*/
// note: 'name' and 'id' are the same value here: 'current.name'
const asDynamicType = DynamicTypesManager.makeObject('EKS', 'Credentials', current.name,
current.name, credentialsPropertyNames);
for (var n = 0; n < credentialsPropertyNames.length; n++) {
addPropertiesToDType(asDynamicType, current, credentialsPropertyNames[n]);
}
resultObjs.push(asDynamicType);
}
}
function addPropertiesToDType(dynamicType, configElement, propertyName) {
dynamicType.setProperty(propertyName, configElement.getAttributeWithKey(propertyName).value);
}
Implementing find by ID
The code is going to be very similar to the Find All, except you have to stop once you find the ConfigurationElement matching the requested ID, if there is a match.
Find EKS-Credentials By Id
resultObjs = new Array();
const credentialsPropertyNames = ['accessKey', 'accessSecret'];
const credentialsCategory = Server.getConfigurationElementCategoryWithPath('Library/EKS/Credentials');
Server.log("Found ConfigurationElement category for Credentials: " + credentialsCategory);
if (!!credentialsCategory) {
const allCredentials = credentialsCategory.allConfigurationElements;
Server.log("Fond a total of " + allCredentials.length + " Credentials ConfigurationElements");
for (var i = 0; i < allCredentials.length; i++) {
const current = allCredentials[i];
Server.log('Checking whether "' + current.name + '" is same as: ' + id);
if (current.name == id) {
Server.log('Found matching configuration element for Credentials ID: ' + id);
/**
* NOTE: you can only use makeObject inside finder workflows. In any other workflow
* you should use the 'getObject' method. Finder workflows are handled differently
* by the dynamic types plugin, which allows you to "make" an object.
*/
// note, 'name' and 'id' are the same value here: 'current.name'
resultObj = DynamicTypesManager.makeObject('EKS', 'Credentials', current.name,
current.name, credentialsPropertyNames);
for (var n = 0; n < credentialsPropertyNames.length; n++) {
addPropertiesToDType(resultObj, current, credentialsPropertyNames[n]);
}
break;
}
}
}
function addPropertiesToDType(dynamicType, configElement, propertyName) {
dynamicType.setProperty(propertyName, configElement.getAttributeWithKey(propertyName).value);
}
Adding a workflow which creates Credentials instances
Even though we could add Credentials by just creating new ConfigurationElements in the Library/EKS/Credentials path, we’re going to make our lives easier by creating a workflow that does it. This will not only hide the implementation detail of where we save them (perhaps you have 5 admins and not all of them need to know where to place the configuration elements for everything to work), but will also allow you to potentially create custom resource types based on the Credentials dynamic type. However, we will not be doing that in this walkthrough.
In the Workflows > Library > AWS > EKS > Credentials workflow folder, add a new workflow. We’ll name it Add EKS Credentials. It should have 3 inputs:
- name – the name/alias for these credentials so it’s easy to identify them later. We’ll use this as the name of the ConfigurationElement which will be created.
- accessKey
- accessSecret
All three of the inputs should be of type string. There should also be an output of our newly created DynamicTypes:EKS.Credentials type. This is not mandatory for our Custom Resource Types use case, but it could allow you to implement some more advanced use cases in the future. For instance, you could have one complex workflow which creates credentials for a user, then uses those credentials to initialize a cluster.
Now all we have to do is to create a new ConfigurationElement in the designated category and the new credentials will be immediately picked up by the finder workflows when users search for them.
First add a new scriptable task and bind all of the workflow’s inputs and output:
The script to create a Credentials ConfigurationElement is:
Add EKS Credentials
const credentialsCategory = Server.getConfigurationElementCategoryWithPath('Library/EKS/Credentials');
const newCredentialsConfiguration = Server.createConfigurationElement(credentialsCategory, name);
newCredentialsConfiguration.setAttributeWithKey('accessKey', accessKey);
newCredentialsConfiguration.setAttributeWithKey('accessSecret', accessSecret);
newCredentialsConfiguration.saveToVersionRepository();
/**
* At first it might seem that getObject doesn't return an instance of our dynamic type.
* This is because it actually returns a wrapper around the actual object, since the
* namespace and type are parameters to the function. When getObject gets called and
* its result assigned to a specific Dynamic Type, it will get unwrapped and correctly
* set under the hood.
*/
newCredentials = DynamicTypesManager.getObject('EKS', 'Credentials', name);
Caution Before Proceeding
As already mentioned, ConfigurationElements are not the best place to store sensitive data. In a real-world scenario, you should retrieve credentials from a secret store. Consequently, the implementation of the Credentials dynamic type would also be different. In this walkthrough, the type is implemented as it is because it is quick to implement and allows you to start working with EKS entities almost immediately. It is a good solution for a POC and for testing on a dev environment, but we strongly discourage you from using it in a production environment.
Defining the Cluster Dynamic Type
Before running the Define Type workflow, make sure that you have the Workflows > Library > AWS > EKS > Cluster folder created. We are going to select that folder when generating the finder workflows. The CRUD workflows should also be created there.
The type definition of a Cluster can be found at https://docs.aws.amazon.com/eks/latest/APIReference/eks-api.pdf#%5B%7B%22num%22%3A2323%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C72%2C712.8%2Cnull%5D. We are going to put all of the properties from this section into our dynamic type definition. Before clicking on Run, your workflow request form should look something like:
The properties’ names are:
- arn
- certificateAuthority
- clientRequestToken
- connectorConfig
- createdAt
- encryptionConfig
- endpoint
- identity
- kubernetesNetworkConfig
- logging
- name
- platformVersion
- resourcesVpcConfig
- roleArn
- status
- tags
- version
Implementing the Cluster Dynamic Type
Setting up our local workspace
When you want to create a python action in vRO which has external dependencies, you will have to develop it on your machine first. This is due to the fact that you cannot specify a list of dependencies when creating it. To circumvent that constraint, the vRO team has added the ability to upload a ZIP file containing your scripts alongside their dependencies.
In our use case, we will have one main directory. In that directory, we will have several sub-directories, each representing an action that is going to be uploaded to vRO. Every action will have its own dependencies, which should be installed into a folder named lib. You can read more about the Polyglot feature and uploading ZIP bundles as actions in the vRO documentation.
In essence, our directory structure will look something like this:
- vro-python-eks
  - find-eks-cluster-by-id
    - handler.py
    - credentials.py
    - boto_client.py
    - lib/
  - find-all-eks-cluster
    - handler.py
    - credentials.py
    - boto_client.py
    - lib/
  - create-cluster
    - handler.py
    - credentials.py
    - boto_client.py
    - lib/
  - delete-cluster
    - handler.py
    - credentials.py
    - boto_client.py
    - lib/
When installing dependencies (like boto3, which we will be using), remember to use pip’s -t argument and specify that they are to be installed in the lib/ folder because that is where the Polyglot container is going to look for them.
Implementing a Couple of Utility Scripts
There will be common operations in our actions, so let’s develop those first. Handling credentials and creating EKS clients using boto3 are two of those things. In a production environment, it would be best if you created a package with these common files and installed it as a dependency of your scripts. In this blog post, however, we are simply going to copy-paste them into the sub-directories.
Handling Credentials
For every python action we create, we will need the credentials with which to create the boto3 AWS client. Since the non-JS actions in vRO run in a container, we do not have control over environment variables, so we cannot set the credentials in that manner. We also cannot provide a default config file which will be available on every container spun up for a specific action. So we are going to have to pass the credentials for every operation, as inputs. To make this process simpler, we could use a class, for instance. The class would look through the inputs dict, try to find the credentials properties and extract the values. If they are not present, an error will be raised and the whole action will fail. In this post we will be using the class below. However, this is not needed at all – you can just retrieve the key and the secret from the inputs map directly. I’ve done it with a class because I think it is a bit clearer this way.
credentials.py
KEY_ACCESS_KEY = "aws_access_key_id"
KEY_ACCESS_SECRET = "aws_secret_access_key"
class Credentials(object):
'''Create a Credentials instance from the inputs provided to the handler'''
def __init__(self, inputs):
if KEY_ACCESS_KEY not in inputs or KEY_ACCESS_SECRET not in inputs:
raise ValueError('Provided dict should have both {} and {} properties'.format(KEY_ACCESS_KEY, KEY_ACCESS_SECRET))
aws_access_key_id = inputs[KEY_ACCESS_KEY]
aws_secret_access_key = inputs[KEY_ACCESS_SECRET]
self.access_key = aws_access_key_id
self.access_secret = aws_secret_access_key
I think passing around an instance of Credentials is a bit clearer than having to fish out the credentials every time we need them.
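If you would rather skip the class, a minimal sketch of the direct approach mentioned above could look like this (the key names are the same ones the Credentials class expects):
# Read the credentials straight from the inputs dict, without a helper class
def handler(context, inputs):
    aws_access_key_id = inputs["aws_access_key_id"]
    aws_secret_access_key = inputs["aws_secret_access_key"]
    # ... create the boto3 client and do the actual work here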
Handling client creation
The following python script depends on the boto3 lib. It performs the manual creation of the client based on passed credentials.
boto_client.py
import boto3
DEFAULT_SERVICE = 'eks'
DEFAULT_REGION_NAME = 'us-east-2'
def getClient(credentials, service_name = DEFAULT_SERVICE, region_name = DEFAULT_REGION_NAME):
return boto3.client(service_name,
region_name = region_name,
aws_access_key_id = credentials.access_key,
aws_secret_access_key = credentials.access_secret)
We can use this method to easily obtain clients for different AWS services, but we will only be using it for EKS in this blog post.
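For example, assuming creds is a Credentials instance, the helper could be reused like this (the 'ec2' service and the region are only there to illustrate the optional parameters):
eks_client = boto_client.getClient(creds)                      # EKS client in the default region
ec2_client = boto_client.getClient(creds, 'ec2', 'eu-west-1')  # a different service/region, should you ever need one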
These scripts will be zipped along the actual script which will become a vRO action.
For the implementation of the finder workflows, we are going to take advantage of Python Polyglot actions in vRO. This is one of the newer features of vRO, and it allows us to write actions in languages other than vanilla JS. The biggest advantage of this is the ability to use 3rd party libraries. In our case, we will be using boto3, the AWS Python SDK.
Implementing the Find EKS-Cluster by Id Workflow
We are going to implement this workflow first, because it will also be used in the find-all workflow. This is because the list clusters operation only returns an array of the existing cluster names. So we are going to get the list of all clusters’ names first and then use those names to retrieve the details.
Creating Python Actions with External Dependencies
In the vRO documentation, there is a good example of how you can include 3rd party libraries in your vRO actions. The short version is: you install the dependencies into the lib/ folder next to your script (named handler.py), then ZIP everything together and upload it to vRO in the Action UI. If you want to read more about this, you can use the following resources:
- Documentation: https://docs.vmware.com/en/vRealize-Orchestrator/8.6/com.vmware.vrealize.orchestrator-using-client-guide.doc/GUID-822B361C-3720-4124-A0F3-F3A16F0C0137.html
- Samples: https://developer.vmware.com/samples/7325/vro-polyglot-scripts
Creating the python action
First, let’s create an action named findEksClusterById. For clarity, let’s place it in a module named com.aws.eks. We are going to use the vRO Polyglot feature to upload a python3 action, including its dependencies, as a ZIP file. The main script is named handler.py, as per the vRO documentation (you could name it something different, but then you would also have to specify the filename + entry method name when uploading the ZIP action).
After you’ve created the vRO action, in your console, navigate to the find-eks-cluster-by-id folder and install the boto3 dependency. You will need to do this in the other 3 action folders (see previous sections):
# Run this inside a folder which contains an action which will be uploaded to vRO (as a zip)
pip3 install boto3 -t lib/
Note: The credentials and boto_client files reside in the same directory as the following script. To see their implementations, go back to the previous section.
find-eks-cluster-by-id/handler.py
import datetime
import json
from . import credentials
from . import boto_client
SERVICE_EKS = "eks"
KEY_CLUSTER_NAME = "cluster_name"
# Datetime is not convertible to JSON by default, so we use this method to replace it with its string representation
def myconverter(o) -> str:
if isinstance(o, datetime.datetime):
return o.__str__()
return "___UNABLE_TO_CONVERT_ACTUAL_VALUE___"
# Return JSON string representation of the cluster details
def handler(context, inputs) -> str:
currentCredentials = credentials.Credentials(inputs)
eksClient = boto_client.getClient(currentCredentials, SERVICE_EKS)
clusterName = inputs[KEY_CLUSTER_NAME]
try:
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/eks.html?highlight=eks#EKS.Client.describe_cluster
clusterDescription = eksClient.describe_cluster(name = clusterName)["cluster"]
return json.dumps(clusterDescription, default = myconverter)
except:
print("There is no cluster named '{}'".format(clusterName))
return None
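Once handler.py, the two helper scripts and the lib/ folder are in place, bundle everything into a ZIP archive from inside the action directory and upload that archive as the vRO action (the archive name below is arbitrary):
# Run inside find-eks-cluster-by-id/ (or your equivalent directory)
zip -r find-eks-cluster-by-id.zip handler.py credentials.py boto_client.py lib/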
Finding out which credentials to use
One of the implementation details of dynamic types is that you can search for objects only by an ID (which you determine – it is not necessarily the ID of the object on the remote system). When you have standalone types, such as Credentials, that is no issue. But then there are types which depend on other information. For instance, we need to know which credentials to use to retrieve the details of a specific cluster, and we have to derive that from the ID alone. So the first thing we realize is that the ID of the dynamic type object representing the cluster cannot be the same as the cluster ID (which is the cluster name) on EKS. We will have to construct our own ID format from which we can determine which set of credentials to use.
For inspiration, I’ve turned to vRO’s Active Directory plugin. If you’ve used it, you might have noticed that the objects there have very long IDs, because they are composite. They are a concatenation of several identifiers. When you search for an Active Directory object, that ID gets “unwrapped”. Each of the resulting identifiers is then used to determine where to search the object. In our case, a Cluster’s ID would need to be the concatenation of a Credentials ID and that of the cluster. That way we will be able to get the needed authentication info which we will then pass on to the python scripts which can go to AWS EKS (using the boto3 lib) and retrieve the details. So the ID will have the format: <ID_OF_CREDENTIALS>#_#<ID_OF_CLUSTER>
We can then split the ID string by the delimiter #_# and use the separate substrings accordingly.
Implementing the Find EKS-Cluster by Id workflow
The findEksClusterById action by itself does nothing for the Cluster dynamic type. We need to call it from our finder workflow in order for it to be useful. Thankfully, executing Polyglot vRO actions is the same as for JS actions. However, calling the python action is not the only thing that the workflow will be doing. We will also be doing the ID splitting (see previous section) and retrieval of credentials. After all, we need to know the credentials before performing any boto3 operations.
Here is an overview of the workflow:
Before implementing anything, let’s create some variables:
The errorCode variable should be there by default, but we won’t be needing it. The ones we need to create are:
- clusterJsonString: string
- credentialsId: string
- clusterName: string
- accessKey: string
- accessSecret: SecureString
Then, let’s implement the Retrieve Credentials scripting element. It will need to be bound to the ID input and it should output the credentials:
The scripting is:
Find EKS-Cluster By Id/Retrieve Credentials
if (!!id) {
// Splitting the ID like mentioned in previous sections
const idParts = id.split("#_#");
credentialsId = idParts[0];
clusterName = idParts[1];
// In a real-world environment, always check if the result is non-null before accessing its properties
const credentials = DynamicTypesManager.getObject("EKS", "Credentials", credentialsId);
// For debugging purposes. Remove in production environments or obfuscate sensitive information
System.log(JSON.stringify(credentials.properties));
accessKey = credentials.getProperty("accessKey");
accessSecret = credentials.getProperty("accessSecret");
}
After we’ve got the credentials we need and the cluster name we are looking for, we can proceed to executing the python action. We don’t need to do anything, beyond adding the action element, selecting the action name and binding several parameters:
After that, we need to process the JSON string returned by the action. It contains the details of the cluster with the given name. If no cluster is found for the combination of credentials and cluster name, the python action returns an empty result.
Our only job here is to:
- Parse the JSON string (result from previous action)
- Create a Cluster dynamic type object
- Populate the values of all the properties
Find EKS-Cluster By Id/Create Cluster Object
const cluster = JSON.parse(clusterJsonString);
const clusterPropertyNames = ["arn",
"certificateAuthority",
"clientRequestToken",
"connectorConfig",
"createdAt",
"encryptionConfig",
"endpoint",
"identity",
"kubernetesNetworkConfig",
"logging",
"name",
"platformVersion",
"roleArn",
"status",
"tags",
"version"];
if (!!cluster) {
resultObj = DynamicTypesManager.makeObject("EKS", "Cluster", id, cluster["name"], clusterPropertyNames);
for (var i = 0; i < clusterPropertyNames.length; i++) {
setPropertyIfAvailable(clusterPropertyNames[i])
}
}
function setPropertyIfAvailable(propertyName) {
if (cluster[propertyName] !== undefined && cluster[propertyName] !== null) {
resultObj.setProperty(propertyName, JSON.stringify(cluster[propertyName]));
}
}
Essentially, we are taking the resulting JSON string from the python action execution and parsing it to extract the properties of the cluster. The reason the property values are passed through JSON.stringify before being set on the dynamic type object is that JSON objects have a special internal representation in vRO, and that representation cannot be serialized. If you try:
resultObj.setProperty(propertyName, cluster[propertyName]);
instead of
resultObj.setProperty(propertyName, JSON.stringify(cluster[propertyName]));
then you will get an error and the workflow will fail. For now, object values have to be serialized to a string. If you want to keep the structure, you will have to do the conversion yourself – to a Properties object or a Composite Type.
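If you do want to keep the structure of a specific property, a rough sketch of such a conversion to a Properties object could look like this (only one level deep; nested objects would still need to be stringified or converted recursively):
// Convert a flat, parsed JSON object into a vRO Properties instance
function toProperties(jsonObject) {
    const props = new Properties();
    for (var key in jsonObject) {
        props.put(key, jsonObject[key]);
    }
    return props;
}
// e.g. keep the cluster tags structured instead of stringified
const tagsAsProperties = toProperties(cluster["tags"]);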
Implementing the Find All EKS-Cluster Workflow
Note that this workflow is supposed to return all instances of an EKS cluster. This means that, even if you have multiple different credentials stored, you should return all clusters from all credentials. This, in turn, means that we will need to call our python action N times, where N is the number of credentials currently stored in our vRO.
Another thing of note is that, if we see the same cluster multiple times (through API calls via different credentials), then we should return it every time. Its ID, however, is going to be different due to the fact that we are prepending the credentials ID. See the section Finding out which credentials to use.
Creating a Find All Python Action
Again, install the boto3 lib into this action’s folder:
# Run this inside the find-all-eks-cluster (or your equivalent) directory
pip3 install boto3 -t lib/
You should also copy over the credentials and boto_client scripts.
Note: In your IDE of choice, you might also have to add the lib/ folder to your project configuration so that the dependency can be resolved and auto-completion works.
find-all-eks-cluster/handler.py
from . import credentials
from . import boto_client
SERVICE_EKS = "eks"
def getClusters(client) -> list:
print("Going to get the first 100 EKS clusters")
clusters = []
clustersResponse = client.list_clusters()
print("Received response for clusters: {}".format(clustersResponse))
clusters.extend(clustersResponse["clusters"])
while "nextToken" in clustersResponse.keys() and clustersResponse["nextToken"] is not None:
print("Going to perform a request for an additional 100 clusters with nextToken '{}'".format(clustersResponse["nextToken"]))
clustersResponse = client.list_clusters(nextToken = clustersResponse["nextToken"])
clusters.extend(clustersResponse["clusters"])
return clusters
def handler(context, inputs):
currentCredentials = credentials.Credentials(inputs)
eksClient = boto_client.getClient(currentCredentials, SERVICE_EKS)
return getClusters(eksClient)
When uploading the action, don’t forget to add the inputs for the (current) credentials. Also, check the Array checkbox next to the output type, since we are returning a list:
The response of this action is going to be a list of strings – the names of the existing clusters found for the provided credentials. This doesn’t give us much to work with. To make the Find-all workflow more complete, we can also call the find by ID workflow for every found cluster. That way we will also have the rest of the details for the clusters (instead of just showing an object with only a name). This can be easily achieved in the workflow’s scripting.
Implementing the Find All EKS-Cluster
This should be fairly straightforward:
- Retrieve a list of all of the Credentials
- For every set of credentials, retrieve a list of all clusters visible to it
- For every cluster from the list, retrieve the full details
The workflow itself will contain only one scripting element, within which we will be calling the python action from the previous section:
Find All EKS-Cluster/Find all Cluster Details
resultObjs = [];
// First get all of the Credentials stored on this vRO
const allAwsCredentials = Server.findAllForType("DynamicTypes:EKS.Credentials");
if (!!allAwsCredentials) {
Server.log("Total number of EKS.Credentials found: " + allAwsCredentials.length);
for (var i = 0; i < allAwsCredentials.length; i++) {
// Remember to replace the module name if it is different on your environment
const clusterNames = System.getModule("com.aws.eks").findAllEksClusters(allAwsCredentials[i].accessKey, allAwsCredentials[i].accessSecret);
if (!!clusterNames) {
Server.log("For credentials with ID " + allAwsCredentials[i].name + " found a total cluster count: " + clusterNames.length);
// For each set of credentials, get a list of the cluster names visible to it
for (var j = 0; j < clusterNames.length; j++) {
// Remember that our DT cluster IDs are "constructed"
const clusterId = allAwsCredentials[i].id + "#_#" + clusterNames[j];
// For each cluster, retrieve its full info. This results in the find eks-cluster by ID workflow to get executed
const cluster = DynamicTypesManager.getObject("EKS", "Cluster", clusterId);
resultObjs.push(cluster);
}
}
}
}
Note: When calling Server.findAllForType for a dynamic type, under the hood the Find-All workflow is called. In this case, the Find All EKS-Credentials workflow gets called and you should be able to see it in your workflow run history. In the same manner, if you did Server.findAllForType("DynamicTypes:EKS.Cluster"), the Find All EKS-Cluster workflow above would be called.
Step 2: Creating the Custom Resource Type
After we’ve created the cluster dynamic type, we still need to create CRUD workflows. These are specifically created workflows which are made for use in vRA custom resource types. Actually, you only really need the CREATE and DELETE workflows, which are also the ones we will be implementing. The UPDATE workflow is not mandatory (it only gets called when you execute the update operation on a deployment), and the READ operation is done automatically out of the box for all vRO objects (by calling the vRO inventory, which finds the object for us).
Creating a Cluster
Since the “plumbing” for the dynamic type is done, and we can find already existing objects, it’s time to implement the workflow which will be used to create cluster instances. The strategy here is the same as with the finder workflows of the previous sections. We will develop a python script and we will upload it to a vRO action which is going to be used by our Create EKS Cluster workflow to do the actual work.
Looking at the documentation for creating a cluster (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/eks.html?highlight=eks#EKS.Client.create_cluster), we can see that there are several required parameters and multiple optional ones:
Head over to the documentation for the full description of the parameters
- name (required)
- version
- roleArn (required)
- resourcesVpcConfig (required)
- subnetIds
- securityGroupIds
- endpointPublicAccess
- endpointPrivateAccess
- publicAccessCidrs
- kubernetesNetworkConfig
- logging
- clientRequestToken
- tags
- encryptionConfig
These will map to inputs of the CREATE workflow. We will also try to match the vRO input types as closely as possible to the boto3 method types. There are two ways you could go about providing values for those parameters:
- every input parameter has a 1:1 type mapping with the python client’s methods’ types
- you have an input for every simple-type parameter and construct the more complex objects by yourself
The advantage of the second approach is that, in a custom form, you can bind the fields to actions which would return a list of applicable values. Instead, if you went with the first approach, you would have only a Properties object and you can only bind an action which returns Properties to it, which is not very flexible for the EKS use case. Since this is a proof of concept, we will only add the mandatory values.
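For reference, with only the mandatory values provided, the boto3 call our action will end up making has roughly this shape (the cluster name, role ARN, subnet and security group IDs below are placeholders):
response = client.create_cluster(
    name='my-eks-cluster',
    roleArn='arn:aws:iam::123456789012:role/eks-cluster-role',
    resourcesVpcConfig={
        'subnetIds': ['subnet-aaaa1111', 'subnet-bbbb2222'],
        'securityGroupIds': ['sg-cccc3333']
    }
)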
An additional input which the workflow must have is a set of Credentials. In this walkthrough, we are going to let the user select a set of available credentials (out of all present on vRO). However, in a real-world scenario, you might want to implement a vRO action which returns an appropriate set of credentials based on the user performing the request. Or you might even hardcode and hide that input from your users. But letting end-users select credentials out of all available will mean that you have very little control over placement.
Implementing the python action
As with all other python actions in this post, install the boto3 library using pip with the -t lib/ argument. Again, we will be using the credentials and boto_client scripts.
create-cluster/handler.py
import json
from . import credentials
from . import boto_client
import datetime
createClusterParameterNames = ["name", "version", "roleArn", "resourcesVpcConfig",
"kubernetesNetworkConfig", "logging", "clientRequestToken", "tags", "encryptionConfig"]
def myconverter(o) -> str:
if isinstance(o, datetime.datetime):
return o.__str__()
return "___UNABLE_TO_CONVERT_ACTUAL_VALUE___"
def handler(context, inputs) -> dict:
creds = credentials.Credentials(inputs)
client = boto_client.getClient(creds)
# The argument names of the method are the same as the parameter names of the vRO action
# But first clean out any unnecessary None values and the credential parameters
filteredInputs = {
key: value
for key, value in inputs.items()
if value is not None and value != "__NULL__" and key in createClusterParameterNames }
createClusterResponse = client.create_cluster(**filteredInputs)
# You could also use 'str' as the value for default in this case
return json.dumps(createClusterResponse, default=myconverter)
This script is also pretty straightforward:
- Get the credentials from the inputs
- Create a client for EKS with the credentials
- Filter out only the create_cluster inputs (otherwise you will get an error)
- Call the create_cluster method of the newly created client.
Thanks to dict comprehensions and the **kwargs syntax, this is a short script. The rest of the work, such as accepting the correct types of inputs, will be done by the vRO action.
Creating the vRO Action
For this python script, let’s name the corresponding vRO action createCluster, again placed in the com.aws.eks module. Also, add all of the inputs of the create_cluster function. In this post, we are only going to pass the required values. But since we are filtering out None values, it doesn’t hurt to have them added as inputs to the action.
Also note that we are using the Properties type. While that works, in the general case, it’s not very descriptive of what data is being expected. In a Properties object you can put whatever key/value pairs you want. In a real world scenario you should create Composite Types which can represent the expected input much more precisely. However, we are only going to provide values for the required inputs, so it’s not a big deal for our scenario.
The return type is string. We’ve JSON-dumped the result from the boto3 request (due to the datetime serialization quirks) and we are going to parse it back to JSON in the workflow (next step).
Implementing the Create EKS Cluster workflow
The workflow will have almost the same inputs as the action. The only difference will be that the credentials should be selected as an object (of type DynamicTypes:EKS.Credentials) rather than being passed as strings. Also, there should be an output of type DynamicTypes:EKS.Cluster. This is a requirement if we want to be able to use the workflow to create a custom resource type later on.
We will also need to create some variables in the workflow, so that we can pass data to the different elements:
- accessKey – extract from credentials
- accessSecret – extract from credentials
- createdClusterDefinition – the response from the createCluster python action. We are going to construct the dynamic type object from it
The workflow itself will have several elements:
- Extract the key and secret from the credentials object
- Combine the appropriate inputs into a resourcesVpcConfig Properties object. We are doing this because the vRO client Properties input field does not allow us to input more complex values, such as arrays, even though the Properties themselves support that case.
- Execute the python action
- Parse the result and build the dynamic types object
Make sure to bind the input of the scripting element to be the credentials object. The outputs should be the two workflow variables – key and secret. We are going to use those to pass on to the python action when creating the new cluster.
The scripting is also very simple:
Create EKS Cluster/Extract credentials
if (!!credentials) {
accessKey = credentials.accessKey;
accessSecret = credentials.accessSecret;
} else {
throw "The credentials are not properly set!";
}
After that, let’s combine the necessary fields in a Properties object. Once again, this is done only for convenience, since it’s easier for end-users to see specific fields which they have to input instead of a Properties which offers no hints as to what is expected to be entered.
Create EKS Cluster/Combine resourcesVpcConfig properties
resourcesVpcConfig = new Properties();
resourcesVpcConfig.put("endpointPrivateAccess", endpointPrivateAccess);
resourcesVpcConfig.put("endpointPublicAccess", endpointPublicAccess);
resourcesVpcConfig.put("publicAccessCidrs", publicAccessCidrs);
resourcesVpcConfig.put("securityGroupIds", securityGroupIds);
resourcesVpcConfig.put("subnetIds", subnetIds);
Server.log("Built resourcesVpcConfig Properties: " + JSON.stringify(resourcesVpcConfig));
The next step is to just execute the python action which actually creates the cluster. When you add the action element, all of the properties with matching names will be automatically bound. All you have to do is bind the two inputs – key and secret – and the output, so that we can use it to build the dynamic types object in the next step:
The last step is to just parse the Properties result from the python action and create an object which will be the output of this workflow. This is important because it’s needed for custom resource types to work properly.
Create EKS Cluster/Parse cluster result
const newClusterJson = JSON.parse(createdClusterDefinition);
const newClusterId = credentials.name + "#_#" + newClusterJson["cluster"]["name"];
// Do NOT use DynamicTypesManager.makeObject() outside of finder workflows. Use getObject() instead!
// This will return a dynamic types object with all of the properties' values set
newCluster = DynamicTypesManager.getObject("EKS", "Cluster", newClusterId);
Even though the cluster is not ready, it is still present on EKS with a status of Creating and we will still be able to see most of its properties. If you like, you can implement additional logic that will make the workflow complete once the cluster is ready. This will make it run longer, but, once a workflow execution is complete, you will know that the cluster is ready for use.
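One possible sketch of such logic (not part of the workflows built in this post) is a final scriptable task which re-reads the cluster until it leaves the CREATING state. Note that the status property was stored JSON.stringify-ed, so we check it with indexOf instead of strict equality:
// Poll the cluster until it is no longer CREATING (or we give up after ~30 minutes)
var attempts = 0;
var status = newCluster.getProperty("status");
while (!!status && status.indexOf("CREATING") !== -1 && attempts < 60) {
    System.sleep(30 * 1000); // wait 30 seconds between checks
    newCluster = DynamicTypesManager.getObject("EKS", "Cluster", newClusterId);
    status = newCluster.getProperty("status");
    attempts++;
}
Server.log("Final cluster status: " + status);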
Implementing the Delete EKS Cluster Workflow
The DELETE workflow is the second mandatory workflow for custom resource types. It gets called when a user tries to delete a deployment which contains an instance of a custom resource type. As the name suggests, the DELETE workflow is responsible for deleting the resources. Its successful completion suggests that the resource has been deleted on the remote system. In our case, we are going to be deleting an EKS cluster. This workflow is going to be simple. We are going to create a new python vRO action which only accepts the name of the cluster we want to delete. The workflow is then going to call that action.
DELETE workflows that are used in custom resource types have one requirement: they must have one input of the type you want to delete. So in this case, the workflow must have one input of type DynamicTypes:EKS.Cluster. Any other inputs are not going to be relevant, since deleting a deployment is just a trigger operation – it does not accept any inputs. So adding any additional inputs/outputs to the workflow will have no effect whatsoever.
Let’s start by developing the python action. On your local env, in the vro-python-eks/delete-cluster/ folder, paste the two common scripts credentials.py and boto_client.py (like we did for all of the other python scripts for vRO actions). Also create the handler.py script:
delete-cluster/handler.py
import datetime
import json
from . import boto_client
from . import credentials
def myconverter(o) -> str:
if isinstance(o, datetime.datetime):
return o.__str__()
return "___UNABLE_TO_CONVERT_ACTUAL_VALUE___"
def handler(context, inputs) -> str:
creds = credentials.Credentials(inputs)
client = boto_client.getClient(creds)
response = client.delete_cluster(name=inputs["name"])
return json.dumps(response, default=myconverter)
Zip the folder before creating the vRO action.
Creating the deleteCluster vRO action
Again in the com.aws.eks action module, create the deleteCluster action. There are going to be 3 inputs: the 2 for authentication and a name input, which will hold the name of the cluster that is going to get deleted:
Creating the Delete EKS Cluster workflow
The DELETE workflow is pretty simple – accept one input of type DynamicTypes:EKS.Cluster and call the python action to actually delete the cluster. There will be only one scripting element.
In there we deconstruct the ID of the cluster so that we can retrieve the credentials ID. After we get the credentials with which we are going to perform the delete operation, we execute the python action. Once it returns, the cluster is in DELETING status. After that, it takes several minutes for the cluster to be deleted and removed from AWS.
Delete EKS Cluster/Delete Cluster
const clusterName = cluster.name;
Server.log("Going to delete cluster: " + clusterName);
const credentialsId = cluster.id.split("#_#")[0];
const credentials = DynamicTypesManager.getObject("EKS", "Credentials", credentialsId);
const actionResult = System.getModule("com.aws.eks").deleteCluster(clusterName, credentials.accessKey, credentials.accessSecret);
Server.log("AWS EKS response for delete_cluster(): " + actionResult);
Creating the Custom Resource Type Definition in vRA
Now that all of the dynamic types have been created, along with the CRUD workflows for DynamicTypes:EKS.Cluster, we can proceed with creating the custom resource type definition. To do that, open Cloud Assembly and click on the Design tab. After that, navigate to the Custom Resources tab in the left-side menu. Then, click on the create button. On the screen, fill out the required information:
- name: EKSCluster (or whatever else you like)
- description: fill if necessary
- resourceType: Custom.EKS.Cluster
- activate: true
- scope: true (available for every project)
- based on: vRO Inventory
- Lifecycle actions: Select the Create EKS Cluster and Delete EKS Cluster workflows. Update is not mandatory
At this point you might want to go to the Properties tab. There you have the ability to modify the schema of the custom resource, such as changing the display name of a property (title). This is a new feature of custom resource types, which allows you to remove/modify properties in the schema. You can also add constraints which limit the values that can be given to the property in a cloud template.
All that’s left now is to just create a cloud template which uses our newly created custom resource type.
Step 3: Putting it All Together / Provisioning and managing a cluster
After you create the custom resource type, you have two options of provisioning an instance:
- Creating a cloud template (blueprint) which contains the custom resource type
- Adding the CREATE workflow to a workflow content source in Service Broker and entitling that content source to a project.
Note: Workflows that have an output of a type which is a custom resource type in vRA will also provision an instance of the custom resource.
Since adding the type to a blueprint and deploying it is the more popular use case, this is also what we are going to do. In the same Design tab, go to the first sub-tab on the left, Cloud Templates. There you can create a new one for your EKS clusters. After that, you should see the new custom resource type at the bottom of the types list.
The next step is to provide values for the different properties of the custom resource type. As you can see from the above screenshot, we’re hardcoding most of the values. In a real-world use case, you would almost surely use inputs for all of the properties. But some of the information for clusters has to be retrieved from AWS, which means you would also need to create additional python actions to retrieve that information (IAM roles, subnets, security groups). Also, we’re hardcoding the credentials used, since we only have one set. Again, in a real-world use case you might want to switch between several different sets of credentials, based on the requesting user, for instance. That could also be achieved by creating vRO actions which perform the calculations and return the appropriate instance of DynamicTypes:EKS.Credentials.
For our demo, we will only ask for the input of the cluster name. For everything else, we are going to hardcode the values. Once you’ve bound all of the inputs to the custom resource type properties, and given constant values to the rest of the properties, you can version the blueprint. This is a necessary step if you want to expose it as a catalog item.
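For reference, a skeleton of such a cloud template could look something like the YAML below. The property names under the custom resource mirror the inputs of the Create EKS Cluster workflow, so adjust them to match yours; the role ARN, subnet and security group values are placeholders, and the credentials property is omitted since its representation depends on how you bound that input:
inputs:
  clusterName:
    type: string
    title: Cluster name
resources:
  Custom_EKS_Cluster_1:
    type: Custom.EKS.Cluster
    properties:
      name: '${input.clusterName}'
      roleArn: arn:aws:iam::123456789012:role/eks-cluster-role
      subnetIds:
        - subnet-aaaa1111
        - subnet-bbbb2222
      securityGroupIds:
        - sg-cccc3333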
Creating a Catalog Item
Now that you’ve versioned your blueprint (with the release version to catalog checked), you can create a new content source of blueprints and add our newly created blueprint.
- First, navigate to Service Broker and go to the Content & Policies tab.
- Then start adding a new content source (you should already be on the Content Sources tab).
- Select VMware Cloud Templates as the type.
- After that, select the project in which you created your cloud template
- After clicking create and import, go to the Content Sharing tab
- Select a project
- Click on Add Items and select the newly created content source
- Navigate to the Catalog tab
You should now be seeing your catalog item, based on the blueprint we created in the previous section.
You can request the item. Since we’ve only exposed 1 input in the blueprint, you only have to fill in the deployment name and the cluster name:
Once you submit the request, you should be taken to a list of all of the deployments, where yours should be at the top. Click on it to open the details.
After several minutes you should see a new resource appear in the deployment, based on our CRT. You should also be able to see all of its properties on the right hand side:
We can also see that the cluster appears in the AWS console
If you delete the deployment, you should also see that the status of the cluster in the AWS console gets updated again:
And we’re done! You can easily create/delete clusters and incorporate them in more complex cloud templates. You can also create workflows which can be used as custom resource type day-2 operations, such as updating specific properties of the cluster (or turning it on/off).