vRealize Orchestrator artifacts that drive the Essential Coach VM deployment catalog experience. These actions and the subscription that wires them together form the dynamic layer of the blueprint — the parts that run outside Aria's static blueprint graph to populate dropdowns at request time and handle post-provisioning work that Aria can't do natively.
All three actions live in the com.essential.aria vRO module.
vRO_Workflows/
├── com.essential.aria/
│ ├── addDataDisksOnDeploy.js Post-provision disk attachment
│ ├── getNetworkSegmentsAll.js Blueprint NIC1 VLAN dropdown driver
│ └── getNetworkProfileTag.js Segment-to-capability-tag router
└── README.md This file
The three actions work alongside the blueprint at three distinct points in a VM request lifecycle:
┌────────────────────────────────────────────────────────────────────┐
│ 1. USER OPENS CATALOG FORM (Service Broker) │
│ Blueprint input: networkSegment │
│ → $dynamicEnum calls getNetworkSegmentsAll │
│ → Action queries Aria IaaS API for all fabric networks │
│ → Returns only those tagged 'servicenow:visible' │
│ → User sees a pre-filtered dropdown │
└────────────────────────────────────────────────────────────────────┘
↓ user submits
┌────────────────────────────────────────────────────────────────────┐
│ 2. ARIA DEPLOYS THE VM │
│ Blueprint resource: Cloud_vSphere_Machine_1 │
│ → Placement by location tag │
│ → BlueCat IPAM allocates IP │
│ → vCenter clones, customizes, powers on │
│ → Backup tag attached to VM carries disk encoding │
│ (format: "enabled|<count>:<s1>,<s2>,<s3>,<s4>") │
└────────────────────────────────────────────────────────────────────┘
↓ compute.provision.post fires
┌────────────────────────────────────────────────────────────────────┐
│ 3. SUBSCRIPTION TRIGGERS addDataDisksOnDeploy │
│ Subscription: Essential Coach VM — Add Data Disks on Provision │
│ → Blocking (Ansible waits) │
│ → Workflow wrapper passes inputProperties to the action │
│ → Action reads Backup tag, parses disk config │
│ → Resolves VM in vCenter, finds primary datastore │
│ → Attaches each disk via reconfigVM_Task │
│ → Subscription completes → Ansible post-deploy runs │
└────────────────────────────────────────────────────────────────────┘
getNetworkProfileTag is a utility action used by alternate blueprint variants and custom forms that need to map a segment to an NSX profile tag dynamically. It is not called directly by example-vm-blueprint-v8.8.3.yaml (which uses a static segment:<n> constraint), but it is retained because it encodes the production routing logic and is referenced by development branches of the blueprint that use capability-tag-based placement.
Module: com.essential.aria
Version: 12 (FINAL)
Triggered by: Event subscription on compute.provision.post
Author: Noah Farshad
Last updated: 2026-04-16
Attaches additional data disks to a freshly-provisioned VM before Ansible post-deploy runs. Implements Phase 3 of the disk migration described in the blueprint v8.8.0 changelog — the replacement for Aria's broken blueprint-level conditional disks.
Aria's storage validator allocates Cloud.vSphere.Disk resources regardless of count: 0 expressions, which caused every deployment with additionalDiskCount = 0 to fail with "disk and compute storage are not compatible." Removing disk resources from the blueprint and attaching them post-provisioning via vCenter API bypasses the validator entirely. vCenter places each disk on the VM's already-selected datastore.
The blueprint encodes disk configuration into the VM's Backup tag using a pipe-delimited format:
<backup_status>|<count>:<size1>,<size2>,<size3>,<size4>
Examples:
"disabled|1:50,100,100,100" → backup off, 1 disk at 50 GB
"enabled|0:100,100,100,100" → backup on, no additional disks
"disabled|3:50,100,200,100" → backup off, 3 disks at 50/100/200 GB
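The decoding step can be sketched in plain JavaScript. This is a minimal, hypothetical reconstruction of the tag-parsing logic implied by the format above (the function name and return shape are assumptions, not the action's actual code):

```javascript
// Decode "<backup_status>|<count>:<s1>,<s2>,<s3>,<s4>" into its parts.
// Only the first <count> sizes are meaningful; the rest are padding.
function parseBackupTag(raw) {
    var parts = raw.split("|");              // e.g. ["disabled", "3:50,100,200,100"]
    var diskPart = parts[1].split(":");      // ["3", "50,100,200,100"]
    var diskCount = parseInt(diskPart[0], 10);
    return {
        backupEnabled: parts[0] === "enabled",
        diskCount: diskCount,
        diskSizesGb: diskPart[1].split(",").map(Number).slice(0, diskCount)
    };
}

parseBackupTag("disabled|3:50,100,200,100");
// → { backupEnabled: false, diskCount: 3, diskSizesGb: [50, 100, 200] }
```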
This encoding exists because Aria's blueprint validator rejects property names starting with __ (which blocks the cleaner extraConfig passthrough) and tags are the only reliable mechanism confirmed present in the compute.provision.post event payload.
| System | Interaction |
|---|---|
| vRO inputs | inputProperties (Properties) — the entire event payload |
| vRO server | Server.findAllForType("VC:VirtualMachine") — VM lookup by name |
| vCenter API | VcVirtualMachineConfigSpec + reconfigVM_Task — disk attach |
| vCenter API | vcVm.config.hardware.device — inspect existing disks & SCSI controller |
| vRO library | com.vmware.library.vc.basic/vim3WaitTaskEnd — wait for task completion |
- Read `resourceNames` and `tags` from `inputProperties`
- Parse the `Backup` tag → extract `diskCount` and `diskSizes[]`
- If `diskCount == 0`, exit cleanly (no work needed)
- Find the VM in vCenter by name
- Resolve the primary datastore from the OS disk's backing
- Discover the existing SCSI controller and enumerate used unit numbers (skipping 7)
- For each requested disk, build a `VcVirtualDisk` + `VcVirtualDiskFlatVer2BackingInfo` (thin provisioned) and issue `reconfigVM_Task`
- Wait for each task before proceeding to the next disk
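The unit-number step deserves a note: on a vSphere SCSI controller, unit number 7 is reserved for the controller itself, so it must be skipped when picking slots for new disks. A hedged sketch of how the free slots might be selected (the helper name is hypothetical, not the action's actual code):

```javascript
// Pick the next free SCSI unit numbers for new disks, skipping unit 7
// (reserved for the controller) and capping at 16 units per controller.
function nextFreeUnitNumbers(usedUnits, needed) {
    var free = [];
    for (var unit = 0; unit < 16 && free.length < needed; unit++) {
        if (unit === 7) continue;                 // controller's own slot
        if (usedUnits.indexOf(unit) === -1) free.push(unit);
    }
    return free;
}

nextFreeUnitNumbers([0], 3);                    // → [1, 2, 3]
nextFreeUnitNumbers([0, 1, 2, 3, 4, 5, 6], 2);  // → [8, 9]
```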
Top of the file:

    var DRY_RUN = false;

Set it to `true` to log every action without actually attaching disks — useful for validation before enabling in a new environment.
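The flag implies a guard at the point of attachment. A hypothetical sketch of that pattern (function and argument names are assumptions, not the file's actual code):

```javascript
var DRY_RUN = true;

// When DRY_RUN is set, log what would happen instead of reconfiguring the VM.
function attachDisk(spec, log) {
    if (DRY_RUN) {
        log("[DRY_RUN] would attach disk: " + JSON.stringify(spec));
        return "skipped";
    }
    // Real path (not implemented in this sketch): build a
    // VcVirtualMachineConfigSpec and issue reconfigVM_Task.
    return "attached";
}
```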
- Does not format or mount disks (Ansible does that after the subscription completes)
- Does not add SCSI controllers (only uses the existing one)
- Does not touch the OS disk or any existing disks
- Does not call the Aria API — operates entirely on the event payload and vCenter
Module: com.essential.aria
Version: 2.0.0
Triggered by: Blueprint input $dynamicEnum on form render
Return type: Array/string
Populates the NIC1 VLAN dropdown in the Service Broker catalog form. Returns only fabric networks tagged servicenow:visible so operators see a curated list instead of all 6000+ networks discovered by Aria.
Without this action, the networkSegment input in the blueprint would need to be a hardcoded oneOf list — every new network would require a blueprint edit and release. With this action plus the servicenow:visible tag, the workflow is:
- Operator adds a network name to `vlan_location_map.json` in the Network Automation toolkit
- Runs `python3 aria_mapping.py --config example_config.yaml --servicenow-tags --execute`
- Next time a user opens the catalog form, the new network is in the dropdown — no blueprint change
From example-vm-blueprint-v8.8.3.yaml:
    networkSegment:
      type: string
      title: NIC1 VLAN
      $dynamicEnum: /data/vro-actions/com.essential.aria/getNetworkSegmentsAll

| System | Endpoint / call |
|---|---|
| vRO `ConfigurationElement` | `AriaCredentials` — reads `vraHost`, `serviceUser`, `servicePassword` |
| vRO `REST:RESTHost` | `Aria-IaaS` — outbound HTTP client |
| Aria CSP | `POST /csp/gateway/am/api/login` — initial auth, returns `refresh_token` |
| Aria IaaS | `POST /iaas/api/login` — exchange `refresh_token` for Bearer token |
| Aria IaaS | `GET /iaas/api/fabric-networks?apiVersion=2021-07-15&$top=200&$skip=N` — paginated scan |
- Load credentials from the `AriaCredentials` ConfigurationElement
- Resolve the `Aria-IaaS` REST Host
- Authenticate (two-step CSP → IaaS flow)
- Page through all fabric networks (200 per page)
- For each network, check if any tag has key `servicenow` and value `visible`
- If yes, add the network name to the result set (deduplicated)
- Sort alphabetically and return
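The filter/dedupe/sort portion of the flow can be sketched over already-fetched pages. The page shape `{ content: [{ name, tags: [{ key, value }] }] }` follows the IaaS fabric-networks response; the function name is hypothetical:

```javascript
// Collect the names of networks tagged servicenow:visible across all pages,
// deduplicated and sorted alphabetically.
function visibleNetworkNames(pages) {
    var seen = {};
    var names = [];
    pages.forEach(function (page) {
        page.content.forEach(function (net) {
            var tagged = (net.tags || []).some(function (t) {
                return t.key === "servicenow" && t.value === "visible";
            });
            if (tagged && !seen[net.name]) {
                seen[net.name] = true;
                names.push(net.name);
            }
        });
    });
    return names.sort();
}
```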
Prerequisites:

- A `ConfigurationElement` named `AriaCredentials` with three attributes:
  - `vraHost` (string) — `vra.example.com`
  - `serviceUser` (string) — Aria service account username
  - `servicePassword` (SecureString) — Aria service account password
- A `REST:RESTHost` named `Aria-IaaS` pointing to `https://vra.example.com`
See the "Setup" section below for creating these from scratch.
Module: com.essential.aria
Version: 2.0.0
Triggered by: Alternate blueprint variants (not v8.8.3 production)
Return type: Array/string
Given a selected networkSegment and location, returns the NSX network profile capability tag that should be used as a placement constraint on Cloud.vSphere.Network. Implements the segment-to-profile routing logic.
| Segment pattern | Tag returned | Target profile |
|---|---|---|
| `G-*` | `network:global-stretched` | NSX Global Stretched |
| Anything else, location=TX-SDDC | `network:tx-overlay` | NSX Overlay TX |
| Anything else, location=VA-SDDC | `network:va-overlay` | NSX Overlay VA |
| Empty / missing input | `network:tx-overlay` | Safe default (TX overlay) |
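The routing table translates directly into code. The following is a guess at the action's logic reconstructed from the table, not the file's actual contents; inputs are unwrapped first because the input table lists them as Array/string:

```javascript
// Map a segment name + location to the NSX network profile capability tag.
function routeNetworkProfileTag(networkSegment, location) {
    var seg = Array.isArray(networkSegment) ? networkSegment[0] : networkSegment;
    var loc = Array.isArray(location) ? location[0] : location;
    if (seg && seg.indexOf("G-") === 0) {
        return "network:global-stretched";  // NSX Global Stretched (Federation)
    }
    if (loc === "VA-SDDC") {
        return "network:va-overlay";        // NSX Overlay VA
    }
    return "network:tx-overlay";            // TX overlay, also the safe default
}

routeNetworkProfileTag(["G-CI-IDM-PROD-SEG01"], ["VA-SDDC"]); // → "network:global-stretched"
```

Note the precedence: a `G-*` segment wins over location, matching the table's first row.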
Production v8.8.3 uses a static segment:${input.networkSegment} tag constraint on Cloud_vSphere_Network_1 — the segment name itself drives placement. That pattern works because every network is already tagged with segment:<n> by aria_mapping.py --segment-tags.
getNetworkProfileTag exists for the alternate pattern where the network profile capability tag (not the segment tag) drives placement. This is the original design used by earlier blueprint versions and by development branches experimenting with NSX Federation routing.
| Input | Type | Example |
|---|---|---|
| `networkSegment` | Array/string | `G-CI-IDM-PROD-SEG01` |
| `location` | Array/string | `TX-SDDC` or `VA-SDDC` |
Three reasons:
- Documentation — encodes the production routing logic in executable form; a developer onboarding onto the project can read this file to understand the NSX profile architecture
- Fallback — if the `segment:*` tag strategy ever needs to be replaced (e.g., if Essential Coach moves to pure capability-tag placement), this action is ready to wire into a blueprint
- Pattern template — the Federation-aware routing logic (`G-*` → global, else local by DC) is the canonical pattern and is reused elsewhere
- Does not call any API — pure string logic
- Does not verify the network profile actually exists — assumes `aria_mapping.py --tags` has been run
If this repo is ever used to bootstrap a new vRO or recover from a rebuild, here's the complete setup. Import the three .js files first, then build the supporting objects in this order.
Step 1: create the action module.

vRO Client → Library → Actions → New Module

- Name: `com.essential.aria`
- Description: Essential Coach custom Aria actions
Step 2: import the actions. For each `.js` file in `com.essential.aria/`:

vRO Client → Library → Actions → com.essential.aria → New Action

- Name: match the filename without the extension (`addDataDisksOnDeploy`, `getNetworkSegmentsAll`, `getNetworkProfileTag`)
- Paste the file contents into the Script tab
- Set the Return type and inputs per the action's header comment:

| Action | Inputs | Return type |
|---|---|---|
| `addDataDisksOnDeploy` | `inputProperties` (Properties) | void |
| `getNetworkSegmentsAll` | none | Array/string |
| `getNetworkProfileTag` | `networkSegment` (string), `location` (string) | Array/string (check the Array box) |
Step 3: create the credentials ConfigurationElement.

vRO Client → Library → Configurations → New Folder → Essential Coach → New Configuration

- Name: `AriaCredentials`
- Add three attributes:

| Key | Type | Value |
|---|---|---|
| `vraHost` | string | `vra.example.com` |
| `serviceUser` | string | Aria admin/service account username |
| `servicePassword` | SecureString | password (rotate via Aria Secrets, update here after) |
Step 4: register the REST host.

vRO Client → Inventory → HTTP-REST → Add a REST host

- Name: `Aria-IaaS` (exact — the actions search for this name)
- URL: `https://vra.example.com`
- Authentication: none at the REST Host level (the actions handle auth via the `/csp/gateway/am/api/login` flow using credentials from `AriaCredentials`)
- SSL: import/trust the Aria certificate if using a self-signed or internal CA
Step 5: create the subscription and its workflow wrapper.

Aria Assembler → Extensibility → Subscriptions → New Subscription
| Field | Value |
|---|---|
| Name | Essential Coach VM — Add Data Disks on Provision |
| Description | Reads additionalDiskCount + disk1-4SizeGB from VM customProperties and attaches data disks via vCenter API. Phase 2 = dry-run / log only. |
| Status | Enable subscription |
| Organization scope | Essential Coach CIO Services (provider) |
| Event Topic | Compute post provision (compute.provision.post) |
| Condition | Off (no filter — fires for all compute post-provision events) |
| Action/workflow | addDataDisksOnDeploy-workflow (the wrapper workflow that calls the action) |
| Blocking | On — blocks the deployment until disks are attached |
| Timeout | 10 min |
| Priority | 10 (default) |
Note on the workflow wrapper: Aria subscriptions call workflows, not actions directly. The subscription above references addDataDisksOnDeploy-workflow — a one-element workflow whose sole scriptable task calls System.getModule('com.essential.aria').addDataDisksOnDeploy(inputProperties). If this workflow doesn't exist in a fresh vRO, create it:
vRO Client → Library → Workflows → New Folder → Essential Coach → New Workflow

- Name: `addDataDisksOnDeploy-workflow`
- Add one input: `inputProperties` (Properties)
- Add one scriptable task with the content `System.getModule("com.essential.aria").addDataDisksOnDeploy(inputProperties);`
- Save and close
Step 6: wire the dynamic enum into the blueprint.

Aria Assembler → Design → Blueprints → open essential-coach-vm-deployment → Inputs tab → networkSegment
The dynamic enum path should be:
/data/vro-actions/com.essential.aria/getNetworkSegmentsAll
Save, release a new version, and open the Service Broker catalog item. The NIC1 VLAN dropdown should populate with whatever networks are currently tagged servicenow:visible.
| Action | Reads from | Writes to | Called by |
|---|---|---|---|
| `addDataDisksOnDeploy` | Event payload, vCenter VM | vCenter VM (attach disks) | Subscription on compute.provision.post |
| `getNetworkSegmentsAll` | Aria IaaS API | Nothing (read-only) | Blueprint input $dynamicEnum at form render |
| `getNetworkProfileTag` | Its own inputs | Nothing (pure logic) | Alternate blueprints / development branches |
| Supporting object | Used by |
|---|---|
| `AriaCredentials` ConfigurationElement | `getNetworkSegmentsAll` |
| `Aria-IaaS` REST Host | `getNetworkSegmentsAll` |
| `addDataDisksOnDeploy-workflow` | The subscription (wraps the action) |
| Subscription "Essential Coach VM — Add Data Disks on Provision" | Triggered by the Aria event bus on compute.provision.post |
If the NIC1 VLAN dropdown comes up empty, getNetworkSegmentsAll can't authenticate, can't reach Aria, or no networks are tagged `servicenow:visible`. In order of likelihood:
- Check vRO → Action Runs for the most recent `getNetworkSegmentsAll` execution — the System.log output will tell you which step failed
- Verify `AriaCredentials` has all three attributes populated (especially `servicePassword`)
- Verify the `Aria-IaaS` REST Host responds — open it in vRO and hit "Reload"
- Run `python3 aria_mapping.py --config example_config.yaml --list-networks | grep servicenow` to confirm at least some networks carry the tag
- If rotating the Aria admin password, update `AriaCredentials.servicePassword` — Aria Secrets rotation does not propagate here
If the subscription never fires:

- Subscription is disabled — check Assembler → Extensibility → Subscriptions → Status toggle
- Subscription is scoped to the wrong organization — verify Organization scope = Essential Coach CIO Services (provider)
- The `addDataDisksOnDeploy-workflow` wrapper is missing — rebuild per step 5 above
If the subscription fires but disks are not attached:

- Check Assembler → Extensibility → Workflow Runs for the most recent `addDataDisksOnDeploy-workflow` execution
- Look for `[addDataDisksOnDeploy] Backup tag raw: ...` in the log — if missing, the Backup tag didn't make it to the event payload (a blueprint issue, not vRO)
- Look for `VM not found in vCenter` — the VM name in `resourceNames` doesn't match any VM. Usually a race condition; increase the subscription timeout above 10 minutes
- Look for `No SCSI controller found` — rare, but happens on certain Linux templates; add the SCSI controller to the template
If Ansible's post-deploy runs before the disks are attached, the subscription is not marked Blocking. Go to the subscription, enable Block execution of events in topic, save, and redeploy a test VM.
If datastore resolution fails, the VM has no existing VirtualDisk with a backing — this shouldn't happen for a cloned VM, but can if the template has no OS disk. The action falls back to `vcVm.datastore[0]`, which may be empty on VMs without any disks. Check the template.
| Action | Version | Last major change |
|---|---|---|
| `addDataDisksOnDeploy` | 12 (FINAL) | Disk data read from Backup tag pipe-delimited payload; no disk-specific tag needed |
| `getNetworkSegmentsAll` | 2.0.0 | Switched from hardcoded list to servicenow:visible tag filtering |
| `getNetworkProfileTag` | 2.0.0 | Federation-aware routing (G-* → global, else TX/VA by location) |
Original author: Noah Farshad (noah@essential.coach)
Engagement: VMware / Aria Automation reference implementation