Production Cloud Assembly blueprint and Service Broker custom form for the Essential Coach VM self-service deployment pipeline. These two artifacts are paired — the blueprint defines the deployment logic and the custom form defines the user experience. Both must be imported together for the catalog item to behave correctly.
```
VM_Blueprints/
├── example-vm-blueprint-v8.8.3.yaml   Cloud Assembly blueprint (deployment logic)
├── example-vm-customform-v1.0.yaml    Service Broker custom form (user-facing UI)
└── README.md                          This file
```
| | Blueprint (`example-vm-blueprint-v8.8.3.yaml`) | Custom Form (`example-vm-customform-v1.0.yaml`) |
|---|---|---|
| What it defines | The VM topology, placement, post-deploy wiring | How the request form looks and behaves |
| Lives in | Cloud Assembly → Design → Blueprints | Service Broker → Content & Policies → Custom Forms |
| Scope | Per-project | Per catalog item (attached to the blueprint via Service Broker) |
| User sees it? | No (internal topology) | Yes (the catalog request form) |
| Without it | No deployment happens | Aria falls back to an auto-generated flat form — all 26 inputs shown in one scroll, no conditional visibility, no grouping |
The blueprint has 26 inputs. Aria's default catalog form renders them in a single flat list with no grouping or conditional logic. The custom form groups them into three tabs, hides fields that aren't relevant (e.g., Windows OS picker for Linux requests), and enforces the right flow.
- Name: Essential Coach VM Deployment
- Version: 8.8.3
- Last updated: 2026-04-14
- Author: Noah Farshad (noah@essential.coach)
A single unified blueprint that handles Windows, Linux, and Oracle VM deployments across both TX-SDDC and VA-SDDC datacenters. One catalog item, three operating system families, five environments, two datacenters.
- User submits catalog request (Service Broker → Cloud Assembly)
- Blueprint generates the VM name from `systemCode` + environment suffix + `serverNameSuffix`
- Placement by `location:${input.location}` tag → selects TX or VA cluster
- Image selection by catalog item → Windows/Linux/Oracle template
- Primary NIC resolved by `segment:${input.networkSegment}` tag constraint
- BlueCat IPAM provider allocates an IP from the matching range (static assignment)
- Guest customization (Windows: `aria-windows-postdeploy` Sysprep spec; Linux: native)
- `compute.provision.post` event fires → vRO subscription attaches any additional disks
- `Cloud.Ansible` resource runs the post-deploy playbook from the `esxq-vra-ansible` control node
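Placement and primary-NIC resolution rely on standard tag constraints. A minimal sketch (standard Cloud Assembly syntax, not verbatim from the production blueprint):

```yaml
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      # Placement: matches the cloud zone tagged location:TX-SDDC or location:VA-SDDC
      constraints:
        - tag: 'location:${input.location}'
  Cloud_vSphere_Network_1:
    type: Cloud.vSphere.Network
    properties:
      networkType: existing
      # Primary NIC: matches the fabric network tagged segment:<selected VLAN>
      constraints:
        - tag: 'segment:${input.networkSegment}'
```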
Inputs fall into eight logical groups. Every input has a default except businessJustification and serverNameSuffix (required free-text) and networkSegment (required dynamic enum).
- **Identity & Ownership (2):** `requestFor`, `requestedForUser`
- **Catalog & OS Selection (4):** `catalogItem`, `windowsOsVersion`, `linuxOsVersion`, `joinDomain`
- **VM Naming & Placement (7):** `systemCode` (67 options), `environment`, `location`, `tier`, `serverNameSuffix`, `businessJustification`, `ciocTicket`
- **Compute Sizing (2):** `cpuCount` (2/4/8/16/32), `memoryGB` (4/8/16/32/64/128)
- **Networking (3):** `networkSegment` (dynamic from vRO), `secondNic`, `secondNetworkSegment`
- **Disks (5):** `additionalDiskCount` (0–4), `disk1SizeGB`–`disk4SizeGB` (1–2048 GB, default 100)
- **Operational Flags (3):** `monitoringRequired`, `backupRequired`, `comments`
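As a sketch, a few of these inputs as they might be declared in the blueprint's `inputs:` block (ranges and defaults taken from the groups above; exact titles may differ in the shipped YAML):

```yaml
inputs:
  additionalDiskCount:
    type: integer
    title: Number of Additional Disks
    default: 0
    enum: [0, 1, 2, 3, 4]
  disk1SizeGB:
    type: integer
    title: Disk 1 Size (GB)
    default: 100
    minimum: 1
    maximum: 2048
  businessJustification:
    type: string
    title: Business Justification
    minLength: 10
    maxLength: 2000
```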
The blueprint declares five resources:
| Resource | Type | Purpose |
|---|---|---|
| `Cloud_vSphere_Machine_1` | `Cloud.vSphere.Machine` | The VM itself |
| `Cloud_vSphere_Network_1` | `Cloud.vSphere.Network` | Primary NIC (always present) |
| `Cloud_vSphere_Network_2` | `Cloud.vSphere.Network` | Optional second NIC (Linux/Oracle only) |
| `Cloud_Ansible_Windows` | `Cloud.Ansible` | Runs `windows_postdeploy.yml` (Windows only) |
| `Cloud_Ansible_Linux` | `Cloud.Ansible` | Runs Linux post-deploy (Linux/Oracle only) |
Three resources use the `count:` expression pattern to enable/disable themselves based on inputs. This pattern works for network and `Cloud.Ansible` resources. It did not work for `Cloud.vSphere.Disk` resources — that is why disks were moved to a post-provisioning vRO subscription. See the "Disk attachment architecture" section below.
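The pattern looks roughly like this, using the second NIC as the example (expression illustrative, not verbatim from the blueprint):

```yaml
Cloud_vSphere_Network_2:
  type: Cloud.vSphere.Network
  properties:
    # 1 instance for Linux/Oracle requests with the second NIC enabled, 0 otherwise.
    # Works for Cloud.vSphere.Network and Cloud.Ansible; the storage validator
    # ignored it on Cloud.vSphere.Disk resources (hence the vRO workaround).
    count: '${input.catalogItem != "Windows VM" && input.secondNic ? 1 : 0}'
    networkType: existing
    constraints:
      - tag: 'segment:${input.secondNetworkSegment}'
```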
Every attempt at blueprint-native conditional disks failed. Aria's storage validator allocated disk resources regardless of `count: 0` expressions, and deployments with `additionalDiskCount = 0` failed with:

```
Unable to provision disk as disk and compute storage are not compatible
```
Six consecutive versions tried variations (location constraints, attachedTo, stripped properties, provisioningType: thin, quoted count expressions) — all failed.
Disk resources are removed entirely from the blueprint. Disk inputs are preserved and flow into the VM via two channels:
- `customProperties` — full set (`additionalDiskCount`, `disk1SizeGB` through `disk4SizeGB`)
- VM tags — the `Backup` tag carries a pipe-delimited encoding: `<backup_status>|<count>:<s1>,<s2>,<s3>,<s4>`
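The `Backup` tag value can be assembled from input bindings — a hypothetical sketch of the encoding described above (the production expression may differ, e.g. by using explicit `to_string()` conversions):

```yaml
Cloud_vSphere_Machine_1:
  type: Cloud.vSphere.Machine
  properties:
    tags:
      # e.g. "true|2:100,250,100,100" for backupRequired=true with two extra disks
      - key: Backup
        value: '${input.backupRequired + "|" + input.additionalDiskCount + ":" + input.disk1SizeGB + "," + input.disk2SizeGB + "," + input.disk3SizeGB + "," + input.disk4SizeGB}'
```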
The `compute.provision.post` vRO subscription reads these values from the event payload's `tags` field (no vSphere API lookup needed for the data), then issues `VcVirtualMachine.reconfigVM_Task` to attach the disks to the VM's already-selected datastore. vCenter handles placement — the Aria storage validator never sees the disks.
The Aria blueprint validator rejects property names starting with `__`, which blocks the `extraConfig` passthrough pattern. Tags are the only reliable passthrough mechanism confirmed present in the `compute.provision.post` event payload.
| Phase | Status | What it does |
|---|---|---|
| Phase 1 | Current (v8.8.3) | Disks removed from blueprint. Inputs flow into customProperties/tags. No disk attachment happens yet — baseline deployment is clean. |
| Phase 2 | Next | Register vRO subscription with dry-run logging to validate event firing and payload structure. |
| Phase 3 | Final | Enable actual disk attach via reconfigVM_Task. Ansible post-deploy then formats/mounts the new disks. |
The v8.8.3 blueprint is complete and in production for Phase 1. The vRO action and subscription registration ship in the `vRO_Workflows/` folder alongside the `addDataDisksOnDeploy` JavaScript action.
Values surfaced to the user after successful deployment:
| Output | Source |
|---|---|
| `vmName` | `resource.Cloud_vSphere_Machine_1.resourceName` |
| `ipAddress` | `resource.Cloud_vSphere_Machine_1.networks[0].address` |
| `gateway` | `customProperties.bluecatGateway` (populated by BlueCat IPAM provider) |
| `networkSegment` | Input echoed back |
| `clusterName` | `resource.Cloud_vSphere_Machine_1.clusterName` |
| `datastoreName` | `resource.Cloud_vSphere_Machine_1.storage.datastoreName` |
| `additionalDiskCount` | Input echoed back (useful for ServiceNow reconciliation) |
The blueprint references two Aria Secrets — they must exist in the Aria project for the deployment to succeed:
| Secret | Used by | Purpose |
|---|---|---|
| `${secret.windows_local_admin}` | `Cloud_Ansible_Windows` | Local Administrator password for WinRM |
| `${secret.ansible_password}` | `Cloud_Ansible_Windows`, `Cloud_Ansible_Linux` | Ansible control-node service account password |
Rotate via: Aria Assembler → Infrastructure → Secrets. No blueprint change required after rotation — the ${secret.*} reference is resolved at deploy time.
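Inside the blueprint the references look roughly like this (`${secret.*}` is the actual reference syntax; the surrounding resource properties are illustrative):

```yaml
Cloud_Ansible_Windows:
  type: Cloud.Ansible
  properties:
    username: Administrator                  # illustrative account name
    password: '${secret.windows_local_admin}'  # resolved at deploy time, never stored in YAML
```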
VM name construction:
`systemCode + envLetter + "-" + serverNameSuffix`
| Environment | Windows | Linux | Oracle |
|---|---|---|---|
| DEV / SBX | D | U | O |
| PRD | P | P | P |
| QA | Q | Q | Q |
| DR | R | R | R |
Example: `systemCode=AUTH`, `environment=DEV`, `catalogItem=Windows VM`, `serverNameSuffix=UTIL01` → `AUTHD-UTIL01`
vCenter folder path:
`<TX-CORP-IT | VA-CORP-IT>/<systemCode>/<envFolder>`
Where `envFolder` = PRD / QA / DR / SBX / DEV.
Example: TX-SDDC + AUTH + PRD → `TX-CORP-IT/AUTH/PRD`
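The two constructions can be sketched as blueprint expressions (illustrative — the production name expression also branches the environment letter on `catalogItem`, per the table above; the snippet below shows the Windows branch only):

```yaml
Cloud_vSphere_Machine_1:
  type: Cloud.vSphere.Machine
  properties:
    # Windows env letters: DEV/SBX = D, PRD = P, QA = Q, DR = R
    name: '${input.systemCode + (input.environment == "PRD" ? "P" : input.environment == "QA" ? "Q" : input.environment == "DR" ? "R" : "D") + "-" + input.serverNameSuffix}'
    # TX-SDDC → TX-CORP-IT, VA-SDDC → VA-CORP-IT
    folderName: '${(input.location == "TX-SDDC" ? "TX-CORP-IT" : "VA-CORP-IT") + "/" + input.systemCode + "/" + input.environment}'
```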
What it does: Overrides Aria's auto-generated request form with a purpose-built layout that groups 26 inputs into three logical tabs, hides irrelevant fields based on earlier selections, and provides contextual help (signposts) on every field.
Without this form, users would see a single long scroll of every input — including Windows OS options on Linux requests, disk size fields whether or not disks are requested, and NFS NIC selection for Windows VMs where it doesn't apply. The form enforces the right flow.
The "who, what, where" tab. Always visible fields only.
- Project — Aria project (populated by the Service Broker `projects` script action)
- Deployment Name — Aria deployment name (900 char max)
- Request For — "Myself" or "Someone else"
- Requested For User — conditionally visible when `Request For = Someone else`
- Catalog Item — Windows / Linux / Oracle (drives most of the other conditional visibility)
- Business Justification — 10–2000 chars, required
- System — 67-entry dropdown (AD, APPS, AUTH, IDM, SQL, etc.), required
- Environment — DEV / PRD / QA / DR / SBX
- Data Center Location — TX-SDDC / VA-SDDC
- Application Tier — App / DB / Web
The "how it's built" tab. Heavy use of conditional visibility.
- Server Name (Suffix) — free-text, 1–50 chars, required
- CPU — 2/4/8/16/32 vCPU
- Memory (GB) — 4/8/16/32/64/128 GB
- NIC1 VLAN — driven by the `getNetworkSegmentsAll` vRO action (tag-filtered dropdown)
- Add Second NIC (NFS)? — visible only when `Catalog Item = Linux or Oracle`
- NIC2 VLAN (NFS) — visible only when `secondNic = true`; hardcoded list of 8 NFS segments (4 TX, 4 VA)
- Windows OS Version — visible only when `Catalog Item = Windows`
- Linux OS Version — visible only when `Catalog Item = Linux or Oracle`
- Join Domain — WORKGROUP / corp.example.com / corpdev.example.com
- Number of Additional Disks — 0 / 1 / 2 / 3 / 4
- Disk 1 Size (GB) — visible when `additionalDiskCount ≥ 1`
- Disk 2 Size (GB) — visible when `additionalDiskCount ≥ 2`
- Disk 3 Size (GB) — visible when `additionalDiskCount ≥ 3`
- Disk 4 Size (GB) — visible when `additionalDiskCount ≥ 4`
- Is Monitoring Required? — default true
- Is Backup Required? — default false
- Comments — free-text, 2000 char max
| Field | Visible when |
|---|---|
| `requestedForUser` | `requestFor = "Someone else"` |
| `secondNic` | `catalogItem = "Linux VM"` OR `catalogItem = "Oracle VM"` |
| `secondNetworkSegment` | `secondNic = true` |
| `windowsOsVersion` | `catalogItem = "Windows VM"` |
| `linuxOsVersion` | `catalogItem = "Linux VM"` OR `catalogItem = "Oracle VM"` |
| `disk1SizeGB` | `additionalDiskCount ∈ {1, 2, 3, 4}` |
| `disk2SizeGB` | `additionalDiskCount ∈ {2, 3, 4}` |
| `disk3SizeGB` | `additionalDiskCount ∈ {3, 4}` |
| `disk4SizeGB` | `additionalDiskCount = 4` |
Two fields pull their options from outside the form YAML:
| Field | Source |
|---|---|
| `project` | Aria built-in `projects` script action (lists projects the user has access to) |
| `networkSegment` | `com.essential.aria/getNetworkSegmentsAll` — the vRO action that queries fabric networks and filters by the `servicenow:visible` tag |
When a user opens the form, Aria calls the vRO action in real time. To add a new network to the dropdown, tag it `servicenow:visible` via `aria_mapping.py --servicenow-tags --execute`. No form edit or release is required.
The `secondNetworkSegment` dropdown is the only field in the form with a hardcoded value list. It ships with 8 entries:
| Label | Value | DC |
|---|---|---|
| txdc-sddc-nfs v105 (TX) | txdc-sddc-nfs v105 | TX |
| txdc-sddc-nfs-secondary v105 (TX) | txdc-sddc-nfs-secondary v105 | TX |
| txdc-sddc-nfs-vms-secondary v105 (TX) | txdc-sddc-nfs-vms-secondary v105 | TX |
| txdev-m-nfs v125 (TX Dev) | txdev-m-nfs v125 | TX Dev |
| vadc-sddc-nfs v1105 (VA) | vadc-sddc-nfs v1105 | VA |
| VADC-SDDC-NFS-DB-SECONDARY-V1105 (VA DB) | VADC-SDDC-NFS-DB-SECONDARY-V1105 | VA DB |
| vadc-sddc-nfs-vms-secondary v1105 (VA) | vadc-sddc-nfs-vms-secondary v1105 | VA |
| vadev-m-nfs v125 (VA Dev) | vadev-m-nfs v125 | VA Dev |
To add a new NFS network, edit the `secondNetworkSegment.valueList` block in the form YAML, save, and publish a new form version. Consider migrating these to `getNetworkSegmentsAll` with an additional tag like `servicenow:nfs-visible` if the list grows — but for 8 relatively static entries, hardcoding is simpler.
If this repo is ever used to bootstrap a new Aria environment or recover from a rebuild, here is the end-to-end setup. Both files in this folder need to be imported, and a few other pieces need to exist first.
These are documented in detail in the Network Automation and vRO_Workflows READMEs. Summary:
- Aria infrastructure configured — cloud accounts, cloud zones with `location:TX-SDDC` / `location:VA-SDDC` tags, network profiles with `segment:<n>` tags, image mappings for all templates, customization spec `aria-windows-postdeploy`, BlueCat IPAM integration
- `com.essential.aria` vRO module — with `getNetworkSegmentsAll`, `getNetworkProfileTag`, and `addDataDisksOnDeploy` actions imported
- `AriaCredentials` ConfigurationElement and `Aria-IaaS` REST Host in vRO
- Subscription `Essential Coach VM — Add Data Disks on Provision` on `compute.provision.post`
- Aria Secrets `windows_local_admin` and `ansible_password` in the project
- Ansible control node `esxq-vra-ansible` registered as an integration with playbooks in place
Aria Assembler → Design → Blueprints → New Blueprint
- Name: `essential-coach-vm-deployment`
- Project: (your target project)
- Paste the contents of `example-vm-blueprint-v8.8.3.yaml` into the code editor
- Save
- Release version `8.8.3`
Service Broker → Content & Policies → Content Sources
Add the blueprint as a content source if it's not already published:
- Type: VMware Cloud Templates
- Project: (same project as the blueprint)
- Save and run content sharing so the blueprint appears in the catalog
Service Broker → Content & Policies → Content
Find the `essential-coach-vm-deployment` catalog item → Actions → Customize form
In the form editor:
- Click ACTIONS → Import (or paste the form YAML directly)
- Load `example-vm-customform-v1.0.yaml`
- Verify the three tabs render: "Request Details", "VM Specifications", "Monitoring & Options"
- Verify conditional visibility — toggle `catalogItem` between Windows/Linux/Oracle, confirm the OS picker and second-NIC fields show/hide correctly
- Save → Enable
Service Broker → Consume → Catalog
- Find `essential-coach-vm-deployment`
- Click Request
- Verify the form matches the three-tab layout
- Verify the NIC1 VLAN dropdown populates (this confirms `getNetworkSegmentsAll` + `AriaCredentials` + the `Aria-IaaS` REST Host are all working)
- Verify the Linux/Oracle → Second NIC → NFS flow works
- Verify the `additionalDiskCount` → disk size fields appear progressively
Submit a test deployment to validate end-to-end.
For this blueprint + form combo to deploy successfully:
- Cloud zones tagged `location:TX-SDDC` and `location:VA-SDDC` (via `aria_mapping.py --tags`)
- Network profiles containing fabric networks tagged `segment:<n>` (via `mapper.py --populate` + `aria_mapping.py --segment-tags`)
- Fabric networks tagged `servicenow:visible` for any network that should appear in the dropdown (via `aria_mapping.py --servicenow-tags`)
- Image mappings defined per region
- Customization spec `aria-windows-postdeploy` registered in vCenter
- Secrets `windows_local_admin` and `ansible_password` populated in the Aria project
- BlueCat IPAM provider installed and integrated as the IPAM endpoint on the network profiles (see `BlueCat_IPAM/`)
- `com.essential.aria` action module with the `getNetworkSegmentsAll` action published
- `addDataDisksOnDeploy` action + subscription + workflow wrapper (see `vRO_Workflows/`)
- Cloud Account `esxq-vra-ansible` registered in Aria as an integration
- Inventory file at `/home/ansible/production/ansible/playbooks/inventories/Prod/hosts`
- Master playbook at `/home/ansible/production/ansible/playbooks/production/windows_postdeploy.yml`
- Eight Ansible roles installed on the control node (see `Ansible_Windows_PostDeploy/`)
- Add the template mapping in Cloud Assembly (via `aria_mapping.py --images`)
- Edit blueprint `inputs.windowsOsVersion` (or `linuxOsVersion`) to add the new `oneOf` entry
- Edit form `schema.windowsOsVersion.valueList` (or `linuxOsVersion.valueList`) to add the matching entry
- Release a new blueprint version AND a new form version
Two places need updates:
- Blueprint: `inputs.systemCode.oneOf`
- Form: `schema.systemCode.valueList`
The VM name and folder path generation will use it automatically — no logic changes needed.
Three places need updates:
- Blueprint: `inputs.environment.oneOf` — add the new const value
- Blueprint: VM name and `folderName` expressions on `Cloud_vSphere_Machine_1` — add the new ternary branch
- Form: `schema.environment.valueList` — add the new label/value
Form only (blueprint doesn't need to know about specific NFS networks):
- Edit `schema.secondNetworkSegment.valueList` in the form YAML
- Release a new form version
Two places:
- Blueprint: `inputs.joinDomain.oneOf`
- Form: `schema.joinDomain.valueList`
The value flows through as a customProperty to Ansible — no blueprint logic changes needed. The Ansible `domain_join` role reads `joinDomain` from the playbook extra-vars and acts accordingly (WORKGROUP = skip).
You're on a superseded version. Upgrade to v8.8.0 or later — disks are no longer managed by the blueprint.
The BlueCat IPAM provider failed to allocate. Check:
- The selected `networkSegment` has a BlueCat range linked (run `mapper.py --ipam-map --dry-run` to verify)
- The `admin` service account on BlueCat has allocation permission on that range
- The BlueCat IPAM provider logs in Aria Extensibility → Action Runs
The segment is not tagged `servicenow:visible`. Add the segment to `vlan_location_map.json` in the Network Automation toolkit and run:

```
python3 aria_mapping.py --config example_config.yaml --servicenow-tags --execute
```

The custom form isn't enabled or has been overridden. Go to Service Broker → Content & Policies → Content → `essential-coach-vm-deployment` → Actions → Customize form and confirm Enabled in the top bar. If it shows as Disabled, click Enable.
Same issue — custom form not enabled, or the catalog item is using the auto-generated form.
Domain join is handled by the Ansible domain_join role, not Sysprep. Check:
- The `joinDomain` customProperty is not `WORKGROUP`
- The Ansible control node can reach the domain controller
- The domain service account credentials are valid (see `Ansible_Windows_PostDeploy/`)
`Cloud_vSphere_Network_2` has `count: 0` for Windows catalog items by design — only Linux and Oracle support the second-NIC pattern. If the request is Linux/Oracle:

- Confirm `secondNic = true` on the request
- If `secondNetworkSegment` is empty, it falls back to `networkSegment` — confirm that's intended
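The fallback can be expressed as a ternary in the NIC2 tag constraint — a sketch, not the verbatim production expression:

```yaml
Cloud_vSphere_Network_2:
  type: Cloud.vSphere.Network
  properties:
    networkType: existing
    constraints:
      # Use the chosen NFS segment; fall back to the primary segment when empty
      - tag: 'segment:${input.secondNetworkSegment != "" ? input.secondNetworkSegment : input.networkSegment}'
```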
The form's `secondNetworkSegment` field is only visible when `secondNic = true`. Users must check the "Add Second NIC (NFS)?" box first. If the checkbox itself is missing, the `catalogItem` is Windows — the field is hidden by design.
Full changelog lives in the YAML header (`description:` field). Summary of the v8.x line:
| Version | Change |
|---|---|
| 8.8.3 | Consolidated disk tags (2 instead of 5); action removes tags after processing |
| 8.8.2 | Added disk values as VM tags; confirmed presence in compute.provision.post payload |
| 8.8.1 | (superseded) attempted __ extraConfig passthrough — blocked by validator |
| 8.8.0 | Phase 1 — removed all Cloud.vSphere.Disk resources; disk inputs flow to customProperties/tags |
| 8.7.x | (superseded) attempted quoted count expressions for conditional disks |
| 8.6.10 – 8.6.20 | Disk attachment experiments — all failed Aria's storage validator |
| Version | Change |
|---|---|
| 1.0 | Initial production release. Three-tab layout, conditional visibility for OS/disk/NIC fields, getNetworkSegmentsAll script action for NIC1, hardcoded 8-entry NFS list for NIC2. |
Original author: Noah Farshad (noah@essential.coach) Engagement: VMware / Aria Automation reference implementation