
VM Blueprints & Custom Form

Production Cloud Assembly blueprint and Service Broker custom form for the Essential Coach VM self-service deployment pipeline. These two artifacts are paired — the blueprint defines the deployment logic and the custom form defines the user experience. Both must be imported together for the catalog item to behave correctly.


Directory Layout

```
VM_Blueprints/
├── example-vm-blueprint-v8.8.3.yaml    Cloud Assembly blueprint (deployment logic)
├── example-vm-customform-v1.0.yaml     Service Broker custom form (user-facing UI)
└── README.md                           This file
```

The Two Artifacts, Side by Side

| | Blueprint (example-vm-blueprint-v8.8.3.yaml) | Custom Form (example-vm-customform-v1.0.yaml) |
|---|---|---|
| What it defines | The VM topology, placement, post-deploy wiring | How the request form looks and behaves |
| Lives in | Cloud Assembly → Design → Blueprints | Service Broker → Content & Policies → Custom Forms |
| Scope | Per-project | Per catalog item (attached to the blueprint via Service Broker) |
| User sees it? | No (internal topology) | Yes (the catalog request form) |
| Without it | No deployment happens | Aria falls back to an auto-generated flat form: all 26 inputs in one scroll, no conditional visibility, no grouping |

The blueprint has 26 inputs. Aria's default catalog form renders them in a single flat list with no grouping or conditional logic. The custom form groups them into three tabs, hides fields that aren't relevant (e.g., Windows OS picker for Linux requests), and enforces the right flow.


Part 1 — Blueprint (example-vm-blueprint-v8.8.3.yaml)

Name: Essential Coach VM Deployment
Version: 8.8.3
Last updated: 2026-04-14
Author: Noah Farshad (noah@essential.coach)

A single unified blueprint that handles Windows, Linux, and Oracle VM deployments across both TX-SDDC and VA-SDDC datacenters. One catalog item, three operating system families, five environments, two datacenters.

Deployment flow

  1. User submits catalog request (Service Broker → Cloud Assembly)
  2. Blueprint generates the VM name from systemCode + environment suffix + serverNameSuffix
  3. Placement by location:${input.location} tag → selects TX or VA cluster
  4. Image selection by catalog item → Windows/Linux/Oracle template
  5. Primary NIC resolved by segment:${input.networkSegment} tag constraint
  6. BlueCat IPAM provider allocates IP from the matching range (static assignment)
  7. Guest customization (Windows: aria-windows-postdeploy Sysprep spec; Linux: native)
  8. compute.provision.post event fires → vRO subscription attaches any additional disks
  9. Cloud.Ansible resource runs post-deploy playbook from esxq-vra-ansible control node

Inputs summary (26 total)

Inputs fall into eight logical groups. Every input has a default except businessJustification and serverNameSuffix (required free-text) and networkSegment (required dynamic enum).

Identity & Ownership (2): requestFor, requestedForUser

Catalog & OS Selection (4): catalogItem, windowsOsVersion, linuxOsVersion, joinDomain

VM Naming & Placement (7): systemCode (67 options), environment, location, tier, serverNameSuffix, businessJustification, ciocTicket

Compute Sizing (2): cpuCount (2/4/8/16/32), memoryGB (4/8/16/32/64/128)

Networking (3): networkSegment (dynamic from vRO), secondNic, secondNetworkSegment

Disks (5): additionalDiskCount (0–4), disk1SizeGB–disk4SizeGB (1–2048 GB, default 100)

Operational Flags (3): monitoringRequired, backupRequired, comments
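As an illustration of how one of these inputs might be declared in the blueprint YAML (titles, defaults, and option lists here are a sketch, not copied from the production file):

```yaml
inputs:
  additionalDiskCount:
    type: integer
    title: Number of Additional Disks
    default: 0
    oneOf:            # enumerated choices rendered as a dropdown
      - { title: '0', const: 0 }
      - { title: '1', const: 1 }
      - { title: '2', const: 2 }
      - { title: '3', const: 3 }
      - { title: '4', const: 4 }
  disk1SizeGB:
    type: integer
    title: Disk 1 Size (GB)
    default: 100
    minimum: 1
    maximum: 2048
```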

Resources

The blueprint declares five resources:

| Resource | Type | Purpose |
|---|---|---|
| Cloud_vSphere_Machine_1 | Cloud.vSphere.Machine | The VM itself |
| Cloud_vSphere_Network_1 | Cloud.vSphere.Network | Primary NIC (always present) |
| Cloud_vSphere_Network_2 | Cloud.vSphere.Network | Optional second NIC (Linux/Oracle only) |
| Cloud_Ansible_Windows | Cloud.Ansible | Runs windows_postdeploy.yml (Windows only) |
| Cloud_Ansible_Linux | Cloud.Ansible | Runs Linux post-deploy (Linux/Oracle only) |

Three resources use the count: expression pattern to enable/disable themselves based on inputs. This pattern works for network and Cloud.Ansible resources. It did not work for Cloud.vSphere.Disk resources — that is why disks were moved to a post-provisioning vRO subscription. See the "Disk attachment architecture" section below.
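The count: pattern looks roughly like this (a sketch, not the exact production YAML — the expression shape and constraint tag are illustrative):

```yaml
Cloud_vSphere_Network_2:
  type: Cloud.vSphere.Network
  properties:
    # Resource exists only when the user requested a second NIC on a
    # Linux/Oracle request; count: 0 removes it from the deployment.
    count: '${input.secondNic && input.catalogItem != "Windows VM" ? 1 : 0}'
    constraints:
      - tag: 'segment:${input.secondNetworkSegment}'
```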

Disk attachment architecture (v8.8.x)

The problem (v8.6.17 through v8.7.1)

Every attempt at blueprint-native conditional disks failed. Aria's storage validator allocated disk resources regardless of count: 0 expressions, and deployments with additionalDiskCount = 0 failed with:

Unable to provision disk as disk and compute storage are not compatible

Six consecutive versions tried variations (location constraints, attachedTo, stripped properties, provisioningType: thin, quoted count expressions) — all failed.

The fix (v8.8.0 forward)

Disk resources are removed entirely from the blueprint. Disk inputs are preserved and flow into the VM via two channels:

  1. customProperties — full set (additionalDiskCount, disk1SizeGB through disk4SizeGB)
  2. VM tags — the Backup tag carries a pipe-delimited encoding: <backup_status>|<count>:<s1>,<s2>,<s3>,<s4>

The compute.provision.post vRO subscription reads these values from the event payload's tags field (no vSphere API lookup needed for the data), then issues VcVirtualMachine.reconfigVM_Task to attach the disks to the VM's already-selected datastore. vCenter handles placement — the Aria storage validator never sees the disks.
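For illustration, the tag decoding the subscription performs can be sketched as follows (a hypothetical parseBackupTag helper — the real addDataDisksOnDeploy action may be structured differently):

```javascript
// Decode the Backup tag value "<backup_status>|<count>:<s1>,<s2>,<s3>,<s4>"
// into the disk list the vRO action would attach via reconfigVM_Task.
function parseBackupTag(value) {
  var halves = value.split("|");            // ["backup", "2:100,250,100,100"]
  var backupStatus = halves[0];
  var spec = halves[1].split(":");          // ["2", "100,250,100,100"]
  var count = parseInt(spec[0], 10);
  // All four size slots are always encoded; only the first <count> matter.
  var sizes = spec[1].split(",").map(Number).slice(0, count);
  return { backup: backupStatus === "backup", diskSizesGB: sizes };
}

// Example: two additional disks requested, backup enabled.
parseBackupTag("backup|2:100,250,100,100");
// → { backup: true, diskSizesGB: [100, 250] }
```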

Why tags instead of extraConfig

The Aria blueprint validator rejects property names starting with __, which blocks the extraConfig passthrough pattern. Tags are the only reliable passthrough mechanism confirmed present in the compute.provision.post event payload.

Three-phase rollout

| Phase | Status | What it does |
|---|---|---|
| Phase 1 | Current (v8.8.3) | Disks removed from blueprint. Inputs flow into customProperties/tags. No disk attachment happens yet — baseline deployment is clean. |
| Phase 2 | Next | Register vRO subscription with dry-run logging to validate event firing and payload structure. |
| Phase 3 | Final | Enable actual disk attach via reconfigVM_Task. Ansible post-deploy then formats/mounts the new disks. |

The v8.8.3 blueprint is complete and in production for Phase 1. The vRO action and subscription registration ship in the vRO_Workflows/ folder alongside the addDataDisksOnDeploy JavaScript action.

Outputs (7)

Values surfaced to the user after successful deployment:

| Output | Source |
|---|---|
| vmName | resource.Cloud_vSphere_Machine_1.resourceName |
| ipAddress | resource.Cloud_vSphere_Machine_1.networks[0].address |
| gateway | customProperties.bluecatGateway (populated by BlueCat IPAM provider) |
| networkSegment | Input echoed back |
| clusterName | resource.Cloud_vSphere_Machine_1.clusterName |
| datastoreName | resource.Cloud_vSphere_Machine_1.storage.datastoreName |
| additionalDiskCount | Input echoed back (useful for ServiceNow reconciliation) |

Secrets

The blueprint references two Aria Secrets — they must exist in the Aria project for the deployment to succeed:

| Secret | Used by | Purpose |
|---|---|---|
| ${secret.windows_local_admin} | Cloud_Ansible_Windows | Local Administrator password for WinRM |
| ${secret.ansible_password} | Cloud_Ansible_Windows, Cloud_Ansible_Linux | Ansible control-node service account password |

Rotate via: Aria Assembler → Infrastructure → Secrets. No blueprint change required after rotation — the ${secret.*} reference is resolved at deploy time.
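Inside the blueprint, the reference looks roughly like this (the account name and exact Cloud.Ansible property names here are illustrative, not taken from the production file):

```yaml
Cloud_Ansible_Windows:
  type: Cloud.Ansible
  properties:
    username: ansible_svc                     # illustrative service account name
    password: '${secret.ansible_password}'    # resolved at deploy time, never stored in YAML
```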

Naming conventions

VM name construction:

systemCode + envLetter + "-" + serverNameSuffix
| Environment | Windows | Linux | Oracle |
|---|---|---|---|
| DEV / SBX | D | U | O |
| PRD | P | P | P |
| QA | Q | Q | Q |
| DR | R | R | R |

Example: systemCode=AUTH, environment=DEV, catalogItem=Windows VM, serverNameSuffix=UTIL01 → AUTHD-UTIL01
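The name expression can be sketched as nested ternaries over environment and OS family (a simplified sketch — the production expression covers all five environments and three OS families, and its exact shape may differ):

```yaml
name: >-
  ${input.systemCode +
    (input.environment == "PRD" ? "P" :
     input.environment == "QA"  ? "Q" :
     input.environment == "DR"  ? "R" :
     input.catalogItem == "Linux VM"  ? "U" :
     input.catalogItem == "Oracle VM" ? "O" : "D") +
    "-" + input.serverNameSuffix}
```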

vCenter folder path:

<TX-CORP-IT | VA-CORP-IT>/<systemCode>/<envFolder>

Where envFolder = PRD / QA / DR / SBX / DEV.

Example: TX-SDDC + AUTH + PRD → TX-CORP-IT/AUTH/PRD


Part 2 — Service Broker Custom Form (example-vm-customform-v1.0.yaml)

What it does: Overrides Aria's auto-generated request form with a purpose-built layout that groups 26 inputs into three logical tabs, hides irrelevant fields based on earlier selections, and provides contextual help (signposts) on every field.

Without this form, users would see a single long scroll of every input — including Windows OS options on Linux requests, disk size fields whether or not disks are requested, and NFS NIC selection for Windows VMs where it doesn't apply. The form enforces the right flow.

Three tabs (pages)

Tab 1 — "Request Details"

The "who, what, where" tab. Always visible fields only.

  • Project — Aria project (populated by Service Broker projects script action)
  • Deployment Name — Aria deployment name (900 char max)
  • Request For — "Myself" or "Someone else"
  • Requested For User — conditionally visible when Request For = Someone else
  • Catalog Item — Windows / Linux / Oracle (drives most of the other conditional visibility)
  • Business Justification — 10–2000 chars, required
  • System — 67-entry dropdown (AD, APPS, AUTH, IDM, SQL, etc.), required
  • Environment — DEV / PRD / QA / DR / SBX
  • Data Center Location — TX-SDDC / VA-SDDC
  • Application Tier — App / DB / Web

Tab 2 — "VM Specifications"

The "how it's built" tab. Heavy use of conditional visibility.

  • Server Name (Suffix) — free-text, 1–50 chars, required
  • CPU — 2/4/8/16/32 vCPU
  • Memory (GB) — 4/8/16/32/64/128 GB
  • NIC1 VLAN — driven by the getNetworkSegmentsAll vRO action (tag-filtered dropdown)
  • Add Second NIC (NFS)? — visible only when Catalog Item = Linux or Oracle
  • NIC2 VLAN (NFS) — visible only when secondNic = true; hardcoded list of 8 NFS segments (4 TX, 4 VA)
  • Windows OS Version — visible only when Catalog Item = Windows
  • Linux OS Version — visible only when Catalog Item = Linux or Oracle
  • Join Domain — WORKGROUP / corp.example.com / corpdev.example.com
  • Number of Additional Disks — 0 / 1 / 2 / 3 / 4
  • Disk 1 Size (GB) — visible when additionalDiskCount ≥ 1
  • Disk 2 Size (GB) — visible when additionalDiskCount ≥ 2
  • Disk 3 Size (GB) — visible when additionalDiskCount ≥ 3
  • Disk 4 Size (GB) — visible when additionalDiskCount ≥ 4

Tab 3 — "Monitoring & Options"

  • Is Monitoring Required? — default true
  • Is Backup Required? — default false
  • Comments — free-text, 2000 char max

Conditional visibility rules reference

| Field | Visible when |
|---|---|
| requestedForUser | requestFor = "Someone else" |
| secondNic | catalogItem = "Linux VM" OR catalogItem = "Oracle VM" |
| secondNetworkSegment | secondNic = true |
| windowsOsVersion | catalogItem = "Windows VM" |
| linuxOsVersion | catalogItem = "Linux VM" OR catalogItem = "Oracle VM" |
| disk1SizeGB | additionalDiskCount ∈ {1, 2, 3, 4} |
| disk2SizeGB | additionalDiskCount ∈ {2, 3, 4} |
| disk3SizeGB | additionalDiskCount ∈ {3, 4} |
| disk4SizeGB | additionalDiskCount = 4 |
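In the form YAML these rules become per-field visibility expressions. As a rough sketch (the condition syntax shown is illustrative — exact custom-form expression syntax varies by Aria version):

```yaml
schema:
  requestedForUser:
    state:
      visible:
        value: 'requestFor eq "Someone else"'   # illustrative condition syntax
  windowsOsVersion:
    state:
      visible:
        value: 'catalogItem eq "Windows VM"'
```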

External value sources

Two fields pull their options from outside the form YAML:

| Field | Source |
|---|---|
| project | Aria built-in projects script action (lists projects the user has access to) |
| networkSegment | com.essential.aria/getNetworkSegmentsAll — the vRO action that queries fabric networks and filters by servicenow:visible tag |

When a user opens the form, Aria hits the vRO action in real time. To add a new network to the dropdown, tag it servicenow:visible via aria_mapping.py --servicenow-tags --execute. No form edit or release required.

Hardcoded NFS list (the one exception)

The secondNetworkSegment dropdown is the only field in the form with a hardcoded value list. It ships with 8 entries:

| Label | Value | DC |
|---|---|---|
| txdc-sddc-nfs v105 (TX) | txdc-sddc-nfs v105 | TX |
| txdc-sddc-nfs-secondary v105 (TX) | txdc-sddc-nfs-secondary v105 | TX |
| txdc-sddc-nfs-vms-secondary v105 (TX) | txdc-sddc-nfs-vms-secondary v105 | TX |
| txdev-m-nfs v125 (TX Dev) | txdev-m-nfs v125 | TX Dev |
| vadc-sddc-nfs v1105 (VA) | vadc-sddc-nfs v1105 | VA |
| VADC-SDDC-NFS-DB-SECONDARY-V1105 (VA DB) | VADC-SDDC-NFS-DB-SECONDARY-V1105 | VA DB |
| vadc-sddc-nfs-vms-secondary v1105 (VA) | vadc-sddc-nfs-vms-secondary v1105 | VA |
| vadev-m-nfs v125 (VA Dev) | vadev-m-nfs v125 | VA Dev |

To add a new NFS network, edit the secondNetworkSegment.valueList block in the form YAML, save, and publish a new form version. Consider migrating these to getNetworkSegmentsAll with an additional tag like servicenow:nfs-visible if the list grows — but for 8 relatively static entries, hardcoding is simpler.
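The hardcoded block is a static valueList on the field. A sketch with two of the eight entries (structure illustrative, entries taken from the table above):

```yaml
secondNetworkSegment:
  valueList:
    - label: 'txdc-sddc-nfs v105 (TX)'
      value: 'txdc-sddc-nfs v105'
    - label: 'vadc-sddc-nfs v1105 (VA)'
      value: 'vadc-sddc-nfs v1105'
    # ...remaining six entries follow the same label/value pattern
```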


Rebuilding the Catalog Item on a Fresh Aria Instance

If this repo is ever used to bootstrap a new Aria environment or recover from a rebuild, here is the end-to-end setup. Both files in this folder need to be imported, and a few other pieces need to exist first.

Prerequisites (must exist before import)

These are documented in detail in the Network Automation and vRO_Workflows READMEs. Summary:

  1. Aria infrastructure configured — cloud accounts, cloud zones with location:TX-SDDC / location:VA-SDDC tags, network profiles with segment:<n> tags, image mappings for all templates, customization spec aria-windows-postdeploy, BlueCat IPAM integration
  2. com.essential.aria vRO module — with getNetworkSegmentsAll, getNetworkProfileTag, and addDataDisksOnDeploy actions imported
  3. AriaCredentials ConfigurationElement and Aria-IaaS REST Host in vRO
  4. Subscription Essential Coach VM — Add Data Disks on Provision on compute.provision.post
  5. Aria Secrets windows_local_admin and ansible_password in the project
  6. Ansible control node esxq-vra-ansible registered as an integration with playbooks in place

Step 1 — Import the blueprint

Aria Assembler → Design → Blueprints → New Blueprint

  • Name: essential-coach-vm-deployment
  • Project: (your target project)
  • Paste the contents of example-vm-blueprint-v8.8.3.yaml into the code editor
  • Save
  • Release version 8.8.3

Step 2 — Import and release the custom form

Service Broker → Content & Policies → Content Sources

Add the blueprint as a content source if it's not already published:

  • Type: VMware Cloud Templates
  • Project: (same project as the blueprint)
  • Save and run content sharing so the blueprint appears in the catalog

Service Broker → Content & Policies → Content

Find the essential-coach-vm-deployment catalog item → Actions → Customize form

In the form editor:

  • Click ACTIONS → Import (or paste the form YAML directly)
  • Load example-vm-customform-v1.0.yaml
  • Verify the three tabs render: "Request Details", "VM Specifications", "Monitoring & Options"
  • Verify conditional visibility — toggle catalogItem between Windows/Linux/Oracle, confirm the OS picker and second-NIC fields show/hide correctly
  • Save → Enable

Step 3 — Test from the catalog

Service Broker → Consume → Catalog

  • Find essential-coach-vm-deployment
  • Click Request
  • Verify the form matches the three-tab layout
  • Verify the NIC1 VLAN dropdown populates (this confirms getNetworkSegmentsAll + AriaCredentials + Aria-IaaS REST Host are all working)
  • Verify the Linux/Oracle → Second NIC → NFS flow works
  • Verify that the disk size fields appear progressively as additionalDiskCount increases

Submit a test deployment to validate end-to-end.


Dependencies Checklist

For this blueprint + form combo to deploy successfully:

Aria Cloud Assembly

  • Cloud zones tagged location:TX-SDDC and location:VA-SDDC (via aria_mapping.py --tags)
  • Network profiles containing fabric networks tagged segment:<n> (via mapper.py --populate + aria_mapping.py --segment-tags)
  • Fabric networks tagged servicenow:visible for any network that should appear in the dropdown (via aria_mapping.py --servicenow-tags)
  • Image mappings defined per region
  • Customization spec aria-windows-postdeploy registered in vCenter
  • Secrets windows_local_admin and ansible_password populated in the Aria project
  • BlueCat IPAM provider installed and integrated as the IPAM endpoint on the network profiles (see BlueCat_IPAM/)

vRealize Orchestrator

  • com.essential.aria action module with getNetworkSegmentsAll action published
  • addDataDisksOnDeploy action + subscription + workflow wrapper (see vRO_Workflows/)

Ansible Control Node (esxq-vra-ansible)

  • Cloud Account esxq-vra-ansible registered in Aria as an integration
  • Inventory file at /home/ansible/production/ansible/playbooks/inventories/Prod/hosts
  • Master playbook at /home/ansible/production/ansible/playbooks/production/windows_postdeploy.yml
  • Eight Ansible roles installed on the control node (see Ansible_Windows_PostDeploy/)

Common Modifications

Adding a new OS template

  1. Add the template mapping in Cloud Assembly (via aria_mapping.py --images)
  2. Edit blueprint inputs.windowsOsVersion (or linuxOsVersion) to add the new oneOf entry
  3. Edit form schema.windowsOsVersion.valueList (or linuxOsVersion.valueList) to add the matching entry
  4. Release a new blueprint version AND a new form version

Adding a new System Code

Two places need updates:

  1. Blueprint: inputs.systemCode.oneOf
  2. Form: schema.systemCode.valueList

The VM name and folder path generation will use it automatically — no logic changes needed.

Adding a new environment

Three places need updates:

  1. Blueprint: inputs.environment.oneOf — add the new const value
  2. Blueprint: VM name and folderName expressions on Cloud_vSphere_Machine_1 — add the new ternary branch
  3. Form: schema.environment.valueList — add the new label/value
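For example, adding a hypothetical UAT environment would touch the three places like this (a sketch — the "T" suffix letter and expression shapes are illustrative, not defined by the production blueprint):

```yaml
# 1. Blueprint inputs.environment.oneOf — new entry
- title: UAT
  const: UAT

# 2. Cloud_vSphere_Machine_1 name/folderName expressions — new ternary branch,
#    e.g. ... input.environment == "UAT" ? "T" : ...

# 3. Form schema.environment.valueList — matching entry
- label: UAT
  value: UAT
```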

Adding a new NFS network

Form only (blueprint doesn't need to know about specific NFS networks):

  1. Edit schema.secondNetworkSegment.valueList in the form YAML
  2. Release a new form version

Changing the domain join options

Two places:

  1. Blueprint: inputs.joinDomain.oneOf
  2. Form: schema.joinDomain.valueList

The value flows through as a customProperty to Ansible — no blueprint logic changes needed. The Ansible domain_join role reads joinDomain from the playbook extra-vars and acts accordingly (WORKGROUP = skip).


Troubleshooting

"Unable to provision disk" on a v8.6.x / v8.7.x deployment

You're on a superseded version. Upgrade to v8.8.0 or later — disks are no longer managed by the blueprint.

VM deploys but has no IP

The BlueCat IPAM provider failed to allocate. Check:

  1. The networkSegment selected has a BlueCat range linked (run mapper.py --ipam-map --dry-run to verify)
  2. The admin service account on BlueCat has allocation permission on that range
  3. The BlueCat IPAM provider logs in Aria Extensibility → Action Runs

Network dropdown is empty or missing a segment

The segment is not tagged servicenow:visible. Add the segment to vlan_location_map.json in the Network Automation toolkit and run:

python3 aria_mapping.py --config example_config.yaml --servicenow-tags --execute

Form loads but conditional fields don't hide/show

The custom form isn't enabled or has been overridden. Go to Service Broker → Content & Policies → Content → essential-coach-vm-deployment → Actions → Customize form and confirm Enabled in the top bar. If it shows as Disabled, click Enable.

Form shows all 26 inputs in a flat list

Same issue — custom form not enabled, or the catalog item is using the auto-generated form.

Windows deploy succeeds but domain join fails

Domain join is handled by the Ansible domain_join role, not Sysprep. Check:

  1. joinDomain customProperty is not WORKGROUP
  2. The Ansible control node can reach the domain controller
  3. The domain service account credentials are valid (see Ansible_Windows_PostDeploy/)

Second NIC not provisioning

Cloud_vSphere_Network_2 has count: 0 for Windows catalog items by design — only Linux and Oracle support the second NIC pattern. If the request is Linux/Oracle:

  1. Confirm secondNic = true on the request
  2. If secondNetworkSegment is empty, it falls back to networkSegment — confirm that's intended

secondNetworkSegment dropdown is empty for Linux/Oracle

The form's secondNetworkSegment field is only visible when secondNic = true. Users must check the "Add Second NIC (NFS)?" box first. If the checkbox itself is missing, the catalogItem is Windows — the field is hidden by design.


Change History

Blueprint

Full changelog lives in the YAML header (description: field). Summary of the v8.x line:

| Version | Change |
|---|---|
| 8.8.3 | Consolidated disk tags (2 instead of 5); action removes tags after processing |
| 8.8.2 | Added disk values as VM tags; confirmed presence in compute.provision.post payload |
| 8.8.1 (superseded) | Attempted __ extraConfig passthrough — blocked by validator |
| 8.8.0 | Phase 1 — removed all Cloud.vSphere.Disk resources; disk inputs flow to customProperties/tags |
| 8.7.x (superseded) | Attempted quoted count expressions for conditional disks |
| 8.6.10 – 8.6.20 | Disk attachment experiments — all failed Aria's storage validator |

Custom Form

| Version | Change |
|---|---|
| 1.0 | Initial production release. Three-tab layout, conditional visibility for OS/disk/NIC fields, getNetworkSegmentsAll script action for NIC1, hardcoded 8-entry NFS list for NIC2. |

Contact

Original author: Noah Farshad (noah@essential.coach)
Engagement: VMware / Aria Automation reference implementation
