Conversation
Fix: Use GlobalStandard SKU for Azure OpenAI deployment in Switzerland North
🤖 Hi @fpittelo, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.

🤖 I'm sorry @fpittelo, but I was unable to process your request. Please see the logs for more details.
Pull request overview
This PR merges infrastructure changes from dev to qa, introducing virtual network integration for Azure Function Apps with subnet delegation, deploying Azure OpenAI model deployments, and modifying Key Vault network security configurations to support deployment workflows.
Key Changes:
- Added virtual network module with subnet configuration for Function App integration with Key Vault and Web service endpoints
- Implemented Azure OpenAI cognitive deployment resources with configurable model parameters (gpt-4o)
- Modified Key Vault firewall to temporarily allow access during deployments with dynamic IP-based rules
Reviewed changes
Copilot reviewed 15 out of 15 changed files in this pull request and generated 10 comments.
Summary per file:
| File | Description |
|---|---|
| modules/virtual_network/variables.tf | Defines input variables for VNet, subnet, and address space configuration |
| modules/virtual_network/outputs.tf | Exposes subnet ID for integration with other resources |
| modules/virtual_network/main.tf | Creates VNet and subnet with service endpoints and delegation for Azure Web Apps |
| modules/key_vault/main.tf | Changes network ACL default action from Deny to Allow |
| modules/key_vault/outputs.tf | Adds Key Vault name output for workflow integration |
| modules/function_app/variables.tf | Adds variable for virtual network subnet integration |
| modules/function_app/main.tf | Integrates Function App with virtual network subnet |
| modules/cognitive_account/variables.tf | Adds variables for OpenAI model deployment configuration |
| modules/cognitive_account/outputs.tf | Exposes OpenAI account name output |
| modules/cognitive_account/main.tf | Creates cognitive deployment resource for OpenAI models |
| infra/variables.tf | Adds environment-specific variables for VNet, model deployment, and client IP configuration |
| infra/providers.tf | Removes unused providers (http, time) and adds cognitive account purge configuration |
| infra/outputs.tf | Exposes Key Vault and OpenAI account names for external reference |
| infra/main.tf | Integrates virtual network module, updates Key Vault with dynamic naming and IP rules, adds OpenAI deployment parameters |
| .github/workflows/deploy-infra.yaml | Adds runner IP retrieval and Key Vault firewall pre-configuration steps |
```hcl
key_vault_ip_rules = [
  for ip in [var.client_ip_address, "83.76.0.0/14"] : ip if ip != null
]
```
The hardcoded IP range "83.76.0.0/14" is added to the Key Vault firewall rules without explanation. This is a broad CIDR block covering 262,144 IP addresses (83.76.0.0 - 83.79.255.255).
Consider:
- Adding a comment explaining what this IP range represents (organization network, specific service, etc.)
- Moving this to a variable for better maintainability and environment-specific configuration
- Narrowing the range if possible to reduce the attack surface
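If the range must stay, one way to act on the second suggestion is a sketch along these lines (the variable name `key_vault_allowed_ip_ranges` is illustrative, not from the PR):

```hcl
# Hypothetical sketch: lift the hardcoded CIDR into a documented, per-environment variable.
variable "key_vault_allowed_ip_ranges" {
  description = "CIDR ranges allowed through the Key Vault firewall (e.g. the organization's egress range)."
  type        = list(string)
  default     = []
}

# Then, in infra/main.tf, combine the runner IP with the environment-specific ranges:
#   key_vault_ip_rules = compact(concat([var.client_ip_address], var.key_vault_allowed_ip_ranges))
```

This keeps the range visible in per-environment tfvars instead of buried in an expression.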
```hcl
sku {
  name     = var.deployment_sku_name
  capacity = 10
}
```
The capacity is hardcoded to 10 for the cognitive deployment SKU. This value represents the throughput capacity in thousands of tokens per minute (TPM) and may not be appropriate for all environments.
Consider making this configurable through a variable (e.g., deployment_capacity) so it can be adjusted per environment (dev might need less capacity than production) to optimize costs and performance.
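A minimal sketch of such a variable, assuming the hypothetical name `deployment_capacity`:

```hcl
# Hypothetical sketch: expose capacity so each environment can tune TPM independently.
variable "deployment_capacity" {
  description = "Throughput capacity for the model deployment, in thousands of tokens per minute (TPM)."
  type        = number
  default     = 10

  validation {
    condition     = var.deployment_capacity > 0
    error_message = "deployment_capacity must be a positive number."
  }
}

# In the cognitive deployment's sku block:
#   sku {
#     name     = var.deployment_sku_name
#     capacity = var.deployment_capacity
#   }
```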
```diff
  depends_on = [
    module.key_vault,
    module.cognitive_account,
-   azurerm_role_assignment.key_vault_secrets_officer,
-   time_sleep.wait_for_firewall
+   azurerm_role_assignment.key_vault_secrets_officer
  ]
```
The time_sleep.wait_for_firewall resource has been removed from the depends_on list, but the synchronization concern it addressed (waiting for Key Vault firewall rules to propagate) still exists.
While the GitHub Actions workflow now handles pre-deployment firewall configuration (lines 65-94 in deploy-infra.yaml), there's no guarantee that the rules have fully propagated by the time Terraform tries to create this secret. This could lead to intermittent failures when trying to write secrets to the Key Vault.
Consider either:
- Adding a local-exec provisioner with a wait/retry mechanism
- Documenting that the workflow's 60-second wait is expected to cover this propagation delay
- Ensuring the Key Vault network ACLs remain set to "Allow" during deployment (though this is less secure)
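The first option could look roughly like this `null_resource` sketch (it assumes the `key_vault_name` output that this PR adds; the resource name and retry counts are illustrative):

```hcl
# Hypothetical sketch: poll the Key Vault before Terraform tries to write secrets.
resource "null_resource" "wait_for_key_vault" {
  provisioner "local-exec" {
    command = <<-EOT
      for i in $(seq 1 18); do
        az keyvault show --name "${module.key_vault.key_vault_name}" > /dev/null 2>&1 && exit 0
        echo "Key Vault not reachable yet (attempt $i/18); sleeping 10s..."
        sleep 10
      done
      echo "Key Vault still unreachable after 180s" >&2
      exit 1
    EOT
  }

  depends_on = [module.key_vault]
}
```

The secret resource would then depend on `null_resource.wait_for_key_vault` instead of the removed `time_sleep`.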
```diff
  network_acls {
-   default_action = "Deny"
+   default_action = "Allow"
```
The Key Vault network ACL default_action has been changed from "Deny" to "Allow", which significantly weakens the security posture. This allows unrestricted network access to the Key Vault by default, even though subnet IDs and IP rules are specified.
Consider reverting this to "Deny" to maintain defense-in-depth security. If temporary "Allow" access is needed during deployment, handle it through the GitHub Actions workflow (as shown in lines 65-94 of deploy-infra.yaml) rather than permanently allowing all network access.
Suggested change:

```diff
- default_action = "Allow"
+ default_action = "Deny"
```
```hcl
    recover_soft_deleted_key_vaults = false
  }
  cognitive_account {
    purge_soft_delete_on_destroy = true
  }
```
The cognitive account purge configuration enables automatic purging of soft-deleted resources on destroy. While this may be useful for development/testing, it prevents recovery of accidentally deleted cognitive services accounts and could lead to permanent data loss.
Consider making this configurable per environment (e.g., enabled only for dev/qa, disabled for production) to prevent irreversible deletions in production environments.
Suggested change:

```hcl
# Make purge_soft_delete_on_destroy configurable per environment to prevent accidental permanent deletion in production.
purge_soft_delete_on_destroy = var.cognitive_account_purge_soft_delete_on_destroy
```
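A matching variable declaration for that suggestion might look like this sketch (the name and default are assumptions, not from the PR):

```hcl
# Hypothetical sketch: gate purge-on-destroy by environment so production keeps soft-delete recovery.
variable "cognitive_account_purge_soft_delete_on_destroy" {
  description = "Purge soft-deleted Cognitive Services accounts on destroy. Keep false in production."
  type        = bool
  default     = false
}

# provider "azurerm" {
#   features {
#     cognitive_account {
#       purge_soft_delete_on_destroy = var.cognitive_account_purge_soft_delete_on_destroy
#     }
#   }
# }
```

Dev/qa tfvars could then set it to `true` while production keeps the safe default.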
```hcl
sku_name_cog_acct   = local.environment_vars.sku_name_cog_acct
tags                = local.environment_vars.tags
source              = "../modules/cognitive_account"
alpinebotaiact_name = "${local.environment_vars.alpinebotaiact_name}-${random_integer.kv_suffix.result}"
```
[nitpick] The random_integer suffix is being reused for both the Key Vault name (line 44) and the Cognitive Account name (line 107). Since this is the same random value, it doesn't provide uniqueness between resources, only across deployments.
While this may be intentional for tracking related resources, consider using separate random values for each resource or documenting this design decision to clarify that these resources are intentionally linked by the same suffix.
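If separate values are preferred, a sketch (the `cog_suffix` resource name is hypothetical):

```hcl
# Hypothetical sketch: one random suffix per resource, if per-resource uniqueness is wanted.
resource "random_integer" "kv_suffix" {
  min = 1000
  max = 9999
}

resource "random_integer" "cog_suffix" {
  min = 1000
  max = 9999
}

# key_vault_name      = "...-${random_integer.kv_suffix.result}"
# alpinebotaiact_name = "...-${random_integer.cog_suffix.result}"
```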
```bash
fi

echo "Found Key Vault: $KV_NAME"
RG_NAME="${{ env.ENVIRONMENT }}-alpinebot"
```
The variable RG_NAME is used on line 72 before it's defined on line 83. This will cause the script to fail because the variable is referenced out of order.
Move line 83 (RG_NAME="${{ env.ENVIRONMENT }}-alpinebot") to before line 72 where it's first used, or better yet, move it to the beginning of the script after the IP retrieval.
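The reordering could look like this standalone sketch (placeholder values stand in for the workflow's `${{ env.* }}` expressions):

```shell
# Sketch: define RG_NAME before any step that references it.
# ENVIRONMENT is supplied by the workflow as ${{ env.ENVIRONMENT }}; "qa" is a placeholder here.
ENVIRONMENT="qa"
RG_NAME="${ENVIRONMENT}-alpinebot"   # moved up, ahead of the Key Vault lookup
echo "Using resource group: $RG_NAME"
# ...subsequent steps (Key Vault lookup, firewall update) can now safely use $RG_NAME
```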
```bash
echo "Waiting 60 seconds for propagation..."
sleep 60
```
The hardcoded 60-second sleep is arbitrary and may not be sufficient for Key Vault firewall rule propagation in all cases. Azure service propagation times can vary.
Consider either:
- Increasing the wait time to a more conservative value (e.g., 120-180 seconds) for reliability
- Implementing a retry mechanism that polls the Key Vault until it's accessible
- Documenting why 60 seconds was chosen
Suggested change:

```bash
echo "Waiting for Key Vault firewall rule propagation (up to 180 seconds)..."
for i in {1..18}; do
  if az keyvault show --name "$KV_NAME" --resource-group "$RG_NAME" > /dev/null 2>&1; then
    echo "Key Vault is accessible after $((i*10)) seconds."
    break
  else
    echo "Key Vault not accessible yet, waiting 10 seconds (attempt $i/18)..."
    sleep 10
  fi
done
```
```yaml
TF_LOG: DEBUG
TF_LOG_PATH: terraform.log
ENVIRONMENT: ${{ env.ENVIRONMENT }}
```
[nitpick] There's an extra blank line and inconsistent indentation. The environment variable TF_VAR_client_ip_address should be aligned with the other environment variables above, and there should only be one blank line between sections.
Suggested format:
```yaml
TF_LOG: DEBUG
TF_LOG_PATH: terraform.log
ENVIRONMENT: ${{ env.ENVIRONMENT }}
TF_VAR_client_ip_address: ${{ env.RUNNER_IP }}
```

```hcl
resource "azurerm_subnet" "subnet" {
  name                 = var.subnet_name
  resource_group_name  = var.az_rg_name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = var.subnet_prefix

  service_endpoints = [
    "Microsoft.KeyVault",
    "Microsoft.Web"
  ]

  delegation {
    name = "delegation"

    service_delegation {
      name    = "Microsoft.Web/serverFarms"
      actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
    }
  }
}
```
The subnet is configured with hardcoded service endpoints and delegation for Azure Web Apps (App Service/Functions). This configuration is not flexible and may not be suitable for all use cases of this module.
Consider:
- Making `service_endpoints` and `delegation` configurable through variables with sensible defaults
- Adding documentation explaining that this module is specifically designed for Function App integration
- If this configuration is always required, adding a comment explaining the purpose of these specific service endpoints and delegation
Suggested change:

```hcl
// The following variables make service_endpoints and delegation configurable.
// By default, this module is set up for Azure Web Apps/Functions integration.
// You can override these defaults when using the module.
variable "service_endpoints" {
  description = "List of service endpoints to associate with the subnet. Default enables KeyVault and Web Apps integration."
  type        = list(string)
  default     = ["Microsoft.KeyVault", "Microsoft.Web"]
}

variable "delegation" {
  description = <<EOT
Delegation blocks for the subnet. Set to [] to disable.
By default, delegates to Microsoft.Web/serverFarms for Function App/App Service integration.
EOT
  type = list(object({
    name = string
    service_delegation = object({
      name    = string
      actions = list(string)
    })
  }))
  default = [
    {
      name = "delegation"
      service_delegation = {
        name    = "Microsoft.Web/serverFarms"
        actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
      }
    }
  ]
}

resource "azurerm_subnet" "subnet" {
  name                 = var.subnet_name
  resource_group_name  = var.az_rg_name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = var.subnet_prefix

  // Service endpoints are configurable via variable
  service_endpoints = var.service_endpoints

  // Delegation is configurable via variable
  dynamic "delegation" {
    for_each = var.delegation
    content {
      name = delegation.value.name
      service_delegation {
        name    = delegation.value.service_delegation.name
        actions = delegation.value.service_delegation.actions
      }
    }
  }
}
```