Remove RESTORE_KEY and BACKUP_NAME env vars after restore is complete #65

@akshaymankar

Is your feature request related to a problem? Please describe.

We use Terraform to create the datacenter objects in Kubernetes. When a backup is restored, medusa-operator adds the RESTORE_KEY and BACKUP_NAME env variables to the medusa container. The next time we run terraform apply, these variables show up as a change, and the apply even fails due to a field_manager conflict.

Describe the solution you'd like

After a restore is complete, medusa-operator should remove the environment variables it has added.
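
For what it's worth, the cleanup itself could be small. Here is a minimal sketch, assuming the operator can edit the CassandraDatacenter's podTemplateSpec once the restore reports completion; the package, function name, and containerName parameter are hypothetical illustrations, not medusa-operator's actual code:

```go
package cleanup

import (
	corev1 "k8s.io/api/core/v1"
)

// restoreEnvVars are the variables medusa-operator injects into the
// medusa restore init container when a restore starts.
var restoreEnvVars = map[string]bool{
	"RESTORE_KEY": true,
	"BACKUP_NAME": true,
}

// StripRestoreEnvVars removes the restore-related env vars from the
// named init container in the given pod spec. It returns true if the
// spec changed, so the caller knows whether an update is needed.
func StripRestoreEnvVars(spec *corev1.PodSpec, containerName string) bool {
	changed := false
	for i := range spec.InitContainers {
		c := &spec.InitContainers[i]
		if c.Name != containerName {
			continue
		}
		// Filter the env slice in place, dropping the injected vars.
		kept := c.Env[:0]
		for _, env := range c.Env {
			if restoreEnvVars[env.Name] {
				changed = true
				continue
			}
			kept = append(kept, env)
		}
		c.Env = kept
	}
	return changed
}
```

The CassandraRestore reconciler could call something like this when the restore finishes and, if it returns true, update the CassandraDatacenter, so the next terraform plan sees no leftover env vars.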

Describe alternatives you've considered

None.

Additional context

The diff from terraform:

- {
    - name      = "BACKUP_NAME"
    - value     = "backup0"
    - valueFrom = {
        - configMapKeyRef  = {
            - key      = null
            - name     = null
            - optional = null
          }
        - fieldRef         = {
            - apiVersion = null
            - fieldPath  = null
          }
        - resourceFieldRef = {
            - containerName = null
            - divisor       = null
            - resource      = null
          }
        - secretKeyRef     = {
            - key      = null
            - name     = null
            - optional = null
          }
      }
  },
- {
    - name      = "RESTORE_KEY"
    - value     = "ec6b2264-9644-4ee7-b84c-bb7baf536bb7"
    - valueFrom = {
        - configMapKeyRef  = {
            - key      = null
            - name     = null
            - optional = null
          }
        - fieldRef         = {
            - apiVersion = null
            - fieldPath  = null
          }
        - resourceFieldRef = {
            - containerName = null
            - divisor       = null
            - resource      = null
          }
        - secretKeyRef     = {
            - key      = null
            - name     = null
            - optional = null
          }

The error from terraform when the apply fails:

╷
│ Error: There was a field manager conflict when trying to apply the manifest for "databases/cassandra-1"
│
│   with module.cassandra_1.kubernetes_manifest.cassandra_datacenter,
│   on ../../../../tf-modules/eks-cassandra-datacenter/cassandra_datacenter.tf line 15, in resource "kubernetes_manifest" "cassandra_datacenter":
│   15: resource "kubernetes_manifest" "cassandra_datacenter" {
│
│ The API returned the following conflict: "Apply failed with 1 conflict: conflict with \"manager\" using cassandra.datastax.com/v1beta1: .spec.podTemplateSpec.spec.initContainers"
│
│ You can override this conflict by setting "force_conflicts" to true in the "field_manager" block.
╵

Using force_conflicts seems a bit dangerous, given that it will overwrite anything else that could be important.
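
For reference, the override the error message suggests would look roughly like this in the kubernetes_manifest resource (shown only to illustrate the workaround we would rather avoid; the resource name matches the error above, the manifest body is elided):

```hcl
resource "kubernetes_manifest" "cassandra_datacenter" {
  # manifest = { ... } (unchanged CassandraDatacenter definition)

  field_manager {
    # Overwrites fields owned by any other manager, including the env
    # vars medusa-operator injected, which is why it seems dangerous.
    force_conflicts = true
  }
}
```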

┆Issue is synchronized with this Jira Task by Unito
┆friendlyId: K8SSAND-1183
┆priority: Medium
