
[BUG] - Updating a pre-existing service with a different port fails #124

@ryan109

Description


Describe the bug
If we create a Service that exposes port 8080 and then change the port to 80, the update fails.

If we use a base config like this

ports:
  - 80

and update it to this

ports:
  - 8080

it fails, as we use the k8s patch operation. As per the documentation:

Patch: Patch will apply a change to a specific field. How the change is merged is defined per field. Lists may either be replaced or merged. Merging lists will not preserve ordering.
Patches will never cause optimistic locking failures, and the last write will win. Patches are recommended when the full state is not read before an update, or when failing on optimistic locking is undesirable. When patching complex types, arrays and maps, how the patch is applied is defined on a per-field basis and may either replace the field's current value, or merge the contents into the current value.
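This merge behavior is exactly what bites us here: for a Service's spec.ports, the strategic merge patch matches list entries by a merge key (the port number), so a patch carrying port 8080 is merged in alongside the existing port 80 entry rather than replacing it. A minimal Python sketch of that list-merge rule (simplified; real strategic merge also handles deletion directives and nested fields):

```python
def strategic_merge_list(current, patch, merge_key):
    """Merge a patch list into a current list the way a Kubernetes
    strategic merge patch does: entries are matched by merge_key;
    matching entries are updated in place, new ones are appended."""
    merged = [dict(item) for item in current]
    index = {item[merge_key]: i for i, item in enumerate(merged)}
    for entry in patch:
        key = entry[merge_key]
        if key in index:
            merged[index[key]].update(entry)  # same key: update fields
        else:
            merged.append(dict(entry))        # new key: append, don't replace
    return merged

# Service originally exposing port 80, patched to expose port 8080:
current = [{"port": 80}]
patch = [{"port": 8080}]
print(strategic_merge_list(current, patch, "port"))
# both ports survive: [{'port': 80}, {'port': 8080}]
```

Note how the desired state (only 8080) never wins: the patch can only add or update keyed entries, never remove the old one.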

We could potentially use the replace operation instead, but that comes with its own caveats:

Note: The ResourceStatus will be ignored by the system and will not be updated. To update the status, one must invoke the specific status update operation.
Note: Replacing a resource object may not result immediately in changes being propagated to downstream objects. For instance replacing a ConfigMap or Secret resource will not result in all Pods seeing the changes unless the Pods are restarted out of band.

A "hack" is to change your config so that every port has a name, since the deploy failure comes from being unable to patch the resource when the list contains multiple ports with no name:

[{"reason":"FieldValueRequired","message":"Required value","field":"spec.ports[0].name"},{"reason":"FieldValueRequired","message":"Required value","field":"spec.ports[1].name"},{"reason":"FieldValueRequired","message":"Required value","field":"spec.ports[2].name"}]},"code":422}
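The 422 above reflects the API server's validation rule that port names are required whenever a Service exposes more than one port. A hypothetical pre-flight check mirroring that rule (simplified; the real validation covers more fields):

```python
def validate_port_names(ports):
    """Mimic the API server rule behind the 422 above: when a Service
    has more than one port, every entry needs a name (simplified)."""
    if len(ports) <= 1:
        return []
    return [
        {"reason": "FieldValueRequired",
         "message": "Required value",
         "field": f"spec.ports[{i}].name"}
        for i, port in enumerate(ports)
        if not port.get("name")
    ]

# A single unnamed port is fine; two unnamed ports are rejected:
print(validate_port_names([{"port": 80}]))                   # []
print(validate_port_names([{"port": 80}, {"port": 8080}]))   # two errors
```

This is why the patched object ends up invalid: the merge produces a two-entry port list, and neither entry has a name.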

So we could run the deployment on a config that looks like:

ports:
  - name: http
    port: 80
  - name: proxy
    port: 8080

Then we could remove the second port object and run it again, and the deployment would continue to work. However, this uncovers a further bug: the deployment goes through, but both ports remain exposed in the cluster. This is due to the nature of using a patch operation instead of replace.
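The lingering second port follows from the same merge-by-key behavior quoted earlier: patching with only the remaining port leaves the other entry untouched, whereas a replace overwrites the whole list. A sketch contrasting the two (hypothetical helpers, not the actual client code):

```python
def patch_ports(current, patch):
    """Merge by the 'port' key, as a strategic merge patch would:
    matching entries are updated, absent entries are left alone."""
    merged = {p["port"]: dict(p) for p in current}
    for entry in patch:
        merged.setdefault(entry["port"], {}).update(entry)
    return list(merged.values())

def replace_ports(current, replacement):
    """Replace overwrites the whole list, dropping absent entries."""
    return [dict(p) for p in replacement]

current = [{"name": "http", "port": 80}, {"name": "proxy", "port": 8080}]
desired = [{"name": "http", "port": 80}]  # proxy removed from config

print(patch_ports(current, desired))    # proxy port is still exposed
print(replace_ports(current, desired))  # only http remains
```

So a patch can never converge on a config that removes a list entry; only a replace (or an explicit deletion directive) can.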

Expected behavior

Screenshots

Sentry / Fullstory link

Additional context

Metadata

Labels: bug (Something isn't working)