[DPE-6785] Add lib spark_service_account #208

Draft
theoctober19th wants to merge 16 commits into main from spark-sa

Conversation

@theoctober19th (Member) commented Mar 11, 2025

This PR adds a new lib named spark_service_account, which defines Provider and Requirer interfaces for the spark_service_account relation interface. This relation interface is used by spark-integration-hub-k8s (as provider) and kyuubi-k8s (as requirer).

The new lib is tested against Juju > 3 and base=jammy only, because the relation interface makes use of Juju secrets and all charms that will use the lib have jammy as their base.

I couldn't get past the CodeQL Scanning Results failure -- this comes from the data_interfaces lib. There are existing advisories for it here; the new ones emerged because the new lib needs data_interfaces as its own dependency, so the test charms used in the integration tests have added data_interfaces to their libs.
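As a rough illustration of the request/grant flow the lib is meant to encapsulate -- sketched with plain dicts standing in for the relation databags; the field names and secret URI below are hypothetical, not the actual interface schema:

```python
# Hypothetical sketch of the spark_service_account relation exchange.
# "service-account" / "secret-extra" are illustrative field names only.

def requirer_request(app_databag: dict, service_account: str) -> None:
    """Requirer side: publish the requested service account name."""
    app_databag["service-account"] = service_account

def provider_grant(app_databag: dict, request: dict, secret_uri: str) -> None:
    """Provider side: echo back the granted account plus a secret reference
    (Juju secrets are why the lib targets juju > 3 only)."""
    app_databag["service-account"] = request["service-account"]
    app_databag["secret-extra"] = secret_uri

requirer_bag, provider_bag = {}, {}
requirer_request(requirer_bag, "test1")
provider_grant(provider_bag, requirer_bag, "secret://xyz")
```

The point of the sketch is that the provider echoes back exactly the account the requirer asked for, which is relevant to the collision discussion later in this thread.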

@theoctober19th theoctober19th changed the title Add lib spark_service_account [DPE-6785] Add lib spark_service_account Apr 3, 2025
@theoctober19th theoctober19th marked this pull request as ready for review April 3, 2025 14:57
Contributor

@Batalex left a comment

LGTM, left a minor comment about pinning ops and ops-scenario.

I ignored spark_service_account.py since we already reviewed it in a different PR.

Comment thread on tox.ini:

    pytest<8.2.0
    pytest-mock
    coverage[toml]
    ops-scenario
Contributor

suggestion: We need to be extra careful with the version here, as ops-scenario depends on both public and private parts of ops. Can we pin both versions (here and in requirements.txt) so that we don't have a bad surprise in a couple of months?
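One way to pin both places, as suggested (the version numbers below are purely illustrative -- pick whichever pair was actually tested together):

```ini
# tox.ini -- testenv deps; requirements.txt would pin ops identically
deps =
    pytest<8.2.0
    pytest-mock
    coverage[toml]
    ops==2.17.0
    ops-scenario==7.0.5
```

Pinning both avoids the case where a new ops release changes private internals that ops-scenario reaches into.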

Contributor

@deusebio left a comment

Looks good! I only have one small, not-very-critical point.

    get-spark-properties:
      description: Get spark properties

    add-spark-property:
Contributor

nitpick: it is not a very strong point, but I'd rather feed spark properties via config instead of actions, which we know is not the UX we would like to have. It is not too critical, as it is just a charm for testing, but if it is simple enough, I would "nudge" towards correct patterns in the tests too.

Member Author

Having this supplied via charm config requires the spark properties to be declarative, and thus adding new properties during the test requires the test to remember everything that was added up until that point, which I think would be a bit of an overkill for the test charm.
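For illustration, the trade-off between the two approaches can be sketched in plain Python (the function names are hypothetical, not from the test charm):

```python
# Action-style: each invocation adds one property incrementally, so the
# test never needs to know the full set already applied.
properties: dict[str, str] = {}

def add_spark_property(key: str, value: str) -> None:
    properties[key] = value

# Config-style: the option is declarative, so every change must restate
# the complete desired set; the test has to track all prior additions.
def apply_config(config_value: str) -> dict[str, str]:
    return dict(item.split("=", 1) for item in config_value.split(",") if item)

add_spark_property("spark.app.name", "test")
add_spark_property("spark.executor.instances", "2")
```

With config, a test adding a third property would have to re-emit the first two as well, which is the bookkeeping overhead being objected to here.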

Contributor

@Batalex left a comment

Actually, now that I must define the behavior of both provider and requirer in charm-relation-interfaces, I have one substantial concern.
I'd like some clarification about what is expected to happen in the case of a service account collision, as it is not clear from the initial spec document or the shared lib itself.

I can infer the expected behavior from the implementations we already have, though.
As we can see from https://github.com/canonical/spark-integration-hub-k8s-operator/blob/main/src/events/provider.py#L60, the implementation we have for the provider side does not allow two requirers to request the same service account.
I confirmed this locally:

    $ spark-client.service-account-registry create --username test1 --namespace test

    $ spark-client.service-account-registry create --username test1 --namespace test
    Could not create the service account. A serviceaccount with name 'test1' already exists.

On the other hand, Kyuubi requests a service account based on a charm configuration option but then uses what it gets from the relation: https://github.com/canonical/kyuubi-k8s-operator/blob/main/src/core/domain.py#L115.

Is it correct to say:

  • "the requirer charm can tolerate a different service account"
  • "there is no expectation on the provider side to find an alternative service account name in case of a collision"
  • "service account sharing between charms is unsupported" (◀️ but this is an issue because collisions are most likely to happen, since the service account name is option-based and not relation-id-based in our first charm implementing the interface)?

@theoctober19th (Member Author)

@Batalex The service account provided by the provider will always be the one requested by the requirer. Either this happens or the service account creation itself fails -- the provider would never provide a different service account than what was requested in the first place.

"service account sharing between charms is unsupported" (◀️ but this is an issue because collisions are most likely to happen since the service account name is option-based and not relation-id-based in our first charm implementing the interface)

I agree here. There is a chance that the requested service account collides with one that already exists in the k8s cluster. In that case, I guess the current state of things is to expect the user to choose a different service account than what is already there -- however, we should probably relay this information to the user via some blocked state rather than having the charm crash without the user knowing what happened. In any case, this is something the provider should implement as part of handling service_account_requested, while the charm lib can remain agnostic to it.

I'd prefer the consumer (and thus the Kyuubi charm) to be the one to go to blocked state and inform about this collision -- however, we would need a way to relay this information from the provider to the requirer.

I'd love to hear your thoughts here @welpaolo @deusebio
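A minimal sketch of what the provider-side handling discussed above could look like (the registry and handler names are hypothetical; a real charm would set ops.BlockedStatus rather than return a string):

```python
# Sketch: surface a service account collision as a blocked state instead
# of letting the charm crash. Names are illustrative, not the real code.

class ServiceAccountRegistry:
    """Stand-in for the k8s-backed service account registry."""

    def __init__(self) -> None:
        self._accounts: set[str] = set()

    def create(self, name: str) -> bool:
        if name in self._accounts:
            return False  # collision: an account with this name exists
        self._accounts.add(name)
        return True

def on_service_account_requested(registry: ServiceAccountRegistry, name: str) -> str:
    """Return the status the provider charm would set for this request."""
    if registry.create(name):
        return "active"
    # Relay the collision instead of crashing; relaying it over the
    # relation would also let the requirer surface it to the user.
    return f"blocked: service account '{name}' already exists"
```

The same status string (or a dedicated relation field) is what would need to travel from provider to requirer for the Kyuubi charm to block on the collision itself.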
