[DPE-6785] Add lib spark_service_account #208
Conversation
Batalex
left a comment
LGTM, left a minor comment about pinning ops and ops-scenario.
I ignored spark_service_account.py since we already reviewed it in a different PR.
pytest<8.2.0
pytest-mock
coverage[toml]
ops-scenario
suggestion: We need to be extra careful with the version here, as ops-scenario depends on ops' public and private stuff. Can we pin both versions (here and in requirements.txt) so that we don't have a bad surprise in a couple of months?
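A pinned pair could look like this in the test requirements (a sketch only; the version numbers are illustrative, not a recommendation for this repo):

```
pytest<8.2.0
pytest-mock
coverage[toml]
ops==2.15.0          # pin ops together with ops-scenario
ops-scenario==6.1.5  # illustrative pin; pick the pair known to work together
```

The same `ops` pin would then be mirrored in requirements.txt so both files resolve to the same version.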
deusebio
left a comment
Looks good! I only have one small, not-very-critical point.
get-spark-properties:
  description: Get spark properties
add-spark-property:
nitpick: it is not a very strong point, but I'd rather feed spark properties via config instead of actions, which we know is not the UX we would like to have. It is not too critical since this is just a charm for testing, but if it is simple enough, I would "nudge" towards correct patterns in the tests too.
Having these supplied via charm config requires the spark properties to be declarative: adding a new property during a test would then require the test to restate everything added up to that point, which I think would be a bit of an overkill for a test charm.
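The trade-off can be shown with a plain-Python sketch (this is illustrative only, not the actual test charm code; `FakeCharm` and its handler names are made up): a declarative config replaces the whole property set on every update, while an action adds one property at a time.

```python
# Illustrative sketch (not the real charm): declarative config vs. incremental action.

class FakeCharm:
    def __init__(self) -> None:
        self.properties: dict[str, str] = {}

    def on_config_changed(self, spark_properties: dict[str, str]) -> None:
        # Config is declarative: the new value *replaces* the old state,
        # so the test must restate every property it still wants to keep.
        self.properties = dict(spark_properties)

    def on_add_spark_property(self, key: str, value: str) -> None:
        # An action is incremental: one property per call, prior state kept.
        self.properties[key] = value


charm = FakeCharm()
charm.on_config_changed({"spark.app.name": "demo"})
charm.on_add_spark_property("spark.executor.instances", "2")
print(charm.properties)

# A later config update that forgets a property silently drops it:
charm.on_config_changed({"spark.app.name": "demo"})
print("spark.executor.instances" in charm.properties)
```

This is exactly the bookkeeping burden mentioned above: with config, every test step has to carry the full accumulated property set forward.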
Actually, now that I must define the behavior of both provider and requirer in charm-relation-interfaces, I have one substantial concern.
I'd like some clarification about what is expected to happen in the case of service account collision, as it's not clear from the initial spec document and the shared lib itself.
I can infer what could be the expected behavior from the implementations we already have, though.
As we can see from https://github.com/canonical/spark-integration-hub-k8s-operator/blob/main/src/events/provider.py#L60 the implementation we have for the provider side does not allow two requirers to request the same service account.
I confirmed this locally:
spark-client.service-account-registry create --username test1 --namespace test
spark-client.service-account-registry create --username test1 --namespace test
Could not create the service account. A serviceaccount with name 'test1' already exists.

On the other hand, Kyuubi requests a service account based on a charm configuration option but then uses what it gets from the relation: https://github.com/canonical/kyuubi-k8s-operator/blob/main/src/core/domain.py#L115.
Is it correct to say:
- "the requirer charm can tolerate a different service-account"
- "there is no expectation on the provider side to find an alternative service account name in case of a collision"
- "service account sharing between charms is unsupported" (but this is an issue, because collisions are likely to happen since the service account name is option-based and not relation-id-based in our first charm implementing the interface)?
@Batalex The service account provided by the provider will always be the one requested by the requirer. Either this happens or the service account creation itself fails -- the provider would never provide a different service account than what was requested in the first place.
I agree here. There is a chance that the requested service account collides with one already present in the k8s cluster. In that case, I guess the current state of things is to expect the user to choose a different service account; however, we should probably relay this information to the user via some blocked state rather than having the charm crash without the user knowing what happened. In any case, this is something the provider should implement as part of its handling.

I prefer the consumer (and thus the Kyuubi charm) to be the one to go to blocked state and inform about this collision -- however, we would need a way to relay this information from the provider to the requirer.
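One possible shape for that relay, as a minimal sketch: the provider writes a collision marker into the relation databag instead of crashing, and the requirer turns it into a blocked status. The databag is modeled here as a plain dict, and the `error` / `service-account` field names are assumptions, not part of the actual lib.

```python
# Sketch only: field names ("error", "service-account") are hypothetical,
# and a plain dict stands in for the relation databag.

def provider_handle_request(existing_accounts: set[str], requested: str, databag: dict[str, str]) -> None:
    if requested in existing_accounts:
        # Do not invent an alternative name; just relay the failure.
        databag["error"] = f"service account '{requested}' already exists"
    else:
        existing_accounts.add(requested)
        databag["service-account"] = requested


def requirer_status(databag: dict[str, str]) -> str:
    if "error" in databag:
        # The requirer (e.g. Kyuubi) surfaces a blocked state, not a crash.
        return "blocked: " + databag["error"]
    if "service-account" in databag:
        return "active"
    return "waiting"


accounts, bag = {"test1"}, {}
provider_handle_request(accounts, "test1", bag)
print(requirer_status(bag))
```

With this shape, both positions above are satisfied: the provider detects the collision, but it is the requirer that goes to blocked and informs the user.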
This PR aims to add a new lib named `spark_service_account`, which defines Provider and Requirer interfaces for the `spark_service_account` relation interface. This relation interface is used by `spark-integration-hub-k8s` (as provider) and `kyuubi-k8s` (as requirer).

The new lib is tested against `juju > 3` and `base=jammy` only, because the relation interface makes use of Juju secrets and all charms that will use the lib have `jammy` as base.

I couldn't get past the `CodeQL Scanning Results` failure -- this is something that comes from the `data_interfaces` lib. There are existing advisories for this here; the new ones emerged because the new lib now needs `data_interfaces` as its own dependency, thus the test charms used in integration tests have added `data_interfaces` to their libs.