
RHCLOUD-45005: Use ResourceType/ReporterType in SchemaRepository and ResourceEvent #1333

Open
snehagunta wants to merge 1 commit into project-kessel:main from snehagunta:RHCLOUD-45005-tiny-types-resource-reporter-type

Conversation

Contributor

@snehagunta snehagunta commented May 1, 2026

Summary

  • Replace raw strings with model.ResourceType and model.ReporterType in the SchemaRepository interface, SchemaService, and ResourceEvent interface
  • Add SchemaRepositoryKey() method to ResourceType for normalized map lookups
  • Update ResourceReportEvent and ResourceDeleteEvent to return typed values
  • Update outbox event serialization to use .String() on typed getters
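The tiny-type shape described above can be sketched in Go. NewResourceType, String(), and SchemaRepositoryKey() are named in this PR; the empty-string check and lowercase normalization below are illustrative assumptions, not the repository's actual rules:

```go
package main

import (
	"fmt"
	"strings"
)

// ResourceType is a sketch of the tiny type this PR threads through the
// SchemaRepository and ResourceEvent interfaces.
type ResourceType struct{ value string }

// NewResourceType validates the raw label; the real constructor's rules
// may differ (this sketch only rejects blank input).
func NewResourceType(s string) (ResourceType, error) {
	if strings.TrimSpace(s) == "" {
		return ResourceType{}, fmt.Errorf("resource type cannot be empty")
	}
	return ResourceType{value: s}, nil
}

// String returns the original label, used at serialization boundaries.
func (r ResourceType) String() string { return r.value }

// SchemaRepositoryKey returns a normalized key for map lookups, so that
// differently-cased labels such as "RHEL/Host" resolve to one entry.
func (r ResourceType) SchemaRepositoryKey() string {
	return strings.ToLower(r.value)
}

func main() {
	rt, err := NewResourceType("RHEL/Host")
	if err != nil {
		panic(err)
	}
	fmt.Println(rt.String(), rt.SchemaRepositoryKey())
}
```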

Test plan

  • Build passes (go build ./...)
  • All unit tests pass
  • Schema repository tests updated for typed parameters
  • Outbox event tests verify typed values and JSON serialization

Made with Cursor

Summary by CodeRabbit

  • Refactor

    • Resource and reporter identifiers are now strongly typed across schema APIs, validation, and storage for improved consistency and runtime safety.
  • Bug Fixes

    • Schema loading, normalization, and validation now reject invalid resource/reporter identifiers and ensure consistent serialization in outgoing events.
  • Tests

    • Tests updated to use typed identifiers and to align with new validation and serialization behavior.

Contributor

coderabbitai Bot commented May 1, 2026

Warning

Rate limit exceeded

@snehagunta has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 42 minutes and 59 seconds before requesting another review.

To keep reviews running without waiting, you can enable the usage-based add-on for your organization. This allows additional reviews beyond the hourly cap. Account admins can enable it under billing.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 149a6191-9201-48cd-8a01-44fcfbfc0853

📥 Commits

Reviewing files that changed from the base of the PR and between b009ce1 and 7ad9481.

📒 Files selected for processing (13)
  • cmd/schema/schema.go
  • internal/biz/model/resource_delete_event.go
  • internal/biz/model/resource_event.go
  • internal/biz/model/resource_report_event.go
  • internal/biz/model/schema_repository.go
  • internal/biz/model/schema_service.go
  • internal/biz/model_legacy/outboxevents.go
  • internal/biz/model_legacy/outboxevents_test.go
  • internal/biz/usecase/resources/resource_service.go
  • internal/biz/usecase/resources/resource_service_test.go
  • internal/data/schema_inmemory.go
  • internal/data/schema_inmemory_test.go
  • internal/service/resources/kesselinventoryservice_test.go
📝 Walkthrough

Walkthrough

This PR replaces raw string resource/reporter identifiers with strongly-typed domain types (ResourceType, ReporterType) across interfaces, models, repository implementations, services, CLI preload, outbox serialization, and tests.

Changes

Cohort / File(s) Summary
Event Interface & Implementations
internal/biz/model/resource_event.go, internal/biz/model/resource_delete_event.go, internal/biz/model/resource_report_event.go
Changed ResourceType() and ReporterType() signatures to return ResourceType / ReporterType instead of string; implementations now return typed fields directly.
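A minimal sketch of the typed-getter shape this cohort describes, assuming string-backed tiny types (the real types and fields live in internal/biz/model):

```go
package main

import "fmt"

// String-backed stand-ins for the domain tiny types; the actual
// definitions in internal/biz/model may be richer.
type ResourceType string
type ReporterType string

func (r ResourceType) String() string { return string(r) }
func (r ReporterType) String() string { return string(r) }

// ResourceEvent getters now return typed values instead of raw strings.
type ResourceEvent interface {
	ResourceType() ResourceType // was: string
	ReporterType() ReporterType // was: string
}

// ResourceReportEvent returns its typed fields directly.
type ResourceReportEvent struct {
	resourceType ResourceType
	reporterType ReporterType
}

func (e ResourceReportEvent) ResourceType() ResourceType { return e.resourceType }
func (e ResourceReportEvent) ReporterType() ReporterType { return e.reporterType }

func main() {
	var ev ResourceEvent = ResourceReportEvent{resourceType: "host", reporterType: "hbi"}
	fmt.Println(ev.ResourceType(), ev.ReporterType())
}
```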
Schema Models & Repository Interface
internal/biz/model/schema_repository.go
Switched struct fields and repository method signatures to use ResourceType / ReporterType (list/get/delete methods now accept/return typed values and typed slices).
Schema Service Validation
internal/biz/model/schema_service.go
Validation methods (IsReporterForResource, CommonShallowValidate, ReporterShallowValidate) updated to accept typed ResourceType / ReporterType; logging/error text formats call .String() where needed.
In-memory Schema Repo & Tests
internal/data/schema_inmemory.go, internal/data/schema_inmemory_test.go
Repository implementation and tests migrated to typed identifiers: storage keys, lookup, normalization, JSON/dir loading, helper signatures, and test fixtures updated; added stricter failures for invalid type strings.
Outbox Events & Tests
internal/biz/model_legacy/outboxevents.go, internal/biz/model_legacy/outboxevents_test.go
Event serialization now converts typed identifiers to strings (.String()) for payload/metadata; tests adjusted to compare typed values and stringified payload fields accordingly.
Usecase Layer & Tests
internal/biz/usecase/resources/resource_service.go, internal/biz/usecase/resources/resource_service_test.go
Validation flow now forwards typed ResourceType / ReporterType to schema service methods; tests updated to construct typed values (some missing-field assertions removed).
Service Tests / Fixtures
internal/service/resources/kesselinventoryservice_test.go
Test fixtures and pagination token handling updated to use typed model helpers for resource/reporter and continuation tokens.
CLI Schema Preload
cmd/schema/schema.go
Schema preloading and YAML normalization validate and normalize resource/reporter identifiers using typed constructors (bizmodel.NewResourceType / NewReporterType) and skip invalid directories.
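A hedged sketch of the preload behavior: directory names are parsed with a typed constructor, and names that fail to parse are skipped. The validation rule inside NewResourceType here is an assumption for illustration only:

```go
package main

import (
	"fmt"
	"strings"
)

type ResourceType string

// NewResourceType mimics the typed constructor used during preload;
// the real rules in bizmodel may differ (this sketch rejects blanks
// and embedded whitespace).
func NewResourceType(s string) (ResourceType, error) {
	if strings.TrimSpace(s) == "" || strings.ContainsAny(s, " \t") {
		return "", fmt.Errorf("invalid resource type %q", s)
	}
	return ResourceType(s), nil
}

// preload walks schema directory names, keeping only those that parse
// as valid resource types and skipping invalid directories.
func preload(dirs []string) []ResourceType {
	var valid []ResourceType
	for _, d := range dirs {
		rt, err := NewResourceType(d)
		if err != nil {
			fmt.Printf("skipping invalid directory %q: %v\n", d, err)
			continue
		}
		valid = append(valid, rt)
	}
	return valid
}

func main() {
	fmt.Println(preload([]string{"host", "bad dir", "k8s_cluster"}))
}
```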

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~55 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 7.32%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Title check — ✅ Passed: the title clearly and specifically summarizes the main change: replacing raw strings with typed ResourceType/ReporterType in the SchemaRepository and ResourceEvent interfaces.
  • Description check — ✅ Passed: the description adequately covers the main changes (type replacements, new method, updated events and serialization) and includes test plan details, but the PR template sections (acceptance criteria, 4-eye-principle, security, deployment status) are not filled out.
  • Linked Issues check — ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Review rate limit: 0/1 reviews remaining, refill in 42 minutes and 59 seconds.

Comment @coderabbitai help to get the list of available commands and usage tips.


codecov Bot commented May 1, 2026

Codecov Report

❌ Patch coverage is 66.33663% with 34 lines in your changes missing coverage. Please review.

Files with missing lines Patch % Lines
cmd/schema/schema.go 0.00% 12 Missing ⚠️
internal/data/schema_inmemory.go 81.25% 4 Missing and 8 partials ⚠️
internal/biz/model/schema_service.go 33.33% 6 Missing ⚠️
internal/biz/model/resource_delete_event.go 0.00% 4 Missing ⚠️
Flag Coverage Δ
main 49.63% <59.40%> (-0.08%) ⬇️
v1beta2 64.89% <75.28%> (-0.07%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

Files with missing lines Coverage Δ
internal/biz/model/resource_report_event.go 82.43% <100.00%> (ø)
internal/biz/model/schema_repository.go 0.00% <ø> (ø)
internal/biz/model_legacy/outboxevents.go 73.68% <100.00%> (ø)
internal/biz/usecase/resources/resource_service.go 73.26% <100.00%> (-0.41%) ⬇️
internal/biz/model/resource_delete_event.go 55.55% <0.00%> (ø)
internal/biz/model/schema_service.go 68.96% <33.33%> (ø)
cmd/schema/schema.go 0.00% <0.00%> (ø)
internal/data/schema_inmemory.go 75.36% <81.25%> (-2.82%) ⬇️

... and 1 file with indirect coverage changes


snehagunta added a commit to snehagunta/inventory-api that referenced this pull request May 1, 2026
This method was accidentally included in the ContinuationToken branch.
It belongs in PR project-kessel#1333 (ResourceType/ReporterType).

Co-authored-by: Cursor <cursoragent@cursor.com>
snehagunta added a commit that referenced this pull request May 1, 2026
* Use ContinuationToken tiny type for pagination and streaming

Replace raw string with model.ContinuationToken in pagination,
ReadTuplesItem, LookupObjectsItem, and LookupSubjectsItem. Add
DeserializeContinuationToken for trusted reconstruction from gRPC
responses and .String() at proto response boundaries.

Made-with: Cursor

* fix: use ContinuationToken type in tuples pagination conversion

The paginationFromProto function was still using *string for the
continuation token instead of *model.ContinuationToken, and
readTuplesItemToProto was passing ContinuationToken directly to
the proto field instead of calling .String().

Made-with: Cursor

* refactor: rename 'out' to 'pagination' in paginationFromProto

Address review feedback from Rajagopalan-Ranganathan.

Co-authored-by: Cursor <cursoragent@cursor.com>

* fix: remove SchemaRepositoryKey that belongs in 2D PR

This method was accidentally included in the ContinuationToken branch.
It belongs in PR #1333 (ResourceType/ReporterType).

Co-authored-by: Cursor <cursoragent@cursor.com>

---------

Co-authored-by: Cursor <cursoragent@cursor.com>
@snehagunta snehagunta force-pushed the RHCLOUD-45005-tiny-types-resource-reporter-type branch from f7a6d7c to 588fec3 on May 1, 2026 at 17:06
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/data/schema_inmemory.go (1)

266-318: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Fail fast on schema read errors.

This block treats any read error as “schema missing”. A permission/I/O failure in loadCommonResourceDataSchema or loadResourceSchema will leave the repository partially loaded and silently disable validation for that resource/reporter. Only the not-found case should be ignored.

Suggested direction
+import "errors"
...
-       commonResourceSchema, err := loadCommonResourceDataSchema(resourceLabel, resourceDir)
-       if err == nil {
+       commonResourceSchema, err := loadCommonResourceDataSchema(resourceLabel, resourceDir)
+       if err != nil {
+               if !errors.Is(err, os.ErrNotExist) {
+                       return nil, err
+               }
+       } else {
                rt, perr := bizmodel.NewResourceType(resourceLabel)
                if perr != nil {
                        return nil, fmt.Errorf("invalid resource type directory %q: %w", resourceLabel, perr)
                }
                err = repository.CreateResourceSchema(ctx, bizmodel.ResourceSchema{
                        ResourceType:     rt,
                        ValidationSchema: validationSchemaFromString(commonResourceSchema),
                })
                if err != nil {
                        return nil, err
                }
        }
...
-               reporterSchema, isReporterSchemaExists, err := loadResourceSchema(resourceLabel, reporterLabel, resourceDir)
-               if err == nil && isReporterSchemaExists {
+               reporterSchema, isReporterSchemaExists, err := loadResourceSchema(resourceLabel, reporterLabel, resourceDir)
+               if err != nil {
+                       return nil, err
+               }
+               if isReporterSchemaExists {
                        ...
                } else {
                        log.Warnf("No schema found for %s:%s", resourceLabel, reporterLabel)
                }
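The same fail-fast pattern in isolation, as a self-contained sketch (loadOptionalSchema is a hypothetical stand-in for the loaders named above): only a not-found error counts as "schema missing"; any other read error is propagated instead of silently swallowed.

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// loadOptionalSchema reads a schema file that may legitimately be absent.
// A missing file yields (ok=false, err=nil); permission or I/O failures
// are returned so the caller fails fast instead of validating with a
// partially loaded repository.
func loadOptionalSchema(path string) (schema string, ok bool, err error) {
	data, err := os.ReadFile(path)
	if err != nil {
		if errors.Is(err, fs.ErrNotExist) {
			return "", false, nil // schema genuinely missing: skip
		}
		return "", false, err // real failure: propagate
	}
	return string(data), true, nil
}

func main() {
	_, ok, err := loadOptionalSchema("definitely-missing.json")
	fmt.Println(ok, err)
}
```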
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/data/schema_inmemory.go` around lines 266 - 318, The code currently
treats any error from loadCommonResourceDataSchema and loadResourceSchema as
"schema missing" and continues, which hides I/O/permission failures; change the
error handling so that only not-found errors are ignored: for
loadCommonResourceDataSchema, if err != nil then if os.IsNotExist(err) (or
errors.Is(err, fs.ErrNotExist)) continue, else return the error; for
loadResourceSchema, if err != nil then if the error indicates not-found (use
os.IsNotExist/errors.Is) log the missing schema and continue, otherwise return
the error; keep the existing successful-path behavior that calls
bizmodel.NewResourceType/NewReporterType and
repository.CreateResourceSchema/CreateReporterSchema and
validationSchemaFromString.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/data/schema_inmemory.go`:
- Around line 446-455: findResourceTypeFromJsonKey currently matches jsonKey
against rt.SchemaRepositoryKey() raw, which fails when the cache uses
non-canonical resource labels; split the jsonKey at the first ':' to extract the
resource segment, normalize that resource segment using the same normalization
helper used elsewhere for canonical resource labels (reuse the existing resource
normalization function in bizmodel, e.g.,
NormalizeResourceLabel/NormalizeResourceSegment), then recombine with ':' and
compare against rt.SchemaRepositoryKey()+":" to determine a match; update the
function findResourceTypeFromJsonKey to parse, normalize, and then compare so
reporter cache keys like "RHEL/Host:hbi" or "K8S_CLUSTER:hbi" will match.
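The suggested lookup fix can be sketched as below; normalizeResourceLabel stands in for the bizmodel helper the reviewer references (the exact helper name and normalization rule are hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeResourceLabel stands in for the canonical resource-label
// normalization helper; lowercasing is an assumption for illustration.
func normalizeResourceLabel(s string) string { return strings.ToLower(s) }

// matchesRepositoryKey splits a cache key like "RHEL/Host:hbi" at the first
// ':', normalizes only the resource segment, recombines, and compares
// against the repository's canonical key.
func matchesRepositoryKey(jsonKey, repoKey string) bool {
	resource, reporter, found := strings.Cut(jsonKey, ":")
	if !found {
		return false
	}
	return normalizeResourceLabel(resource)+":"+reporter == repoKey
}

func main() {
	fmt.Println(matchesRepositoryKey("RHEL/Host:hbi", "rhel/host:hbi"))
}
```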

---

Outside diff comments:
In `@internal/data/schema_inmemory.go`:
- Around line 266-318: The code currently treats any error from
loadCommonResourceDataSchema and loadResourceSchema as "schema missing" and
continues, which hides I/O/permission failures; change the error handling so
that only not-found errors are ignored: for loadCommonResourceDataSchema, if err
!= nil then if os.IsNotExist(err) (or errors.Is(err, fs.ErrNotExist)) continue,
else return the error; for loadResourceSchema, if err != nil then if the error
indicates not-found (use os.IsNotExist/errors.Is) log the missing schema and
continue, otherwise return the error; keep the existing successful-path behavior
that calls bizmodel.NewResourceType/NewReporterType and
repository.CreateResourceSchema/CreateReporterSchema and
validationSchemaFromString.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 5a284164-d2a5-45e2-8e11-592f6113baf4

📥 Commits

Reviewing files that changed from the base of the PR and between 79b7efe and 588fec3.

📒 Files selected for processing (13)
  • internal/biz/model/common.go
  • internal/biz/model/resource_delete_event.go
  • internal/biz/model/resource_event.go
  • internal/biz/model/resource_report_event.go
  • internal/biz/model/schema_repository.go
  • internal/biz/model/schema_service.go
  • internal/biz/model_legacy/outboxevents.go
  • internal/biz/model_legacy/outboxevents_test.go
  • internal/biz/usecase/resources/resource_service.go
  • internal/biz/usecase/resources/resource_service_test.go
  • internal/data/schema_inmemory.go
  • internal/data/schema_inmemory_test.go
  • internal/service/resources/kesselinventoryservice_test.go

Comment thread internal/data/schema_inmemory.go
Contributor Author

Sanity Test Results

Branch: RHCLOUD-45005-tiny-types-resource-reporter-type

Commit: 588fec34
Result: All 16/16 tests PASSED

Full test output
[INFO] Building image: quay.io/sgunta/kessel:588fec34-sanity (platform: linux/amd64)...
time="2026-05-01T13:06:36-04:00" level=warning msg="missing \"VERSION\" build argument. Try adding \"--build-arg VERSION=<VALUE>\" to the command line"
[1/2] STEP 1/14: FROM registry.access.redhat.com/ubi9/ubi-minimal:9.7-1776833838 AS builder
[1/2] STEP 2/14: ARG TARGETARCH
--> Using cache 61a0f7ba911034999b84eb3c7e35a5515da3391f221c5b7f05b3b6ecf43f5c99
--> 61a0f7ba9110
[1/2] STEP 3/14: USER root
--> Using cache 4401f6e94c438479b2725c3123d72941ace7a5d97232ddb9c5ae83faa70c3544
--> 4401f6e94c43
[1/2] STEP 4/14: RUN microdnf install -y tar gzip make which gcc gcc-c++ cyrus-sasl-lib findutils git go-toolset
--> Using cache 17f9624dcd4bc328dfb511d9949c8d4857f8b0fd8f70dcfba685b1d5b1cc63aa
--> 17f9624dcd4b
[1/2] STEP 5/14: WORKDIR /workspace
--> Using cache f53159c269f82f9f8fef07e1f6c62d4e9eba99695e8f1ddedae1600869731dbb
--> f53159c269f8
[1/2] STEP 6/14: COPY go.mod go.sum ./
--> Using cache d98c6948363aea8f8a02dc0d6c0e775c3e128a7d8101a1d34f1333643d69a90a
--> d98c6948363a
[1/2] STEP 7/14: ENV CGO_ENABLED 1
--> Using cache 3682e7d0bb508d9b177ae734d2bc08b81e697244daedf21c20fcfbd596ed7d49
--> 3682e7d0bb50
[1/2] STEP 8/14: RUN go mod download
--> Using cache 8184aa0e57e9232bb317b68340a1ac16e71a697dba70c3212eca7ec673ffc733
--> 8184aa0e57e9
[1/2] STEP 9/14: COPY api ./api
--> Using cache b5be3802026e116ec98b1f1b8c5ba4924bc1f8a12d30a4c4826ca5b670c80a5a
--> b5be3802026e
[1/2] STEP 10/14: COPY cmd ./cmd
--> Using cache 1d7a916f11dd1f4217bf2c19280e71cfd5b5f87d418f4650aa775ff51dc3ad9f
--> 1d7a916f11dd
[1/2] STEP 11/14: COPY internal ./internal
--> 21ed9ed53fac
[1/2] STEP 12/14: COPY main.go Makefile ./
--> 31e2e0bd2b0f
[1/2] STEP 13/14: ARG VERSION
--> db7ba771768a
[1/2] STEP 14/14: RUN VERSION=${VERSION} make build
fatal: not a git repository (or any of the parent directories): .git
Makefile:110: Setting GOEXPERIMENT=strictfipsruntime,boringcrypto - this generally causes builds to fail unless building inside the provided Dockerfile. If building locally, run `make local-build`
mkdir -p bin/ && GOOS=linux GOARCH=amd64 CGO_ENABLED=1 GOFLAGS="-tags=fips_enabled" GOEXPERIMENT=strictfipsruntime,boringcrypto GOOS=linux /usr/bin/go build -gcflags="all=-trimpath=/root/go" -asmflags="all=-trimpath=/root/go" -ldflags "-X cmd.Version=" -o ./bin/ ./...
--> 456ffe590660
[2/2] STEP 1/8: FROM registry.access.redhat.com/ubi9/ubi-minimal:9.7-1776833838
[2/2] STEP 2/8: COPY --from=builder /workspace/bin/inventory-api /usr/local/bin/
--> 98da737efd3f
[2/2] STEP 3/8: EXPOSE 8081
--> c1e2d0fb36f0
[2/2] STEP 4/8: EXPOSE 9081
--> 6fa33de34cf0
[2/2] STEP 5/8: USER 1001
--> e698fbdbae5f
[2/2] STEP 6/8: ENV PATH="$PATH:/usr/local/bin"
--> 0d21c47de0b1
[2/2] STEP 7/8: ENTRYPOINT ["inventory-api"]
--> 050f23da019f
[2/2] STEP 8/8: LABEL name="kessel-inventory-api"       version="0.0.1"       summary="Kessel inventory-api service"       description="The Kessel inventory-api service"
[2/2] COMMIT quay.io/sgunta/kessel:588fec34-sanity
--> ab82d9fc38d8
[Warning] one or more build args were not consumed: [GIT_COMMIT]
Successfully tagged quay.io/sgunta/kessel:588fec34-sanity
ab82d9fc38d83dc2afc2e1aa8e28690a002d6393154b0e0b8ba1c1583ee2dd2d
[INFO] Pushing image: quay.io/sgunta/kessel:588fec34-sanity...
Getting image source signatures
Copying blob sha256:d7f219293b022845f4da59a6e90e2dcd00f16cf356858138f28db99c2a619edd
Copying blob sha256:73dd5047f87493ec5a883a7efcc591fcf9fd17284490e83ae258105607396e34
Copying config sha256:ab82d9fc38d83dc2afc2e1aa8e28690a002d6393154b0e0b8ba1c1583ee2dd2d
Writing manifest to image destination
[INFO] Image pushed successfully
[INFO] Updating bonfire config with IMAGE_TAG=588fec34-sanity, INVENTORY_IMAGE=quay.io/sgunta/kessel
[WARN] python3+pyyaml not available, using sed for bonfire config update
[INFO] Bonfire config updated
[INFO] Deploying kessel to ephemeral via bonfire...
2026-05-01 13:13:05 [    INFO] [          MainThread] running (pid 24774): oc project -q 
2026-05-01 13:13:05 [    INFO] [           pid-24774]  |stderr| error: you do not have rights to view project "ephemeral-wex1at" specified in your config or the project doesn't exist
2026-05-01 13:13:05 [ WARNING] [          MainThread] Non-zero return code ignored
2026-05-01 13:13:05 [    INFO] [          MainThread] current namespace could not be used (not reserved, expired, or not owned), reserving a new one
2026-05-01 13:13:06 [    INFO] [          MainThread] Checking for existing reservations for 'sgunta'
2026-05-01 13:13:07 [    INFO] [          MainThread] checking for available namespaces to reserve...
2026-05-01 13:13:07 [    INFO] [          MainThread] pool size limit is defined as 0 in 'default' pool
2026-05-01 13:13:07 [    INFO] [          MainThread] processing namespace reservation
2026-05-01 13:13:07 [ WARNING] [          MainThread] converted template's deprecated apiVersion 'v1' to 'template.openshift.io/v1'
2026-05-01 13:13:07 [    INFO] [          MainThread] running (pid 24780): oc apply -f - 
2026-05-01 13:13:07 [    INFO] [           pid-24780]  |stdout| namespacereservation.cloud.redhat.com/bonfire-reservation-befa35a0 created
2026-05-01 13:13:07 [    INFO] [          MainThread] waiting for reservation 'bonfire-reservation-befa35a0' to get picked up by operator
2026-05-01 13:13:08 [    INFO] [          MainThread] namespace 'ephemeral-kctrxa' is reserved by 'sgunta' for '1h' from the default pool
2026-05-01 13:13:08 [    INFO] [          MainThread] running (pid 24783): oc project ephemeral-kctrxa 
2026-05-01 13:13:08 [    INFO] [           pid-24783]  |stdout| Now using project "ephemeral-kctrxa" on server "https://api.crc-eph.r9lp.p1.openshiftapps.com:6443".
2026-05-01 13:13:08 [    INFO] [          MainThread] namespace console url: https://console-openshift-console.apps.crc-eph.r9lp.p1.openshiftapps.com/k8s/cluster/projects/ephemeral-kctrxa
2026-05-01 13:13:09 [    INFO] [          MainThread] searching for ClowdEnvironment tied to ns 'ephemeral-kctrxa'...
2026-05-01 13:13:10 [    INFO] [          MainThread] templates will be processed with parameter ENV_NAME='env-ephemeral-kctrxa'
2026-05-01 13:13:10 [    INFO] [          MainThread] processing app templates...
2026-05-01 13:13:10 [    INFO] [          MainThread] reading config from: /Users/snehagunta/.config/bonfire/config.yaml
2026-05-01 13:13:10 [    INFO] [          MainThread] fetching target env apps config using source: appsre
2026-05-01 13:13:10 [    INFO] [          MainThread] fetching app deployment configs for env 'insights-ephemeral'
2026-05-01 13:13:11 [    INFO] [          MainThread] setting git refs/image tags to match deploy config found in env: insights-production, fallback env: insights-stage
2026-05-01 13:13:11 [    INFO] [          MainThread] fetching app deployment configs for env 'insights-production'
2026-05-01 13:13:11 [    INFO] [          MainThread] fetching app deployment configs for env 'insights-stage'
2026-05-01 13:13:12 [    INFO] [          MainThread] local configuration found for apps: ['kessel']
2026-05-01 13:13:12 [    INFO] [          MainThread] diff in apps config after merging local config into remote config:
--- 

+++ 

@@ -1763,16 +1763,18 @@

                             'ref': 'master',
                             'repo': 'project-kessel/relations-api'},
                            {'hash_length': None,
-                            'host': 'github',
+                            'host': 'local',
                             'name': 'kessel-inventory',
                             'parameters': {'BACKOFFICE_HOST': 'backoffice-proxy.apps.ext.spoke.preprod.us-west-2.aws.paas.redhat.com',
+                                           'IMAGE_TAG': '588fec34-sanity',
+                                           'INVENTORY_IMAGE': 'quay.io/sgunta/kessel',
                                            'KAFKA_BOOTSTRAP_HOST': 'mq-kafka',
                                            'KAFKA_BOOTSTRAP_PORT': 29092,
                                            'TENANT_TRANSLATOR_HOST': 'gateway.3scale-dev.svc.cluster.local',
                                            'TENANT_TRANSLATOR_PORT': '8891'},
-                            'path': '/deploy/kessel-inventory-ephem.yaml',
+                            'path': 'deploy/kessel-inventory-ephem.yaml',
                             'ref': 'master',
-                            'repo': 'project-kessel/inventory-api'}],
+                            'repo': '/Users/snehagunta/git/kessel/inventory-api'}],
             'name': 'kessel'},
  'malware-detection': {'components': [{'hash_length': None,
                                        'host': 'github',
2026-05-01 13:13:12 [    INFO] [          MainThread] processing app 'kessel'
2026-05-01 13:13:12 [    INFO] [          MainThread] --> processing component kessel-inventory-consumer
2026-05-01 13:13:13 [    INFO] [          MainThread] failed to fetch git ref 'master' (http code: 404, response txt: {"message":"Not Found","documentation_url":"https://docs.github.com/rest/git/refs#get-all-references-in-a-namespace","status":"404"})
2026-05-01 13:13:13 [    INFO] [          MainThread] trying alternate: main
2026-05-01 13:13:13 [    INFO] [          MainThread] fetch succeeded for ref 'main'
2026-05-01 13:13:13 [    INFO] [          MainThread] --> processing component kessel-inventory
2026-05-01 13:13:14 [    INFO] [          MainThread] --> processing component kessel-relations
2026-05-01 13:13:14 [    INFO] [          MainThread] failed to fetch git ref 'master' (http code: 404, response txt: {"message":"Not Found","documentation_url":"https://docs.github.com/rest/git/refs#get-all-references-in-a-namespace","status":"404"})
2026-05-01 13:13:14 [    INFO] [          MainThread] trying alternate: main
2026-05-01 13:13:14 [    INFO] [          MainThread] fetch succeeded for ref 'main'
2026-05-01 13:13:14 [    INFO] [          MainThread] --> processing component kessel-kafkaconnect
2026-05-01 13:13:14 [    INFO] [          MainThread] failed to fetch git ref 'master' (http code: 404, response txt: {"message":"Not Found","documentation_url":"https://docs.github.com/rest/git/refs#get-all-references-in-a-namespace","status":"404"})
2026-05-01 13:13:14 [    INFO] [          MainThread] trying alternate: main
2026-05-01 13:13:14 [    INFO] [          MainThread] fetch succeeded for ref 'main'
2026-05-01 13:13:15 [    INFO] [          MainThread] applying app configs...
2026-05-01 13:13:15 [    INFO] [          MainThread] running (pid 24826): oc apply -f - -n ephemeral-kctrxa 
2026-05-01 13:13:15 [    INFO] [           pid-24826]  |stdout| configmap/kic-config created
2026-05-01 13:13:15 [    INFO] [           pid-24826]  |stdout| clowdapp.cloud.redhat.com/kessel-inventory-consumer created
2026-05-01 13:13:15 [    INFO] [           pid-24826]  |stdout| configmap/inventory-api-config created
2026-05-01 13:13:15 [    INFO] [           pid-24826]  |stdout| configmap/resources-tarball created
2026-05-01 13:13:15 [    INFO] [           pid-24826]  |stdout| clowdapp.cloud.redhat.com/kessel-inventory created
2026-05-01 13:13:16 [    INFO] [           pid-24826]  |stdout| cronjob.batch/business-metrics-collect created
2026-05-01 13:13:16 [    INFO] [           pid-24826]  |stdout| configmap/spicedb-schema created
2026-05-01 13:13:16 [    INFO] [           pid-24826]  |stdout| configmap/relations-api-config created
2026-05-01 13:13:16 [    INFO] [           pid-24826]  |stdout| secret/postgres-secret created
2026-05-01 13:13:16 [    INFO] [           pid-24826]  |stdout| secret/dev-spicedb-config created
2026-05-01 13:13:16 [    INFO] [           pid-24826]  |stdout| deployment.apps/postgres created
2026-05-01 13:13:16 [    INFO] [           pid-24826]  |stdout| persistentvolumeclaim/postgres-data created
2026-05-01 13:13:16 [    INFO] [           pid-24826]  |stdout| service/postgres created
2026-05-01 13:13:16 [    INFO] [           pid-24826]  |stdout| spicedbcluster.authzed.com/relations-spicedb created
2026-05-01 13:13:16 [    INFO] [           pid-24826]  |stdout| clowdapp.cloud.redhat.com/kessel-relations created
2026-05-01 13:13:16 [    INFO] [           pid-24826]  |stdout| configmap/kessel-kafka-connect-metrics created
2026-05-01 13:13:17 [    INFO] [           pid-24826]  |stdout| configmap/kessel-kafka-connect-log4j created
2026-05-01 13:13:17 [    INFO] [           pid-24826]  |stdout| kafkaconnect.kafka.strimzi.io/kessel-kafka-connect created
2026-05-01 13:13:17 [    INFO] [           pid-24826]  |stdout| kafkaconnector.kafka.strimzi.io/kessel-inventory-source-connector created
2026-05-01 13:13:17 [    INFO] [           pid-24826]  |stdout| kafkaconnector.kafka.strimzi.io/hbi-outbox-connector created
2026-05-01 13:13:17 [    INFO] [          MainThread] waiting on resources for max of 600sec...
2026-05-01 13:13:22 [    INFO] [          MainThread] checking for ClowdEnvironments to wait on...
2026-05-01 13:13:22 [    INFO] [          MainThread] will wait on ClowdEnvironment 'env-ephemeral-kctrxa' found on ClowdApp's .spec.envName
2026-05-01 13:13:22 [    INFO] [          MainThread] waiting up to 600sec on resources owned by ClowdEnvironment...
2026-05-01 13:13:22 [    INFO] [           thread-36] [clowdenvironment/env-ephemeral-kctrxa] owned resource deployment/env-ephemeral-kctrxa-featureflags is ready!
2026-05-01 13:13:22 [    INFO] [           thread-36] [clowdenvironment/env-ephemeral-kctrxa] owned resource deployment/env-ephemeral-kctrxa-featureflags-edge is ready!
2026-05-01 13:13:22 [    INFO] [           thread-36] [clowdenvironment/env-ephemeral-kctrxa] owned resource deployment/env-ephemeral-kctrxa-keycloak is ready!
2026-05-01 13:13:22 [    INFO] [           thread-36] [clowdenvironment/env-ephemeral-kctrxa] owned resource deployment/env-ephemeral-kctrxa-mbop is ready!
2026-05-01 13:13:22 [    INFO] [           thread-36] [clowdenvironment/env-ephemeral-kctrxa] owned resource deployment/env-ephemeral-kctrxa-minio is ready!
2026-05-01 13:13:22 [    INFO] [           thread-36] [clowdenvironment/env-ephemeral-kctrxa] owned resource deployment/env-ephemeral-kctrxa-mocktitlements is ready!
2026-05-01 13:13:22 [    INFO] [           thread-36] [clowdenvironment/env-ephemeral-kctrxa] owned resource deployment/featureflags-db is ready!
2026-05-01 13:13:22 [    INFO] [           thread-36] [clowdenvironment/env-ephemeral-kctrxa] owned resource deployment/keycloak-db is ready!
2026-05-01 13:13:22 [    INFO] [           thread-36] [clowdenvironment/env-ephemeral-kctrxa] owned resource kafka/env-ephemeral-kctrxa is ready!
2026-05-01 13:13:22 [    INFO] [           thread-36] [clowdenvironment/env-ephemeral-kctrxa] owned resource kafkaconnect/env-ephemeral-kctrxa is ready!
2026-05-01 13:13:22 [    INFO] [           thread-36] [clowdenvironment/env-ephemeral-kctrxa] resource is ready!
2026-05-01 13:13:23 [    INFO] [          MainThread] all resources being monitored reached 'ready' state
2026-05-01 13:13:23 [    INFO] [          MainThread] checking for ClowdApps to wait on...
2026-05-01 13:13:23 [    INFO] [          MainThread] waiting up to 599sec on resources owned by ClowdApps...
2026-05-01 13:13:23 [    INFO] [           thread-38] [clowdapp/kessel-inventory] waiting up to 599sec for resource to be 'ready'
2026-05-01 13:13:23 [    INFO] [           thread-39] [clowdapp/kessel-inventory-consumer] found owned resource deployment/kessel-inventory-consumer-service, not yet ready
2026-05-01 13:13:23 [    INFO] [           thread-39] [clowdapp/kessel-inventory-consumer] waiting up to 599sec for resource to be 'ready'
2026-05-01 13:13:23 [    INFO] [           thread-40] [clowdapp/kessel-relations] waiting up to 599sec for resource to be 'ready'
2026-05-01 13:13:24 [    INFO] [           thread-38] [clowdapp/kessel-inventory] found owned resource deployment/kessel-inventory-api, not yet ready
2026-05-01 13:13:24 [    INFO] [           thread-38] [clowdapp/kessel-inventory] found owned resource deployment/kessel-inventory-db, not yet ready
2026-05-01 13:14:06 [    INFO] [           thread-40] [clowdapp/kessel-relations] found owned resource deployment/kessel-relations-api, not yet ready
2026-05-01 13:14:16 [    INFO] [           thread-38] [clowdapp/kessel-inventory] owned resource deployment/kessel-inventory-db is ready!
2026-05-01 13:14:23 [    INFO] [           thread-38] [clowdapp/kessel-inventory] waiting 539sec longer
2026-05-01 13:14:23 [    INFO] [           thread-39] [clowdapp/kessel-inventory-consumer] waiting 539sec longer
2026-05-01 13:14:23 [    INFO] [           thread-40] [clowdapp/kessel-relations] waiting 539sec longer
2026-05-01 13:14:27 [    INFO] [           thread-40] [clowdapp/kessel-relations] owned resource deployment/kessel-relations-api is ready!
2026-05-01 13:14:28 [    INFO] [           thread-40] [clowdapp/kessel-relations] resource is ready!
2026-05-01 13:15:23 [    INFO] [           thread-39] [clowdapp/kessel-inventory-consumer] waiting 479sec longer
2026-05-01 13:15:23 [    INFO] [           thread-38] [clowdapp/kessel-inventory] waiting 479sec longer
2026-05-01 13:15:30 [    INFO] [           thread-38] [clowdapp/kessel-inventory] owned resource deployment/kessel-inventory-api is ready!
2026-05-01 13:15:31 [    INFO] [           thread-38] [clowdapp/kessel-inventory] resource is ready!
2026-05-01 13:15:40 [    INFO] [           thread-39] [clowdapp/kessel-inventory-consumer] owned resource deployment/kessel-inventory-consumer-service is ready!
2026-05-01 13:15:41 [    INFO] [           thread-39] [clowdapp/kessel-inventory-consumer] resource is ready!
2026-05-01 13:15:41 [    INFO] [          MainThread] all resources being monitored reached 'ready' state
2026-05-01 13:15:41 [    INFO] [          MainThread] checking for remaining namespace resources to wait on...
2026-05-01 13:15:41 [    INFO] [          MainThread] waiting up to 460sec on remaining namespace resources...
2026-05-01 13:15:41 [    INFO] [          thread-230] [replicaset/env-ephemeral-kctrxa-entity-operator-7c56458b55] owned resource pod/env-ephemeral-kctrxa-entity-operator-7c56458b55-782wp is ready!
2026-05-01 13:15:41 [    INFO] [          thread-230] [replicaset/env-ephemeral-kctrxa-entity-operator-7c56458b55] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-231] [replicaset/env-ephemeral-kctrxa-featureflags-57dbf8d6dd] owned resource pod/env-ephemeral-kctrxa-featureflags-57dbf8d6dd-8hgvw is ready!
2026-05-01 13:15:41 [    INFO] [          thread-231] [replicaset/env-ephemeral-kctrxa-featureflags-57dbf8d6dd] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-232] [replicaset/env-ephemeral-kctrxa-featureflags-edge-79d57bf95b] owned resource pod/env-ephemeral-kctrxa-featureflags-edge-79d57bf95b-skzn4 is ready!
2026-05-01 13:15:41 [    INFO] [          thread-232] [replicaset/env-ephemeral-kctrxa-featureflags-edge-79d57bf95b] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-233] [replicaset/env-ephemeral-kctrxa-keycloak-6d95dc9dcd] owned resource pod/env-ephemeral-kctrxa-keycloak-6d95dc9dcd-7b7mb is ready!
2026-05-01 13:15:41 [    INFO] [          thread-233] [replicaset/env-ephemeral-kctrxa-keycloak-6d95dc9dcd] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-234] [replicaset/env-ephemeral-kctrxa-mbop-fb5b7c77d] owned resource pod/env-ephemeral-kctrxa-mbop-fb5b7c77d-jmtc4 is ready!
2026-05-01 13:15:41 [    INFO] [          thread-234] [replicaset/env-ephemeral-kctrxa-mbop-fb5b7c77d] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-235] [replicaset/env-ephemeral-kctrxa-minio-79c6db798] owned resource pod/env-ephemeral-kctrxa-minio-79c6db798-8xsjg is ready!
2026-05-01 13:15:41 [    INFO] [          thread-235] [replicaset/env-ephemeral-kctrxa-minio-79c6db798] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-236] [replicaset/env-ephemeral-kctrxa-mocktitlements-74455dcf99] owned resource pod/env-ephemeral-kctrxa-mocktitlements-74455dcf99-fm8r8 is ready!
2026-05-01 13:15:41 [    INFO] [          thread-236] [replicaset/env-ephemeral-kctrxa-mocktitlements-74455dcf99] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-237] [replicaset/featureflags-db-694bc7889f] owned resource pod/featureflags-db-694bc7889f-qnmwg is ready!
2026-05-01 13:15:41 [    INFO] [          thread-237] [replicaset/featureflags-db-694bc7889f] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-238] [replicaset/keycloak-db-9744c9fb7] owned resource pod/keycloak-db-9744c9fb7-kv2kl is ready!
2026-05-01 13:15:41 [    INFO] [          thread-238] [replicaset/keycloak-db-9744c9fb7] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-239] [replicaset/postgres-778f49dc65] owned resource pod/postgres-778f49dc65-srk9h is ready!
2026-05-01 13:15:41 [    INFO] [          thread-239] [replicaset/postgres-778f49dc65] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-240] [deployment/env-ephemeral-kctrxa-entity-operator] owned resource replicaset/env-ephemeral-kctrxa-entity-operator-7c56458b55 is ready!
2026-05-01 13:15:41 [    INFO] [          thread-240] [deployment/env-ephemeral-kctrxa-entity-operator] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-241] [deployment/postgres] owned resource replicaset/postgres-778f49dc65 is ready!
2026-05-01 13:15:41 [    INFO] [          thread-241] [deployment/postgres] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-242] [kafkaconnect/kessel-kafka-connect] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-243] [replicaset/kessel-inventory-api-fdc6db4d4] owned resource pod/kessel-inventory-api-fdc6db4d4-kvbss is ready!
2026-05-01 13:15:41 [    INFO] [          thread-243] [replicaset/kessel-inventory-api-fdc6db4d4] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-244] [replicaset/kessel-inventory-consumer-service-67ccbf88cb] owned resource pod/kessel-inventory-consumer-service-67ccbf88cb-z72c2 is ready!
2026-05-01 13:15:41 [    INFO] [          thread-244] [replicaset/kessel-inventory-consumer-service-67ccbf88cb] waiting up to 460sec for resource to be 'ready'
2026-05-01 13:15:41 [    INFO] [          thread-245] [replicaset/kessel-inventory-db-6b4f4b5c76] owned resource pod/kessel-inventory-db-6b4f4b5c76-r5g9c is ready!
2026-05-01 13:15:41 [    INFO] [          thread-245] [replicaset/kessel-inventory-db-6b4f4b5c76] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-246] [replicaset/relations-spicedb-spicedb-74d5b78655] owned resource pod/relations-spicedb-spicedb-74d5b78655-zf97q is ready!
2026-05-01 13:15:41 [    INFO] [          thread-246] [replicaset/relations-spicedb-spicedb-74d5b78655] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-247] [deployment/relations-spicedb-spicedb] owned resource replicaset/relations-spicedb-spicedb-74d5b78655 is ready!
2026-05-01 13:15:41 [    INFO] [          thread-247] [deployment/relations-spicedb-spicedb] resource is ready!
2026-05-01 13:15:41 [    INFO] [          thread-248] [replicaset/kessel-relations-api-86486cf5b7] owned resource pod/kessel-relations-api-86486cf5b7-l7kks is ready!
2026-05-01 13:15:41 [    INFO] [          thread-248] [replicaset/kessel-relations-api-86486cf5b7] resource is ready!
2026-05-01 13:15:49 [    INFO] [          thread-244] [replicaset/kessel-inventory-consumer-service-67ccbf88cb] resource is ready!
2026-05-01 13:15:49 [    INFO] [          MainThread] all resources being monitored reached 'ready' state
2026-05-01 13:15:49 [    INFO] [          MainThread] successfully deployed to namespace ephemeral-kctrxa
2026-05-01 13:15:49 [    INFO] [          MainThread] namespace url: https://console-openshift-console.apps.crc-eph.r9lp.p1.openshiftapps.com/k8s/cluster/projects/ephemeral-kctrxa
2026-05-01 13:15:49 [    INFO] [          MainThread] resource usage dashboard for namespace 'ephemeral-kctrxa': https://grafana.app-sre.devshift.net/d/jRY7KLnVz?var-namespace=ephemeral-kctrxa
ephemeral-kctrxa
[INFO] Deployed to namespace: ephemeral-kctrxa
[INFO] Namespace: ephemeral-kctrxa
[INFO] Discovering DB credentials...
[INFO] DB credentials discovered (user=1hVUw3qUhhVjBQnv, db=kessel-inventory)
[INFO] Starting API port-forward (localhost:9000 -> svc/kessel-inventory-api:9000)...
[INFO] Starting DB port-forward (localhost:15432 -> svc/kessel-inventory-db:5432)...
[INFO] Waiting for port-forwards to be ready...
Forwarding from 127.0.0.1:15432 -> 5432
Forwarding from [::1]:15432 -> 5432
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000
[INFO] Port-forwards ready
[INFO] Running sanity tests...
==============================================
Handling connection for 15432
=== RUN   TestSanity_CheckBulk_MultipleHosts

────────────────────────────────────────────────────────────────────────────────
[1] TestSanity_CheckBulk_MultipleHosts
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    bulk_test.go:38: Check attempt 1: got ALLOWED_FALSE, want ALLOWED_TRUE
    bulk_test.go:38: Check returned ALLOWED_TRUE on attempt 2
    bulk_test.go:40: Check returned ALLOWED_TRUE on attempt 1
    bulk_test.go:104: CheckBulk attempt 1: results not yet consistent, retrying...
    bulk_test.go:104: CheckBulk attempt 2: results not yet consistent, retrying...
    bulk_test.go:104: CheckBulk attempt 3: results not yet consistent, retrying...
    bulk_test.go:104: CheckBulk attempt 4: results not yet consistent, retrying...
    bulk_test.go:104: CheckBulk attempt 5: results not yet consistent, retrying...
    bulk_test.go:104: CheckBulk attempt 6: results not yet consistent, retrying...
    bulk_test.go:98: CheckBulk returned expected results on attempt 7
--- PASS: TestSanity_CheckBulk_MultipleHosts (7.80s)
=== RUN   TestSanity_CheckSelf_ReturnsError

────────────────────────────────────────────────────────────────────────────────
[2] TestSanity_CheckSelf_ReturnsError
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    bulk_test.go:140: CheckSelf error (expected): rpc error: code = PermissionDenied desc = meta authorization denied
--- PASS: TestSanity_CheckSelf_ReturnsError (0.27s)
=== RUN   TestSanity_CheckSelfBulk_ReturnsError

────────────────────────────────────────────────────────────────────────────────
[3] TestSanity_CheckSelfBulk_ReturnsError
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    bulk_test.go:169: CheckSelfBulk error (expected): rpc error: code = PermissionDenied desc = meta authorization denied
--- PASS: TestSanity_CheckSelfBulk_ReturnsError (0.27s)
=== RUN   TestSanity_ReportHost_CheckAllowed

────────────────────────────────────────────────────────────────────────────────
[4] TestSanity_ReportHost_CheckAllowed
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    check_test.go:31: Check attempt 1: got ALLOWED_FALSE, want ALLOWED_TRUE
    check_test.go:31: Check returned ALLOWED_TRUE on attempt 2
--- PASS: TestSanity_ReportHost_CheckAllowed (1.51s)
=== RUN   TestSanity_Check_WrongWorkspace

────────────────────────────────────────────────────────────────────────────────
[5] TestSanity_Check_WrongWorkspace
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    check_test.go:50: Check returned ALLOWED_FALSE on attempt 1
--- PASS: TestSanity_Check_WrongWorkspace (0.33s)
=== RUN   TestSanity_DeleteHost_AccessLost

────────────────────────────────────────────────────────────────────────────────
[6] TestSanity_DeleteHost_AccessLost
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    check_test.go:66: Check attempt 1: got ALLOWED_FALSE, want ALLOWED_TRUE
    check_test.go:66: Check returned ALLOWED_TRUE on attempt 2
    check_test.go:77: Check attempt 1: got ALLOWED_TRUE, want ALLOWED_FALSE
    check_test.go:77: Check attempt 2: got ALLOWED_TRUE, want ALLOWED_FALSE
    check_test.go:77: Check attempt 3: got ALLOWED_TRUE, want ALLOWED_FALSE
    check_test.go:77: Check attempt 4: got ALLOWED_TRUE, want ALLOWED_FALSE
    check_test.go:77: Check attempt 5: got ALLOWED_TRUE, want ALLOWED_FALSE
    check_test.go:77: Check attempt 6: got ALLOWED_TRUE, want ALLOWED_FALSE
    check_test.go:77: Check attempt 7: got ALLOWED_TRUE, want ALLOWED_FALSE
    check_test.go:77: Check attempt 8: got ALLOWED_TRUE, want ALLOWED_FALSE
    check_test.go:77: Check returned ALLOWED_FALSE on attempt 9
--- PASS: TestSanity_DeleteHost_AccessLost (9.63s)
=== RUN   TestSanity_Check_Combinations
=== RUN   TestSanity_Check_Combinations/matching_workspace

────────────────────────────────────────────────────────────────────────────────
[7] TestSanity_Check_Combinations/matching_workspace
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    check_test.go:125: Check attempt 1: got ALLOWED_FALSE, want ALLOWED_TRUE
    check_test.go:125: Check returned ALLOWED_TRUE on attempt 2
=== RUN   TestSanity_Check_Combinations/non_matching_workspace

────────────────────────────────────────────────────────────────────────────────
[8] TestSanity_Check_Combinations/non_matching_workspace
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    check_test.go:125: Check returned ALLOWED_FALSE on attempt 1
=== RUN   TestSanity_Check_Combinations/no_instance_in_check_ref

────────────────────────────────────────────────────────────────────────────────
[9] TestSanity_Check_Combinations/no_instance_in_check_ref
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    check_test.go:125: Check attempt 1: got ALLOWED_FALSE, want ALLOWED_TRUE
    check_test.go:125: Check returned ALLOWED_TRUE on attempt 2
=== RUN   TestSanity_Check_Combinations/with_instance_in_check_ref

────────────────────────────────────────────────────────────────────────────────
[10] TestSanity_Check_Combinations/with_instance_in_check_ref
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    check_test.go:125: Check attempt 1: got ALLOWED_FALSE, want ALLOWED_TRUE
    check_test.go:125: Check returned ALLOWED_TRUE on attempt 2
--- PASS: TestSanity_Check_Combinations (4.52s)
    --- PASS: TestSanity_Check_Combinations/matching_workspace (1.40s)
    --- PASS: TestSanity_Check_Combinations/non_matching_workspace (0.33s)
    --- PASS: TestSanity_Check_Combinations/no_instance_in_check_ref (1.41s)
    --- PASS: TestSanity_Check_Combinations/with_instance_in_check_ref (1.38s)
=== RUN   TestSanity_CheckForUpdate_Combinations
=== RUN   TestSanity_CheckForUpdate_Combinations/matching

────────────────────────────────────────────────────────────────────────────────
[11] TestSanity_CheckForUpdate_Combinations/matching
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    checkforupdate_test.go:58: CheckForUpdate attempt 1: got ALLOWED_FALSE, want ALLOWED_TRUE
    checkforupdate_test.go:58: CheckForUpdate returned ALLOWED_TRUE on attempt 2
=== RUN   TestSanity_CheckForUpdate_Combinations/non_matching

────────────────────────────────────────────────────────────────────────────────
[12] TestSanity_CheckForUpdate_Combinations/non_matching
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    checkforupdate_test.go:58: CheckForUpdate returned ALLOWED_FALSE on attempt 1
=== RUN   TestSanity_CheckForUpdate_Combinations/after_delete

────────────────────────────────────────────────────────────────────────────────
[13] TestSanity_CheckForUpdate_Combinations/after_delete
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    checkforupdate_test.go:46: Check attempt 1: got ALLOWED_FALSE, want ALLOWED_TRUE
    checkforupdate_test.go:46: Check returned ALLOWED_TRUE on attempt 2
    checkforupdate_test.go:51: Check attempt 1: got ALLOWED_TRUE, want ALLOWED_FALSE
    checkforupdate_test.go:51: Check attempt 2: got ALLOWED_TRUE, want ALLOWED_FALSE
    checkforupdate_test.go:51: Check attempt 3: got ALLOWED_TRUE, want ALLOWED_FALSE
    checkforupdate_test.go:51: Check attempt 4: got ALLOWED_TRUE, want ALLOWED_FALSE
    checkforupdate_test.go:51: Check attempt 5: got ALLOWED_TRUE, want ALLOWED_FALSE
    checkforupdate_test.go:51: Check attempt 6: got ALLOWED_TRUE, want ALLOWED_FALSE
    checkforupdate_test.go:51: Check attempt 7: got ALLOWED_TRUE, want ALLOWED_FALSE
    checkforupdate_test.go:51: Check attempt 8: got ALLOWED_TRUE, want ALLOWED_FALSE
    checkforupdate_test.go:51: Check attempt 9: got ALLOWED_TRUE, want ALLOWED_FALSE
    checkforupdate_test.go:51: Check attempt 10: got ALLOWED_TRUE, want ALLOWED_FALSE
    checkforupdate_test.go:51: Check attempt 11: got ALLOWED_TRUE, want ALLOWED_FALSE
    checkforupdate_test.go:51: Check returned ALLOWED_FALSE on attempt 12
    checkforupdate_test.go:58: CheckForUpdate returned ALLOWED_FALSE on attempt 1
--- PASS: TestSanity_CheckForUpdate_Combinations (14.24s)
    --- PASS: TestSanity_CheckForUpdate_Combinations/matching (1.32s)
    --- PASS: TestSanity_CheckForUpdate_Combinations/non_matching (0.27s)
    --- PASS: TestSanity_CheckForUpdate_Combinations/after_delete (12.65s)
=== RUN   TestSanity_CheckForUpdateBulk_MixedResults

────────────────────────────────────────────────────────────────────────────────
[14] TestSanity_CheckForUpdateBulk_MixedResults
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    checkforupdate_test.go:82: Check attempt 1: got ALLOWED_FALSE, want ALLOWED_TRUE
    checkforupdate_test.go:82: Check returned ALLOWED_TRUE on attempt 2
--- PASS: TestSanity_CheckForUpdateBulk_MixedResults (1.38s)
=== RUN   TestSanity_ReportDeleteReReport_Revive

────────────────────────────────────────────────────────────────────────────────
[15] TestSanity_ReportDeleteReReport_Revive
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    lifecycle_test.go:26: Check attempt 1: got ALLOWED_FALSE, want ALLOWED_TRUE
    lifecycle_test.go:26: Check returned ALLOWED_TRUE on attempt 2
    lifecycle_test.go:34: Check attempt 1: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:34: Check attempt 2: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:34: Check attempt 3: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:34: Check attempt 4: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:34: Check returned ALLOWED_FALSE on attempt 5
    lifecycle_test.go:47: Check returned ALLOWED_TRUE on attempt 1
--- PASS: TestSanity_ReportDeleteReReport_Revive (5.85s)
=== RUN   TestSanity_MultiResourceChurn

────────────────────────────────────────────────────────────────────────────────
[16] TestSanity_MultiResourceChurn
────────────────────────────────────────────────────────────────────────────────
Handling connection for 9000
    lifecycle_test.go:71: Check returned ALLOWED_TRUE on attempt 1
    lifecycle_test.go:72: Check attempt 1: got ALLOWED_FALSE, want ALLOWED_TRUE
    lifecycle_test.go:72: Check returned ALLOWED_TRUE on attempt 2
    lifecycle_test.go:73: Check returned ALLOWED_TRUE on attempt 1
    lifecycle_test.go:82: Check attempt 1: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:82: Check attempt 2: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:82: Check attempt 3: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:82: Check attempt 4: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:82: Check attempt 5: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:82: Check attempt 6: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:82: Check attempt 7: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:82: Check attempt 8: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:82: Check returned ALLOWED_FALSE on attempt 9
    lifecycle_test.go:93: Check attempt 1: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:93: Check attempt 2: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:93: Check attempt 3: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:93: Check attempt 4: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:93: Check attempt 5: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:93: Check attempt 6: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:93: Check attempt 7: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:93: Check attempt 8: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:93: Check attempt 9: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:93: Check attempt 10: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:93: Check returned ALLOWED_FALSE on attempt 11
    lifecycle_test.go:94: Check attempt 1: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:94: Check attempt 2: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:94: Check attempt 3: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:94: Check attempt 4: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:94: Check attempt 5: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:94: Check attempt 6: got ALLOWED_TRUE, want ALLOWED_FALSE
    lifecycle_test.go:94: Check returned ALLOWED_FALSE on attempt 7
--- PASS: TestSanity_MultiResourceChurn (26.50s)
PASS

════════════════════════════════════════════════════════════════════════════════
  SANITY TEST REPORT
════════════════════════════════════════════════════════════════════════════════

▸ [1] TestSanity_CheckBulk_MultipleHosts  [PASS]
  1. ReportResource host/bulk-host-a-1777655756099480000 (hbi, workspace=ws-bulk-a-1777655756099499000)
     DB: ver=0 gen=0 tombstone=false
  2. ReportResource host/bulk-host-b-1777655756099493000 (hbi, workspace=ws-bulk-b-1777655756099505000)
     DB: ver=0 gen=0 tombstone=false
  3. Check host/bulk-host-a-1777655756099480000 (workspace=ws-bulk-a-1777655756099499000)
     → ALLOWED_TRUE (attempt 2)
  4. Check host/bulk-host-b-1777655756099493000 (workspace=ws-bulk-b-1777655756099505000)
     → ALLOWED_TRUE (attempt 1)
  5. CheckBulk (3 items)
     → host/bulk-host-a-1777655756099480000+wsA=ALLOWED_TRUE, host/bulk-host-b-1777655756099493000+wsB=ALLOWED_TRUE, host/bulk-host-a-1777655756099480000+wsWrong=ALLOWED_FALSE
  6. DeleteResource host/bulk-host-b-1777655756099493000 (hbi)
     DB: ver=1 gen=0 tombstone=true
  7. DeleteResource host/bulk-host-a-1777655756099480000 (hbi)
     DB: ver=1 gen=0 tombstone=true

▸ [2] TestSanity_CheckSelf_ReturnsError  [PASS]
  1. ReportResource host/self-host-1777655763899079000 (hbi, workspace=ws-self-1777655763899100000)
     DB: ver=0 gen=0 tombstone=false
  2. CheckSelf (expected error)
     → error: rpc error: code = PermissionDenied desc = meta authorization denied
  3. DeleteResource host/self-host-1777655763899079000 (hbi)
     DB: ver=1 gen=0 tombstone=true

▸ [3] TestSanity_CheckSelfBulk_ReturnsError  [PASS]
  1. ReportResource host/selfbulk-host-1777655764172857000 (hbi, workspace=ws-selfbulk-1777655764172872000)
     DB: ver=0 gen=0 tombstone=false
  2. CheckSelfBulk (expected error)
     → error: rpc error: code = PermissionDenied desc = meta authorization denied
  3. DeleteResource host/selfbulk-host-1777655764172857000 (hbi)
     DB: ver=1 gen=0 tombstone=true

▸ [4] TestSanity_ReportHost_CheckAllowed  [PASS]
  1. ReportResource host/sanity-host-1777655764441784000 (hbi, workspace=ws-1777655764441799000)
     DB: ver=0 gen=0 tombstone=false
  2. Check host/sanity-host-1777655764441784000 (workspace=ws-1777655764441799000)
     → ALLOWED_TRUE (attempt 2)
  3. DeleteResource host/sanity-host-1777655764441784000 (hbi)
     DB: ver=1 gen=0 tombstone=true

▸ [5] TestSanity_Check_WrongWorkspace  [PASS]
  1. ReportResource host/sanity-host-wrong-1777655765952329000 (hbi, workspace=ws-right-1777655765952356000)
     DB: ver=0 gen=0 tombstone=false
  2. Check host/sanity-host-wrong-1777655765952329000 (workspace=ws-wrong-1777655765952380000)
     → ALLOWED_FALSE (attempt 1)
  3. DeleteResource host/sanity-host-wrong-1777655765952329000 (hbi)
     DB: ver=1 gen=0 tombstone=true

▸ [6] TestSanity_DeleteHost_AccessLost  [PASS]
  1. ReportResource host/sanity-del-host-1777655766284955000 (hbi, workspace=ws-1777655766284981000)
     DB: ver=0 gen=0 tombstone=false
  2. Check host/sanity-del-host-1777655766284955000 (workspace=ws-1777655766284981000)
     → ALLOWED_TRUE (attempt 2)
  3. DeleteResource host/sanity-del-host-1777655766284955000 (hbi)
     DB: ver=1 gen=0 tombstone=true
  4. Check host/sanity-del-host-1777655766284955000 (workspace=ws-1777655766284981000)
     → ALLOWED_FALSE (attempt 9)

▸ [7] TestSanity_Check_Combinations/matching_workspace  [PASS]
  1. ReportResource host/combo-matching_workspace-1777655775916499000 (hbi, workspace=ws-a-1777655775916581000)
     DB: ver=0 gen=0 tombstone=false
  2. Check host/combo-matching_workspace-1777655775916499000 (workspace=ws-a-1777655775916581000)
     → ALLOWED_TRUE (attempt 2)
  3. DeleteResource host/combo-matching_workspace-1777655775916499000 (hbi)
     DB: ver=1 gen=0 tombstone=true

▸ [8] TestSanity_Check_Combinations/non_matching_workspace  [PASS]
  1. ReportResource host/combo-non_matching_workspace-1777655777313916000 (hbi, workspace=ws-a-1777655777313917000)
     DB: ver=0 gen=0 tombstone=false
  2. Check host/combo-non_matching_workspace-1777655777313916000 (workspace=ws-b-1777655777313957000)
     → ALLOWED_FALSE (attempt 1)
  3. DeleteResource host/combo-non_matching_workspace-1777655777313916000 (hbi)
     DB: ver=1 gen=0 tombstone=true

▸ [9] TestSanity_Check_Combinations/no_instance_in_check_ref  [PASS]
  1. ReportResource host/combo-no_instance_in_check_ref-1777655777642926000 (hbi, workspace=ws-f-1777655777642927000)
     DB: ver=0 gen=0 tombstone=false
  2. Check host/combo-no_instance_in_check_ref-1777655777642926000 (workspace=ws-f-1777655777642927000)
     → ALLOWED_TRUE (attempt 2)
  3. DeleteResource host/combo-no_instance_in_check_ref-1777655777642926000 (hbi)
     DB: ver=1 gen=0 tombstone=true

▸ [10] TestSanity_Check_Combinations/with_instance_in_check_ref  [PASS]
  1. ReportResource host/combo-with_instance_in_check_ref-1777655779050056000 (hbi, workspace=ws-g-1777655779050057000)
     DB: ver=0 gen=0 tombstone=false
  2. Check host/combo-with_instance_in_check_ref-1777655779050056000 (workspace=ws-g-1777655779050057000)
     → ALLOWED_TRUE (attempt 2)
  3. DeleteResource host/combo-with_instance_in_check_ref-1777655779050056000 (hbi)
     DB: ver=1 gen=0 tombstone=true

▸ [11] TestSanity_CheckForUpdate_Combinations/matching  [PASS]
  1. ReportResource host/cfu-matching-1777655780435256000 (hbi, workspace=ws-cfu-a-1777655780435274000)
     DB: ver=0 gen=0 tombstone=false
  2. CheckForUpdate host/cfu-matching-1777655780435256000 (workspace=ws-cfu-a-1777655780435274000)
     → ALLOWED_TRUE (attempt 2) +token
  3. DeleteResource host/cfu-matching-1777655780435256000 (hbi)
     DB: ver=1 gen=0 tombstone=true

▸ [12] TestSanity_CheckForUpdate_Combinations/non_matching  [PASS]
  1. ReportResource host/cfu-non_matching-1777655781752447000 (hbi, workspace=ws-cfu-a-1777655781752447000)
     DB: ver=0 gen=0 tombstone=false
  2. CheckForUpdate host/cfu-non_matching-1777655781752447000 (workspace=ws-cfu-b-1777655781752470000)
     → ALLOWED_FALSE (attempt 1) +token
  3. DeleteResource host/cfu-non_matching-1777655781752447000 (hbi)
     DB: ver=1 gen=0 tombstone=true

▸ [13] TestSanity_CheckForUpdate_Combinations/after_delete  [PASS]
  1. ReportResource host/cfu-after_delete-1777655782023579000 (hbi, workspace=ws-cfu-d-1777655782023579000)
     DB: ver=0 gen=0 tombstone=false
  2. Check host/cfu-after_delete-1777655782023579000 (workspace=ws-cfu-d-1777655782023579000)
     → ALLOWED_TRUE (attempt 2)
  3. DeleteResource host/cfu-after_delete-1777655782023579000 (hbi)
     DB: ver=1 gen=0 tombstone=true
  4. Check host/cfu-after_delete-1777655782023579000 (workspace=ws-cfu-d-1777655782023579000)
     → ALLOWED_FALSE (attempt 12)
  5. CheckForUpdate host/cfu-after_delete-1777655782023579000 (workspace=ws-cfu-d-1777655782023579000)
     → ALLOWED_FALSE (attempt 1) +token

▸ [14] TestSanity_CheckForUpdateBulk_MixedResults  [PASS]
  1. ReportResource host/cfubulk-1777655794677303000 (hbi, workspace=ws-cfubulk-a-1777655794677346000)
     DB: ver=0 gen=0 tombstone=false
  2. Check host/cfubulk-1777655794677303000 (workspace=ws-cfubulk-a-1777655794677346000)
     → ALLOWED_TRUE (attempt 2)
  3. CheckForUpdateBulk (2 items)
     → host/cfubulk-1777655794677303000+wsA=ALLOWED_TRUE, host/cfubulk-1777655794677303000+wsB=ALLOWED_FALSE
  4. DeleteResource host/cfubulk-1777655794677303000 (hbi)
     DB: ver=1 gen=0 tombstone=true

▸ [15] TestSanity_ReportDeleteReReport_Revive  [PASS]
  1. ReportResource host/revive-host-1777655796056804000 (hbi, workspace=ws-revive-1777655796056848000)
     DB: ver=0 gen=0 tombstone=false
  2. Check host/revive-host-1777655796056804000 (workspace=ws-revive-1777655796056848000)
     → ALLOWED_TRUE (attempt 2)
  3. DeleteResource host/revive-host-1777655796056804000 (hbi)
     DB: ver=1 gen=0 tombstone=true
  4. Check host/revive-host-1777655796056804000 (workspace=ws-revive-1777655796056848000)
     → ALLOWED_FALSE (attempt 5)
  5. ReportResource host/revive-host-1777655796056804000 (hbi, workspace=ws-revive-1777655796056848000)
     DB: ver=0 gen=1 tombstone=false
  6. Check host/revive-host-1777655796056804000 (workspace=ws-revive-1777655796056848000)
     → ALLOWED_TRUE (attempt 1)
  7. DeleteResource host/revive-host-1777655796056804000 (hbi)
     DB: ver=1 gen=1 tombstone=true

▸ [16] TestSanity_MultiResourceChurn  [PASS]
  1. ReportResource host/churn-a-1777655801905945000 (hbi, workspace=ws-churn-1777655801906023000)
     DB: ver=0 gen=0 tombstone=false
  2. ReportResource host/churn-b-1777655801905971000 (hbi, workspace=ws-churn-1777655801906023000)
     DB: ver=0 gen=0 tombstone=false
  3. ReportResource host/churn-c-1777655801905997000 (hbi, workspace=ws-churn-1777655801906023000)
     DB: ver=0 gen=0 tombstone=false
  4. Check host/churn-a-1777655801905945000 (workspace=ws-churn-1777655801906023000)
     → ALLOWED_TRUE (attempt 1)
  5. Check host/churn-b-1777655801905971000 (workspace=ws-churn-1777655801906023000)
     → ALLOWED_TRUE (attempt 2)
  6. Check host/churn-c-1777655801905997000 (workspace=ws-churn-1777655801906023000)
     → ALLOWED_TRUE (attempt 1)
  7. DeleteResource host/churn-b-1777655801905971000 (hbi)
     DB: ver=1 gen=0 tombstone=true
  8. Check host/churn-b-1777655801905971000 (workspace=ws-churn-1777655801906023000)
     → ALLOWED_FALSE (attempt 9)
  9. DeleteResource host/churn-a-1777655801905945000 (hbi)
     DB: ver=1 gen=0 tombstone=true
  10. DeleteResource host/churn-c-1777655801905997000 (hbi)
     DB: ver=1 gen=0 tombstone=true
  11. Check host/churn-a-1777655801905945000 (workspace=ws-churn-1777655801906023000)
     → ALLOWED_FALSE (attempt 11)
  12. Check host/churn-c-1777655801905997000 (workspace=ws-churn-1777655801906023000)
     → ALLOWED_FALSE (attempt 7)

────────────────────────────────────────────────────────────────────────────────
  Total: 16  |  Passed: 16  |  Failed: 0
────────────────────────────────────────────────────────────────────────────────

ok  	github.com/project-kessel/inventory-api/test/e2e/sanity	73.842s
==============================================
[INFO] All sanity tests PASSED
[INFO] Cleaning up...
[INFO] Cleanup complete

reporterType := cmd.ReporterType.String()

if resourceType == "" {
if cmd.ResourceType.String() == "" {
Contributor Author


We do not need to do this validation since this check is already done during construction of the type. That is the advantage of using the type.
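The point being made here, validation happens once at construction, is the core of the tiny-type pattern this PR applies. A minimal sketch in Go, with hypothetical names (the real constructor lives in internal/biz/model and may differ):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// ReporterType sketches the tiny-type pattern: the value is validated once,
// in the constructor, so downstream code such as resource_service no longer
// needs its own empty-string checks.
type ReporterType struct {
	value string
}

var ErrEmptyReporterType = errors.New("reporter type cannot be empty")

// NewReporterType is a hypothetical constructor: reject the empty string up
// front so every ReporterType in circulation is known to be valid.
func NewReporterType(s string) (ReporterType, error) {
	trimmed := strings.TrimSpace(s)
	if trimmed == "" {
		return ReporterType{}, ErrEmptyReporterType
	}
	return ReporterType{value: trimmed}, nil
}

func (r ReporterType) String() string { return r.value }

func main() {
	if _, err := NewReporterType("  "); err != nil {
		fmt.Println("rejected:", err)
	}
	rt, _ := NewReporterType("hbi")
	fmt.Println(rt) // fmt uses String(): prints "hbi"
}
```

Because the zero value is unreachable through the constructor, the `if cmd.ResourceType.String() == ""` check removed in this thread is redundant by construction.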

@snehagunta snehagunta force-pushed the RHCLOUD-45005-tiny-types-resource-reporter-type branch from 5d2172e to 25be5b7 Compare May 1, 2026 17:35
Comment thread internal/data/schema_inmemory_test.go Outdated
bizmodel "github.com/project-kessel/inventory-api/internal/biz/model"
)

func dRT(s string) bizmodel.ResourceType {
Contributor


nitpick: both of these method names can be a bit verbose, which isn't great for readability

@Rajagopalan-Ranganathan
Contributor

/lgtm - one nitpick comment, and if time permits I'll do another review in detail.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
internal/data/schema_inmemory.go (1)

446-455: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Normalize the resource segment in JSON keys before matching.

The current implementation compares NormalizeResourceType(rt) + ":" against the raw jsonKey. If the JSON cache contains non-canonical keys like "RHEL/Host:hbi" or "K8S_CLUSTER:hbi", the prefix check will fail because NormalizeResourceType returns lowercase with underscores (e.g., "rhel_host:"), which won't match the raw key.

Parse and normalize the resource segment from jsonKey before comparison.

🔧 Proposed fix
 func findResourceTypeFromJsonKey(jsonKey string, resourceTypes []bizmodel.ResourceType) (bizmodel.ResourceType, bool) {
+	resourceSegment, _, ok := strings.Cut(jsonKey, ":")
+	if !ok {
+		return bizmodel.ResourceType(""), false
+	}
+	normalizedSegment := NormalizeResourceType(bizmodel.DeserializeResourceType(resourceSegment))
+
 	for _, rt := range resourceTypes {
-		prefix := NormalizeResourceType(rt) + ":"
-		if strings.HasPrefix(jsonKey, prefix) {
+		if NormalizeResourceType(rt) == normalizedSegment {
 			return rt, true
 		}
 	}

 	return bizmodel.ResourceType(""), false
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/data/schema_inmemory.go` around lines 446 - 455, The prefix check
should normalize the resource segment from jsonKey before comparing; instead of
checking strings.HasPrefix(jsonKey, NormalizeResourceType(rt)+":"), split
jsonKey at the first ':' to extract the resource segment (e.g., seg :=
strings.SplitN(jsonKey, ":", 2)[0]), convert that segment into a
bizmodel.ResourceType and run it through NormalizeResourceType (segNorm :=
NormalizeResourceType(bizmodel.ResourceType(seg))), then inside the loop compare
segNorm == NormalizeResourceType(rt) and return rt when equal; if there is no
':' or no match, return the zero value and false as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cmd/schema/schema.go`:
- Around line 121-125: The code incorrectly calls bizmodel.NewResourceType for
reporter directories; replace the constructor call with
bizmodel.NewReporterType(reporter.Name()) and then normalize appropriately
(either add/use a NormalizeReporterType in the data package or pass the reporter
type's string to the existing data.NormalizeResourceType if reporter
normalization is equivalent). Update the variable name (rpt → rptr or similar)
to reflect a reporter type and ensure subsequent code uses the
reporter-normalized value instead of data.NormalizeResourceType(rpt) if you add
a distinct data.NormalizeReporterType function.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: f6c7ab4d-8e8e-483c-ada5-3a401963d347

📥 Commits

Reviewing files that changed from the base of the PR and between 588fec3 and d4c5b94.

📒 Files selected for processing (5)
  • cmd/schema/schema.go
  • internal/biz/usecase/resources/resource_service.go
  • internal/biz/usecase/resources/resource_service_test.go
  • internal/data/schema_inmemory.go
  • internal/data/schema_inmemory_test.go
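The normalization fix CodeRabbit proposes above can be sketched as a standalone program. Everything here is a simplified stand-in: `ResourceType` plays the role of the bizmodel tiny type, and `normalizeResourceType` approximates the repository's canonical form (lowercase, slashes to underscores); the real functions live in internal/data and internal/biz/model.

```go
package main

import (
	"fmt"
	"strings"
)

// ResourceType stands in for the bizmodel tiny type.
type ResourceType string

// normalizeResourceType approximates the canonical form described in the
// review, e.g. "RHEL/Host" -> "rhel_host".
func normalizeResourceType(rt ResourceType) string {
	return strings.ToLower(strings.ReplaceAll(string(rt), "/", "_"))
}

// findResourceTypeFromJsonKey normalizes the resource segment of the JSON
// key before comparing, so a non-canonical key like "RHEL/Host:hbi" still
// matches the canonical "rhel_host" entry.
func findResourceTypeFromJsonKey(jsonKey string, resourceTypes []ResourceType) (ResourceType, bool) {
	segment, _, ok := strings.Cut(jsonKey, ":")
	if !ok {
		return ResourceType(""), false
	}
	normalized := normalizeResourceType(ResourceType(segment))
	for _, rt := range resourceTypes {
		if normalizeResourceType(rt) == normalized {
			return rt, true
		}
	}
	return ResourceType(""), false
}

func main() {
	types := []ResourceType{"rhel_host", "k8s_cluster"}
	rt, ok := findResourceTypeFromJsonKey("RHEL/Host:hbi", types)
	fmt.Println(rt, ok) // prints "rhel_host true"
}
```

With the raw prefix check, the same lookup would fail, since `"RHEL/Host:hbi"` never starts with `"rhel_host:"`.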

Comment thread cmd/schema/schema.go Outdated
ReporterType() string
ResourceType() ResourceType
ReporterType() ReporterType
ReporterInstanceId() string
Contributor Author


These two need to be updated too. I'll do it in a follow-on PR.

@snehagunta snehagunta force-pushed the RHCLOUD-45005-tiny-types-resource-reporter-type branch 2 times, most recently from 9e36cde to 614266e Compare May 1, 2026 21:05
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
internal/service/resources/kesselinventoryservice_test.go (1)

3947-3948: 💤 Low value

Consider adding t.Helper() to newFakeSchemaRepository.

buildReporterResourceKey (line 3146) already calls t.Helper(). Adding it here would make require.NoError failures report the callsite (e.g., newTestUsecase) rather than the line inside this helper, which aids debugging.

♻️ Proposed change
 func newFakeSchemaRepository(t *testing.T) model.SchemaRepository {
+	t.Helper()
 	schemaRepository := data.NewInMemorySchemaRepository()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/resources/kesselinventoryservice_test.go` around lines 3947
- 3948, Add t.Helper() to the test helper newFakeSchemaRepository so test
failures inside it are reported at the caller site; locate the function
newFakeSchemaRepository(t *testing.T) in
internal/service/resources/kesselinventoryservice_test.go and add a single call
to t.Helper() at the top of that function before creating the schemaRepository
(so require.NoError and similar assertions point to the calling test like
newTestUsecase rather than inside the helper).
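A minimal sketch of what `t.Helper()` buys, with `newFakeThing` as a hypothetical stand-in for a helper like newFakeSchemaRepository (the zero-value `testing.T` in `main` only makes the sketch runnable outside `go test`):

```go
package main

import (
	"fmt"
	"testing"
)

// newFakeThing is a hypothetical stand-in for a test helper. The t.Helper()
// call marks this function as a helper, so a failure raised inside it
// (t.Fatal below) is reported at the caller's line, where the useful
// debugging context is, rather than at a line inside this function.
func newFakeThing(t *testing.T, ready bool) string {
	t.Helper()
	if !ready {
		t.Fatal("fake thing not ready")
	}
	return "thing"
}

func main() {
	// Under `go test` this helper would receive the real *testing.T; a
	// zero-value T is enough to exercise the happy path here.
	t := &testing.T{}
	fmt.Println(newFakeThing(t, true)) // prints "thing"
}
```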

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 671329fc-3531-4618-a476-a8bf0dd0ddc2

📥 Commits

Reviewing files that changed from the base of the PR and between 614266e and b009ce1.

📒 Files selected for processing (4)
  • cmd/schema/schema.go
  • internal/data/schema_inmemory.go
  • internal/data/schema_inmemory_test.go
  • internal/service/resources/kesselinventoryservice_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • internal/data/schema_inmemory.go
  • internal/data/schema_inmemory_test.go

@snehagunta snehagunta force-pushed the RHCLOUD-45005-tiny-types-resource-reporter-type branch 2 times, most recently from 048f435 to 9aba36c Compare May 1, 2026 22:45
… and events

Replace raw string usage of resource type and reporter type with
domain-specific tiny types throughout the schema repository, schema
service, resource events, outbox events, and related tests.

- Update SchemaRepository interface and InMemorySchemaRepository to use
  ResourceType/ReporterType parameters and return types
- Update ResourceEvent interface to return ResourceType/ReporterType
- Update SchemaService to accept ResourceType/ReporterType
- Update resource_service validation to pass types directly
- Add NormalizeReporterType matching existing NormalizeResourceType
- Convert loadResourceSchema/loadCommonResourceDataSchema to accept
  tiny types
- Update cmd/schema to use proper constructors for both types
- Use .String() at external boundaries (event serialization, map keys,
  file paths, error messages)

Co-authored-by: Cursor <cursoragent@cursor.com>
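The commit's ".String() at external boundaries" rule can be sketched as follows. The event and payload shapes are illustrative, not the real ResourceReportEvent or outbox schema: typed getters carry the tiny types internally, and conversion to strings happens only at serialization.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ResourceType and ReporterType stand in for the bizmodel tiny types.
type ResourceType string
type ReporterType string

func (r ResourceType) String() string { return string(r) }
func (r ReporterType) String() string { return string(r) }

// ResourceReportEvent is an illustrative event: its getters return the
// typed values, not raw strings.
type ResourceReportEvent struct {
	resourceType ResourceType
	reporterType ReporterType
}

func (e ResourceReportEvent) ResourceType() ResourceType { return e.resourceType }
func (e ResourceReportEvent) ReporterType() ReporterType { return e.reporterType }

// toOutboxPayload calls .String() on the typed getters only at the
// serialization boundary.
func toOutboxPayload(e ResourceReportEvent) ([]byte, error) {
	return json.Marshal(map[string]string{
		"resource_type": e.ResourceType().String(),
		"reporter_type": e.ReporterType().String(),
	})
}

func main() {
	b, _ := toOutboxPayload(ResourceReportEvent{"host", "hbi"})
	fmt.Println(string(b)) // prints {"reporter_type":"hbi","resource_type":"host"}
}
```

Keeping the typed values until the last moment means the compiler, not ad-hoc string checks, enforces which kind of identifier flows where.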
@snehagunta snehagunta force-pushed the RHCLOUD-45005-tiny-types-resource-reporter-type branch from 9aba36c to 7ad9481 Compare May 1, 2026 22:47
@snehagunta
Contributor Author

Sanity Test Results — All 16/16 PASSED

Namespace: ephemeral-ks5jcq
Branch: RHCLOUD-45005-tiny-types-resource-reporter-type
Date: 2026-05-01

=== RUN   TestSanity_CheckBulk_MultipleHosts
--- PASS: TestSanity_CheckBulk_MultipleHosts (9.87s)
=== RUN   TestSanity_CheckSelf_ReturnsError
--- PASS: TestSanity_CheckSelf_ReturnsError (0.29s)
=== RUN   TestSanity_CheckSelfBulk_ReturnsError
--- PASS: TestSanity_CheckSelfBulk_ReturnsError (0.27s)
=== RUN   TestSanity_ReportHost_CheckAllowed
--- PASS: TestSanity_ReportHost_CheckAllowed (0.44s)
=== RUN   TestSanity_Check_WrongWorkspace
--- PASS: TestSanity_Check_WrongWorkspace (0.34s)
=== RUN   TestSanity_DeleteHost_AccessLost
--- PASS: TestSanity_DeleteHost_AccessLost (5.53s)
=== RUN   TestSanity_Check_Combinations
=== RUN   TestSanity_Check_Combinations/matching_workspace
=== RUN   TestSanity_Check_Combinations/non_matching_workspace
=== RUN   TestSanity_Check_Combinations/no_instance_in_check_ref
=== RUN   TestSanity_Check_Combinations/with_instance_in_check_ref
--- PASS: TestSanity_Check_Combinations (4.51s)
    --- PASS: TestSanity_Check_Combinations/matching_workspace (1.41s)
    --- PASS: TestSanity_Check_Combinations/non_matching_workspace (0.33s)
    --- PASS: TestSanity_Check_Combinations/no_instance_in_check_ref (1.38s)
    --- PASS: TestSanity_Check_Combinations/with_instance_in_check_ref (1.38s)
=== RUN   TestSanity_CheckForUpdate_Combinations
=== RUN   TestSanity_CheckForUpdate_Combinations/matching
=== RUN   TestSanity_CheckForUpdate_Combinations/non_matching
=== RUN   TestSanity_CheckForUpdate_Combinations/after_delete
--- PASS: TestSanity_CheckForUpdate_Combinations (8.10s)
    --- PASS: TestSanity_CheckForUpdate_Combinations/matching (1.32s)
    --- PASS: TestSanity_CheckForUpdate_Combinations/non_matching (0.27s)
    --- PASS: TestSanity_CheckForUpdate_Combinations/after_delete (6.52s)
=== RUN   TestSanity_CheckForUpdateBulk_MixedResults
--- PASS: TestSanity_CheckForUpdateBulk_MixedResults (1.38s)
=== RUN   TestSanity_ReportDeleteReReport_Revive
--- PASS: TestSanity_ReportDeleteReReport_Revive (5.86s)
=== RUN   TestSanity_MultiResourceChurn
--- PASS: TestSanity_MultiResourceChurn (19.97s)
PASS
ok  	github.com/project-kessel/inventory-api/test/e2e/sanity	62.483s
