
Leaderboard: dbasunag/opendatahub-tests (69.9/100 - Silver)#301

Merged
kami619 merged 1 commit into ambient-code:main from
dbasunag:leaderboard-dbasunag-opendatahub-tests-2026-02-18T19-35-35
Feb 19, 2026


Conversation

@dbasunag (Contributor) commented Feb 18, 2026

Leaderboard Submission

Repository: dbasunag/opendatahub-tests
Score: 69.9/100
Tier: Silver
Submitted by: @dbasunag

Validation Checklist

  • Repository exists and is public
  • Submitter has commit access
  • Assessment re-run passes (±2 points tolerance)
  • JSON schema valid

Automated validation will run on this PR.


Submitted via agentready submit command.

@github-actions (Contributor) commented:

📈 Test Coverage Report

Branch Coverage
- This PR: 66.0%
- Main: 66.0%
- Diff: ✅ +0%

Coverage calculated from unit tests only

@kami619 (Collaborator) commented Feb 19, 2026

@dbasunag can you confirm the number of findings that you excluded from your `agentready assess` run?

The CI is probably complaining about this: `findings: [... list of 22 findings ...]` is too short. Logged #309 to track this.

@kami619 (Collaborator) commented Feb 19, 2026

@jeremyeder WDYT is the best path forward here?

@jeremyeder (Contributor) commented:

We want the most impactful checks; the number 25 isn't special. If we find that only 10 really matter, then that is success.

@dbasunag (Contributor, Author) commented:

I ran the command like this:

`agentready assess . --exclude architecture_decisions --exclude openapi_specs --exclude test_coverage`

None of these are relevant for a downstream test repository.
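The arithmetic behind the CI complaint can be made explicit (a minimal sketch; the 25-attribute baseline is taken from the schema's original `const: 25` constraint, not from the assessor's own output):

```python
# Three attributes were excluded on the command line, so the report carries
# 22 findings instead of the 25 the schema originally required.
excluded = ["architecture_decisions", "openapi_specs", "test_coverage"]
findings_count = 25 - len(excluded)
print(findings_count)  # 22, matching the count the CI flagged as too short
```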

kami619 pushed a commit to kami619/agentready that referenced this pull request Feb 19, 2026
Relaxes the JSON schema constraints to accept assessments with 10-25
attributes instead of exactly 25. This enables users who run
`agentready assess --exclude` to submit valid assessments to the
leaderboard.

Changes:
- `attributes_total`: changed from `const: 25` to `minimum: 10, maximum: 25`
- `findings`: changed from `minItems: 25, maxItems: 25` to `minItems: 10, maxItems: 25`

Added regression tests to verify:
- Assessments with 10 attributes validate (minimum boundary)
- Assessments with 22 attributes validate (PR ambient-code#301 case)
- Assessments with fewer than 10 attributes are rejected

Fixes ambient-code#309
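The relaxed bounds can be restated in plain Python (a minimal stdlib sketch of the schema change, not the actual jsonschema document or validator):

```python
def counts_in_bounds(report: dict) -> bool:
    """Check the relaxed bounds from the fix: 10..25 attributes/findings
    instead of exactly 25 (sketch only, not the real leaderboard validator)."""
    total = report["attributes_total"]
    return 10 <= total <= 25 and 10 <= len(report["findings"]) <= 25

# The PR #301 case: a 22-finding assessment now validates.
assert counts_in_bounds({"attributes_total": 22, "findings": [{}] * 22})
# Fewer than 10 attributes are still rejected.
assert not counts_in_bounds({"attributes_total": 9, "findings": [{}] * 9})
```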
@kami619 kami619 merged commit be7a55f into ambient-code:main Feb 19, 2026
11 of 12 checks passed
github-actions Bot pushed a commit that referenced this pull request Feb 20, 2026
# [2.29.0](v2.28.2...v2.29.0) (2026-02-20)

### Features

* add dbasunag/opendatahub-tests to leaderboard ([#301](#301)) ([be7a55f](be7a55f))
* add opendatahub-io/opendatahub-tests to leaderboard ([#314](#314)) ([7a52466](7a52466))
@github-actions (Contributor) commented:

🎉 This PR is included in version 2.29.0 🎉

The release is available on GitHub.

Your semantic-release bot 📦🚀

kami619 added a commit that referenced this pull request Feb 23, 2026
PR #301 submitted an assessment for dbasunag's fork instead of the
official opendatahub-io/opendatahub-tests repository. The correct
submission was made in PR #314.

This removes the incorrect submission to avoid duplicate/confusing
entries in the leaderboard.

Co-authored-by: Ambient Code Bot <bot@ambient-code.local>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
github-actions Bot pushed a commit that referenced this pull request Feb 23, 2026
## [2.29.2](v2.29.1...v2.29.2) (2026-02-23)

### Bug Fixes

* **cli:** check .pre-commit-config.yaml for conventional commit ([#310](#310)) ([61c44d9](61c44d9))
* remove incorrect dbasunag/opendatahub-tests submission ([#321](#321)) ([e6aecf8](e6aecf8)), closes [#301](#301)
kami619 added a commit that referenced this pull request Feb 24, 2026
* fix(schema): allow assessments with excluded attributes

Relaxes the JSON schema constraints to accept assessments with 10-25
attributes instead of exactly 25. This enables users who run
`agentready assess --exclude` to submit valid assessments to the
leaderboard.

Changes:
- `attributes_total`: changed from `const: 25` to `minimum: 10, maximum: 25`
- `findings`: changed from `minItems: 25, maxItems: 25` to `minItems: 10, maxItems: 25`

Added regression tests to verify:
- Assessments with 10 attributes validate (minimum boundary)
- Assessments with 22 attributes validate (PR #301 case)
- Assessments with fewer than 10 attributes are rejected

Fixes #309

* fix(schema): add cross-field validation to validate-report

Add programmatic cross-field validation to SchemaValidator.validate_report()
to enforce constraints that JSON Schema draft-07 cannot express:
- len(findings) == attributes_total
- attributes_assessed + attributes_skipped == attributes_total

These constraints are documented in specs/001-agentready-scorer/data-model.md
but were only enforced internally by the Assessment model. External reports
submitted via validate-report bypassed this validation.

Addresses PR #312 review comment about cross-field consistency.
Fixes #309

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Ambient Code Bot <bot@ambient-code.local>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
github-actions Bot pushed a commit that referenced this pull request Feb 24, 2026
## [2.29.3](v2.29.2...v2.29.3) (2026-02-24)

### Bug Fixes

* **schema:** allow assessments with excluded attributes ([#312](#312)) ([81b999f](81b999f)), closes [#301](#301) [#309](#309) [#309](#309)