feat: add odh-dashboard results#342

Merged
kami619 merged 1 commit into ambient-code:main from rsun19:submit-odh-dashboard-results on Mar 18, 2026

Conversation

Contributor

@rsun19 rsun19 commented Mar 18, 2026

Description

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Refactoring (no functional changes)
  • Performance improvement
  • Test coverage improvement

Related Issues

Fixes #
Relates to #

Changes Made

Testing

  • Unit tests pass (pytest)
  • Integration tests pass
  • Manual testing performed
  • No new warnings or errors

Checklist

  • My code follows the project's code style
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published

Screenshots (if applicable)

Additional Notes


coderabbitai Bot commented Mar 18, 2026

Warning

.coderabbit.yaml has a parsing error

The CodeRabbit configuration file in this repository has a parsing error and default settings were used instead. Please fix the error(s) in the configuration file. You can initialize chat with CodeRabbit to get help with the configuration file.

💥 Parsing errors (1)
Validation error: String must contain at most 250 character(s) at "tone_instructions"
⚙️ Configuration instructions
  • Please see the configuration documentation for more information.
  • You can also validate your configuration using the online YAML validator.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Walkthrough

An assessment tool re-ran evaluation checks on a repository, updating the assessment report. The overall certification score improved from 42.8 to 76.5 (Bronze to Gold level), with multiple attributes transitioning from fail to pass status and supporting evidence updated accordingly.

Changes

Cohort / File(s): Assessment Report Update (submissions/opendatahub-io/odh-dashboard/assessment-20260318-191501.json)
Summary: Updated assessment metadata including agentready version (2.23.0→2.29.6), timestamp, and repository commit information; certification level upgraded from Bronze to Gold; multiple attributes changed from fail to pass (CLAUDE.md, README.md, repository structure, ADRs); evidence and remediation blocks updated throughout; assessment duration reduced from 85.2 to 27.3 seconds.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Description check: ❓ Inconclusive. The description is a template with no substantive content beyond checkbox selections, making it vague and uninformative about the actual changes made to the codebase. Resolution: fill in the 'Changes Made' section with specific details about what was added, updated, or modified; provide relevant issue links if applicable.
✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The title 'feat: add odh-dashboard results' clearly and directly reflects the main change: adding assessment results for the odh-dashboard repository to the submissions.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate; docstring coverage check skipped.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
submissions/opendatahub-io/odh-dashboard/assessment-20260318-191501.json (1)

89-133: ⚠️ Potential issue | 🟠 Major

The standard-layout finding is using the wrong ecosystem heuristic for this repo.

This block tells odh-dashboard to add src/mypackage/ and __init__.py, and cites PyPA layout guidance, even though this same report identifies the repo as primarily TypeScript/Go in Lines 19-25 and later references source under packages/... in Lines 596-597. That makes the failure and remediation misleading for a non-Python monorepo. The layout check needs language-aware rules here instead of Python package scaffolding.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@submissions/opendatahub-io/odh-dashboard/assessment-20260318-191501.json`
around lines 89 - 133, The report's standard-layout check is applying
Python-specific remediation (e.g., suggesting src/mypackage/ and __init__.py)
even though the repo is detected as TypeScript/Go and contains packages/; update
the layout heuristic to be language-aware: detect ecosystem via package.json,
go.mod, tsconfig.json, or presence of packages/ and change the remediation
payload (keys like "measured_value", "remediation", "examples", "criteria") to
recommend appropriate layouts for TS/Go monorepos (e.g., packages/, cmd/, pkg/,
src/ per language) and remove Python-specific advice when the repo is not
Python. Ensure the code path that builds the "examples" and "steps" strings
skips adding "__init__.py" and PyPA links when language != python and instead
injects relevant guidance for the detected ecosystem.
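The language-aware detection this prompt asks for could be sketched as follows; the marker files, priority order, and advice strings are assumptions for illustration, not the assessment tool's actual behavior:

```python
from pathlib import Path

# Marker files mapped to ecosystems; first match wins (assumed priority order).
ECOSYSTEM_MARKERS = [
    ("go.mod", "go"),
    ("tsconfig.json", "typescript"),
    ("package.json", "javascript"),
    ("pyproject.toml", "python"),
    ("setup.py", "python"),
]

# Per-ecosystem layout advice (hypothetical remediation strings).
LAYOUT_ADVICE = {
    "go": "Use cmd/ for entry points and pkg/ or internal/ for libraries.",
    "typescript": "Use src/ per package; monorepos typically use packages/*.",
    "javascript": "Use src/ per package; monorepos typically use packages/*.",
    "python": "Use src/<package>/ with __init__.py (PyPA src layout).",
}

def detect_ecosystem(repo_root: str) -> str:
    root = Path(repo_root)
    # A packages/ directory alongside package.json strongly suggests a JS/TS monorepo.
    if (root / "packages").is_dir() and (root / "package.json").is_file():
        return "typescript" if (root / "tsconfig.json").is_file() else "javascript"
    for marker, ecosystem in ECOSYSTEM_MARKERS:
        if (root / marker).is_file():
            return ecosystem
    return "unknown"

def layout_remediation(repo_root: str) -> str:
    return LAYOUT_ADVICE.get(
        detect_ecosystem(repo_root),
        "No ecosystem detected; skip layout-specific advice.",
    )
```

With this shape, a repo like odh-dashboard (tsconfig.json plus packages/) would get TS monorepo guidance instead of `src/mypackage/` and `__init__.py`.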

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: b155885b-6bd1-417e-a9c3-9a2176b46416

📥 Commits

Reviewing files that changed from the base of the PR and between da6bc9f and 6d78826.

📒 Files selected for processing (1)
  • submissions/opendatahub-io/odh-dashboard/assessment-20260318-191501.json

Comment on lines +13 to +15
"path": "/repo",
"name": "repo",
"url": "https://github.com/opendatahub-io/odh-dashboard.git",

⚠️ Potential issue | 🟠 Major

Use the canonical repository name, not the container mount name.

Line 14 records repository.name as repo, while Line 15 clearly identifies opendatahub-io/odh-dashboard. That will mislabel this submission anywhere the dashboard displays or groups by repository.name, and it risks collisions with other reports generated from the same /repo mount. Populate the logical repo name/slug here instead of the sandbox path alias.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@submissions/opendatahub-io/odh-dashboard/assessment-20260318-191501.json`
around lines 13 - 15, The submission uses the container mount alias instead of
the canonical repo slug: replace the JSON field "name": "repo" (or
repository.name) with the logical repository identifier derived from the "url"
(e.g. "opendatahub-io/odh-dashboard" or just "odh-dashboard" per project
convention) so the record uses the real repo name/slug rather than the sandbox
path; update the value where the object with keys "path", "name", "url" is
constructed or serialized (look for the object containing "path": "/repo",
"name": "repo", "url") and set "name" to the correct repository slug.
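Deriving the slug from the `url` field, as the prompt suggests, could look like this minimal sketch (the helper name is hypothetical, not part of the assessment tool):

```python
from urllib.parse import urlparse

def repo_slug_from_url(url: str) -> str:
    """Derive an 'owner/name' slug from a git remote URL."""
    path = urlparse(url).path.strip("/")
    if path.endswith(".git"):
        path = path[: -len(".git")]
    return path

# repo_slug_from_url("https://github.com/opendatahub-io/odh-dashboard.git")
# → "opendatahub-io/odh-dashboard"
```

Writing that slug into `repository.name` at serialization time avoids every report from the same `/repo` mount colliding under the name `repo`.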

Comment on lines +31 to +35
"overall_score": 76.5,
"certification_level": "Gold",
"attributes_assessed": 17,
"attributes_skipped": 6,
"attributes_total": 23,

⚠️ Potential issue | 🟠 Major

This Gold certification is being computed from an incomplete assessment.

Lines 33-35 show only 17 of 23 attributes were assessed, and the omitted ones include test coverage, inline docs, naming, structured logging, and complexity—checks that are still relevant to a TypeScript/Go repository but are marked not_applicable or skipped here because the assessor cannot evaluate them. In src/agentready/services/scorer.py:121-145 and src/agentready/services/scorer.py:150-165, those findings are removed from the denominator, so the 76.5/Gold result is materially inflated by missing coverage rather than earned against the full rubric. The dashboard should surface this as an incomplete assessment instead of a directly comparable Gold certification.

Also applies to: 228-234, 394-400, 414-430, 551-577

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@submissions/opendatahub-io/odh-dashboard/assessment-20260318-191501.json`
around lines 31 - 35, The scorer currently removes `not_applicable`/`skipped`
findings from the denominator when computing overall_score and
certification_level in src/agentready/services/scorer.py (around the
compute/finalize score blocks at lines ~121-165), which inflates results for
incomplete assessments; change the logic so skipped/not_applicable items do not
get silently excluded: either (A) include them as zero-scored items in the
denominator when calculating overall_score and certification_level, or (B)
detect when assessed_count < total_attributes and set a distinct state (e.g.,
certification_level = "Incomplete" and add a boolean flag like
`incomplete_assessment = True`) and surface that in the output JSON instead of
computing a direct Gold/Silver/… grade; update the functions/methods that build
`overall_score` and `certification_level` (the
compute_score/finalize_certification code paths) to implement one of these
behaviors and ensure the output includes the new flag or uses the full rubric
denominator so incomplete assessments are not presented as directly comparable
certifications.
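Option (B) from the prompt could be sketched as below, assuming findings are dicts with `status` and `score` fields; the field names and grade thresholds are assumptions, not the scorer's actual interface:

```python
def grade(score: float) -> str:
    # Assumed certification thresholds; the real rubric may differ.
    if score >= 75:
        return "Gold"
    if score >= 60:
        return "Silver"
    return "Bronze"

def summarize(findings: list[dict], attributes_total: int) -> dict:
    # Keep only findings that were actually assessed.
    scored = [f for f in findings if f["status"] not in ("skipped", "not_applicable")]
    overall = sum(f["score"] for f in scored) / len(scored) if scored else 0.0
    # Flag the assessment instead of grading it when coverage is partial.
    incomplete = len(scored) < attributes_total
    return {
        "overall_score": round(overall, 1),
        "certification_level": "Incomplete" if incomplete else grade(overall),
        "incomplete_assessment": incomplete,
    }
```

Under this scheme, the 17-of-23 assessment above would surface as `"certification_level": "Incomplete"` with its raw score still visible, rather than as a Gold grade computed over a shrunken denominator.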

Comment on lines 169 to +172
"status": "pass",
"score": 35,
"measured_value": "Security tools configured: Dependabot",
"threshold": "\u226560 points (Dependabot + SAST or multiple scanners)",
"threshold": "\u226560 points (Dependabot/Renovate + SAST or multiple scanners)",

⚠️ Potential issue | 🟠 Major

dependency_security cannot be pass with 35 points against a ≥60 threshold.

Line 170 gives this finding a score of 35, while Line 172 says the passing bar is ≥60 points. Publishing a pass state here makes the per-attribute result internally inconsistent and undermines trust in the dashboard output. Either the threshold is wrong or the status should be fail.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@submissions/opendatahub-io/odh-dashboard/assessment-20260318-191501.json`
around lines 169 - 172, The dependency_security entry is inconsistent: the
"score" field (35) is below the "threshold" (≥60) but "status" is set to "pass";
update the record for the dependency_security attribute so the "status" reflects
the score (set "status" to "fail") or adjust the numeric fields so score >=
threshold (e.g., raise "score" to ≥60) and ensure "measured_value" and
"threshold" remain accurate; locate the JSON object containing
"status"/"score"/"measured_value"/"threshold" for dependency_security in the
assessment and make the change.
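This class of inconsistency could also be caught automatically before a report is published. A small validator, assuming thresholds are written like "≥60 points (...)" as in the JSON above (the parsing rule is an assumption):

```python
import re

def check_consistency(finding: dict) -> bool:
    """Return True if a finding's status agrees with its score and threshold."""
    # Pull the numeric bar out of strings like "\u226560 points (...)".
    match = re.search(r"[\u2265>=]+\s*(\d+)", finding.get("threshold", ""))
    if not match:
        return True  # no numeric threshold to check against
    required = int(match.group(1))
    passed = finding["score"] >= required
    return passed == (finding["status"] == "pass")

# The finding above (status "pass", score 35, threshold "≥60 points") fails this check.
```

Running such a check over every attribute in the report at generation time would turn this kind of mismatch into a hard error instead of a published inconsistency.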

@github-actions
Contributor

📈 Test Coverage Report

Branch coverage:
  • This PR: 66.8%
  • Main: 66.8%
  • Diff: ✅ +0%

Coverage calculated from unit tests only

@kami619 kami619 merged commit deec3fa into ambient-code:main Mar 18, 2026
13 of 14 checks passed
@github-actions
Contributor

🎉 This PR is included in version 2.31.0 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀
