
Test coverage for API modules#47

Merged
mshriver merged 1 commit into ibutsu:main from mshriver:apis-coverage
Nov 24, 2025

Conversation


@mshriver mshriver commented Nov 24, 2025

Summary by Sourcery

Add comprehensive unit-style tests for ibutsu_client API modules using pytest-style tests and mocked API clients.

Tests:

  • Add end-to-end tests for ArtifactApi covering list, retrieval, download, view, upload, and delete operations and their request construction.
  • Add tests for RunApi including creating, fetching, listing with filters, updating, and bulk updating runs with query parameter validation.
  • Add tests for ProjectApi to cover create, get, list with pagination and owner filtering, update, and fetching filter parameters.
  • Add tests for ResultApi to validate create, get, list with filters, and update behavior and request payloads.
  • Add tests for AdminProjectManagementApi to exercise admin CRUD operations on projects and ensure correct endpoints and methods are used.
  • Add tests for DashboardApi to verify dashboard CRUD operations and list retrieval with pagination parameters.

Copilot AI review requested due to automatic review settings November 24, 2025 18:01

sourcery-ai bot commented Nov 24, 2025

Reviewer's Guide

Replace autogenerated unittest stubs with concrete pytest-style tests for multiple Ibutsu API client modules, using shared mock_api_client and mock_rest_response fixtures to validate response deserialization, HTTP methods, URLs, query parameters, and request bodies including file uploads.
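
The shared `mock_api_client` and `mock_rest_response` fixtures are referenced throughout but not shown in this guide. A minimal sketch of what such helpers might look like — assuming the client's REST response object exposes `status` and `data` (JSON bytes) attributes; names and details here are illustrative, not the PR's actual conftest:

```python
import json
from unittest.mock import MagicMock


def make_mock_api_client():
    """A stand-in ApiClient whose call_api records its arguments for later inspection."""
    return MagicMock()


def make_mock_rest_response(data=None, status=200):
    """A stand-in RESTResponse carrying a JSON-encoded body as bytes."""
    response = MagicMock()
    response.status = status
    response.data = json.dumps(data).encode() if data is not None else b""
    return response


# Typical wiring inside one of these tests:
client = make_mock_api_client()
client.call_api.return_value = make_mock_rest_response(
    data={"id": "abc", "filename": "test.txt"}, status=201
)
```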

File-Level Changes

Change Details Files
Add concrete tests for ArtifactApi covering list, get, download, view, upload, and delete behaviors with detailed request/response validation.
  • Replace empty unittest.TestCase stub with a pytest-style TestArtifactApi class using injected mock_api_client and mock_rest_response fixtures.
  • Implement get_artifact_list test to assert ArtifactList deserialization, artifact fields, pagination, and construction of GET /artifact URL with page and pageSize query parameters.
  • Implement get_artifact test to assert Artifact deserialization and correct GET /artifact/{id} path.
  • Implement download_artifact and view_artifact tests that mock binary data and validate GET /artifact/{id}/download and /artifact/{id}/view endpoints return raw bytes.
  • Implement upload_artifact test that verifies Artifact deserialization, POST /artifact call, and that multipart form post_params contains the file tuple with filename and content.
  • Implement delete_artifact test asserting DELETE /artifact/{id} call with no return body.
test/test_artifact_api.py
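
The call-verification pattern these tests rely on can be sketched without importing the real ibutsu_client; `get_artifact_via` below is a hypothetical stand-in for `ArtifactApi.get_artifact`:

```python
from unittest.mock import MagicMock


def get_artifact_via(client, artifact_id):
    # Hypothetical stand-in: issues the same GET /artifact/{id} call shape.
    return client.call_api("GET", f"/artifact/{artifact_id}")


client = MagicMock()
get_artifact_via(client, "abc-123")

# Inspect the recorded call, as the tests do after invoking the API.
client.call_api.assert_called_once()
args, _kwargs = client.call_api.call_args
assert args[0] == "GET"
assert args[1].endswith("/artifact/abc-123")
```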
Add functional tests for RunApi that exercise run creation, retrieval, listing with filters, single update, and bulk metadata update, validating both payload serialization and query construction.
  • Replace placeholder unittest-based RunApi tests with pytest-style TestRunApi using mock_api_client and mock_rest_response fixtures.
  • Implement add_run test to verify Run deserialization, POST /run call, and that request body is a dict with expected fields (component, env).
  • Implement get_run test to validate Run deserialization and correct GET /run/{id} path.
  • Implement get_run_list test that checks RunList deserialization, run fields, pagination, and that query string contains page, pageSize, and filter parameters parsed via urllib.parse.
  • Implement update_run test to verify PUT /run/{id} is called with a serialized dict body matching the UpdateRun data.
  • Implement bulk_update test to verify POST /runs/bulk-update, filter query parameter, and that the body dict contains the expected metadata field from UpdateRun.
test/test_run_api.py
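
The urllib.parse approach mentioned above avoids brittle substring matching on the request URL. A sketch against a hypothetical list URL (the host and path here are illustrative):

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical URL of the shape a list call would construct.
url = "https://ibutsu.example.com/api/run?page=2&pageSize=25&filter=component%3Dfrontend"

parsed = urlparse(url)
query = parse_qs(parsed.query)

assert parsed.path.endswith("/run")
assert query["page"] == ["2"]
assert query["pageSize"] == ["25"]
assert query["filter"] == ["component=frontend"]  # parse_qs decodes %3D
```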
Add comprehensive tests for ProjectApi including create, get, list with owner filter, update, and retrieval of filterable parameters.
  • Convert ProjectApi tests from empty unittest stubs to pytest-style TestProjectApi using shared mocks.
  • Implement add_project test to validate Project deserialization, POST /project, and serialized body containing name and title.
  • Implement get_project test to assert GET /project/{id} and correct Project fields.
  • Implement get_project_list test that asserts ProjectList deserialization, pagination fields, and that GET /project includes page, pageSize, and ownerId query parameters parsed via urllib.parse.
  • Implement update_project test to validate PUT /project/{id} and body serialization for updated project name.
  • Implement get_filter_params test that returns a list of filterable parameter names and verifies GET /project/filter-params/{id} path and response list contents.
test/test_project_api.py
Add tests for ResultApi to cover result creation, retrieval, listing with filters, and update semantics, including body serialization and query parameters.
  • Replace ResultApi unittest stubs with pytest-style TestResultApi using mock_api_client and mock_rest_response fixtures.
  • Implement add_result test to verify Result deserialization, POST /result endpoint, and serialized body fields (test_id, result, duration).
  • Implement get_result test to assert GET /result/{id} path and basic Result field mapping.
  • Implement get_result_list test to validate ResultList deserialization, pagination, and GET /result query string containing page, pageSize, and filter parameters via urllib.parse parsing.
  • Implement update_result test to verify PUT /result/{id} and that the body dict contains the updated result field.
test/test_result_api.py
Add admin-level project management tests for AdminProjectManagementApi ensuring CRUD operations and project listing behave as expected.
  • Replace empty unittest.TestCase for admin project management with pytest-style TestAdminProjectManagementApi using shared mocks.
  • Implement admin_add_project test to verify Project deserialization and POST /admin/project invocation when creating a project as an admin.
  • Implement admin_delete_project test to ensure DELETE /admin/project/{id} is called with no return payload expectations.
  • Implement admin_get_project test validating GET /admin/project/{id} and Project field mapping.
  • Implement admin_get_project_list test to assert ProjectList deserialization, pagination, and GET /admin/project listing URL with page and pageSize query parameters embedded in the URL.
  • Implement admin_update_project test verifying PUT /admin/project/{id} with a Project body and correct response mapping.
test/test_admin_project_management_api.py
Add dashboard management tests for DashboardApi covering dashboard CRUD and listing behavior using the mocked API client.
  • Replace DashboardApi unittest stubs with pytest-style TestDashboardApi class leveraging mock_api_client and mock_rest_response fixtures.
  • Implement add_dashboard test to verify Dashboard deserialization and POST /dashboard call when creating a dashboard.
  • Implement delete_dashboard test to assert DELETE /dashboard/{id} endpoint is invoked.
  • Implement get_dashboard test validating GET /dashboard/{id} URL and Dashboard field mapping.
  • Implement get_dashboard_list test to assert DashboardList deserialization, pagination handling, and GET /dashboard URL with page and pageSize query parameters present.
  • Implement update_dashboard test to verify PUT /dashboard/{id} with serialized Dashboard body and correct response mapping.
test/test_dashboard_api.py



codecov bot commented Nov 24, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 68.68%. Comparing base (0774c30) to head (714637f).
⚠️ Report is 1 commit behind head on main.

❌ Your project status has failed because the head coverage (68.68%) is below the target coverage (85.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files
@@             Coverage Diff             @@
##             main      #47       +/-   ##
===========================================
+ Coverage   57.22%   68.68%   +11.45%     
===========================================
  Files          58       58               
  Lines        4783     4783               
  Branches      496      496               
===========================================
+ Hits         2737     3285      +548     
+ Misses       1996     1338      -658     
- Partials       50      160      +110     
Flag Coverage Δ
unittests 68.68% <ø> (+11.45%) ⬆️

Flags with carried forward coverage won't be shown.
see 7 files with indirect coverage changes


Continue to review full report in Codecov by Sentry.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0774c30...714637f.


@sourcery-ai sourcery-ai bot left a comment

Hey there - I've reviewed your changes and they look great!

Prompt for AI Agents
Please address the comments from this code review:

## Individual Comments

### Comment 1
<location> `test/test_dashboard_api.py:87-96` </location>
<code_context>
+    def test_get_dashboard_list(self, mock_api_client, mock_rest_response):
</code_context>

<issue_to_address>
**nitpick (testing):** Use URL parsing for query parameter assertions to make the test less brittle

In `test_get_dashboard_list` (and `test_admin_get_project_list`), you’re asserting query params by checking substrings like `"page=1"` and `"pageSize=25"` in the URL, which is brittle if ordering or encoding changes. Since other tests (`RunApi`/`ProjectApi`/`ResultApi`) already use `urlparse` and `parse_qs`, please follow the same pattern here and assert on `query_params["page"]` and `query_params["pageSize"]` instead.
</issue_to_address>
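
A small illustration (hypothetical URL) of why the substring check can pass for the wrong reason, and how `parse_qs` avoids it:

```python
from urllib.parse import parse_qs, urlparse

# "page=1" appears as a substring of "homepage=1" even though page is really 2.
url = "https://example.com/dashboard?homepage=1&page=2&pageSize=25"

assert "page=1" in url  # substring check: false positive

query = parse_qs(urlparse(url).query)
assert query["page"] == ["2"]  # parsed check: the actual value
```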

### Comment 2
<location> `test/test_artifact_api.py:159-163` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests))

<details><summary>Explanation</summary>Avoid complex code, like loops, in test functions.

Google's software engineering guidelines says:
"Clear tests are trivially correct upon inspection"
To reach that avoid complex code in tests:
* loops
* conditionals

Some ways to fix this:

* Use parametrized tests to get rid of the loop.
* Move the complex logic into helpers.
* Move the complex part into pytest fixtures.

> Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off of the screen. It doesn't take much logic to make a test more difficult to reason about.

Software Engineering at Google / [Don't Put Logic in Tests](https://abseil.io/resources/swe-book/html/ch12.html#donapostrophet_put_logic_in_tests)
</details>
</issue_to_address>

### Comment 3
<location> `test/test_artifact_api.py:160-163` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Avoid complex code, like conditionals, in test functions.

Google's software engineering guidelines says:
"Clear tests are trivially correct upon inspection"
To reach that avoid complex code in tests:
* loops
* conditionals

Some ways to fix this:

* Use parametrized tests to get rid of the loop.
* Move the complex logic into helpers.
* Move the complex part into pytest fixtures.

> Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off of the screen. It doesn't take much logic to make a test more difficult to reason about.

Software Engineering at Google / [Don't Put Logic in Tests](https://abseil.io/resources/swe-book/html/ch12.html#donapostrophet_put_logic_in_tests)
</details>
</issue_to_address>

### Comment 4
<location> `test/test_artifact_api.py:156-163` </location>
<code_context>
    def test_upload_artifact(self, mock_api_client, mock_rest_response):
        """Test case for upload_artifact"""
        api = ArtifactApi(api_client=mock_api_client)
        run_id = uuid4()
        filename = "test.txt"
        file_content = b"content"

        artifact_data = {
            "id": str(uuid4()),
            "filename": filename,
            "result_id": None,
            "run_id": str(run_id),
        }

        # Mock the API response
        mock_response = mock_rest_response(data=artifact_data, status=201)
        mock_api_client.call_api.return_value = mock_response

        # Call the API
        response = api.upload_artifact(filename=filename, file=file_content, run_id=run_id)

        # Verify result
        assert isinstance(response, Artifact)
        assert response.filename == filename
        assert str(response.run_id) == str(run_id)

        # Verify call
        mock_api_client.call_api.assert_called_once()
        args, _kwargs = mock_api_client.call_api.call_args
        assert args[0] == "POST"
        assert args[1].endswith("/artifact")

        # Verify form params include file and metadata
        # args[4] is post_params
        post_params = args[4]
        # Check if file is in post_params
        # post_params is a list of tuples
        file_found = False
        for param in post_params:
            if param[0] == "file" and param[1][1] == file_content:
                # param[1] should be (filename, filedata, mimetype)
                file_found = True
                break
        assert file_found

</code_context>

<issue_to_address>
**suggestion (code-quality):** Use any() instead of for loop ([`use-any`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/use-any/))

```suggestion
        file_found = any(
            param[0] == "file" and param[1][1] == file_content
            for param in post_params
        )
```
</issue_to_address>


Copilot AI left a comment

Pull request overview

This PR adds comprehensive test coverage for API modules in the ibutsu_client, replacing auto-generated test stubs with functional unit tests. The tests use pytest fixtures for mocking and validate both API request construction and response deserialization.

  • Replaces empty unittest-based stubs with pytest-based tests using fixtures
  • Adds full test coverage for CRUD operations across 6 API modules
  • Implements consistent patterns for mocking API calls and validating responses

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 3 comments.

Show a summary per file
File Description
test/test_run_api.py Comprehensive tests for RunApi including add, get, list, update, and bulk update operations with proper URL parsing for query parameter validation
test/test_result_api.py Complete test suite for ResultApi CRUD operations with robust query parameter validation using URL parsing
test/test_project_api.py Full coverage for ProjectApi including standard CRUD operations and filter params endpoint testing
test/test_dashboard_api.py Tests for DashboardApi covering all CRUD operations including delete functionality; uses string matching for URL verification
test/test_artifact_api.py Complete test coverage for ArtifactApi including file upload/download, view, and delete operations; uses string matching for URL verification
test/test_admin_project_management_api.py Full test suite for AdminProjectManagementApi admin endpoints; uses string matching for URL verification


Comment on lines 115 to 117
assert "/admin/project" in args[1]
assert "page=1" in args[1]
assert "pageSize=25" in args[1]
Copilot AI Nov 24, 2025

The query parameter verification using string matching is less robust than the URL parsing approach used in other test files (e.g., test_run_api.py, test_result_api.py). Consider using urlparse and parse_qs for consistency and to avoid false positives if the parameter appears elsewhere in the URL.

Example:

from urllib.parse import parse_qs, urlparse

parsed_url = urlparse(args[1])
assert parsed_url.path.endswith("/admin/project")
query_params = parse_qs(parsed_url.query)
assert query_params["page"] == ["1"]
assert query_params["pageSize"] == ["25"]

Copilot uses AI. Check for mistakes.
Comment on lines 115 to 117
assert "/dashboard" in args[1]
assert "page=1" in args[1]
assert "pageSize=25" in args[1]
Copilot AI Nov 24, 2025

The query parameter verification using string matching is less robust than the URL parsing approach used in other test files (e.g., test_run_api.py, test_result_api.py). Consider using urlparse and parse_qs for consistency and to avoid false positives if the parameter appears elsewhere in the URL.

Example:

from urllib.parse import parse_qs, urlparse

parsed_url = urlparse(args[1])
assert parsed_url.path.endswith("/dashboard")
query_params = parse_qs(parsed_url.query)
assert query_params["page"] == ["1"]
assert query_params["pageSize"] == ["25"]

Comment on lines +42 to +44
assert "/artifact" in args[1]
assert "page=1" in args[1]
assert "pageSize=25" in args[1]
Copilot AI Nov 24, 2025

The query parameter verification using string matching is less robust than the URL parsing approach used in other test files (e.g., test_run_api.py, test_result_api.py). Consider using urlparse and parse_qs for consistency and to avoid false positives if the parameter appears elsewhere in the URL.

Example:

from urllib.parse import parse_qs, urlparse

parsed_url = urlparse(args[1])
assert parsed_url.path.endswith("/artifact")
query_params = parse_qs(parsed_url.query)
assert query_params["page"] == ["1"]
assert query_params["pageSize"] == ["25"]

@mshriver mshriver merged commit 7d61f7d into ibutsu:main Nov 24, 2025
8 of 9 checks passed