
Conversation

@mielvds (Collaborator) commented Nov 19, 2025

This PR is an attempt to migrate the original test runner to pytest with minimal changes. It fixes #206.

Tests can be run with or without pytest; I've added notes in tests/runtests.py wherever functionality is now handled by pytest, and, if desired, that code can be removed after review or at some point in the future. When more unit tests arrive, we might want to rename runtests.py to something more accurate (or merge it with test_manifests.py).
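For illustration, the "runs with or without pytest" idea can be sketched like this (a minimal sketch with made-up case names; the real runtests.py builds its cases from the W3C JSON-LD test manifests, not from a hardcoded list):

```python
import unittest


def iter_cases():
    # Stand-in for iterating JSON-LD manifest entries; the names and
    # no-op bodies here are illustrative only.
    yield "expand-0001", lambda: None
    yield "compact-0001", lambda: None


def make_suite():
    """Legacy path: wrap the same cases in a unittest.TestSuite,
    so the file still runs as a plain script without pytest."""
    suite = unittest.TestSuite()
    for name, func in iter_cases():
        suite.addTest(unittest.FunctionTestCase(func, description=name))
    return suite


if __name__ == "__main__":
    # Runnable directly, no pytest required.
    unittest.TextTestRunner().run(make_suite())
```

Under pytest, the same `iter_cases()` generator could instead feed `pytest.mark.parametrize`, so both runners share one source of test data.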

Copilot was used to explain the original code and help add commentary to tests/runtests.py, which I checked for correctness. This helped a lot in understanding the existing code, but apologies if I misunderstood anything; it wasn't easy :) Next, I used Copilot to generate some pytest boilerplate and took it from there.

The EARL reports match up between the two test runners. I've also updated the GitHub Actions workflow to use pytest, and that seems to work as well.

@mielvds mielvds marked this pull request as ready for review November 19, 2025 13:03
@anatoly-scherbakov (Collaborator) left a comment


I tried running both pytest and unittest implementations locally, and results are close. I consider it a good enough reason to get this merged.

I have been thinking about creating a generic tool to run JSON-LD test suites against any toolkit in any language, but that's an endeavor of its own, completely out of scope for this project.

@mielvds (Collaborator, Author) commented Nov 26, 2025

@anatoly-scherbakov what do you mean by 'close'? :) As in not equivalent? There should be no difference functionality-wise.

@BigBlueHat @davidlehn could you have a look at this and tell me what you think?

@anatoly-scherbakov (Collaborator) commented

@mielvds I was being overcautious. :)

I asked Cursor to rerun both test suites; they match. I am not sure what the "pending" tests are, though, but I would not consider that a breaking issue.

[screenshot: matching results from both test runners]

@mielvds (Collaborator, Author) commented Nov 27, 2025

As far as I can tell, the 'pending' tests more or less correspond to 'xfail'. However, I can't find an easy hook to map them. It's also not clear whether 'pending' also means 'skipped' in runtests.py, though I believe it does.

I took another simple stab at it, which classifies the pending tests as xfail, but this might not be the intended behavior.



Development

Successfully merging this pull request may close these issues.

Use pytest for (unit)testing?

3 participants