Update _querying and participant-label filtering #474
Conversation
Thanks for bringing this up. I actually don't think your fix will do anything for flaky test behaviour... that's more likely because this test is breaking the hypothesis time limits (these examples, where an entire dataset generated by hypothesis is created and indexed, need to be phased out eventually). But there is an issue here: the test is pretty clearly incorrect and ineffectual, and I think there are some underlying issues with the query code. I'm going to rework this test a bit more before merging. It really should be testing that no filtering occurs when there is no subject wildcard. We already have a test for when subject has a filter.
- Fix test_participant_label_doesnt_filter_comps_when_subject_has_filter_no_wcard
- Test was creating modified config but not using it in generate_inputs call
- This caused intermittent test failures due to non-deterministic behavior
- Now uses the modified config consistently for reliable test execution
Previously returned a regex with a negative look-ahead on an empty string. This would match and exclude everything.
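As an illustration of why this is dangerous (the exact regex produced by `_querying` isn't reproduced here, so the form below is an assumption): a negative look-ahead built around an empty string, such as `(?!$)`, is zero-width and succeeds at the start of every non-empty string, so a filter that excludes matches ends up excluding everything.

```python
import re

# "(?!$)" is a negative look-ahead on the empty-string anchor: at
# position 0 of any non-empty string, "$" fails, so its negation
# succeeds. This is an illustrative sketch, not the actual regex
# returned by _querying.
pattern = re.compile(r"(?!$)")

print(bool(pattern.match("sub-001")))  # True: matches any non-empty label
print(bool(pattern.match("")))         # False: only the empty string escapes

# If matches are treated as "exclude", every real label is filtered out.
labels = ["sub-001", "sub-002"]
kept = [s for s in labels if not pattern.match(s)]
print(kept)  # []
```

Because the match is zero-width, the bug is easy to miss in logs: the pattern never consumes any characters, yet it still "hits" every candidate.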
Previously tested only by integration type tests in test_generate_inputs.
Tests involve pybids indexing, and previously used hypothesis to generate examples. Remove hypothesis from all cases and parametrize with focused cases. Test the logic between generate_inputs and get_matching_files with patches.
Force-pushed a0737dc to 23d4676
I've used this PR now to add a new unit test suite for the _querying.py module. The affected test class in test_generate_inputs.py has been greatly pared down, and hypothesis has been removed. @akhanf, in the past, I've overused hypothesis on long-running tests (e.g. those that run pybids for indexing), and it's resulted in a lot of long-running, flaky tests. This PR is a step toward a fix: use hypothesis in short-running unit tests, test logic in higher-order functions with mocker, and test integration with focused examples.
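The "focused parametrized cases" style described above might look roughly like this sketch. `normalize_participant_label` is a hypothetical stand-in for the subject-filter logic under test, not actual snakebids code:

```python
# Sketch of focused, parametrized unit-test cases replacing
# hypothesis-generated datasets. normalize_participant_label is a
# hypothetical helper standing in for the real filter logic.
import pytest


def normalize_participant_label(label):
    """Return a list of subject labels, or None when no filtering is requested."""
    if label is None:
        return None
    if isinstance(label, str):
        return [label]
    return list(label)


@pytest.mark.parametrize(
    ("label", "expected"),
    [
        (None, None),                        # no label -> no subject filter
        ("001", ["001"]),                    # single label
        (("001", "002"), ["001", "002"]),    # multiple labels
    ],
)
def test_normalize_participant_label(label, expected):
    assert normalize_participant_label(label) == expected
```

Each case is explicit and fast, so there are no hypothesis deadlines to trip over, and a failure points directly at the offending input instead of a shrunken generated dataset.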
Fix Test Logic Bug in Participant Filtering Test
Summary
This PR fixes a bug in the test `test_participant_label_doesnt_filter_comps_when_subject_has_filter_no_wcard`, where the test was creating a modified config but not using it in the actual test call, causing intermittent test failures.
Problem
The test was experiencing flaky behavior, sometimes passing and sometimes failing. Investigation revealed that the test was:
- Creating a modified `config` from the dataset using `create_snakebids_config(rooted)`
- Calling `generate_inputs()` with the original `create_snakebids_config(rooted)` instead of the modified `config`
This meant the subject filters were never actually applied during testing, making the test's behavior non-deterministic and dependent on the randomly generated test data from Hypothesis.
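The buggy pattern reduces to a few lines. This is an illustrative sketch, not the actual test code; `make_config` and `run_query` are hypothetical stand-ins for `create_snakebids_config(rooted)` and `generate_inputs()`:

```python
# Illustrative sketch of the bug: the names below are hypothetical
# stand-ins, not the real snakebids test code.
def make_config():
    # stand-in for create_snakebids_config(rooted): builds a fresh config
    return {"pybids_inputs": {"bold": {"filters": {}}}}


def run_query(config):
    # stand-in for generate_inputs(): returns the filters it actually saw
    return config["pybids_inputs"]["bold"]["filters"]


# Buggy version: modify one config, but query with a freshly created one
config = make_config()
config["pybids_inputs"]["bold"]["filters"]["subject"] = "001"
filters_seen = run_query(make_config())  # the modified config is ignored
print(filters_seen)  # {} -- the subject filter was never applied

# Fixed version: pass the modified config through
filters_seen = run_query(config)
print(filters_seen)  # {'subject': '001'}
```

Because the buggy call silently drops the filter, the test exercised whatever Hypothesis happened to generate rather than the filtered query, which is why it passed or failed at random.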
Solution
Fixed the test to use the modified `config` variable in the `generate_inputs()` call instead of recreating the config. This ensures that the subject filters are properly applied during the test, making the test behavior consistent and deterministic.
Changes
- Updated `test_participant_label_doesnt_filter_comps_when_subject_has_filter_no_wcard` in `snakebids/tests/test_generate_inputs.py`
- Changed the `generate_inputs()` call to use the modified `config` instead of `create_snakebids_config(rooted)`
Testing
Impact
This fix ensures reliable test execution and eliminates a source of CI flakiness. The test now properly validates the intended behavior: that participant label filtering doesn't affect components when subject entities have explicit filters applied.