Conversation
Added some more functionality. Can now define more than one solver: `python -m pytest ./tests --solver exact,gurobi`. Solver-specific tests will be filtered to these two. Non-solver-specific tests will be parametrised with each of these solvers. You can also run on all installed solvers: `python -m pytest ./tests --solver all`. Or skip all tests which depend on "solving a model": `python -m pytest ./tests --solver None`. Also added a README with instructions on how to use the testsuite, how to use the new decorators and, in general, how to write new tests.
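A rough sketch of how such a `--solver` option could be registered and resolved in `conftest.py` (an assumption for illustration, not necessarily the PR's actual implementation; `SolverLookup.base_solvers()` is the same call used elsewhere in this thread, and `cpmpy_solvers` is a hypothetical attribute):

```python
# Hypothetical conftest.py sketch: register --solver and resolve it to a list of
# solver names ("all", "None", comma-separated, or the ortools default).
from cpmpy import SolverLookup

def pytest_addoption(parser):
    parser.addoption("--solver", action="store", default=None,
                     help="comma-separated solver names, 'all', or 'None'")

def pytest_configure(config):
    opt = config.getoption("--solver")
    if opt is None:
        solvers = ["ortools"]          # default behavior: only the default solver
    elif opt == "None":
        solvers = []                   # skip everything that needs to solve a model
    elif opt == "all":
        solvers = [name for name, s in SolverLookup.base_solvers() if s.supported()]
    else:
        solvers = opt.split(",")       # e.g. "exact,gurobi"
    config.cpmpy_solvers = solvers     # hypothetical attribute used by later hooks
```

With something like this in place, `--solver exact,gurobi`, `--solver all` and `--solver None` map onto the behaviors described above.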
Currently a lot of tests have their own solver parametrisation, for example in `test_constraints.py`:

```python
SOLVERNAMES = [name for name, solver in SolverLookup.base_solvers() if solver.supported()]

def _generate_inputs(generator):
    exprs = []
    for solver in SOLVERNAMES:
        exprs += [(solver, expr) for expr in generator(solver)]
    return exprs

@pytest.mark.parametrize(("solver", "constraint"), list(_generate_inputs(bool_exprs)), ids=str)
def test_bool_constraints(solver, constraint):
    ...
```

This makes it so that for the default behavior (no `--solver` given) …
Now examples with optional dependencies are also "skipped": an exception due to missing dependencies causes the test to be ignored. This is a hacky way of not having to label each example script with its dependencies. An additional downside is that when the required dependencies are installed, the test is always run (even when the required solver has been excluded from …).
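A minimal sketch of that mechanism (assumed, not the PR's actual code): execute the example script and turn a missing optional dependency into a skip. The `run_example` helper and the use of `runpy` are illustrative choices.

```python
# Minimal sketch (assumed, not the PR's actual code): execute an example script and
# skip the test instead of failing it when an optional dependency is missing.
import runpy
import pytest

def run_example(path):
    try:
        runpy.run_path(path, run_name="__main__")
    except ImportError as exc:
        pytest.skip(f"missing optional dependency for {path}: {exc}")
```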
Had to remove the parametrisation on some tests that use …
```python
if not_installed_solvers:
    warnings.warn(
        f"The following solvers are not installed and will not be tested: {', '.join(not_installed_solvers)}. "
        f"Only installed solvers will be used for testing.",
```
Maybe print the list of solvers that will be tested here?
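One possible way to do that, as a hypothetical sketch (the installed / not-installed split is assumed to come from `SolverLookup`, as elsewhere in this PR):

```python
# Hypothetical sketch of the suggestion above: also report which solvers *will* be
# tested, not only the missing ones.
import warnings
from cpmpy import SolverLookup

installed = [name for name, s in SolverLookup.base_solvers() if s.supported()]
not_installed = [name for name, s in SolverLookup.base_solvers() if not s.supported()]

if not_installed:
    warnings.warn(
        f"The following solvers are not installed and will not be tested: {', '.join(not_installed)}. "
        f"Testing with: {', '.join(installed)}."
    )
```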
In `tests/README.md`:
> This automatically detects all installed solvers from `SolverLookup` and parametrises non-solver-specific tests to run against each one.
>
> #### Skip Solver Tests
> Skip all solver-parametrised tests (only run tests that don't depend on solver parametrisation):
"I.e., tests that do not rely on solving a model. Examples are tests that evaluate constructors of expressions"
Encountered an issue with the … Looking at our use of unittest's TestCase, we only use it for the nice assertion methods (…).
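For illustration (hypothetical test, not from the PR), a TestCase-style check maps directly onto a plain pytest function with bare asserts once the `solver` fixture can be received as an argument:

```python
# Illustration only (hypothetical test, not from the PR): the TestCase assertion
# helpers map onto plain asserts once a test is a pytest function that can receive
# the `solver` fixture directly as an argument.
import cpmpy as cp

def test_simple_model(solver):                     # `solver` fixture assumed from conftest
    x = cp.intvar(0, 10, name="x")
    assert cp.Model(x >= 5).solve(solver=solver)   # instead of self.assertTrue(...)
    assert x.value() >= 5                          # instead of self.assertGreaterEqual(...)
```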
Found some remaining issues in some of the tests. But these are specific to the tests themselves, and not really on topic for "parametrisation". So I will make separate pull requests for those once this PR has been merged (so as not to increase the size of this one even more ;) ).
Code looks good, also good that you removed the dirty `import *`s.

I have 2 failing errors, because I am in this weird situation that gurobipy is installed but my license has expired... is there a way to be safe against that?

```
FAILED tests/test_examples.py::test_advanced_example[examples/advanced/explain_satisfaction.py] - ImportError: cannot import name 'musx' from 'musx' (/home/tias/local/src/mi...
```

Not sure what happened with this one... if I run …

Finally, if I run …

```
=============================================== 106 failed, 78692 passed, 4 skipped, 249 warnings in 211.46s (0:03:31)
```

If I don't run with '-n5' then it reports 81329 tests (a bit more?) and does show the "per file" progress that allows me to locate the errors... is this a known side effect of -n? We should document that then.

And it shows me that it is because of e.g. … (indeed, we have not hooked int2bool to pytest yet).

So, I think the essence of this dump is: …
As mentioned a couple of times, the parametrisation of our entire test suite (as opposed to a limited section, as currently on master) reveals a bunch of small bugs / edge cases we'll have to fix (some are as simple as not running a certain test for a certain solver because it doesn't support it). So currently the default behavior of pytest (without `--solver`) … For example, this could look like:

```python
@pytest.mark.run_all_by_default()  # <- only marker to add in selective places to replicate current behavior on master, can have different name
@pytest.mark.generate_constraints.with_args(bool_exprs)
@skip_on_missing_pblib(skip_on_exception_only=True)
def test_bool_constraints(solver, constraint):
    ...
```

This seems like a minimal-effort temporary fix to be able to merge this PR, with the goal of fixing the remaining issues in future PRs and being able to run even more tests parametrised across all solvers.
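If markers like these are introduced, they would presumably also be registered so pytest does not warn about unknown marks; a sketch (marker names follow the proposal above, descriptions are illustrative):

```python
# Sketch: register the proposed markers in conftest.py so pytest recognises them;
# the descriptions are illustrative, not the PR's wording.
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "run_all_by_default: run this test across all solvers even without --solver")
    config.addinivalue_line(
        "markers",
        "generate_constraints(generator): parametrise the test with constraints from the given generator")
    config.addinivalue_line(
        "markers",
        "requires_solver(name): only run this test when the given solver is selected")
```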
So on my machine all test failures (ignoring examples) are pysdd-related (pysdd properly throws a not-implemented error on any integer expression). The temporary fix for that can simply be to remove pysdd from 'all':
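A hypothetical sketch of such a tweak (the actual snippet is not reproduced here), assuming the 'all' list is built from `SolverLookup` in conftest:

```python
# Hypothetical sketch: exclude pysdd when resolving `--solver all`, until pysdd gets
# int2bool support like pysat and pindakaas.
from cpmpy import SolverLookup

EXCLUDED_FROM_ALL = {"pysdd"}

ALL_SOLVERS = [name for name, s in SolverLookup.base_solvers()
               if s.supported() and name not in EXCLUDED_FROM_ALL]
```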
In a later PR we will make pysdd, like pysat and pindakaas, also have int2bool, and then we can reactivate this... However, while deactivating pysdd, I noticed: …
Will look into those issues you mentioned. But when I run with the same solvers as the github runner (except pysdd), and with the examples ignored due to the timeout issue, I get 59 failed tests: …

Instead of redefining "all", I can also just put the following in the github runner: `python -m pytest -n auto tests/ --solver z3,exact,pysat,choco,minizinc,pindakaas,pumpkin`
I see.
I am unsure what the best way forward is... The tests pass with the default option of only testing with ortools. So let's just keep using that on github, and then we fix these things after the release?
Tried fixing some of the tests on branch …
This PR became a bit too large to manage. Started with much smaller essentials in #817.
Marked as draft; merged the pytestify one. Next nice thing to have from this is the `import *` fixes. I leave it to you to judge if this is better done afresh or by rebasing this on master...
Attempt to parametrise our pytest test-suite. The default behavior stays the same: generic tests get run on the default solver `OR-Tools`, and the more solver-specific tests / across-solver tests check which backends are available on the current system. But sometimes you want to just run the testsuite on a single solver, without having to uninstall all other solvers / create a fresh environment. Now you can pass an optional argument `--solver`. This will have three consequences (e.g. for `--solver exact`):

- generic tests get run on `exact` instead of the default `ortools`
- tests specific to other solvers are filtered out; only those targeting `exact` remain
- …

In general, I opted for filtering instead of skipping tests. So the non-`exact` tests will not count towards the total number of tests. I believe we should reserve "skipping" for tests which don't get run for reasons we want to inform the user about, e.g. missing dependencies which they need to install. When the user provides the `--solver` option, they already know that tests targeting other solvers won't be run, so it would just clutter the results if we were to skip instead of filter those tests.

To parametrise a unittest class for the "generic" tests, simply decorate it with:
```python
@pytest.mark.usefixtures("solver")
```

After which `self.solver` will be available, matching the user-provided solver argument.
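A minimal sketch of how this can work for `unittest.TestCase` subclasses, assuming the PR's `solver` fixture assigns the chosen solver via `request.cls` (the documented pytest pattern for mixing fixtures into TestCase classes); the actual fixture implementation is not shown in this thread:

```python
# Minimal sketch, assuming the PR's `solver` fixture assigns the chosen solver to the
# test class via `request.cls`; the real fixture is not shown in this thread.
import unittest
import pytest
import cpmpy as cp

@pytest.fixture
def solver(request):
    # handling of "all" / "None" / comma-separated values is omitted in this sketch
    name = request.config.getoption("--solver", default=None) or "ortools"
    if request.cls is not None:
        request.cls.solver = name      # makes self.solver available in TestCase methods
    return name

@pytest.mark.usefixtures("solver")
class TestGenericExample(unittest.TestCase):
    def test_solve(self):
        x = cp.intvar(0, 3)
        self.assertTrue(cp.Model(x >= 1).solve(solver=self.solver))
```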
All solver-specific tests can now be decorated with:

```python
@pytest.mark.requires_solver("<SOLVER_NAME>")
```

And will automatically be skipped if a `--solver` argument has been provided which doesn't match `SOLVER_NAME`.
For the across-solver tests which use generators (those in `test_constraints.py`), the `pytest_collection_modifyitems` hook will filter out parametrised pytest functions which have been instantiated with a solver different from the user-provided one. Both the arguments `solver` and `solver_name` get filtered on.
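A rough sketch of such a hook (assumed, not the PR's exact code); resolution of `all` / `None` is omitted for brevity:

```python
# Rough sketch (assumed, not the PR's exact code) of such a collection hook: deselect
# parametrised test items whose `solver` / `solver_name` argument is not among the
# solvers the user asked for.
def pytest_collection_modifyitems(config, items):
    opt = config.getoption("--solver", default=None)
    if not opt:
        return                                    # no --solver given: keep default behavior
    wanted = set(opt.split(","))                  # e.g. "exact,gurobi"
    keep, drop = [], []
    for item in items:
        callspec = getattr(item, "callspec", None)
        params = callspec.params if callspec is not None else {}
        solver = params.get("solver", params.get("solver_name"))
        (keep if solver is None or solver in wanted else drop).append(item)
    if drop:
        config.hook.pytest_deselected(items=drop)
        items[:] = keep
```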
test_solveAll.py) wherecp.SolverLookup.base_solvers()is used more directly, which can't be filtered without making changes to the test itself (not possible with one of the decorators / callback functions)As a further improvement, it might be possible to merge the following two (Already did it ;) )
So do the skipping if solver is not available also through the first mark and skip the tests more centrally in
pytest_collection_modifyitems.Using this parameterisation with solver different from OR-Tools revealed some issues with our testsuite related to #779