Integrate pytest-subtests #13738
Conversation
I agree that this feature should be in pytest core because it's a feature in unittest and pytest should aim to be a drop-in replacement for unittest (plus the original issue has 40 👍 and no 👎 at the time of writing, overwhelming popular support in my book).
I recall we need to fix marking the owning test case as failed if one subtest fails.
But yeah, I want to see this in.
Sounds reasonable.
Ready for an initial review, folks.
Nice to see this happening.
I ran out of time for the review today, so I didn't really get to the implementation parts, but I already have some comments, so I'm submitting a partial review.
We might want to bikeshed an API around sub-sections, subtests, and parameter loops a little - not necessarily right now, but to set up a roadmap.
I was hoping to get this feature into 9.0, if possible.
We absolutely want this in 9.0. We shouldn't change the API, to ensure compatibility with existing users. We should try to ensure lastfailed reruns tests with failed subtests before releasing 9.0.
I will add a test, good call.
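For reference, a rough sketch of such a test using pytest's own pytester fixture; the exact outcome counts are an assumption, since subtest failures may be tallied differently:

def test_lastfailed_reruns_tests_with_failed_subtests(pytester):
    pytester.makepyfile(
        """
        def test(subtests):
            with subtests.test("a"):
                assert False
        """
    )
    # First run: the failing subtest should also fail the owning test.
    pytester.runpytest().assert_outcomes(failed=1)
    # A --lf run must select the owning test again, not skip it as "passed".
    result = pytester.runpytest("--lf")
    result.assert_outcomes(failed=1)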
Unfortunately I ran out of time to do a full review again, but I left some comments.
I also noticed two things while I was testing it:
- If some or all subtests fail, the parent test is still PASSED - is this intentional?
- For some reason the errors are shown as "Captured log call", which seems wrong as there are no log calls in my test.
Regarding the name SUBPASS etc., I saw that a regular pass is PASSED, so I wonder whether it shouldn't be SUBPASSED etc.? (I can check the code later.)
from typing_extensions import Self


def pytest_addoption(parser: Parser) -> None:
- Can you explain the rationale for these options -- did someone ask for them in pytest-subtests?
- I admit the difference between the options is a bit obscure; I wonder if they can't be folded into a single option?
- I wonder if this shouldn't be an ini option rather than a CLI option? I imagine someone would want to set this if it's too noisy, in which case it's more of a project setting. But maybe it's more of a user-preference thing, then it's a CLI flag...
- Maybe it should be configurable at the test level, using the fixture itself? (Probably not.)
- Should we document the options in the tutorial?
These were external contributions: pytest-dev/pytest-subtests#81 and pytest-dev/pytest-subtests#198.
I'm OK with folding them somehow into a single option, perhaps an option to disable all subtest progress unless it is a failure?
I don't have experience with subtests to say myself. Generally, fewer flags are better :)
pytest-dev/pytest-subtests#81
This allows the native pytest column calculations to not be disrupted and minimizes unneeded output for large CI systems.
Is the "disruption" still a thing?
Otherwise it seems to want to minimize output. Though for shortletter it's a bit dubious, I must say (it is quite "short" already). I suspect what they mean is "we semantically don't view subtests as something that should be reported".
pytest-dev/pytest-subtests#198
I find it extremely annoying that each subtest success is printed to stdout
Wants to minimize output.
an option to disable all subtest progress unless it is a failure?
IIUC this is mostly what they want.
Perhaps we should leverage the new "verbosity_*" options:
https://docs.pytest.org/en/stable/reference/reference.html#confval-verbosity_assertions
We could include a verbosity_subtests option which defaults to only showing subtest failures, with additional verbosity used to also show passed subtests.
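A minimal sketch of how such a setting might be consumed; the "subtests" verbosity type is hypothetical here, only the Config.get_verbosity() mechanism already exists:

import pytest

def pytest_report_teststatus(report, config: pytest.Config):
    # Hide passing subtests at default verbosity; show them when raised.
    # "context" is the attribute SubtestReport carries in this PR.
    if getattr(report, "context", None) is not None and report.passed:
        if config.get_verbosity("subtests") == 0:  # hypothetical verbosity type
            return "subtests passed", "", ""  # counted, but no output emitted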
A few thoughts on this:
- Since the main motivation appears to be adding a feature unittest already provides, IMO it makes sense to follow its behavior: no extra output if all subtests pass.
- CLI tools usually disable verbose output by default unless requested. Examples:
  - pytest -v ...
  - python3 -m unittest -v ...
  - PYTHONWARNINGS=always python3 foo.py
  - cp -v foo bar
The PR I made in Add --no-subtest-reports CLI opt (pytest-subtests#199) goes in the opposite direction: it adds --no-subtest-reports to suppress verbosity. I think that should not be replicated here.
- Subtests are mainly for context on failure. If all pass, knowing the exact execution is usually irrelevant. If you do care, it probably means it should be a separate test, not a subtest.
Just my 2 cents, and thanks to @nicoddemus for this very welcome feature.
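For comparison, this is the stdlib behavior being cited: with unittest's built-in subTest, a fully passing test prints just the usual "." and nothing per subtest.

import unittest

class T(unittest.TestCase):
    def test_values(self):
        # All subtests pass, so no per-subtest output is produced.
        for i in range(3):
            with self.subTest(i=i):
                self.assertLess(i, 10)

if __name__ == "__main__":
    unittest.main()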
Now the top-level test will fail with a message if it contains failed subtests but otherwise has no failed assertions of its own:
In addition, enable the plugin in `pytest/__init__.py` and `config/__init__.py`.
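A small illustration of the new behavior, using the subtests fixture API shown elsewhere in this PR:

def test_numbers(subtests):
    for i in range(3):
        with subtests.test(msg="check", i=i):
            # i == 1 is reported as a failed subtest, and test_numbers itself
            # is now reported FAILED even though its own body raises nothing.
            assert i != 1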
r"test_skip_with_failure.py::T::test_foo \[custom message\] \(i=0\) SUBSKIPPED" | ||
r" \(skip subtest i=0\) .*", |
Nit: I think it would be better to suppress the lint/formatter complaint here than to break the line, as it's less readable this way.
* Each subtest is reported with the ``,`` character.
* Subtests are reported first and the "top-level" test is reported at the end on its own.
* Subtest failures are reported as ``SUBFAIL``.
Suggested change:
- * Subtest failures are reported as ``SUBFAIL``.
+ * Subtest failures are reported as ``SUBFAILED``.
def test(subtests):
    for i in range(5):
        with subtests.test(msg="custom message", i=i):
Per comment above
Suggested change:
- with subtests.test(msg="custom message", i=i):
+ with subtests.test("custom message", i=i):
# capsys or capfd are active, subtest should not capture.
capman = request.config.pluginmanager.getplugin("capturemanager")
capture_fixture_active = getattr(capman, "_capture_fixture", None)

if option == "sys" and not capture_fixture_active:
    fixture = CaptureFixture(SysCapture, request, _ispytest=True)
elif option == "fd" and not capture_fixture_active:
    fixture = CaptureFixture(FDCapture, request, _ispytest=True)
else:
    fixture = None
To me the code is clearer written like this:
Suggested change:
- # capsys or capfd are active, subtest should not capture.
- capman = request.config.pluginmanager.getplugin("capturemanager")
- capture_fixture_active = getattr(capman, "_capture_fixture", None)
- if option == "sys" and not capture_fixture_active:
-     fixture = CaptureFixture(SysCapture, request, _ispytest=True)
- elif option == "fd" and not capture_fixture_active:
-     fixture = CaptureFixture(FDCapture, request, _ispytest=True)
- else:
-     fixture = None
+ capman = request.config.pluginmanager.getplugin("capturemanager")
+ if getattr(capman, "_capture_fixture", None):
+     # capsys or capfd are active, subtest should not capture.
+     fixture = None
+ elif option == "sys":
+     fixture = CaptureFixture(SysCapture, request, _ispytest=True)
+ elif option == "fd":
+     fixture = CaptureFixture(FDCapture, request, _ispytest=True)
+ else:
+     fixture = None
@dataclasses.dataclass()
class SubtestsReporterPlugin:
    NAME: ClassVar[str] = "subtests-reporter"
Suggested change:
- NAME: ClassVar[str] = "subtests-reporter"
+ NAME: ClassVar = "subtests-reporter"
def pytest_configure(config: Config) -> None:
    config.pluginmanager.register(SubtestsReporterPlugin(), SubtestsReporterPlugin.NAME)
Why is this a separate plugin? It seems like it can be on the top-level plugin, using config.stash for contains_failed_subtests.
using config.stash for contains_failed_subtests.
Would actually be better to use item.stash instead of a dict on config keyed by nodeid.
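A sketch of the suggested item.stash approach; the key name and helper are hypothetical, the StashKey/Stash API is pytest's real one:

import pytest

failed_subtests_key = pytest.StashKey[int]()

def record_failed_subtest(item: pytest.Item) -> None:
    # Count failures on the item itself rather than in a config-level
    # dict keyed by nodeid.
    item.stash[failed_subtests_key] = item.stash.get(failed_subtests_key, 0) + 1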
# Top-level test, fail it if it contains failed subtests and it has passed.
if (
    report.passed
    and (count := self.contains_failed_subtests.get(report.nodeid, 0)) > 0
It's a defaultdict, so the get isn't needed.
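For illustration, indexing a defaultdict(int) already yields 0 for missing keys:

from collections import defaultdict

counts: defaultdict[str, int] = defaultdict(int)
assert counts["never-seen"] == 0  # missing key returns 0 (and inserts it)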
**Support for subtests** has been added.

:ref:`subtests <subtests>` are an alternative to parametrization, useful in situations where the parametrization values are not all known at collection time.
Perhaps we should mention that it integrates/replaces pytest-subtests, with some differences?
A few more comments.
try:
    if exc_val is not None:
        exc_info = ExceptionInfo.from_exception(exc_val)
    else:
        exc_info = None
finally:
    self._exit_stack.close()
I don't think ExceptionInfo.from_exception is something that can fail, so is there a reason for wrapping it in try/finally?
    item=self.request.node, call=call_info
)
sub_report = SubtestReport._from_test_report(
    report, SubtestContext(msg=self.msg, kwargs=self.kwargs.copy())
Is the kwargs.copy() necessary? SubtestContext uses Mapping (so promises not to mutate), and this function "owns" self.kwargs and doesn't mutate it, so it should be OK not to copy.
self._captured_output.update_report(sub_report)
self._captured_logs.update_report(sub_report)
Take it or leave it: I prefer to reduce mutation when possible, so instead of update_report I would pass these two objects to SubtestReport._from_test_report and do it directly there. Since removing update_report will mess up NullCapturedLogs, I'd replace it with None and an if check. Slightly less nice, but pretty ordinary.
self._captured_logs.update_report(sub_report)

with self.suspend_capture_ctx():
    self.ihook.pytest_runtest_logreport(report=sub_report)
An obscure bug: the logging plugin's "live logging" support has stateful calls like self.log_cli_handler.set_when("logreport"). With subtests, it ends up receiving a pytest_runtest_logreport while the call phase of the parent test is still ongoing, which messes up the headers ("live log logreport" instead of "live log call"):
import logging

def test(subtests):
    logging.error('START')
    with subtests.test():
        logging.error('FIRST')
        assert False
    logging.error('MIDDLE')
    with subtests.test():
        logging.error('SECOND')
        assert False
    logging.error('END')
    assert False
z.py::test
------------------------------------ live log call ------------------------------------
ERROR root:z.py:6 START
ERROR root:z.py:9 FIRST
(<subtest>) SUBFAILED [100%]
--------------------------------- live log logreport ----------------------------------
ERROR root:z.py:12 MIDDLE
ERROR root:z.py:15 SECOND
z.py::test (<subtest>) SUBFAILED [100%]
--------------------------------- live log logreport ----------------------------------
ERROR root:z.py:18 END
z.py::test FAILED [100%]
Dunno how much to bother with this...
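One possible mitigation, sketched under the assumption that the subtest code can reach the logging plugin's live-log handler; set_when() is the stateful call quoted above, and resetting it to "call" is a guess:

def emit_subtest_report(self, sub_report) -> None:
    with self.suspend_capture_ctx():
        self.ihook.pytest_runtest_logreport(report=sub_report)
    logging_plugin = self.request.config.pluginmanager.getplugin("logging-plugin")
    handler = getattr(logging_plugin, "log_cli_handler", None)
    if handler is not None:
        # The parent test's call phase is still running, so restore the header.
        handler.set_when("call")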
if exc_val is not None:
    if self.request.session.shouldfail:
        return False
return True
There's a complication here: this misses the reraise handling that CallInfo.from_call has:

reraise: tuple[type[BaseException], ...] = (Exit,)
if not item.config.getoption("usepdb", False):
    reraise += (KeyboardInterrupt,)
So subtests.test() ends up swallowing pytest.exit() and Ctrl-C. Can test it with:

import pytest

def test(subtests):
    with subtests.test():
        pytest.exit()  # To test exit
        # import time; time.sleep(5)  # To test Ctrl-C
    print('AFTER')

def test2(): pass
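A sketch of mirroring that reraise handling inside the subtest context manager; the helper name is hypothetical, and Exit is the exception pytest.exit() raises:

from _pytest.outcomes import Exit

def should_reraise(exc_val: BaseException, usepdb: bool) -> bool:
    # Session-stopping exceptions must propagate out of the subtest
    # instead of being recorded as a subtest failure.
    reraise: tuple[type[BaseException], ...] = (Exit,)
    if not usepdb:
        reraise += (KeyboardInterrupt,)
    return isinstance(exc_val, reraise)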
There's also a question of pytest.skip() behavior. Currently it skips the subtest. Maybe that's what we want? Didn't think about it too much.
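For reference, the current behavior described above looks like this: only the enclosing subtest is skipped (reported SUBSKIPPED, per the test earlier in this PR) and the rest of the test still runs.

import pytest

def test(subtests):
    with subtests.test("maybe"):
        pytest.skip("skip just this subtest")
    print('AFTER')  # still executed after the skipped subtest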
handler.setFormatter(logging_plugin.formatter)

captured_logs = CapturedLogs(handler)
with catching_logs(handler):
Maybe this should respect --log-level?

Suggested change:
- with catching_logs(handler):
+ with catching_logs(handler, level=logging_plugin.log_level):
In pytest-dev/pytest-subtests#44 I saw something I didn't think about: nested subtests, e.g.

import pytest

def test(subtests):
    with subtests.test("a"):
        with subtests.test("b"):
            1 / 0
        1 / 0

unittest doesn't seem to do anything special with them. The only thing I can think of is to automatically nest the subtest name, e.g.
This PR copies the files from pytest-subtests and performs minimal integration. I'm opening this to gauge whether everyone is on board with integrating this feature into the core.
Why?

Pros

- subtests is a standard unittest feature, so it makes sense for pytest to support it as well.

Cons
TODO ✅
If everyone is on board, I will take the time this week to polish it and get it ready to merge ASAP:
Related