
Conversation

@nicoddemus
Member

@nicoddemus commented Sep 20, 2025

This PR copies the files from pytest-subtests and performs minimal integration.
I'm opening this to gauge whether everyone is on board with integrating this feature into the core.

Why?

Pros

  • subtests is a standard unittest feature, so it makes sense for pytest to support it as well.
  • Provides a simple alternative to parametrization.
  • Adds the ability to generate new test cases at runtime during test execution, which is not possible with parametrization (see the sketch after this list).
  • While it can exist as an external plugin, it requires many hacks, and better report integration is not easily achievable without core integration (off the top of my head: issues with terminal reporting, last-failed and stepwise support, among others).
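
For illustration, a minimal sketch of how the fixture is used (the subtests fixture and its test() method are as in pytest-subtests; the test body itself is made up):

def test_powers_of_two(subtests):
    # Parametrization needs all values at collection time; subtests can
    # generate cases while the test is running.
    for i in range(5):
        with subtests.test(msg="power of two", i=i):
            assert 2**i < 10  # the i=4 subtest fails, the others pass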

Cons

  • Adds another maintenance burden to the core.

TODO ✅

If everyone is on board, I will take the time this week to polish it and get it ready to merge ASAP:

  • Clean up the implementation: currently it relies on monkey-patching, which should no longer be needed since we can modify the core code directly.
  • Documentation: The feature should be documented as experimental -- meaning that while the feature itself is solid, we are still working out details such as how to best report subtest failures, integration with other plugins, etc.
  • Fix lint and testing failures (obviously).

Related

@nicoddemus changed the title from "[DRAFT] Integrate pytest subtests" to "[DRAFT] Integrate pytest-subtests" on Sep 20, 2025
@Pierre-Sassoulas
Member

I agree that this feature should be in pytest core because it's a feature in unittest and pytest should aim to be a drop-in replacement for unittest (plus the original issue has 40 👍 and no 👎 at the time of writing: overwhelming popular support in my book).

@RonnyPfannschmidt
Member

I recall we need to fix marking the owning test case as failed if one of its subtests fails.

@RonnyPfannschmidt
Member

But yeah, I want to see this in.

@webknjaz
Member

Sounds reasonable

@nicoddemus force-pushed the integrate-pytest-subtests branch from b43ab38 to 97ee032 on September 22, 2025
@nicoddemus force-pushed the integrate-pytest-subtests branch from 97ee032 to 6b5831f on September 26, 2025
@psf-chronographer bot added the "bot:chronographer:provided (automation) changelog entry is part of PR" label on Sep 26, 2025
@nicoddemus force-pushed the integrate-pytest-subtests branch 2 times, most recently from 5f56d81 to c93c0e0 on September 26, 2025
@nicoddemus marked this pull request as ready for review on September 26, 2025
@nicoddemus changed the title from "[DRAFT] Integrate pytest-subtests" to "Integrate pytest-subtests" on Sep 26, 2025
@nicoddemus
Member Author

Ready for an initial review, folks.

@nicoddemus force-pushed the integrate-pytest-subtests branch from c93c0e0 to b569c93 on September 29, 2025
Member

@bluetech left a comment


Nice to see this happening.

I ran out of time for the review today, so I didn't really get to the implementation parts, but I already have some comments, so I'm submitting a partial review.

@nicoddemus force-pushed the integrate-pytest-subtests branch from b569c93 to 506c75b on October 11, 2025
@nicoddemus requested a review from bluetech on October 11, 2025
@nicoddemus force-pushed the integrate-pytest-subtests branch 2 times, most recently from 42f910d to ec3144f on October 11, 2025
@RonnyPfannschmidt
Member

we might want to bikeshed an API around sub-sections, subtests and parameter loops a little - not necessarily for doing right now - but for setting up a roadmap

@nicoddemus
Member Author

I was hoping to get this feature into 9.0, if possible.

@RonnyPfannschmidt
Member

we absolutely want this in 9.0

we shouldn't change the API, to ensure compat with existing users
we should plan for making some things nicer

we should try to ensure lastfailed does rerun tests with failed subtests before releasing 9.0

@RonnyPfannschmidt added this to the 9.0 milestone on Oct 12, 2025
@nicoddemus
Member Author

we should try to ensure lastfailed does rerun tests with failed subtests before releasing 9.0

I will add a test, good call.
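
For reference, a sketch of the kind of test meant here, using pytest's own pytester fixture (the file contents and assertions are illustrative, not the actual test added in the PR):

def test_lastfailed_reruns_tests_with_failed_subtests(pytester):
    pytester.makepyfile(
        """
        def test_foo(subtests):
            with subtests.test():
                assert False
        """
    )
    pytester.runpytest()
    # --lf should select test_foo again, since one of its subtests failed.
    result = pytester.runpytest("--lf", "--collect-only", "-q")
    result.stdout.fnmatch_lines(["*test_foo*"])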

Member

@bluetech left a comment


Unfortunately I ran out of time to do a full review again, but I left some comments.

I also noticed two things while I was testing it:

If some or all subtests fail, the parent test is still reported as PASSED; is this intentional?

For some reason the errors are shown as "Captured log call", which seems wrong as there are no log calls in my test.

Regarding the name SUBPASS etc., I saw that a regular pass is PASSED, so I wonder if it shouldn't be SUBPASSED etc.? (I can check the code later.)

from typing_extensions import Self


def pytest_addoption(parser: Parser) -> None:
Member


  • Can you explain the rationale for these options -- did someone ask for them in pytest-subtests?

  • I admit the difference between the options is a bit obscure; I wonder if they can't be folded into a single option?

  • I wonder if this shouldn't be an ini option rather than a CLI option? I imagine someone would want to set this if it's too noisy, in which case it's more of a project setting. But maybe it's more of a user-preference thing, in which case a CLI flag makes sense...

  • Maybe it should be configurable at the test level, using the fixture itself? (Probably not.)

  • Should we document the options in the tutorial?

Member Author


These were external contributions: pytest-dev/pytest-subtests#81 and pytest-dev/pytest-subtests#198.

I'm OK with folding them somehow into a single option, perhaps an option to disable all subtest progress output unless it is a failure?

Member


I don't have experience with subtests to say myself. Generally, fewer flags are better :)

pytest-dev/pytest-subtests#81
This allows the native pytest column calculations to not be disrupted and minimizes unneeded output for large CI systems.

Is the "disruption" still a thing?

Otherwise it seems to want to minimize output. Though for the shortletter it's a bit dubious, I must say (it is quite "short"). I suspect what they mean is "we semantically don't view subtests as something that should be reported".

pytest-dev/pytest-subtests#198
I find it extremely annoying that each subtest success is printed to stdout

Wants to minimize output.

an option to disable all subtest progress unless it is failure?

IIUC this is mostly what they want.

Member Author


Perhaps we should leverage the new verbosity_* options:

https://docs.pytest.org/en/stable/reference/reference.html#confval-verbosity_assertions

We could add a verbosity_subtests option which defaults to showing only subtest failures, with higher verbosity also showing passed subtests.
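
Sketched as an ini option (verbosity_subtests is only a proposed name here, mirroring the existing verbosity_assertions):

[pytest]
# Hypothetical setting following the verbosity_* pattern: the default level
# would report only subtest failures; higher values would also report
# passed subtests.
verbosity_subtests = 2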


@giampaolo Oct 20, 2025


A few thoughts on this:

  1. Since the main motivation appears to be adding a feature unittest already provides, IMO it makes sense to follow its behavior: no extra output if all subtests pass.

  2. CLI tools usually disable verbose output by default unless requested. Examples:

  • pytest -v ...

  • python3 -m unittest -v ...

  • PYTHONWARNINGS=always python3 foo.py

  • cp -v foo bar

    The PR I made in Add --no-subtest-reports CLI opt pytest-subtests#199 goes in the opposite direction: it adds --no-subtest-reports to suppress verbosity. I think that should not be replicated here.

  3. Subtests are mainly for context on failure. If all pass, knowing the exact execution is usually irrelevant. If you do care, it probably means it should be a separate test, not a subtest.

Just my 2 cents, and thanks to @nicoddemus for this very welcome feature.

@nicoddemus
Member Author

Now the top-level test will fail with a message in case it contains failed subtests but otherwise does not contain failed assertions of its own:

__________________________________ test_foo ___________________________________
Contains 2 failed subtests

In addition, enable the plugin in `pytest/__init__.py` and `config/__init__.py`.
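
For reference, a test shaped like the one below would produce the output above (a reconstruction, not a file from the PR):

def test_foo(subtests):
    # Two failing subtests, but no failing assertion in the test body
    # itself: the parent test now fails with "Contains 2 failed subtests".
    for i in range(2):
        with subtests.test(i=i):
            assert False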
@nicoddemus force-pushed the integrate-pytest-subtests branch from d8c7655 to 1c31ff1 on October 18, 2025
Comment on lines +566 to +567
r"test_skip_with_failure.py::T::test_foo \[custom message\] \(i=0\) SUBSKIPPED"
r" \(skip subtest i=0\) .*",
Member


Nit: I think it would be better to suppress the lint/formatter complaint here than to break the line, as it's less readable this way.


* Each subtest is reported with the ``,`` character.
* Subtests are reported first and the "top-level" test is reported at the end on its own.
* Subtest failures are reported as ``SUBFAIL``.
Member


Suggested change
* Subtest failures are reported as ``SUBFAIL``.
* Subtest failures are reported as ``SUBFAILED``.



def test(subtests):
    for i in range(5):
        with subtests.test(msg="custom message", i=i):
Member


Per comment above

Suggested change
with subtests.test(msg="custom message", i=i):
with subtests.test("custom message", i=i):

Comment on lines +276 to +285
# capsys or capfd are active, subtest should not capture.
capman = request.config.pluginmanager.getplugin("capturemanager")
capture_fixture_active = getattr(capman, "_capture_fixture", None)

if option == "sys" and not capture_fixture_active:
    fixture = CaptureFixture(SysCapture, request, _ispytest=True)
elif option == "fd" and not capture_fixture_active:
    fixture = CaptureFixture(FDCapture, request, _ispytest=True)
else:
    fixture = None
Member


To me the code is clearer written like this:

Suggested change
# capsys or capfd are active, subtest should not capture.
capman = request.config.pluginmanager.getplugin("capturemanager")
capture_fixture_active = getattr(capman, "_capture_fixture", None)
if option == "sys" and not capture_fixture_active:
    fixture = CaptureFixture(SysCapture, request, _ispytest=True)
elif option == "fd" and not capture_fixture_active:
    fixture = CaptureFixture(FDCapture, request, _ispytest=True)
else:
    fixture = None
capman = request.config.pluginmanager.getplugin("capturemanager")
if getattr(capman, "_capture_fixture", None):
    # capsys or capfd are active, subtest should not capture.
    fixture = None
elif option == "sys":
    fixture = CaptureFixture(SysCapture, request, _ispytest=True)
elif option == "fd":
    fixture = CaptureFixture(FDCapture, request, _ispytest=True)
else:
    fixture = None


@dataclasses.dataclass()
class SubtestsReporterPlugin:
    NAME: ClassVar[str] = "subtests-reporter"
Member


Suggested change
NAME: ClassVar[str] = "subtests-reporter"
NAME: ClassVar = "subtests-reporter"



def pytest_configure(config: Config) -> None:
config.pluginmanager.register(SubtestsReporterPlugin(), SubtestsReporterPlugin.NAME)
Member


Why is this a separate plugin? It seems like it could be on the top-level plugin, using config.stash for contains_failed_subtests.

Member


using config.stash for contains_failed_subtests.

Would actually be better to use item.stash instead of a dict on config keyed by nodeid.
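
A sketch of the item.stash approach (the key and helper names are made up for illustration; pytest.StashKey and Item.stash are existing pytest API):

import pytest

# Hypothetical stash key holding the number of failed subtests of an item.
failed_subtests_key = pytest.StashKey[int]()

def record_failed_subtest(item: pytest.Item) -> None:
    # Stash.get returns the default when the key is absent, so no
    # config-level dict keyed by nodeid is needed.
    item.stash[failed_subtests_key] = item.stash.get(failed_subtests_key, 0) + 1

def failed_subtest_count(item: pytest.Item) -> int:
    return item.stash.get(failed_subtests_key, 0)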

# Top-level test: fail it if it contains failed subtests and it has passed.
if (
    report.passed
    and (count := self.contains_failed_subtests.get(report.nodeid, 0)) > 0
Member


It's a defaultdict so the get isn't needed.
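
That is (a standalone illustration of defaultdict behavior, not the PR's code):

from collections import defaultdict

contains_failed_subtests: defaultdict[str, int] = defaultdict(int)

# Plain indexing already yields 0 for a missing key (note it also inserts
# the key), so .get(nodeid, 0) is redundant on a defaultdict(int):
assert contains_failed_subtests["tests/test_foo.py::test_foo"] == 0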

**Support for subtests** has been added.

:ref:`subtests <subtests>` are an alternative to parametrization, useful in situations where the parametrization values are not all known at collection time.

Member


Perhaps this should mention that it integrates/replaces pytest-subtests, with some differences?

Member

@bluetech left a comment


A few more comments.

Comment on lines +224 to +230
try:
    if exc_val is not None:
        exc_info = ExceptionInfo.from_exception(exc_val)
    else:
        exc_info = None
finally:
    self._exit_stack.close()
Member


I don't think ExceptionInfo.from_exception is something that can fail, so is there a reason for wrapping it in try/finally?

    item=self.request.node, call=call_info
)
sub_report = SubtestReport._from_test_report(
    report, SubtestContext(msg=self.msg, kwargs=self.kwargs.copy())
Member


Is the kwargs.copy() necessary? SubtestContext uses Mapping (so it promises not to mutate), and this function "owns" self.kwargs and doesn't mutate it, so it should be OK not to copy.

Comment on lines +255 to +256
self._captured_output.update_report(sub_report)
self._captured_logs.update_report(sub_report)
Member


Take it or leave it: I prefer to reduce mutation when possible, so instead of update_report I would pass these two objects to SubtestReport._from_test_report and do it directly there. Since removing update_report will mess up NullCapturedLogs, I'd replace it with None and an if check. Slightly less nice, but pretty ordinary.




self._captured_logs.update_report(sub_report)

with self.suspend_capture_ctx():
    self.ihook.pytest_runtest_logreport(report=sub_report)
Member


An obscure bug: the logging plugin's "live logging" support has stateful calls like self.log_cli_handler.set_when("logreport"). With subtests, it ends up getting a pytest_runtest_logreport while the call phase of the parent test is still ongoing, which ends up messing up the headers ("live log logreport" instead of "live log call"):

import logging

def test(subtests):
    logging.error('START')

    with subtests.test():
        logging.error('FIRST')
        assert False

    logging.error('MIDDLE')

    with subtests.test():
        logging.error('SECOND')
        assert False

    logging.error('END')

    assert False
z.py::test 
------------------------------------ live log call ------------------------------------
ERROR    root:z.py:6 START
ERROR    root:z.py:9 FIRST
(<subtest>) SUBFAILED                                                           [100%]
--------------------------------- live log logreport ----------------------------------
ERROR    root:z.py:12 MIDDLE
ERROR    root:z.py:15 SECOND

z.py::test (<subtest>) SUBFAILED                                                [100%]
--------------------------------- live log logreport ----------------------------------
ERROR    root:z.py:18 END

z.py::test FAILED                                                               [100%]

Dunno how much to bother with this...

if exc_val is not None:
    if self.request.session.shouldfail:
        return False
return True
Member


There's a complication here: this misses the reraise handling that CallInfo.from_call has:

    reraise: tuple[type[BaseException], ...] = (Exit,)
    if not item.config.getoption("usepdb", False):
        reraise += (KeyboardInterrupt,)

So subtests.test() ends up swallowing pytest.exit() and Ctrl-C. You can test it with:

import pytest

def test(subtests):
    with subtests.test():
        pytest.exit() # To test exit
        # import time; time.sleep(5) # To test Ctrl-C
    print('AFTER')

def test2(): pass

There's also a question of pytest.skip() behavior. Currently it skips the subtest. Maybe that's what we want? I didn't think about it too much.
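
A sketch of the missing handling, mirroring CallInfo.from_call (a standalone helper for illustration; Exit lives in _pytest.outcomes, and how this would be wired into __exit__ is assumed):

from _pytest.outcomes import Exit

def should_propagate(exc: BaseException, usepdb: bool) -> bool:
    # Mirror CallInfo.from_call: never swallow pytest.exit(), and only
    # swallow KeyboardInterrupt when --pdb is in use.
    reraise: tuple[type[BaseException], ...] = (Exit,)
    if not usepdb:
        reraise += (KeyboardInterrupt,)
    return isinstance(exc, reraise)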

handler.setFormatter(logging_plugin.formatter)

captured_logs = CapturedLogs(handler)
with catching_logs(handler):
Member


Maybe it should respect --log-level?

Suggested change
with catching_logs(handler):
with catching_logs(handler, level=logging_plugin.log_level):

@bluetech
Member

bluetech commented Oct 20, 2025

In pytest-dev/pytest-subtests#44 I saw something I hadn't thought about: nested subtests, e.g.

import pytest

def test(subtests):
    with subtests.test("a"):
        with subtests.test("b"):
            1 / 0
        1 / 0

unittest doesn't seem to do anything special with them. The only thing I can think of is to automatically nest the subtest names, e.g. a::b or some such. Not sure if it's desirable.
