(I recommend copy-pasting the XML content from this issue into files and then opening those in a web browser; the tag nesting will be much more apparent.)
Take this code:

```python
import pytest

@pytest.mark.parametrize('n', [0,2,4,0,3,6,0,5,10])
class TestClass:
    def test_func(self, n):
        print(n)
        self.run_single(n)

    def run_single(self, n):
        if n == 2 or n == 10:
            pytest.skip()

        assert n%2 == 0, 'n is odd'
```
When run with `pytest pytest-regular.py --junitxml=out-regular.xml`, the XML file it produces is the following:
```xml
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
<testsuite name="pytest" errors="0" failures="2" skipped="2" tests="9" time="0.041" timestamp="2023-02-22T13:10:30.982095" hostname="stefano-XPS">
  <testcase classname="pytest-regular.TestClass" name="test_func[00]" time="0.001" />
  <testcase classname="pytest-regular.TestClass" name="test_func[2]" time="0.000">
    <skipped type="pytest.skip" message="Skipped">/home/stefano/git/neo4j-experiments/pytest-regular.py:11: Skipped</skipped>
  </testcase>
  <testcase classname="pytest-regular.TestClass" name="test_func[4]" time="0.000" />
  <testcase classname="pytest-regular.TestClass" name="test_func[01]" time="0.000" />
  <testcase classname="pytest-regular.TestClass" name="test_func[3]" time="0.001">
    <failure message="AssertionError: n is odd assert (3 % 2) == 0">self = <pytest-regular.TestClass object at 0x7f7770985300>, n = 3

    def test_func(self, n):
        print(n)
>       self.run_single(n)

pytest-regular.py:7:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <pytest-regular.TestClass object at 0x7f7770985300>, n = 3

    def run_single(self, n):
        if n == 2 or n == 10:
            pytest.skip()
>       assert n%2 == 0, 'n is odd'
E       AssertionError: n is odd
E       assert (3 % 2) == 0

pytest-regular.py:13: AssertionError</failure>
  </testcase>
  <testcase classname="pytest-regular.TestClass" name="test_func[6]" time="0.000" />
  <testcase classname="pytest-regular.TestClass" name="test_func[02]" time="0.000" />
  <testcase classname="pytest-regular.TestClass" name="test_func[5]" time="0.001">
    <failure message="AssertionError: n is odd assert (5 % 2) == 0">self = <pytest-regular.TestClass object at 0x7f77709854b0>, n = 5

    def test_func(self, n):
        print(n)
>       self.run_single(n)

pytest-regular.py:7:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <pytest-regular.TestClass object at 0x7f77709854b0>, n = 5

    def run_single(self, n):
        if n == 2 or n == 10:
            pytest.skip()
>       assert n%2 == 0, 'n is odd'
E       AssertionError: n is odd
E       assert (5 % 2) == 0

pytest-regular.py:13: AssertionError</failure>
  </testcase>
  <testcase classname="pytest-regular.TestClass" name="test_func[10]" time="0.000">
    <skipped type="pytest.skip" message="Skipped">/home/stefano/git/neo4j-experiments/pytest-regular.py:11: Skipped</skipped>
  </testcase>
</testsuite>
</testsuites>
```
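Each of the nine parametrized cases gets its own `testcase` tag here (the three duplicate `0` parameters are disambiguated as `test_func[00]`, `test_func[01]`, and `test_func[02]`). To make the comparison below concrete, here is a minimal sketch of the kind of tally a counting CI tool might compute from this file, using only the Python standard library; the naive pass/fail/skip classification is my assumption about how such tools typically count, not any specific tool's logic:

```python
import xml.etree.ElementTree as ET

def summarize(path):
    # Count <testcase> tags the way a naive CI tool might:
    # a <failure> child means failed, a <skipped> child means skipped,
    # and no child element at all means passed.
    suite = ET.parse(path).getroot().find('testsuite')
    counts = {'passed': 0, 'failed': 0, 'skipped': 0}
    for case in suite.findall('testcase'):
        if case.find('failure') is not None:
            counts['failed'] += 1
        elif case.find('skipped') is not None:
            counts['skipped'] += 1
        else:
            counts['passed'] += 1
    return len(suite.findall('testcase')), counts

print(summarize('out-regular.xml'))
# -> (9, {'passed': 5, 'failed': 2, 'skipped': 2})
```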
I tweaked that code to run the exact same test cases, but split into 3 tests of 3 subtests each:
```python
import pytest

@pytest.mark.parametrize('start', [2,3,5])
class TestClass:
    def test_func(self, subtests, start):
        print(start)
        for multiplier in range(3):
            with subtests.test():
                n = start*multiplier
                print(n)
                self.run_single(n)

    def run_single(self, n):
        if n == 2 or n == 10:
            pytest.skip()

        assert n%2 == 0, 'n is odd'
```
Running this with `pytest pytest-subtest.py --junitxml=out-subtest.xml`, the resulting XML is:
```xml
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
<testsuite name="pytest" errors="0" failures="2" skipped="2" tests="12" time="0.041" timestamp="2023-02-22T13:10:24.166299" hostname="stefano-XPS">
  <testcase classname="pytest-subtest.TestClass" name="test_func[2]" time="0.007">
    <skipped type="pytest.skip" message="Skipped">/home/stefano/git/neo4j-experiments/pytest-subtest.py:15: Skipped</skipped>
  </testcase>
  <testcase classname="pytest-subtest.TestClass" name="test_func[3]" time="0.017">
    <failure message="AssertionError: n is odd assert (3 % 2) == 0">self = <pytest-subtest.TestClass object at 0x7f8123d1e530>
subtests = SubTests(ihook=<_pytest.config.compat.PathAwareHookProxy object at 0x7f8124cf4190>, suspend_capture_ctx=<bound method ...te='started' _in_suspended=False> _capture_fixture=None>>, request=<SubRequest 'subtests' for <Function test_func[3]>>)
start = 3

    def test_func(self, subtests, start):
        print(start)
        for multiplier in range(3):
            with subtests.test():
                n = start*multiplier
                print(n)
>               self.run_single(n)

pytest-subtest.py:11:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <pytest-subtest.TestClass object at 0x7f8123d1e530>, n = 3

    def run_single(self, n):
        if n == 2 or n == 10:
            pytest.skip()
>       assert n%2 == 0, 'n is odd'
E       AssertionError: n is odd
E       assert (3 % 2) == 0

pytest-subtest.py:17: AssertionError</failure>
  </testcase>
  <testcase classname="pytest-subtest.TestClass" name="test_func[5]" time="0.004">
    <failure message="AssertionError: n is odd assert (5 % 2) == 0">self = <pytest-subtest.TestClass object at 0x7f8123d1e410>
subtests = SubTests(ihook=<_pytest.config.compat.PathAwareHookProxy object at 0x7f8124cf4190>, suspend_capture_ctx=<bound method ...te='started' _in_suspended=False> _capture_fixture=None>>, request=<SubRequest 'subtests' for <Function test_func[5]>>)
start = 5

    def test_func(self, subtests, start):
        print(start)
        for multiplier in range(3):
            with subtests.test():
                n = start*multiplier
                print(n)
>               self.run_single(n)

pytest-subtest.py:11:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <pytest-subtest.TestClass object at 0x7f8123d1e410>, n = 5

    def run_single(self, n):
        if n == 2 or n == 10:
            pytest.skip()
>       assert n%2 == 0, 'n is odd'
E       AssertionError: n is odd
E       assert (5 % 2) == 0

pytest-subtest.py:17: AssertionError</failure>
    <skipped type="pytest.skip" message="Skipped">/home/stefano/git/neo4j-experiments/pytest-subtest.py:15: Skipped</skipped>
  </testcase>
</testsuite>
</testsuites>
```
In the subtest version, the XML lacks any information about passed subtests. Failures and skips do show up, as tags nested inside a `testcase`, but while the non-subtests version lists every test in its own `testcase` tag, the subtests version only lists the tests and subtests with a special status. This can throw off CI tools that count `testcase` tags. We have a few tens of tests that each spawn hundreds of subtests (in a scenario that makes sense, unlike the contrived example here), and
- we get a full test fail/skip if one subtest fails/is skipped
- we get a single pass for a test if all of its subtests pass.
If we run 3 tests with 3 subtests each, and 2 subtests are skipped and one fails, my expectation would be for the CI to report 1 failure, 2 skips, and 6 passes (or 9 passes if we also count the 3 tests themselves, I don't care). Instead, depending a bit on where the failures/skips land, I can now get 1 failure, 2 skips, 0 passes.
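For contrast, applying the same `summarize()` sketch from above to the subtest run's file (under the same naive classification, with a `failure` child taking precedence over a `skipped` one) gives:

```python
print(summarize('out-subtest.xml'))
# -> (3, {'passed': 0, 'failed': 2, 'skipped': 1})
```

Note also that the `testsuite` element still claims `tests="12"`, which no longer matches the three `testcase` tags it actually contains.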
Is there scope for improving on this?