Add --no-subtest-reports CLI opt #199
Conversation
Great work @giampaolo!
Could you please also add a test case that uses the option and ensures the subtests do not show up in the output? The output should still show subtests that fail or are skipped, regardless of the option. Feel free to add more than one test.
Thanks!
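For reference, a minimal sketch of what such a check could look like with pytest's `pytester` fixture. The option spelling (`--no-subtests-reports`), the `(i=N) SUBFAIL` output format, and the `SUBPASS` verbose label are assumptions taken from the draft later in this thread and from pytest-subtests' verbose output; the test name is hypothetical.

```python
import pytest


def test_failures_still_reported(pytester: pytest.Pytester):
    # Hypothetical sketch: a module whose subtests both pass (odd i)
    # and fail (even i).
    pytester.makepyfile(
        """
        import pytest

        def test_foo(subtests):
            for i in range(5):
                with subtests.test(msg="custom", i=i):
                    if i % 2 == 0:
                        pytest.fail("even number")
        """
    )
    result = pytester.runpytest("-v", "--no-subtests-reports")
    # Failing subtests must still show up even with the option...
    result.stdout.fnmatch_lines(
        ["*(i=0)*SUBFAIL*", "*(i=2)*SUBFAIL*", "*(i=4)*SUBFAIL*"]
    )
    # ...while passing subtests should be suppressed.
    result.stdout.no_fnmatch_line("*SUBPASS*")
```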
I've tried by installing my local branch with […]. The way I run the new test is […] and the draft of the new test looks like:

```diff
diff --git a/tests/test_subtests.py b/tests/test_subtests.py
index bd5cef9..307953d 100644
--- a/tests/test_subtests.py
+++ b/tests/test_subtests.py
@@ -156,6 +156,31 @@ class TestFixture:
         expected_lines += ["* 1 passed *"]
         result.stdout.fnmatch_lines(expected_lines)
 
+    def test_xxx(self, pytester: pytest.Pytester, mode: Literal["normal", "xdist"]):
+        pytester.makepyfile(
+            """
+            import pytest
+
+            def test_foo(subtests):
+                for i in range(5):
+                    with subtests.test(msg="custom", i=i):
+                        if i % 2 == 0:
+                            pytest.fail('even number')
+            """
+        )
+        expected_lines = [
+            "[custom] (i=0) SUBFAIL test_xxx.py::test_foo - Failed: even number",
+            "[custom] (i=2) SUBFAIL test_xxx.py::test_foo - Failed: even number",
+            "[custom] (i=4) SUBFAIL test_xxx.py::test_foo - Failed: even number",
+        ]
+        # result = pytester.runpytest("-v")
+        # result.stdout.fnmatch_lines(expected_lines)
+
+        result = pytester.runpytest("--no-subtests-reports")
+        print(result.stdout.str())
+        # breakpoint()  # XXX
+        # result.stdout.fnmatch_lines(expected_lines)
+
 class TestSubTest:
     """
```
Good practice is to always use a virtual env:

```
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
pytest --tb=short --no-header -v -s tests/test_subtests.py::TestFixture::test_xxx
```

This way you never install anything in your system Python.
Sorry for the delay. I added a test case. Could you please check if it makes sense?
Hi @giampaolo, I will get to your PR this week!
Thanks! The problem with the test you added is that it still passed even without your modifications. Tests for a given piece of functionality should fail if you remove the code that implements it; otherwise they don't serve their purpose of preventing regressions. I took the liberty of improving it and pushing. 👍
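To illustrate the point: a regression-proof test has to assert the absence of the suppressed report lines rather than just print the output. A minimal sketch, again assuming the `SUBPASS` verbose label and the `--no-subtests-reports` spelling from the draft; the test name is hypothetical.

```python
import pytest


def test_passes_suppressed(pytester: pytest.Pytester):
    # Hypothetical sketch: only passing subtests, so every line the
    # option suppresses would otherwise be a SUBPASS report.
    pytester.makepyfile(
        """
        def test_foo(subtests):
            for i in range(3):
                with subtests.test(msg="custom", i=i):
                    pass
        """
    )
    # Baseline: without the option, passing subtests are reported.
    result = pytester.runpytest("-v")
    result.stdout.fnmatch_lines(["*(i=0)*SUBPASS*"])
    # With the option they must disappear. If the implementation is
    # removed, this assertion fails, so the test guards the feature.
    result = pytester.runpytest("-v", "--no-subtests-reports")
    result.stdout.no_fnmatch_line("*SUBPASS*")
```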
See #198.