
Write comprehensive utf self-tests #64

@aaaaalbert

Description


The existing unit tests for utf itself, especially the ones dealing with the various #pragma directives, are insufficient. They don't try combinations of pragmas, and they cannot really show that anything goes wrong, because they would still show FAIL in that case. For example,

#pragma out
print "This output is ignored, and that's okay."

is nice, but what if we wanted to test that #pragma out correctly lets exceptions propagate?

#pragma out
raise Exception("What is the correct result of this unit test?")
# It is FAIL!

Furthermore, are combinations of pragmas safe, and do they show the expected result? What if a #pragma repy is added to the mix? What if a specific restrictions file is given, and what if callargs are to be sent to the Repy program? Here is a degenerate example that PASSes although it should FAIL:

#pragma repy restrictions.default additionalarg
#pragma out
raise RepyException("I'm raising an unexpected exception. This test must FAIL!")

Testing:

$ python utf.py -f ut_utftests_pragma_out_pragma_repy_restrictionsfile_additionalarg_raise.py
Testing module: utftests
    Running: ut_utftests_pragma_out_pragma_repy_restrictionsfile_additionalarg_raise.py [ PASS ]

(Interestingly enough, it's the additional arg that makes this test pass erroneously. Any prefix of the presented #pragma repy line results in a (correct) FAIL.)

Problem 1 is that we get a PASS when we should see a FAIL (i.e., utf gets something wrong).

Problem 2 is that even if utf got it right and did show FAIL, this would be a counter-intuitive result for a test that actually behaves correctly.


A possible approach would be to write a unit test that internally constructs a command to run utf.py with arguments, and then parses the output of that run. Since there are lots of possible combinations of pragmas and (optional) arguments to them, the individual test cases should be generated programmatically, as sketched below.
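
A minimal sketch of that approach follows. The -f flag, the ut_<module>_<name>.py filename convention, the "[ PASS ]" output marker, and restrictions.default are taken from the examples above; the ut_selftest_generated naming, the choice of pragma combinations, and the expected verdicts are assumptions for illustration only and would need to be checked against utf's intended semantics.

import os
import subprocess
import sys

# Pragma headers to combine with an exception-raising body, paired with the
# verdict we expect utf to report. (Assumed expectations, to be refined.)
PRAGMA_HEADERS = [
    ("#pragma out\n", "FAIL"),
    ("#pragma repy restrictions.default\n#pragma out\n", "FAIL"),
    ("#pragma repy restrictions.default additionalarg\n#pragma out\n", "FAIL"),
]

BODY = 'raise Exception("This test must FAIL!")\n'

def run_one(index, header, expected_verdict):
    # utf discovers tests by filename, so follow the ut_<module>_<name>.py scheme.
    filename = "ut_selftest_generated_%d.py" % index
    with open(filename, "w") as f:
        f.write(header + BODY)
    try:
        # Run utf on just this generated file and capture everything it prints.
        proc = subprocess.run(
            [sys.executable, "utf.py", "-f", filename],
            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        output = proc.stdout.decode("utf-8", "replace")
    finally:
        os.remove(filename)
    verdict = "PASS" if "[ PASS ]" in output else "FAIL"
    ok = (verdict == expected_verdict)
    print("%s: expected %s, got %s -> %s" %
          (filename, expected_verdict, verdict, "ok" if ok else "MISMATCH"))
    return ok

if __name__ == "__main__":
    results = [run_one(i, header, expected)
               for i, (header, expected) in enumerate(PRAGMA_HEADERS)]
    sys.exit(0 if all(results) else 1)

Generating the headers programmatically (rather than hand-writing one ut_ file per combination) would make it feasible to cover all pragma combinations and their optional arguments.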
