Releases: apluslms/python-grader-utils

Release v4.1

15 Sep 14:40

Minor fixes:

  • Fix IOTester and validation bugs.
  • Add documentation for the IOTester, which was introduced in v4.0.

Release v4.0.1

26 Aug 13:37

Update the version number in setup.py, which was forgotten in the v4.0 release.

Release v4.0

26 Aug 13:36

Changes:

  • Add an rpyc remote grader that runs the grader and the student's
    submission in separate processes (see the first sketch after this list).
    Running everything in a single process is still possible, as before.
  • Add an input/output tester (IOTester).
  • Remove the deprecated configuration option remove_more_sentinel.
  • Add a timeout decorator for traditional unit tests, so that each test
    method can have its own timeout (see the second sketch after this list).
  • Improve the CSS styles.
  • Write points to a file when running with the rpyc remote grader
    (otherwise, points are printed to stdout as before).
  • Output the test running time and the total time in the grading feedback.
  • Remove irrelevant tracebacks from the feedback when possible.
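
A minimal sketch of the separate-process idea behind the rpyc remote grader, assuming an rpyc classic server has already been started in another process (rpyc ships a classic server script, rpyc_classic.py). This illustrates the general pattern only, not graderutils' actual implementation; the module name "submission" and the function "solve" are hypothetical.

    import rpyc

    # Connect to an rpyc classic server running in a separate process.
    conn = rpyc.classic.connect("localhost", port=18812)

    # Import the student's module inside the remote process; any crash or
    # runaway resource use happens there, not in the grader process.
    submission = conn.modules["submission"]  # hypothetical module name
    result = submission.solve(41)            # hypothetical student function
    print("Result computed in the other process:", result)

    conn.close()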
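
One common way to give each test method its own timeout is a SIGALRM-based decorator. The sketch below shows that general technique (Unix-only); it is not graderutils' actual decorator, and the name "timeout" is illustrative.

    import functools
    import signal
    import unittest

    def timeout(seconds):
        """Fail the decorated test method if it runs longer than `seconds`."""
        def decorator(test_method):
            @functools.wraps(test_method)
            def wrapper(self, *args, **kwargs):
                def on_alarm(signum, frame):
                    raise TimeoutError(f"timed out after {seconds} s")
                old_handler = signal.signal(signal.SIGALRM, on_alarm)
                signal.alarm(seconds)  # schedule SIGALRM
                try:
                    return test_method(self, *args, **kwargs)
                finally:
                    signal.alarm(0)  # cancel any pending alarm
                    signal.signal(signal.SIGALRM, old_handler)
            return wrapper
        return decorator

    class ExampleTest(unittest.TestCase):
        @timeout(5)  # this test method gets its own 5-second limit
        def test_finishes_in_time(self):
            self.assertEqual(sum(range(10)), 45)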

Release v3.5

11 May 08:56

Minor changes:

  • Fix an error during the python_import attrs check (validation in test_config.yaml).
  • Limit the maximum length of the stderr output so that the payload sent to A+ does not grow to hundreds of megabytes (the idea is sketched after this list).
  • Fix the colours of the points and status labels in the feedback.
  • Add default test result messages: "The test was a success!", "The test failed, reason:" and "An error occurred:".
  • Order coverage tests correctly in the feedback even when there are more than nine tests.
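
A minimal sketch of the truncation idea, assuming a simple character limit; the limit value and the helper name are illustrative, not graderutils' actual API.

    # Cap captured stderr before embedding it in the grading payload.
    MAX_STDERR_LEN = 100_000  # characters; the real limit is an assumption

    def truncate_stderr(stderr_text, limit=MAX_STDERR_LEN):
        if len(stderr_text) <= limit:
            return stderr_text
        return stderr_text[:limit] + "\n... (stderr truncated)"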

Release v3.4

08 Apr 09:05

Major changes:

  • Upgraded libraries. Hypothesis was upgraded from v3 to v6.
    This affects exercise unit tests that use the Hypothesis framework.

  • Added a default timeout of 60 seconds to unit test methods.
    The exercise configuration (test_config.yaml) may override the limit
    with the new field testmethod_timeout (see the sketch after this list).

  • The feedback is now much more specific when the student's code crashes
    while the grader unit tests import it, e.g., when the student raises
    exceptions in module-level code outside function definitions. Likewise,
    if the import fails because the submission does not define an expected
    function, the feedback states this clearly and explicitly.
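
A hedged sketch of how the new field could be consumed. The 60-second default and the field name testmethod_timeout come from these notes; the loading code itself is illustrative (it assumes PyYAML), not graderutils' implementation.

    # test_config.yaml might contain, for example:
    #   testmethod_timeout: 120
    import yaml  # PyYAML

    DEFAULT_TESTMETHOD_TIMEOUT = 60  # seconds, the documented default

    with open("test_config.yaml") as f:
        config = yaml.safe_load(f) or {}

    timeout_seconds = config.get("testmethod_timeout", DEFAULT_TESTMETHOD_TIMEOUT)
    print(f"Each test method may run for at most {timeout_seconds} seconds")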

Release v3.3.1

22 Mar 09:24

Bug fixes:

  • Fix a crash when the student code prints to stdout.
  • Fix a crash when the student code calls sys.exit() (one way to handle this is sketched below).
  • Fix a crash in graderutils_format when the input data contains no points.
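
A generic sketch of tolerating sys.exit() in student code, assuming the grader imports the submission as a module. This shows the pattern only, not graderutils' actual fix; the helper name import_submission is hypothetical.

    import importlib

    def import_submission(module_name):
        """Import the student's module without letting SystemExit kill us."""
        try:
            return importlib.import_module(module_name)
        except SystemExit:
            # sys.exit() raises SystemExit; swallow it so the grader
            # survives and can still produce feedback.
            return None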

Release v3.0

29 Aug 10:16

Result parsing and feedback rendering have been decoupled by defining an intermediate, test-pipeline-agnostic JSON format for unit test results.

Python grader tests are executed with the graderutils package, which writes the results to stdout as JSON conforming to the schema defined in feedbackformat/schemas/grading_feedback.schema.json. This JSON can then be fed via stdin into the feedbackformat package, which renders the results into any implemented output format (currently HTML). The sketch below illustrates the pattern.
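
A plain-Python sketch of the decoupled pipeline. The field names in the example results are hypothetical; the authoritative schema is feedbackformat/schemas/grading_feedback.schema.json.

    import json
    import sys

    def emit_results():
        """Grader side: write test results to stdout as JSON."""
        results = {  # hypothetical fields, for illustration only
            "points": 8,
            "max_points": 10,
            "tests": [{"name": "test_example", "status": "passed"}],
        }
        json.dump(results, sys.stdout)

    def render_results():
        """Renderer side: read the JSON from stdin and emit HTML."""
        results = json.load(sys.stdin)
        print(f"<p>Points: {results['points']} / {results['max_points']}</p>")

Run as two processes connected by a pipe, this mirrors the graderutils-to-feedbackformat flow described above.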

Release v2.0

22 Aug 13:27

Usability improvements achieved by extending the config file.

Release v1.0

18 Aug 14:16

Initial release.