diff --git a/README.md b/README.md
index 5805218..1d47bcd 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,7 @@ The test file for a basic exercise can look like this:
 
 ```r
-context({
+context(testcases = {
   testcase('The correct method was used', {
     testEqual("test$alternative", function(studentEnv) { studentEnv$test$alternative }, 'two.sided')
     testEqual("test$method", function(studentEnv) { studentEnv$test$method }, ' Two Sample t-test')
@@ -34,7 +34,10 @@ First of all, something you can't see in the example code above. Dodona groups c
 
 ### Contexts
 
-A context represents one execution of the student code. It is generally desirable to have as many contexts as possible, since students can filter by incorrect contexts. The `context` function does a few things:
+A context represents one execution of the student code. It is generally desirable to have as many contexts as possible, since students can filter by incorrect contexts.
+
+#### `context`
+The `context` function does a few things:
 
 1. It creates a clean environment based on the global environment. Students have access to the global environment, but don't have access to the testing code or variables used in the testing code (the testing code is executed in a similar environment that is never bound).
 2. It executes the code passed through the `preExec` argument in this clean environment. This can be used for setting the seed (as in this example), but also to set variables or define functions that students can then use. *NOTE: the `preExec` argument is not executed in the environment where the tests are run. If you need this, you will need to do this yourself.*
@@ -43,8 +46,32 @@ A context represents one execution of the student code. It is generally desirabl
 
 Note that the student code is executed once for each call to `context`. Technically, this allows the student to store intermediate results in the global environment. The use of this is limited, so we don't see this as a problem.
 
+#### `contextWithParsing`
+The `contextWithParsing` function does the same as `context`, but it also allows you to test intermediate (returned/printed) results that are not stored in objects. To use this function, you have to split the student code into sections using "section titles": comments of the form `###[whitespace][section title][whitespace]###`. You can then pass a named list to the `testcases` argument, linking each section title to a code block containing the test functions for that section.
+
+Here you can find an example:
+```r
+contextWithParsing(testcases = list(
+  "section 1" = {
+    testcase('Question1:', {
+      testEqual("columnnames", function(studentEnv) { studentEnv$evaluationResult }, c("name1", "name2"))
+    })
+  }, "section 2" = {
+    testcase('Question2:', {
+      testEqual("p-value", function(studentEnv) { studentEnv$evaluationResult }, 0.175)
+    })
+  }
+), preExec = {
+  set.seed(20190322)
+})
+```
+
+> :warning: **This method is new and hasn't been extensively tested yet**.
+
+#### `contextWithRmd`
 The `contextWithRmd` function does the same as the `context` function, but it expects the student code to be in the R Markdown format. The R chunks are evaluated as before and the markdown text is ignored during evaluation.
 
+#### `contextWithImage`
 An extra `contextWithImage` function also exists. This function takes the same arguments, but adds an image to the output if it was generated by the student while their code is executed.
 By default, this function will make the output wrong if the expected image wasn't generated. This behaviour can be changed by setting the optional `failIfAbsent` parameter to `FALSE`.
 
 For introductory exercises, students often use R as a calculator and do not store the result of an expression as a variable in their script. For such scripts, the `eval` function that executes the student's parsed script does not store this result as a variable in the test environment; it simply returns the value to the caller. The result of the evaluation is injected into the test environment under the name `evaluationResult`. A simple test using this could look like this:
diff --git a/context.R b/context.R
index 3d57f65..6db20c5 100644
--- a/context.R
+++ b/context.R
@@ -67,6 +67,101 @@ context <- function(testcases={}, preExec={}) {
   )
 }
 
+
+contextWithParsing <- function(testcases=list(), preExec={}) {
+
+  # hacky way to make sure the list items are not evaluated yet
+  testcases <- as.list(substitute(testcases))[-1]
+
+  get_reporter()$start_context()
+  do_exit <- TRUE
+  on.exit({
+    if (do_exit) {
+      get_reporter()$end_context()
+    }
+  })
+
+  if (sum(duplicated(names(testcases)) | names(testcases) == "") > 0) {
+    get_reporter()$add_message("Error: There are duplicate named testcases and/or unnamed testcases.", permission = "staff")
+    get_reporter()$escalate("internal error")
+    get_reporter()$end_context(accepted = FALSE)
+    do_exit <- FALSE
+    return()
+  }
+
+  # parse the student code into a named list linking each section title to its codeblock
+  codeblocks <- list()
+  codeblock_name <- NULL
+  codeblock <- c()
+  for (line in read_lines(student_code)) {
+    match <- str_match(line, "^###\\h*(.+[^\\h])\\h*###")[,2]
+    if (match %in% names(codeblocks)) {
+      get_reporter()$add_message(paste0("Warning: There are duplicate section title(s) found in the code. ",
+                                        "This means the same test will be repeated for all sections with the same title."))
+    }
+    if (!is.na(match) && match %in% names(testcases)) {
+      if (!is.null(codeblock_name)) {
+        codeblocks[[codeblock_name]] <- codeblock
+        codeblock <- c()
+      }
+      codeblock_name <- match
+    } else {
+      codeblock <- c(codeblock, line)
+    }
+  }
+  if (!is.null(codeblock_name)) {
+    codeblocks[[codeblock_name]] <- codeblock
+  }
+
+  # throw a parsing error when section titles are missing in the student code, to avoid students skipping tests
+  missing_sections <- setdiff(names(testcases), names(codeblocks))
+  if (length(missing_sections) > 0) {
+    get_reporter()$add_message(
+      paste0("Parsing error: could not find the following section title(s): \r\n",
+             paste(sapply(missing_sections, function(x) { paste("###", x, "###") }), collapse = ',\r\n'))
+    )
+    get_reporter()$escalate("compilation error")
+    get_reporter()$end_context(accepted = FALSE)
+    do_exit <- FALSE
+    return()
+  }
+
+  test_env$clean_env <- new.env(parent = globalenv())
+  tryCatch(
+    withCallingHandlers({
+      old_parent <- parent.env(.GlobalEnv)
+      eval(substitute(preExec), envir = test_env$clean_env)
+
+      # run the codeblocks in order and evaluate the matching testcases after each codeblock
+      for (code_index in seq_along(codeblocks)) {
+        parent.env(.GlobalEnv) <- starting_parent_env
+        tryCatch({
+          # We don't use source, because otherwise syntax errors leak the location of the student code
+          test_env$parsed_code <- parse(text = codeblocks[[code_index]])
+          capture.output(assign("evaluationResult", eval(test_env$parsed_code, envir = test_env$clean_env), envir = test_env$clean_env))
+        }, finally = {
+          parent.env(.GlobalEnv) <- old_parent
+        })
+        eval(testcases[[names(codeblocks[code_index])]])
+      }
+    },
+    warning = function(w) {
+      get_reporter()$add_message(paste("Warning while evaluating context: ", conditionMessage(w), sep = ''))
+    },
+    message = function(m) {
+      get_reporter()$add_message(paste("Message while evaluating context: ", conditionMessage(m), sep = ''))
+    }),
+    error = function(e) {
+      get_reporter()$add_message(paste("Error while evaluating context: ", conditionMessage(e), sep = ''))
+      get_reporter()$escalate("compilation error")
+      get_reporter()$end_context(accepted = FALSE)
+      do_exit <<- FALSE
+    }
+  )
+}
+
+
 contextWithRmd <- function(testcases={}, preExec={}) {
   get_reporter()$start_context()
   do_exit <- TRUE
diff --git a/reporter-dodona.R b/reporter-dodona.R
index 25d8a58..9c83f94 100644
--- a/reporter-dodona.R
+++ b/reporter-dodona.R
@@ -48,8 +48,8 @@ DodonaReporter <- R6::R6Class("DodonaReporter",
             write('{"command": "close-judgement"}', stdout())
         },
 
-        add_message = function(message, type="plain") {
-            write(paste('{"command": "append-message", "message": { "description": ', toJSON(toString(message), auto_unbox=TRUE), ', "format": ', toJSON(type, auto_unbox=TRUE), '} }', sep=''), stdout())
+        add_message = function(message, type = "plain", permission = "student") {
+            write(paste('{"command": "append-message", "message": { "description": ', toJSON(toString(message), auto_unbox=TRUE), ', "format": ', toJSON(type, auto_unbox=TRUE), ', "permission": "', permission, '"} }', sep=''), stdout())
         },
 
         escalate = function(status) {
diff --git a/run b/run
index e4eb08f..c1062fb 100755
--- a/run
+++ b/run
@@ -10,6 +10,7 @@ invisible(evalq({
   library('R6')
   library('rlang')
   library('purrr')
+  library('stringr')
 
   input <- fromJSON(file('stdin'))
   source(paste(input$judge, 'judge.R', sep='/'), chdir=TRUE, local=TRUE)
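
For reference, a student submission that would match the `contextWithParsing` example added to the README could look roughly like the sketch below. The data and expressions are purely illustrative (they are not taken from this repository); what matters is that the `### section 1 ###` and `### section 2 ###` comments match the names used in the `testcases` list, since the parser splits the script on comments of the form `###[whitespace][section title][whitespace]###`.

```r
### section 1 ###
# everything up to the next section title belongs to "section 1";
# the value of the last unassigned expression becomes `evaluationResult`
df <- data.frame(name1 = 1:3, name2 = 4:6)
colnames(df)

### section 2 ###
test <- t.test(df$name1, df$name2)
test$p.value
```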
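The `evaluationResult` mechanism described in the README relies on `eval()` returning the value of the last top-level expression of the parsed code. A minimal standalone sketch of that behaviour, simplified compared to the judge itself (which also captures printed output and temporarily rewires the parent of the global environment):

```r
# Simplified sketch of how an unassigned result ends up as `evaluationResult`.
student_env <- new.env(parent = globalenv())

# A student script whose final result is not stored in a variable.
code <- "x <- c(1, 2, 3)\nmean(x)"

# eval() evaluates every top-level expression and returns the value of the last one,
# which is then stored in the student environment under the name `evaluationResult`.
result <- eval(parse(text = code), envir = student_env)
assign("evaluationResult", result, envir = student_env)

student_env$evaluationResult  # 2
```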