
Handle Submissions

mert tiftikci edited this page Nov 23, 2020 · 1 revision

This script connects the course web site, quiz_checker_env, the reference implementations, and the Google Sheets API. Given the Excel export of a quiz and login credentials, it first downloads the submitted files into the answers folder, then runs the reference implementations and checker scripts, and finally publishes the results in the main spreadsheet.

| Argument | Function | Default |
| --- | --- | --- |
| --file_locs | File locations of the Excel files | - |
| --credentials | Credentials to log in to the course web site | - |
| --name | Name of the quiz | - |
| --id_col | Column name in the Excel file that contains the student ID on the server | "Student ID" |
| --target | Column name in the Excel file that contains the file locations on the server | "Code" |
| --counts_validity | Questions to be checked | - |
| --counts | File counts for each question | 1 |
| --checker_params | Parameters to be fed into quiz_checker | "" |
| --sect_count | Section count | 2 |
| --max_point | Max points of the question to be evaluated | 20 |
| --verbose | Verbose level | 0 |
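
The interface above could be reconstructed with argparse roughly as follows. This is a hedged sketch based only on the table, not the script's actual source; argument types and required/optional choices are assumptions:

```python
import argparse

def build_parser():
    # Hypothetical reconstruction of the CLI described in the table above.
    p = argparse.ArgumentParser(description="Handle quiz submissions")
    p.add_argument("--file_locs", nargs="*", default=None,
                   help="File locations of the Excel files (one per section)")
    p.add_argument("--credentials", default=None,
                   help="Credentials to log in to the course web site")
    p.add_argument("--name", required=True, help="Name of the quiz")
    p.add_argument("--id_col", default="Student ID",
                   help="Column containing student IDs")
    p.add_argument("--target", default="Code",
                   help="Column containing file locations")
    p.add_argument("--counts_validity", default=None,
                   help="Questions to be checked, e.g. TTF,TFT")
    p.add_argument("--counts", default="1",
                   help="File counts for each question, e.g. 121,213")
    p.add_argument("--checker_params", default="",
                   help="Space-separated key-value pairs for quiz_checker")
    p.add_argument("--sect_count", type=int, default=2)
    p.add_argument("--max_point", type=int, default=20)
    p.add_argument("--verbose", type=int, default=0)
    return p

args = build_parser().parse_args(["--name", "Quiz1"])
print(args.id_col)       # Student ID
print(args.max_point)    # 20
```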

Download Submissions

When the --file_locs (ordered by section) and --credentials parameters are provided, the Excel files are parsed to extract the submission file URLs. Using the credentials, those submission files are downloaded and placed under the answers folder, inside a newly created folder named --name.

The given Excel files should have a column named --id_col containing student IDs and, for each question, a column named --target containing file URLs. Columns are assumed to be ordered from left to right, and each cell should have the structure:

<File1 name>
<File1 URL>
<empty line>

<File2 name>
<File2 URL>
<empty line>
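
A cell in that layout can be split into (file name, URL) pairs with a small helper. This is a sketch under the stated format assumptions, not the project's actual parser:

```python
def parse_cell(cell):
    """Split a spreadsheet cell into (file name, URL) pairs.

    Assumes the <name> / <URL> / <empty line> layout described above:
    blocks are separated by a blank line, with the name on the first
    line of each block and the URL on the second.
    """
    blocks = [b.strip().splitlines() for b in cell.strip().split("\n\n")]
    return [(b[0].strip(), b[1].strip()) for b in blocks if len(b) >= 2]

cell = "q1.R\nhttps://example.com/q1.R\n\nplot.png\nhttps://example.com/plot.png\n"
print(parse_cell(cell))
# [('q1.R', 'https://example.com/q1.R'), ('plot.png', 'https://example.com/plot.png')]
```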

If files are not provided in this Excel format, leaving the --file_locs argument empty bypasses this step. In that case the answers folder must be created manually so that the rest of the steps can work.
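
The download step itself can be sketched as below. The fetch callable stands in for an authenticated HTTP client (e.g. a logged-in requests session); the login flow is site-specific and not shown, and the file-name scheme here is illustrative only:

```python
import os

def download_submissions(rows, quiz_name, fetch, root="answers"):
    """Download each (student_id, file_name, url) into <root>/<quiz_name>/.

    `fetch` is any callable mapping a URL to raw bytes, e.g.
    `lambda url: session.get(url).content` for an authenticated
    requests.Session. Returns the list of written file paths.
    """
    out_dir = os.path.join(root, quiz_name)
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for student_id, file_name, url in rows:
        path = os.path.join(out_dir, f"{student_id}_{file_name}")
        with open(path, "wb") as f:
            f.write(fetch(url))
        paths.append(path)
    return paths
```

Injecting the fetcher keeps the function testable without network access and independent of how the course site handles authentication.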

The --counts argument tells the download step how many files are expected for each question. For each omitted file (R scripts), a file containing a default comment is created, because attendance is inferred from making any submission at all (empty or not). For each section, consecutive digits (each assumed to be below 10) give the file count for each question. Every section can have a different configuration, provided as an ordered comma-separated list. If only one configuration is provided, it is mirrored across all sections.

Example of two sections with three questions: --counts 121,213 -> [[1,2,1], [2,1,3]]
Example of two sections with three questions (same configuration): --counts 121 -> [[1,2,1], [1,2,1]]
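
The expansion rule above, including the mirroring of a single group, can be sketched as:

```python
def parse_counts(spec, sect_count):
    """Expand a --counts string into per-section file-count lists.

    "121,213" with 2 sections -> [[1, 2, 1], [2, 1, 3]];
    a single group, e.g. "121", is mirrored for every section.
    Each digit is a per-question file count (assumed below 10).
    """
    groups = [[int(ch) for ch in group] for group in spec.split(",")]
    if len(groups) == 1:
        groups = groups * sect_count
    return groups

print(parse_counts("121,213", 2))  # [[1, 2, 1], [2, 1, 3]]
print(parse_counts("121", 2))      # [[1, 2, 1], [1, 2, 1]]
```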

When some of the questions should be skipped during evaluation, the --counts_validity argument should be used. It follows the same rules as --counts, except that only the characters T and F are allowed. Questions marked T are evaluated; questions marked F are skipped.

Example of two sections with three questions: --counts_validity TTF,TFT -> [[True,True,False], [True,False,True]]
Example of two sections with three questions (same configuration): --counts_validity TTF -> [[True,True,False], [True,True,False]]
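
Since the rules match --counts, the same expansion applies with a boolean conversion; again a sketch, not the script's actual code:

```python
def parse_validity(spec, sect_count):
    """Expand a --counts_validity string into per-section boolean lists.

    "TTF,TFT" with 2 sections -> [[True, True, False], [True, False, True]];
    a single group is mirrored for every section, as with --counts.
    """
    groups = [[ch == "T" for ch in group] for group in spec.split(",")]
    if len(groups) == 1:
        groups = groups * sect_count
    return groups

print(parse_validity("TTF,TFT", 2))
# [[True, True, False], [True, False, True]]
```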

Answers Structure

Every quiz has a folder inside the answers folder, named --name. There should be a folder for each section, following the naming convention sect<num>. If the current quiz has no sections, this level is omitted. The next level consists of question folders with the naming convention q<num>. Finally, within these folders, student submissions are placed with the naming convention <student ID>_<file index>.<R|png|jpg|jpeg> (a student's first file may omit the index, as in the examples below).

Example two section file structure:

Quiz1
   -> sect01
     -> q01
       -> 2019800021.R
       -> 2019800021_1.R
       -> 2019800021.png
       -> 2019800021_1.png
       -> <Other student submissions>
     -> q02
       -> <Other submissions>
   -> sect02
     -> q01
       -> <Other submissions>
     -> <Other questions>

Another example of a single section quiz:

Midterm
   -> q01
      -> 2019800021.R
      -> 2019800021_1.R
      -> 2019800021.png
      -> 2019800021_1.png
      -> <Other student submissions>
   -> q02
      -> 2019800021.R
      -> <Other submissions>
   -> q03
      -> <Student submissions>
   -> <Other questions>
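
Both layouts shown above, with and without section folders, can be traversed with a small walker. This is an illustrative sketch, not the project's actual traversal code:

```python
import os
import re

def iter_submissions(quiz_dir):
    """Yield (section, question, file name) triples from an answers tree.

    Handles both layouts described above: sect<num>/q<num>/<files> for
    multi-section quizzes, and q<num>/<files> when the section level is
    omitted (section is reported as None in that case).
    """
    entries = sorted(os.listdir(quiz_dir))
    sections = [e for e in entries if re.fullmatch(r"sect\d+", e)]
    if not sections:
        sections = [""]  # single-section quiz: questions sit at the top level
    for sect in sections:
        sect_dir = os.path.join(quiz_dir, sect)
        questions = sorted(d for d in os.listdir(sect_dir)
                           if re.fullmatch(r"q\d+", d))
        for q in questions:
            for f in sorted(os.listdir(os.path.join(sect_dir, q))):
                yield (sect or None), q, f
```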

Reference Implementations

These implementations are gathered from the answer sheets. They should be written separately for each question and create the RData files that the quiz_checker_env script works on. They are placed under the reference_implementations folder, along with any data files. Details of these scripts are explained here.

Quiz Checker

After the submission files are downloaded and the reference implementation scripts are run, the quiz_checker_env script is run for each question in every section. To pass extra arguments to the evaluation script, --max_point and --checker_params can be used. The --checker_params argument accepts space-separated key-value pairs; the possible arguments and their values can be found here. For each question, this script dumps a populated data frame, which is then parsed.
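
Assembling the per-question invocation might look like the sketch below. The entry-point name and flag spellings are assumptions based only on this page, not taken from the actual scripts:

```python
import shlex

def build_checker_command(section, question, max_point, checker_params):
    """Build the argument list for one section/question checker run.

    "Rscript quiz_checker_env.R" and the flag names are hypothetical
    placeholders; --checker_params is forwarded as the space-separated
    key-value pairs described above.
    """
    cmd = ["Rscript", "quiz_checker_env.R",
           "--section", str(section),
           "--question", str(question),
           "--max_point", str(max_point)]
    cmd += shlex.split(checker_params)
    return cmd

print(build_checker_command(1, 2, 20, "key value"))
# ['Rscript', 'quiz_checker_env.R', '--section', '1', '--question', '2',
#  '--max_point', '20', 'key', 'value']
```

In real use the list would be handed to something like subprocess.run(cmd, capture_output=True, text=True) and the dumped data frame parsed from the results.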

Publishing the Results

The resulting tables are uploaded to the main spreadsheet, if one is specified. The content and structure of these tables can be found here.

Result Table Structures

Tables are uploaded as columns to a sheet named --name, created under the main spreadsheet. Every column should occupy as many cells as the widest result table (the question with the most tests). Each table column represents one section's results for the current quiz.
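
Aligning columns to the widest table is simple padding; a minimal sketch of that step, independent of the actual Sheets upload code:

```python
def pad_columns(tables, fill=""):
    """Pad every result column to the height of the widest table
    (the question with the most tests), so the columns can be
    uploaded side by side into one sheet.
    """
    height = max(len(t) for t in tables)
    return [t + [fill] * (height - len(t)) for t in tables]

cols = pad_columns([["q01", "t1", "t2"], ["q02", "t1"]])
print(cols)  # [['q01', 't1', 't2'], ['q02', 't1', '']]
```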

Tips & Pitfalls
