Score Features
Conor Wild edited this page Aug 26, 2022
- This page is a complete reference of every feature that can be extracted from a test score.
- The `cbs_parse_data` script extracts all of these features, providing them as columns in the resulting wide-form data file.
- This isn't guaranteed to be up-to-date yet. Also, it's ugly and I will improve it eventually.
| Feature | Location | Description | Notes |
|---|---|---|---|
| max_score | Score | Max difficulty successfully completed | Score displayed to user |
| avg_score | Legacy Raw Score | Average difficulty of correctly answered problems | |
| avg_ms_per_item | Session Data | Average (normalized) duration of each correctly answered question, where each question's duration is normalized by the number of items in the question. | Must be calculated from the JSON session data |
| num_errors | Session Data | Number of errors (should always be 3 for this test) | Embedded in JSON - session_data['errorsMade'] |
| num_correct | Session Data | Number of correctly answered problems | Embedded in JSON - session_data['correctAnswers'] |
| num_attempts | Session Data | Number of attempted problems | Calculated as # of errors + # of correct |
| duration_ms | Session Data | Total duration spent on this task | Embedded in JSON - session_data['durationTimeSpan'] |
| avg_ms_correct | Session Data | Average duration of correctly answered problems | Calculated from the list of session_data['questions'] |
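The session-data features in this table (and repeated in the tables below) can be sketched in Python roughly as follows. The top-level keys (`errorsMade`, `correctAnswers`, `durationTimeSpan`, `questions`) come straight from the table; the per-question keys (`correct`, `duration`, `num_items`) are assumptions about the shape of the question records, not confirmed field names.

```python
import json

def extract_session_features(session_json: str) -> dict:
    """Pull the common session-data features out of one test's JSON blob.

    'errorsMade', 'correctAnswers', 'durationTimeSpan', and 'questions' are
    the documented keys; the per-question 'correct' / 'duration' / 'num_items'
    keys are illustrative assumptions.
    """
    s = json.loads(session_json)
    correct = [q for q in s.get('questions', []) if q.get('correct')]
    return {
        'num_errors': s['errorsMade'],
        'num_correct': s['correctAnswers'],
        # num_attempts is defined in the table as errors + correct
        'num_attempts': s['errorsMade'] + s['correctAnswers'],
        'duration_ms': s['durationTimeSpan'],
        # Average duration of correctly answered questions
        'avg_ms_correct': (sum(q['duration'] for q in correct) / len(correct)
                           if correct else None),
        # Each correct question's duration normalized by its item count
        'avg_ms_per_item': (sum(q['duration'] / q['num_items'] for q in correct)
                            / len(correct) if correct else None),
    }
```

`avg_ms_per_item` is only meaningful for the tests that report it; for the others the same helper just leaves that column unused.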
| Feature | Location | Description | Notes |
|---|---|---|---|
| final_score | Score | Final score: # of correct - # of incorrect | Score displayed to user |
| pct_CC | Legacy Raw Score - 1st number | % correctly answered of questions in the CC condition (easiest) | CC - the colors and words of prime (top word) and targets (bottom words) are all congruent |
| pct_CI | Legacy Raw Score - 2nd number | % correctly answered of questions in the CI condition | CI - the prime is congruent, but targets are incongruent |
| pct_IC | Legacy Raw Score - 3rd number | % correctly answered of questions in the IC condition | IC - the color of the prime is incongruent, but the targets congruent |
| pct_II | Legacy Raw Score - 4th number | % correctly answered of questions in the II condition (hardest) | II - both targets and prime words are incongruent with the displayed color |
| RT_CC | Legacy Raw Score - 5th number | Average reaction time (ms) of CC problems | Note: all 8 numbers in this Legacy Raw Score are separated by whitespace |
| RT_CI | Legacy Raw Score - 6th number | Average reaction time (ms) of CI problems | |
| RT_IC | Legacy Raw Score - 7th number | Average reaction time (ms) of IC problems | |
| RT_II | Legacy Raw Score - 8th number | Average reaction time (ms) of II problems | |
| num_errors | Session Data | Number of errors (should always be 3 for this test) | Embedded in JSON - session_data['errorsMade'] |
| num_correct | Session Data | Number of correctly answered problems | Embedded in JSON - session_data['correctAnswers'] |
| num_attempts | Session Data | Number of attempted problems | Calculated as # of errors + # of correct |
| duration_ms | Session Data | Total duration spent on this task (should be fixed at around 90000) | Embedded in JSON - session_data['durationTimeSpan'] |
| avg_ms_correct | Session Data | Average duration of correctly answered problems | Calculated from the list of session_data['questions'] |
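For this table's Legacy Raw Score, all 8 numbers (4 percent-correct values, then 4 mean reaction times) are whitespace-separated, so parsing can be sketched as below. The example string in the test is illustrative, not real data.

```python
# Field order follows the table: pct for the four congruency conditions,
# then mean RT (ms) for the same four conditions.
STROOP_FIELDS = ['pct_CC', 'pct_CI', 'pct_IC', 'pct_II',
                 'RT_CC', 'RT_CI', 'RT_IC', 'RT_II']

def parse_stroop_raw_score(raw: str) -> dict:
    """Split the 8 whitespace-separated numbers of the Legacy Raw Score."""
    values = [float(v) for v in raw.split()]
    if len(values) != 8:
        raise ValueError(f'expected 8 numbers, got {len(values)}')
    return dict(zip(STROOP_FIELDS, values))
```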
| Feature | Location | Description | Notes |
|---|---|---|---|
| final_score | Score | Final score: sum of difficulties of correct answers - sum of difficulties of incorrect answers | Score displayed to user |
| attempted | Legacy Raw Score (text field) or Session Data | # of attempted problems | Parsed from text field |
| errors | Legacy Raw Score (text field) or Session Data | # of incorrectly answered problems | Parsed from text field |
| max | Legacy Raw Score (text field) | Most difficult problem answered correctly | Parsed from text field |
| correct_score | Legacy Raw Score (text field) | Sum of difficulties of all correctly answered problems (no subtraction for errors) | Parsed from text field |
| num_errors | Session Data | Number of errors (should always be 3 for this test) | Embedded in JSON - session_data['errorsMade'] |
| num_correct | Session Data | Number of correctly answered problems | Embedded in JSON - session_data['correctAnswers'] |
| num_attempts | Session Data | Number of attempted problems | Calculated as # of errors + # of correct |
| duration_ms | Session Data | Total duration spent on this task (should be fixed at around 300000) | Embedded in JSON - session_data['durationTimeSpan'] |
| avg_ms_correct | Session Data | Average duration of correctly answered problems | Calculated from the list of session_data['questions'] |
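The text-field Legacy Raw Scores (attempted, errors, max, correct_score) could be parsed along these lines. The exact text-field format is not documented on this page, so the `name=value` layout assumed here is purely illustrative; the regex would need adjusting to the real format.

```python
import re

def parse_text_raw_score(raw: str) -> dict:
    """Sketch: pull labelled numeric fields out of a Legacy Raw Score text
    field, assuming 'name=value' pairs (an assumption, not the known format)."""
    return {name: float(value)
            for name, value in re.findall(r'(\w+)=(\d+(?:\.\d+)?)', raw)}
```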
| Feature | Location | Description | Notes |
|---|---|---|---|
| final_score | Score | Final score: # of correct - # of incorrect | Score displayed to user |
| num_errors | Session Data | Number of errors (should always be 3 for this test) | Embedded in JSON - session_data['errorsMade'] |
| num_correct | Session Data | Number of correctly answered problems | Embedded in JSON - session_data['correctAnswers'] |
| num_attempts | Session Data | Number of attempted problems | Calculated as # of errors + # of correct |
| duration_ms | Session Data | Total duration spent on this task (should be fixed at around 90000) | Embedded in JSON - session_data['durationTimeSpan'] |
| avg_ms_correct | Session Data | Average duration of correctly answered problems | Calculated from the list of session_data['questions'] |
| Feature | Location | Description | Notes |
|---|---|---|---|
| max_score | Score | Max difficulty successfully completed | Score displayed to user |
| avg_score | Legacy Raw Score | Average difficulty of correctly answered problems | |
| avg_ms_per_item | Session Data | Average (normalized) duration of each correctly answered question, where each question's duration is normalized by the number of items in the question. | Must be calculated from the JSON session data |
| num_errors | Session Data | Number of errors (should always be 3 for this test) | Embedded in JSON - session_data['errorsMade'] |
| num_correct | Session Data | Number of correctly answered problems | Embedded in JSON - session_data['correctAnswers'] |
| num_attempts | Session Data | Number of attempted problems | Calculated as # of errors + # of correct |
| duration_ms | Session Data | Total duration spent on this task | Embedded in JSON - session_data['durationTimeSpan'] |
| avg_ms_correct | Session Data | Average duration of correctly answered problems | Calculated from the list of session_data['questions'] |
| Feature | Location | Description | Notes |
|---|---|---|---|
| final_score | Score | Final score: sum of difficulties of correct answers - sum of difficulties of incorrect answers | Score displayed to user |
| attempted | Legacy Raw Score (text field) or Session Data | # of attempted problems | Parsed from text field |
| errors | Legacy Raw Score (text field) or Session Data | # of incorrectly answered problems | Parsed from text field |
| max | Legacy Raw Score (text field) | Most difficult problem answered correctly | Parsed from text field, USED FOR DOMAIN SCORE CALCULATIONS |
| num_errors | Session Data | Number of errors (should always be 3 for this test) | Embedded in JSON - session_data['errorsMade'] |
| num_correct | Session Data | Number of correctly answered problems | Embedded in JSON - session_data['correctAnswers'] |
| num_attempts | Session Data | Number of attempted problems | Calculated as # of errors + # of correct |
| duration_ms | Session Data | Total duration spent on this task (should be ~180000) | Embedded in JSON - session_data['durationTimeSpan'] |
| avg_ms_correct | Session Data | Average duration of correctly answered problems | Calculated from the list of session_data['questions'] |
| Feature | Location | Description | Notes |
|---|---|---|---|
| max_score | Score | Max difficulty successfully completed | Score displayed to user |
| avg_score | Legacy Raw Score | Average difficulty of correctly answered problems | |
| avg_ms_per_item | Session Data | Average (normalized) duration of each correctly answered question, where each question's duration is normalized by the number of items in the question. | Must be calculated from the JSON session data |
| num_errors | Session Data | Number of errors (should always be 3 for this test) | Embedded in JSON - session_data['errorsMade'] |
| num_correct | Session Data | Number of correctly answered problems | Embedded in JSON - session_data['correctAnswers'] |
| num_attempts | Session Data | Number of attempted problems | Calculated as # of errors + # of correct |
| duration_ms | Session Data | Total duration spent on this task | Embedded in JSON - session_data['durationTimeSpan'] |
| avg_ms_correct | Session Data | Average duration of correctly answered problems | Calculated from the list of session_data['questions'] |
| Feature | Location | Description | Notes |
|---|---|---|---|
| final_score | Score | Final score: sum of difficulties of correct answers - sum of difficulties of incorrect answers | Score displayed to user |
| num_errors | Session Data | Number of errors (should always be 3 for this test) | Embedded in JSON - session_data['errorsMade'] |
| num_correct | Session Data | Number of correctly answered problems | Embedded in JSON - session_data['correctAnswers'] |
| num_attempts | Session Data | Number of attempted problems | Calculated as # of errors + # of correct |
| duration_ms | Session Data | Total duration spent on this task (should be ~90000) | Embedded in JSON - session_data['durationTimeSpan'] |
| avg_ms_correct | Session Data | Average duration of correctly answered problems | Calculated from the list of session_data['questions'] |
| Feature | Location | Description | Notes |
|---|---|---|---|
| final_score | Score | Final score: sum of difficulties of correct answers - sum of difficulties of incorrect answers | Score displayed to user |
| attempted | Legacy Raw Score (text field) or Session Data | # of attempted problems | Parsed from text field |
| errors | Legacy Raw Score (text field) or Session Data | # of incorrectly answered problems | Parsed from text field |
| max | Legacy Raw Score (text field) | Most difficult problem answered correctly | Parsed from text field |
| correct_score | Legacy Raw Score (text field) | Sum of difficulties of all correctly answered problems (no subtraction for errors) | Parsed from text field |
| num_errors | Session Data | Number of errors (should always be 3 for this test) | Embedded in JSON - session_data['errorsMade'] |
| num_correct | Session Data | Number of correctly answered problems | Embedded in JSON - session_data['correctAnswers'] |
| num_attempts | Session Data | Number of attempted problems | Calculated as # of errors + # of correct |
| duration_ms | Session Data | Total duration spent on this task (should be fixed at around 300000) | Embedded in JSON - session_data['durationTimeSpan'] |
| avg_ms_correct | Session Data | Average duration of correctly answered problems | Calculated from the list of session_data['questions'] |
| Feature | Location | Description | Notes |
|---|---|---|---|
| final_score | Score | Final score: sum of difficulties of correct answers | Score displayed to user |
| num_errors | Session Data | Number of errors (should always be 3 for this test) | Embedded in JSON - session_data['errorsMade'] |
| num_correct | Session Data | Number of correctly answered problems | Embedded in JSON - session_data['correctAnswers'] |
| num_attempts | Session Data | Number of attempted problems | Calculated as # of errors + # of correct |
| duration_ms | Session Data | Total duration spent on this task (should be ~180000) | Embedded in JSON - session_data['durationTimeSpan'] |
| avg_ms_correct | Session Data | Average duration of correctly answered problems | Calculated from the list of session_data['questions'] |
| Feature | Location | Description | Notes |
|---|---|---|---|
| max_score | Score | Max difficulty successfully completed | Score displayed to user |
| avg_score | Legacy Raw Score | Average difficulty of correctly answered problems | |
| avg_ms_per_item | Session Data | Average (normalized) duration of each correctly answered question, where each question's duration is normalized by the number of items in the question. | Must be calculated from the JSON session data |
| num_errors | Session Data | Number of errors (should always be 3 for this test) | Embedded in JSON - session_data['errorsMade'] |
| num_correct | Session Data | Number of correctly answered problems | Embedded in JSON - session_data['correctAnswers'] |
| num_attempts | Session Data | Number of attempted problems | Calculated as # of errors + # of correct |
| duration_ms | Session Data | Total duration spent on this task | Embedded in JSON - session_data['durationTimeSpan'] |
| avg_ms_correct | Session Data | Average duration of correctly answered problems | Calculated from the list of session_data['questions'] |
| Feature | Location | Description | Notes |
|---|---|---|---|
| max_score | Score | Max difficulty successfully completed | Score displayed to user |
| avg_score | Legacy Raw Score | Average difficulty of correctly answered problems | |
| avg_ms_per_item | Session Data | Average (normalized) duration of each correctly answered question, where each question's duration is normalized by the number of items in the question. | Must be calculated from the JSON session data |
| num_errors | Session Data | Number of errors (should always be 3 for this test) | Embedded in JSON - session_data['errorsMade'] |
| num_correct | Session Data | Number of correctly answered problems | Embedded in JSON - session_data['correctAnswers'] |
| num_attempts | Session Data | Number of attempted problems | Calculated as # of errors + # of correct |
| duration_ms | Session Data | Total duration spent on this task | Embedded in JSON - session_data['durationTimeSpan'] |
| avg_ms_correct | Session Data | Average duration of correctly answered problems | Calculated from the list of session_data['questions'] |
- I used this to convert the csv table to markdown.