Conversation
```diff
  echo "Running Mode: $MODE | Dimension: $DIM"

- python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_${CDB_VAL}_${DIM} \
+ python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_${CDB_VAL}_DIM${DIM} \
```
Just to confirm: are notebooks used for data analysis intentionally not tracked in this repository?
For consistency, it would be preferable to use the format ${PORTFOLIO_STR}_PG_${MODE}_CDB${CDB_VAL}_DIM${DIM}, or alternatively introduce a shared function that adds a common prefix to all parameters so that naming remains consistent across Slurm jobs.
But that’s just nitpicking.
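For illustration, a minimal sketch of such a shared naming helper (the function name, the `KEY=VALUE` argument convention, and the example values below are assumptions, not repo code):

```shell
# Hypothetical helper: builds run names as <base>_KEY<value>_KEY<value>...
# so every Slurm job follows one naming convention (e.g. ..._CDB1.5_DIM10).
make_run_name() {
  local name="$1"; shift
  local kv
  for kv in "$@"; do
    name="${name}_${kv%%=*}${kv#*=}"   # KEY=VALUE -> _KEYVALUE
  done
  printf '%s\n' "$name"
}

# Assumed example values, for demonstration only:
PORTFOLIO_STR="MADDE-CMAES-SPSO"
MODE="LOIO"
CDB_VAL=1.5
DIM=10
make_run_name "${PORTFOLIO_STR}_PG_${MODE}" "CDB=${CDB_VAL}" "DIM=${DIM}"
# prints MADDE-CMAES-SPSO_PG_LOIO_CDB1.5_DIM10
```

Each script would then build its run name through the one helper instead of hand-assembling the underscore-separated string.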
```bash
fi
CDB_VAL=1.5

PORTFOLIO=('MADDE' 'CMAES' 'SPSO')
```
Out of curiosity, are we sure we want to use these algorithms in the paper, or is this something that could still be changed?
```diff
  echo "Running Mode: $MODE | Dimension: $DIM"

- python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_${CDB_VAL}_${DIM} \
+ python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_$_REWARD_${REWARD_OPTION}_DIM${DIM} \
```
Nitpicking: for consistency, it should be either REWARD${REWARD_OPTION} or DIM_${DIM}.
```bash
echo "Running Mode: $MODE | Dimension: $DIM"

python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_DIM${DIM} \
    -p "${PORTFOLIO[@]}" -r ELA --mode $MODE --dimensionality $DIM \
```
Oh, that’s interesting: I thought you used custom by default. Which representation works better?
```bash
python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_DIM${DIM} \
    -p "${PORTFOLIO[@]}" -r ELA --mode $MODE --dimensionality $DIM \
    --cdb $CDB_VAL --n_epochs 3 --agent policy-gradient

# 3. Multidimensional CV-LOIO (Index 8)
```
Here we also include dim=40, which was not included before. Is this intentional?
```python
DIMENSIONS = [2, 3, 5, 10, 20, 40]
```
```bash
DIMS=(2 3 5 10)

# 1. Dimension-specific CV-LOIO (Indices 0-3)
if [[ $SLURM_ARRAY_TASK_ID -ge 0 && $SLURM_ARRAY_TASK_ID -le 3 ]]; then
```
This is a reward_study, but where is the reward actually changed? I don’t see -O or --reward-option being passed anywhere. As it stands, this runner seems to just repeat the experiments from single_algorithm_CDB_study.slurm.
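As an illustration, one way the runner could actually sweep the reward (the `-O` flag comes from this comment; the option values and loop structure below are placeholders, not repo code):

```shell
# Hedged sketch: vary the reward option across runs instead of repeating
# the CDB study. The REWARD_OPTIONS values are placeholders.
REWARD_OPTIONS=(0 1 2)
for REWARD_OPTION in "${REWARD_OPTIONS[@]}"; do
  # A real run would invoke something like:
  #   python3 dynamicalgorithmselection/main.py ..._REWARD_${REWARD_OPTION}_DIM${DIM} \
  #       ... -O $REWARD_OPTION
  echo "reward option: $REWARD_OPTION"
done
```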
```bash
fi

if [ "$#" -eq 0 ]; then
    PORTFOLIO=('JDE21' 'MADDE' 'NL_SHADE_RSP')
```
Shouldn't we use PORTFOLIO=('MADDE' 'CMAES' 'SPSO') here?
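If the default should change, a minimal sketch of the default-with-override pattern (the portfolio names come from this suggestion; joining `PORTFOLIO_STR` with hyphens is an assumption about the repo's convention):

```shell
# Use CLI arguments as the portfolio when given, otherwise fall back to
# the suggested default.
if [ "$#" -eq 0 ]; then
  PORTFOLIO=('MADDE' 'CMAES' 'SPSO')   # suggested default
else
  PORTFOLIO=("$@")                     # override: ./script.sh ALGO1 ALGO2 ...
fi
# Assumed convention: hyphen-join the names for run-name prefixes.
PORTFOLIO_STR=$(IFS=-; echo "${PORTFOLIO[*]}")
echo "$PORTFOLIO_STR"
# with no arguments prints MADDE-CMAES-SPSO
```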
```bash
# Array of Dimensions
DIMS=(2 3 5 10)

# 1. Dimension-specific CV-LOIO (Indices 0-3)
```
To be honest, for a single-algorithm experiment I wouldn’t run that many configurations. Since we are not comparing it against a baseline (e.g., RL-DAS), there is less need to evaluate it across many different CV variants. I would probably focus on a single setting, such as Multidimensional CV-LOPO or Multidimensional CV-LOIO.
The key question is what exactly we want to measure for a single algorithm, and whether this will be compared against any baseline or used only to explain the motivation or introduce our own metric.