
New experiment variants #22

Open
wniec wants to merge 2 commits into master from new_experiment_variants

Conversation

wniec (Owner) commented Mar 4, 2026

No description provided.

@WojtAcht WojtAcht requested a review from Copilot March 5, 2026 17:29

Copilot AI left a comment


Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

echo "Running Mode: $MODE | Dimension: $DIM"

-python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_${CDB_VAL}_${DIM} \
+python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_${CDB_VAL}_DIM${DIM} \
Collaborator

Just to confirm: are notebooks used for data analysis intentionally not tracked in this repository?

Collaborator

For consistency, it would be preferable to use the format ${PORTFOLIO_STR}_PG_${MODE}_CDB${CDB_VAL}_DIM${DIM}, or alternatively introduce a shared function that adds a common prefix to all parameters so that naming remains consistent across Slurm jobs.
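The shared-function idea could look roughly like the sketch below, assuming run names are plain strings assembled in the Slurm scripts (`make_run_name` and its placement are hypothetical, not taken from the repository):

```shell
# Hypothetical helper: builds a run name by appending "_KEYvalue" for each
# key/value pair, so every Slurm job names its parameters the same way.
make_run_name() {
  local name="$1"; shift
  while [ "$#" -ge 2 ]; do
    name="${name}_$1$2"
    shift 2
  done
  printf '%s\n' "$name"
}

# Example: prints MADDE-CMAES-SPSO_PG_train_CDB1.5_DIM10
make_run_name "MADDE-CMAES-SPSO_PG_train" CDB 1.5 DIM 10
```

With a helper like this, the `CDB` vs `DIM` prefix styles cannot drift apart between scripts, since the separator convention lives in one place.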

Collaborator

But that (the comment above) is just nitpicking.

fi
CDB_VAL=1.5

PORTFOLIO=('MADDE' 'CMAES' 'SPSO')
Collaborator

Out of curiosity, are we sure we want to use these algorithms in the paper, or is this something that could still be changed?

echo "Running Mode: $MODE | Dimension: $DIM"

-python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_${CDB_VAL}_${DIM} \
+python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_$_REWARD_${REWARD_OPTION}_DIM${DIM} \
Collaborator

Nitpicking: for consistency it should be either REWARD${REWARD_OPTION} or DIM_${DIM}.

echo "Running Mode: $MODE | Dimension: $DIM"

python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_DIM${DIM} \
-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --dimensionality $DIM \
Collaborator

Oh, that’s interesting: I thought you used custom by default. Which representation works better?


python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_DIM${DIM} \
-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --dimensionality $DIM \
--cdb $CDB_VAL --n_epochs 3 --agent policy-gradient
Collaborator

Why 3?

-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --dimensionality $DIM \
--cdb $CDB_VAL --n_epochs 3 --agent policy-gradient

# 3. Multidimensional CV-LOIO (Index 8)
Collaborator

Here we also include dim=40, which was not included before. Is this intentional?

DIMENSIONS = [2, 3, 5, 10, 20, 40]

DIMS=(2 3 5 10)

# 1. Dimension-specific CV-LOIO (Indices 0-3)
if [[ $SLURM_ARRAY_TASK_ID -ge 0 && $SLURM_ARRAY_TASK_ID -le 3 ]]; then
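For illustration, the index-to-dimension mapping used by these array jobs can be sketched as a self-contained snippet (the hard-coded task ID stands in for the value Slurm would normally export):

```shell
# Sketch of the Slurm array-index -> dimension mapping used above.
DIMS=(2 3 5 10)
SLURM_ARRAY_TASK_ID=2   # Slurm sets this for each array task; hard-coded here for illustration

# Indices 0-3 select a single dimension from DIMS.
if [[ $SLURM_ARRAY_TASK_ID -ge 0 && $SLURM_ARRAY_TASK_ID -le 3 ]]; then
  DIM="${DIMS[$SLURM_ARRAY_TASK_ID]}"
fi

echo "Dimension: $DIM"   # task 2 -> dimension 5
```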
Collaborator

This is a reward_study, but where is the reward actually changed? I don’t see -O or --reward-option. As it stands, this runner seems to just repeat the experiments from single_algorithm_CDB_study.slurm.
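If the runner is indeed meant to vary the reward, it could loop over the options and pass the flag explicitly. A sketch under assumptions: `-O` is the flag named in this comment, while the `REWARD_OPTIONS` values and the other variable values are made up for illustration; the command is echoed rather than executed so the resulting run names are visible:

```shell
# Hypothetical sketch (example values, not from the PR):
PORTFOLIO_STR="MADDE-CMAES-SPSO"; MODE="train"; DIM=10
REWARD_OPTIONS=('0' '1')   # placeholder option values; the real ones live in main.py

for REWARD_OPTION in "${REWARD_OPTIONS[@]}"; do
  # echo instead of running, to show the command and run name that would be used
  echo python3 dynamicalgorithmselection/main.py \
    "${PORTFOLIO_STR}_PG_${MODE}_REWARD${REWARD_OPTION}_DIM${DIM}" \
    -O "$REWARD_OPTION"
done
```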

fi

if [ "$#" -eq 0 ]; then
PORTFOLIO=('JDE21' 'MADDE' 'NL_SHADE_RSP')
Collaborator

Shouldn't we use PORTFOLIO=('MADDE' 'CMAES' 'SPSO') here?

# Array of Dimensions
DIMS=(2 3 5 10)

# 1. Dimension-specific CV-LOIO (Indices 0-3)
Collaborator

To be honest, for a single-algorithm experiment I wouldn’t run that many experiments. Since we are not comparing it against a baseline (e.g., RL-DAS), there may be less need to evaluate it across many different CV variants. I would probably focus on a single setting, such as Multidimensional CV-LOPO or Multidimensional CV-LOIO.

Collaborator

The key question is what exactly we want to measure for a single algorithm, and whether this will be compared against any baseline or used only to explain the motivation or introduce our own metric.

3 participants