4 changes: 2 additions & 2 deletions CDB_study.slurm
@@ -37,7 +37,7 @@ if [[ $SLURM_ARRAY_TASK_ID -ge 0 && $SLURM_ARRAY_TASK_ID -le 3 ]]; then
DIM=${DIMS[$SLURM_ARRAY_TASK_ID]}
echo "Running Mode: $MODE | Dimension: $DIM"

-  python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_${CDB_VAL}_${DIM} \
+  python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_${CDB_VAL}_DIM${DIM} \
-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --dimensionality $DIM \
--cdb $CDB_VAL --n_epochs 3 --agent policy-gradient

@@ -47,7 +47,7 @@ elif [[ $SLURM_ARRAY_TASK_ID -ge 4 && $SLURM_ARRAY_TASK_ID -le 7 ]]; then
DIM=${DIMS[$((SLURM_ARRAY_TASK_ID - 4))]}
echo "Running Mode: $MODE | Dimension: $DIM"

-  python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_${CDB_VAL}_${DIM} \
+  python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_${CDB_VAL}_DIM${DIM} \
Collaborator:
Just to confirm: are notebooks used for data analysis intentionally not tracked in this repository?

Collaborator:
For consistency, it would be preferable to use the format ${PORTFOLIO_STR}_PG_${MODE}_CDB${CDB_VAL}_DIM${DIM}, or alternatively introduce a shared function that adds a common prefix to all parameters so that naming remains consistent across Slurm jobs.
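One hedged sketch of such a shared helper (the name `make_run_name` and its argument order are hypothetical, not part of this PR): every parameter is prefixed with its label, so all Slurm scripts produce the same naming scheme.

```shell
# Hypothetical helper, not part of this PR: build one canonical run name so
# every Slurm script labels its parameters the same way (CDB<val>, DIM<val>).
make_run_name() {
  local portfolio_str=$1 agent=$2 mode=$3 cdb=$4 dim=$5
  echo "${portfolio_str}_${agent}_${mode}_CDB${cdb}_DIM${dim}"
}

make_run_name "MADDE_CMAES_SPSO" "PG" "CV-LOIO" "1.5" "10"
# MADDE_CMAES_SPSO_PG_CV-LOIO_CDB1.5_DIM10
```

Each script could then call `make_run_name "$PORTFOLIO_STR" PG "$MODE" "$CDB_VAL" "$DIM"` instead of hand-assembling the name.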

Collaborator:
But that’s (^) just nitpicking.

-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --dimensionality $DIM \
--cdb $CDB_VAL --n_epochs 3 --agent policy-gradient

2 changes: 1 addition & 1 deletion README.md
@@ -73,7 +73,7 @@ uv run das <name> [options]
| `-x`, `--cdb` | `float` | `1.0` | **Checkpoint Division Exponent**; determines how quickly checkpoint length increases. |
| `-r`, `--state-representation` | `str` | `ELA` | Method used to extract features from the algorithm population. |
| `-d`, `--force-restarts` | `bool` | `False` | Enable selection of forcibly restarting optimizers. |
-  | `-D`, `--dimensionality` | `int` | `None` | Dimensionality of problems. |
+  | `-D`, `--dimensionality` | `list[int]` | `[2, 3, 5, 10, 20, 40]` | Dimensionality of problems. |
| `-E`, `--n_epochs` | `int` | `1` | Number of training epochs. |
| `-O`, `--reward-option` | `int` | `1` | ID of method used to compute reward. |

1 change: 0 additions & 1 deletion dynamicalgorithmselection/agents/RLDAS_agent.py
@@ -272,7 +272,6 @@ def optimize(self, fitness_function=None, args=None):

self._n_generations += 1
self._print_verbose_info(fitness, self.best_so_far_y)
-  print(self._n_generations)
fes_end = self.n_function_evaluations
speed_factor = self.max_function_evaluations / fes_end

4 changes: 0 additions & 4 deletions dynamicalgorithmselection/agents/agent_utils.py
@@ -1,5 +1,3 @@
-  from typing import Optional

import numpy as np

MAX_DIM = 40
@@ -14,7 +12,6 @@ def get_runtime_stats(
"""
:param fitness_history: list of tuples [fe, fitness] with only points where best so far fitness improved
:param function_evaluations: max number of function evaluations during run.
-  :param checkpoints: list of checkpoints by their n_function_evaluations
:return: dictionary of selected run statistics, ready to dump
"""
area_under_optimization_curve = 0.0
@@ -43,7 +40,6 @@ def get_extreme_stats(
"""
:param fitness_histories: list of lists of tuples [fe, fitness] with only points where best so far fitness improved for each algorithm
:param function_evaluations: max number of function evaluations during run.
-  :param checkpoints: list of checkpoints by their n_function_evaluations
:return: dictionary of selected run statistics, ready to dump
"""
all_improvements = []
21 changes: 8 additions & 13 deletions portfolio_study.slurm
@@ -9,17 +9,12 @@
#SBATCH --partition=plgrid-gpu-a100
#SBATCH --array=0-9 # 10 tasks total

-  CDB_VAL=${1:-1.5}
+  REWARD_OPTION=${1:-1}

-  if [ "$#" -gt 0 ]; then
-  shift
-  fi
+  CDB_VAL=1.5

+  PORTFOLIO=('MADDE' 'CMAES' 'SPSO')
Collaborator:
Out of curiosity, are we sure we want to use these algorithms in the paper, or is this something that could still be changed?


-  if [ "$#" -eq 0 ]; then
-  PORTFOLIO=('MADDE' 'CMAES' 'SPSO')
-  else
-  PORTFOLIO=("$@")
-  fi
PORTFOLIO_STR=$(IFS="_"; echo "${PORTFOLIO[*]}")


@@ -37,7 +32,7 @@ if [[ $SLURM_ARRAY_TASK_ID -ge 0 && $SLURM_ARRAY_TASK_ID -le 3 ]]; then
DIM=${DIMS[$SLURM_ARRAY_TASK_ID]}
echo "Running Mode: $MODE | Dimension: $DIM"

-  python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_${CDB_VAL}_${DIM} \
+  python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_REWARD_${REWARD_OPTION}_DIM${DIM} \
Collaborator:
nitpicking: it should be REWARD${REWARD_OPTION} or DIM_${DIM}.

-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --dimensionality $DIM \
--cdb $CDB_VAL --n_epochs 3 --agent policy-gradient

@@ -47,21 +42,21 @@ elif [[ $SLURM_ARRAY_TASK_ID -ge 4 && $SLURM_ARRAY_TASK_ID -le 7 ]]; then
DIM=${DIMS[$((SLURM_ARRAY_TASK_ID - 4))]}
echo "Running Mode: $MODE | Dimension: $DIM"

-  python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_${CDB_VAL}_${DIM} \
+  python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_REWARD_${REWARD_OPTION}_DIM${DIM} \
-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --dimensionality $DIM \
--cdb $CDB_VAL --n_epochs 3 --agent policy-gradient

# 3. Multidimensional CV-LOIO (Index 8)
elif [[ $SLURM_ARRAY_TASK_ID -eq 8 ]]; then
MODE="CV-LOIO"
echo "Running Mode: $MODE | Multidimensional PG"
-  python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_MULTIDIMENSIONAL_${MODE}_${CDB_VAL} \
+  python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_MULTIDIMENSIONAL_${MODE}_REWARD_${REWARD_OPTION} \
-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --cdb $CDB_VAL --agent policy-gradient

# 4. Multidimensional CV-LOPO (Index 9)
elif [[ $SLURM_ARRAY_TASK_ID -eq 9 ]]; then
MODE="CV-LOPO"
echo "Running Mode: $MODE | Multidimensional PG"
-  python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_MULTIDIMENSIONAL_${MODE}_${CDB_VAL} \
+  python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_MULTIDIMENSIONAL_${MODE}_REWARD_${REWARD_OPTION} \
-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --cdb $CDB_VAL --agent policy-gradient
fi
67 changes: 67 additions & 0 deletions reward_study.slurm
@@ -0,0 +1,67 @@
#!/bin/bash
#SBATCH --job-name=rl_das_experiment
#SBATCH --output=logs/experiment_%A_%a.out
#SBATCH --error=logs/experiment_%A_%a.err
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=32G
#SBATCH --time=48:00:00
#SBATCH --partition=plgrid-gpu-a100
#SBATCH --array=0-9 # 10 tasks total

CDB_VAL=${1:-1.5}

if [ "$#" -gt 0 ]; then
shift
fi

if [ "$#" -eq 0 ]; then
PORTFOLIO=('JDE21' 'MADDE' 'NL_SHADE_RSP')
Collaborator:
Shouldn't we use PORTFOLIO=('MADDE' 'CMAES' 'SPSO') here?

else
PORTFOLIO=("$@")
fi
PORTFOLIO_STR=$(IFS="_"; echo "${PORTFOLIO[*]}")


# CONFIGURATION
ENV_PATH="$SCRATCH/DynamicAlgorithmSelection/.venv/bin/activate"
source "$ENV_PATH"
mkdir -p logs

# Array of Dimensions
DIMS=(2 3 5 10)

# 1. Dimension-specific CV-LOIO (Indices 0-3)
if [[ $SLURM_ARRAY_TASK_ID -ge 0 && $SLURM_ARRAY_TASK_ID -le 3 ]]; then
Collaborator:
This is a reward_study, but where is the reward actually changed? I don’t see -O or --reward-option. As it stands, this runner seems to just repeat the experiments from single_algorithm_CDB_study.slurm.
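If the reward is indeed meant to vary here, one hedged sketch of a fix (assuming `main.py` accepts the `-O`/`--reward-option` flag documented in the README table of this PR) is to collect the shared flags once, including the reward option, and splice them into each call as `"${COMMON_ARGS[@]}"`:

```shell
# Sketch under the assumption that main.py accepts -O/--reward-option:
# build the shared flag list once so every python3 invocation in the
# runner picks up the swept reward option.
CDB_VAL=1.5
REWARD_OPTION=${1:-1}
COMMON_ARGS=(--cdb "$CDB_VAL" --n_epochs 3 --agent policy-gradient -O "$REWARD_OPTION")
echo "${COMMON_ARGS[@]}"
```
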

MODE="CV-LOIO"
DIM=${DIMS[$SLURM_ARRAY_TASK_ID]}
echo "Running Mode: $MODE | Dimension: $DIM"

python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_${CDB_VAL}_${DIM} \
-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --dimensionality $DIM \
--cdb $CDB_VAL --n_epochs 3 --agent policy-gradient

# 2. Dimension-specific CV-LOPO (Indices 4-7)
elif [[ $SLURM_ARRAY_TASK_ID -ge 4 && $SLURM_ARRAY_TASK_ID -le 7 ]]; then
MODE="CV-LOPO"
DIM=${DIMS[$((SLURM_ARRAY_TASK_ID - 4))]}
echo "Running Mode: $MODE | Dimension: $DIM"

python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_${CDB_VAL}_${DIM} \
-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --dimensionality $DIM \
--cdb $CDB_VAL --n_epochs 3 --agent policy-gradient

# 3. Multidimensional CV-LOIO (Index 8)
elif [[ $SLURM_ARRAY_TASK_ID -eq 8 ]]; then
MODE="CV-LOIO"
echo "Running Mode: $MODE | Multidimensional PG"
python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_MULTIDIMENSIONAL_${MODE}_${CDB_VAL} \
-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --cdb $CDB_VAL --agent policy-gradient

# 4. Multidimensional CV-LOPO (Index 9)
elif [[ $SLURM_ARRAY_TASK_ID -eq 9 ]]; then
MODE="CV-LOPO"
echo "Running Mode: $MODE | Multidimensional PG"
python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_MULTIDIMENSIONAL_${MODE}_${CDB_VAL} \
-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --cdb $CDB_VAL --agent policy-gradient
fi
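As an aside, the `PORTFOLIO_STR` line used in these scripts relies on a standard bash idiom: `"${PORTFOLIO[*]}"` joins the array elements with the first character of `IFS`, and wrapping the assignment in a `$( ... )` subshell keeps the `IFS` change local. A minimal sketch:

```shell
PORTFOLIO=('JDE21' 'MADDE' 'NL_SHADE_RSP')
# "${PORTFOLIO[*]}" joins the elements with the first character of IFS;
# the $( ... ) subshell keeps the modified IFS from leaking out.
PORTFOLIO_STR=$(IFS="_"; echo "${PORTFOLIO[*]}")
echo "$PORTFOLIO_STR"
# JDE21_MADDE_NL_SHADE_RSP
```
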
56 changes: 27 additions & 29 deletions runner.slurm
@@ -9,15 +9,13 @@
#SBATCH --partition=plgrid-gpu-a100
#SBATCH --array=0-23 # Increased to 24 tasks total to split sequential runs

+  # 1st argument: CDB_VAL (Default: 1.5)
CDB_VAL=${1:-1.5}

-  if [ "$#" -gt 0 ]; then
-  shift
-  fi

-  # Store the remaining arguments as an array called PORTFOLIO.
-  # If no additional arguments were provided, fall back to your default.
+  # 2nd argument: SEED (Default: 42)
+  SEED=${2:-42}

+  # Fixed PORTFOLIO variable
PORTFOLIO=('JDE21' 'MADDE' 'NL_SHADE_RSP')

# CONFIGURATION
@@ -31,72 +29,72 @@ DIMS=(2 3 5 10)
# 1. Dimension-specific CV-LOIO | RL-DAS (Indices 0-3)
if [[ $SLURM_ARRAY_TASK_ID -ge 0 && $SLURM_ARRAY_TASK_ID -le 3 ]]; then
MODE="CV-LOIO"
DIM=${DIMS[$SLURM_ARRAY_TASK_ID]}
echo "Running Mode: $MODE | Agent: RL-DAS | Dimension: $DIM"

-  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_RLDAS_${MODE}_${DIM} \
-  -p "${PORTFOLIO[@]}" --mode $MODE --dimensionality $DIM --n_epochs 40 --agent RL-DAS
+  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_RLDAS_${MODE}_DIM${DIM}_SEED${SEED} \
+  -p "${PORTFOLIO[@]}" --mode $MODE --dimensionality $DIM --n_epochs 40 --agent RL-DAS -S "$SEED"

# 2. Dimension-specific CV-LOIO | Policy Gradient (Indices 4-7)
elif [[ $SLURM_ARRAY_TASK_ID -ge 4 && $SLURM_ARRAY_TASK_ID -le 7 ]]; then
MODE="CV-LOIO"
DIM=${DIMS[$((SLURM_ARRAY_TASK_ID - 4))]}
echo "Running Mode: $MODE | Agent: Policy Gradient | Dimension: $DIM"

-  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_PG_${MODE}_${CDB_VAL}_${DIM} \
+  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_PG_${MODE}_${CDB_VAL}_DIM${DIM}_SEED${SEED} \
   -p "${PORTFOLIO[@]}" -r custom --mode $MODE --dimensionality $DIM \
-  --cdb $CDB_VAL --n_epochs 3 --agent policy-gradient
+  --cdb $CDB_VAL --n_epochs 3 --agent policy-gradient -S "$SEED"

# 3. Dimension-specific CV-LOPO | RL-DAS (Indices 8-11)
elif [[ $SLURM_ARRAY_TASK_ID -ge 8 && $SLURM_ARRAY_TASK_ID -le 11 ]]; then
MODE="CV-LOPO"
DIM=${DIMS[$((SLURM_ARRAY_TASK_ID - 8))]}
echo "Running Mode: $MODE | Agent: RL-DAS | Dimension: $DIM"

-  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_RLDAS_${MODE}_${DIM} \
-  -p "${PORTFOLIO[@]}" --mode $MODE --dimensionality $DIM --n_epochs 40 --agent RL-DAS
+  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_RLDAS_${MODE}_DIM${DIM}_SEED${SEED} \
+  -p "${PORTFOLIO[@]}" --mode $MODE --dimensionality $DIM --n_epochs 40 --agent RL-DAS -S "$SEED"

# 4. Dimension-specific CV-LOPO | Policy Gradient (Indices 12-15)
elif [[ $SLURM_ARRAY_TASK_ID -ge 12 && $SLURM_ARRAY_TASK_ID -le 15 ]]; then
MODE="CV-LOPO"
DIM=${DIMS[$((SLURM_ARRAY_TASK_ID - 12))]}
echo "Running Mode: $MODE | Agent: Policy Gradient | Dimension: $DIM"

-  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_PG_${MODE}_${CDB_VAL}_${DIM} \
-  -p "${PORTFOLIO[@]}" -r custom --mode $MODE --dimensionality $DIM \
-  --cdb $CDB_VAL --n_epochs 3 --agent policy-gradient
+  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_PG_${MODE}_${CDB_VAL}_DIM${DIM}_SEED${SEED} \
+  -p "${PORTFOLIO[@]}" --mode $MODE --dimensionality $DIM \
+  --cdb $CDB_VAL --n_epochs 3 --agent policy-gradient -S "$SEED"

# 5. Dimension-specific RL-DAS-random (Indices 16-19)
elif [[ $SLURM_ARRAY_TASK_ID -ge 16 && $SLURM_ARRAY_TASK_ID -le 19 ]]; then
DIM=${DIMS[$((SLURM_ARRAY_TASK_ID - 16))]}
echo "Running Mode: Random Agent - RLDAS variant | Dimension: $DIM"

-  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_RANDOM_DAS_${DIM} \
-  -p 'JDE21' 'MADDE' 'NL_SHADE_RSP' --agent RL-DAS-random --dimensionality $DIM
+  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_RANDOM_DAS_DIM${DIM}_SEED${SEED} \
+  -p "${PORTFOLIO[@]}" --agent RL-DAS-random --dimensionality $DIM -S "$SEED"

# 6. Multidimensional CV-LOIO (Index 20)
elif [[ $SLURM_ARRAY_TASK_ID -eq 20 ]]; then
MODE="CV-LOIO"
echo "Running Mode: $MODE | Multidimensional PG"
-  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_PG_MULTIDIMENSIONAL_${MODE}_${CDB_VAL} \
-  -p "${PORTFOLIO[@]}" -r custom --mode $MODE --cdb $CDB_VAL --agent policy-gradient
+  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_PG_MULTIDIMENSIONAL_${MODE}_${CDB_VAL}_SEED${SEED} \
+  -p "${PORTFOLIO[@]}" --mode $MODE --cdb $CDB_VAL --agent policy-gradient -S "$SEED"

# 7. Multidimensional CV-LOPO (Index 21)
elif [[ $SLURM_ARRAY_TASK_ID -eq 21 ]]; then
MODE="CV-LOPO"
echo "Running Mode: $MODE | Multidimensional PG"
-  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_PG_MULTIDIMENSIONAL_${MODE}_${CDB_VAL} \
-  -p "${PORTFOLIO[@]}" -r custom --mode $MODE --cdb $CDB_VAL --agent policy-gradient
+  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_PG_MULTIDIMENSIONAL_${MODE}_${CDB_VAL}_SEED${SEED} \
+  -p "${PORTFOLIO[@]}" --mode $MODE --cdb $CDB_VAL --agent policy-gradient -S "$SEED"

# 8. Global Random Agent (Index 22)
elif [[ $SLURM_ARRAY_TASK_ID -eq 22 ]]; then
echo "Running Mode: Global Random Agent"
-  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_RANDOM_${CDB_VAL} \
-  -p "${PORTFOLIO[@]}" --cdb $CDB_VAL --agent random
+  python3 dynamicalgorithmselection/main.py JDE21_MADDE_NL_SHADE_RSP_RANDOM_${CDB_VAL}_SEED${SEED} \
+  -p "${PORTFOLIO[@]}" --cdb $CDB_VAL --agent random -S "$SEED"

# 9. Global Baselines (Index 23)
elif [[ $SLURM_ARRAY_TASK_ID -eq 23 ]]; then
echo "Running Mode: Baselines"
python3 dynamicalgorithmselection/main.py BASELINES \
-  -p "${PORTFOLIO[@]}" --mode baselines
+  -p "${PORTFOLIO[@]}" --mode baselines -S "$SEED"
fi
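The `${1:-1.5}` / `${2:-42}` pattern in this runner is plain positional-parameter expansion with defaults: the value of the n-th argument if it is set and non-empty, otherwise the literal after `:-`. A small sketch with simulated arguments (not this script's actual invocation):

```shell
set -- 2.0 7          # simulate: sbatch runner.slurm 2.0 7
CDB_VAL=${1:-1.5}     # first CLI argument, default 1.5 when absent
SEED=${2:-42}         # second CLI argument, default 42 when absent
echo "$CDB_VAL $SEED"
# 2.0 7
```
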
62 changes: 62 additions & 0 deletions single_algorithm_CDB_study.slurm
@@ -0,0 +1,62 @@
#!/bin/bash
#SBATCH --job-name=rl_das_experiment
#SBATCH --output=logs/experiment_%A_%a.out
#SBATCH --error=logs/experiment_%A_%a.err
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=32G
#SBATCH --time=48:00:00
#SBATCH --partition=plgrid-gpu-a100
#SBATCH --array=0-9 # 10 tasks total

REWARD_OPTION=${1:-1}

CDB_VAL=1.5

PORTFOLIO=('MADDE')

PORTFOLIO_STR=$(IFS="_"; echo "${PORTFOLIO[*]}")


# CONFIGURATION
ENV_PATH="$SCRATCH/DynamicAlgorithmSelection/.venv/bin/activate"
source "$ENV_PATH"
mkdir -p logs

# Array of Dimensions
DIMS=(2 3 5 10)

# 1. Dimension-specific CV-LOIO (Indices 0-3)
Collaborator:
To be honest, for a single-algorithm experiment I wouldn’t run that many experiments. Since we are not comparing it against a baseline (e.g., RL-DAS), there may be less need to evaluate it across many different CV variants. I would probably focus on a single setting, such as Multidimensional CV-LOPO or Multidimensional CV-LOIO.

Collaborator:
The key question is what exactly we want to measure for a single algorithm, and whether this will be compared against any baseline or used only to explain the motivation or introduce our own metric.

if [[ $SLURM_ARRAY_TASK_ID -ge 0 && $SLURM_ARRAY_TASK_ID -le 3 ]]; then
MODE="CV-LOIO"
DIM=${DIMS[$SLURM_ARRAY_TASK_ID]}
echo "Running Mode: $MODE | Dimension: $DIM"

python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_DIM${DIM} \
-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --dimensionality $DIM \
Collaborator:
Oh, that’s interesting: I thought you used custom by default. Which representation works better?

--cdb $CDB_VAL --n_epochs 3 --agent policy-gradient
Collaborator:
Why 3?


# 2. Dimension-specific CV-LOPO (Indices 4-7)
elif [[ $SLURM_ARRAY_TASK_ID -ge 4 && $SLURM_ARRAY_TASK_ID -le 7 ]]; then
MODE="CV-LOPO"
DIM=${DIMS[$((SLURM_ARRAY_TASK_ID - 4))]}
echo "Running Mode: $MODE | Dimension: $DIM"

python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_${MODE}_DIM${DIM} \
-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --dimensionality $DIM \
--cdb $CDB_VAL --n_epochs 3 --agent policy-gradient

# 3. Multidimensional CV-LOIO (Index 8)
Collaborator:
Here we also include dim=40, which was not included before. Is this intentional?

DIMENSIONS = [2, 3, 5, 10, 20, 40]
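If dim=40 is unintended, one hedged option (assuming `--dimensionality` accepts several values, as the updated README table in this PR suggests) is to pass the studied dimensions explicitly to the multidimensional runs instead of relying on the default:

```shell
# Hypothetical sketch: pin the multidimensional tasks to the same dimensions
# as the per-dimension tasks, instead of the assumed default 2 3 5 10 20 40.
DIMS=(2 3 5 10)
DIM_ARGS=(--dimensionality "${DIMS[@]}")
echo "${DIM_ARGS[@]}"
# --dimensionality 2 3 5 10
```

`"${DIM_ARGS[@]}"` could then be appended to the two multidimensional `main.py` calls below.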

elif [[ $SLURM_ARRAY_TASK_ID -eq 8 ]]; then
MODE="CV-LOIO"
echo "Running Mode: $MODE | Multidimensional PG"
python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_MULTIDIMENSIONAL_${MODE} \
-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --cdb $CDB_VAL --agent policy-gradient

# 4. Multidimensional CV-LOPO (Index 9)
elif [[ $SLURM_ARRAY_TASK_ID -eq 9 ]]; then
MODE="CV-LOPO"
echo "Running Mode: $MODE | Multidimensional PG"
python3 dynamicalgorithmselection/main.py ${PORTFOLIO_STR}_PG_MULTIDIMENSIONAL_${MODE} \
-p "${PORTFOLIO[@]}" -r ELA --mode $MODE --cdb $CDB_VAL --agent policy-gradient
fi