Hyperparameter Optimization Isolate #15

@rafecchang

Description

Here I am trying to optimize hyperparameters individually. Since the original optimization run (tuning learning_rate, num_trainable_layers, dropout_rate, batch_size, step_size, gamma, and epochs together) gave a higher accuracy score, I am running new hyperparameter optimization trials, varying one factor at a time, to investigate which variables actually improve the model's performance.

Objective:

  • Run a few trials of the original model + early stopping, with the max epochs raised from 5 to 10
  • Run a few trials of the original model + gradient clipping (both are sketched in the code after this list)
  • Run a few trials of the original model + switching the training/testing sets
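
As a reference for the first two bullets, a minimal training-loop sketch, assuming PyTorch; the `patience` and `clip_norm` values are illustrative, and `model`, `train_loader`, `val_loader`, `criterion`, and `optimizer` come from the existing setup:

```python
import torch

def evaluate(model, loader):
    """Accuracy on a held-out loader."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for inputs, labels in loader:
            preds = model(inputs).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return correct / total

def train(model, train_loader, val_loader, criterion, optimizer,
          max_epochs=10, patience=2, clip_norm=1.0):
    best_acc, stale = 0.0, 0
    for epoch in range(max_epochs):  # max epochs raised from 5 to 10
        model.train()
        for inputs, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()
            # Gradient clipping: cap the global gradient norm before stepping
            torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
            optimizer.step()
        val_acc = evaluate(model, val_loader)
        if val_acc > best_acc:
            best_acc, stale = val_acc, 0
        else:
            stale += 1
            if stale >= patience:  # early stopping: no improvement for `patience` epochs
                break
    return best_acc
```

Clipping and early stopping are independent here, so each trial can enable just one of them to isolate its effect.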

Evaluation:

  • Train/validate with the original split
  • Train/validate with a random split (see the split sketch after this list)
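
For the random-split evaluation, a sketch using torch.utils.data.random_split; `full_dataset` and the 80/20 ratio are assumptions, since the issue does not specify them:

```python
import torch
from torch.utils.data import random_split

# `full_dataset` is a placeholder for the pooled train + validation data.
generator = torch.Generator().manual_seed(42)  # fixed seed for reproducibility
n_train = int(0.8 * len(full_dataset))         # assumed 80/20 ratio
train_set, val_set = random_split(
    full_dataset, [n_train, len(full_dataset) - n_train], generator=generator
)
```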

Next Step:

  • Determine which hyperparameters to keep
  • Combine those hyperparameters, run a few trials with the combination, and assess performance (see the combined-search sketch after this list)
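
One possible shape for the final combined run, assuming an Optuna study (the issue does not name the search framework); `build_model` and `train_and_validate` are hypothetical helpers, and the ranges are illustrative, to be narrowed to whichever hyperparameters survive the single-factor trials:

```python
import optuna

def objective(trial):
    # Search only over the hyperparameters kept after the single-factor trials;
    # the ranges below are illustrative assumptions, not values from this issue.
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
    dropout = trial.suggest_float("dropout_rate", 0.0, 0.5)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])
    model = build_model(dropout)  # hypothetical model factory
    return train_and_validate(model, lr, batch_size)  # returns validation accuracy

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)  # "a few trials"
print(study.best_params)
```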
