Updates to the automation scripts #49

@katilp

1. Start workflow

As discussed, add a function to run the start workflow. It must be run before the actual run to get the images onto the nodes, avoiding several simultaneous image pulls on the same node.
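
A minimal sketch of such a function, assuming the start workflow is defined in a file named start-workflow.yaml (a placeholder) and runs in the argo namespace:

run_start_workflow () {
  # Submit the start workflow and block until it completes, so that the
  # images are on the nodes before the actual run is submitted.
  argo submit start-workflow.yaml -n argo --wait
}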

Add a monitor_start function that samples the resource usage values of nodes and all runpfnano pods.

For nodes it is:

kubectl top nodes

For pods, you could do something like

kubectl top pods -n argo | grep runpfnano

That will allow us (or users) to understand the unconstrained CPU and memory needs of the jobs.
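
A sketch of what monitor_start could look like, assuming the cluster metrics server is available for kubectl top; the sampling interval and log file names are placeholders:

monitor_start () {
  # Sample node and runpfnano pod resource usage once a minute and append
  # the values to log files for later inspection.
  while true; do
    date >> node_usage.log
    kubectl top nodes >> node_usage.log
    date >> pod_usage.log
    kubectl top pods -n argo | grep runpfnano >> pod_usage.log
    sleep 60
  done
}

# Start sampling in the background and keep the PID to stop it afterwards:
monitor_start &
MONITOR_PID=$!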

2. Command-line inputs to argo submit and terraform apply to avoid sed

As discussed, sed is a bit brutal. It is better to use command-line arguments when possible.

2.1 Argo submit

Argo submit can take the global workflow parameters with

$ argo submit --help
[ ... ]
  -p, --parameter stringArray        pass an input parameter
[ ... ]

where the stringArray value would be e.g. nJobs="6".

Be careful with quotes in the script; note that:

$ NUM_NODES=3
$ echo nJobs="$NUM_NODES"
nJobs=3
$ echo nJobs=\"$NUM_NODES\"
nJobs="3"

Edit: However, this does not matter: -p nJobs=3 works as well.
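
In the script this could look like the following; the workflow file name, the namespace, and the NUM_JOBS variable are assumptions:

NUM_JOBS=6
# Pass the global workflow parameter on the command line instead of
# editing the yaml with sed; no extra quoting of the value is needed.
argo submit workflow.yaml -n argo -p nJobs="$NUM_JOBS"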

2.2 Terraform apply

You can pass the variables to terraform plan and terraform apply with the -var flag, e.g.

terraform plan -var "gke_num_nodes=3" -var "gke_machine_type=e2-standard-8"

and in the script

terraform plan -var "gke_num_nodes=$NUM_NODES" -var ...

It remains to be confirmed that this works properly both for strings (machine type) and for numerical values (number of nodes).

This will avoid modifying the tfvars file.
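
Put together in the script, with the node count and machine type read from shell variables (the Terraform variable names are taken from the example above):

NUM_NODES=3
MACHINE_TYPE=e2-standard-8
# Terraform converts the command-line value to the declared variable type,
# so both the numeric and the string value should be accepted this way.
terraform apply -var "gke_num_nodes=$NUM_NODES" -var "gke_machine_type=$MACHINE_TYPE"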
