Automated Eval-Demo Update #32

@ClashLuke

Description

At the moment, the evaluation demo has to be updated manually by SSHing into the server, pulling the latest code, and restarting the WebAPI. If we instead had a script that runs after every long-running experiment (from #31), we could ensure that the demo always serves the latest code and checkpoint, without the extended downtime that human error can introduce. A sketch of such a hook is below.
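
A minimal sketch of what such a post-experiment update hook could look like, assuming the demo checkout lives at `/srv/eval-demo` and the WebAPI runs as a systemd unit named `eval-demo` (both names are hypothetical placeholders, not the actual deployment):

```python
import subprocess
from pathlib import Path

# Hypothetical locations; adjust to the actual deployment.
REPO_DIR = Path("/srv/eval-demo")  # checkout that serves the demo
SERVICE_NAME = "eval-demo"         # systemd unit running the WebAPI


def update_demo() -> None:
    """Pull the latest code and restart the demo's WebAPI.

    Intended to run at the end of every long-running experiment
    (see #31) so the demo always serves the newest code and
    checkpoint without a manual SSH step.
    """
    # Fetch and fast-forward to the latest commit; fail loudly on conflicts.
    subprocess.run(["git", "pull", "--ff-only"], cwd=REPO_DIR, check=True)
    # Restart the WebAPI so it picks up the new code and checkpoint.
    # Assumes the script runs with sufficient privileges for systemctl.
    subprocess.run(["systemctl", "restart", SERVICE_NAME], check=True)


if __name__ == "__main__":
    update_demo()
```

Invoking this at the end of the experiment runner from #31 would make the update path identical on every run, removing the manual SSH-pull-restart cycle entirely.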

Metadata

Assignees

No one assigned

Labels

engineering (Software-engineering problems that don't require ML-Expertise), mlops

Type

No type

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests
