
Add Tests for Evaluation Output #45

@Ueva

Description


Add some test cases to ensure that all of the evaluation methods we support actually give the outputs we expect.

This could be done quite simply by simulating a few short episodes of interaction on a very simple MDP, both with and without skills, along the lines of the sketch below. See the existing run_agent test cases for inspiration.
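
A minimal sketch of what one such test might look like. Everything here is a hypothetical stand-in, not this repo's actual API: `TwoStateChain`, `GoRightAgent`, and the `evaluate_agent` signature are illustrative placeholders for the real environment, agent, and evaluation method (the run_agent tests show the real interfaces). The point is that a deterministic toy MDP fixes the expected outputs exactly, so the test can assert them directly.

```python
# test_evaluation_output.py -- illustrative sketch only; all names below are
# assumptions, not this repo's actual classes or functions.


class TwoStateChain:
    """Deterministic toy MDP: start in state 0; action 1 moves right.
    Reaching state 2 ends the episode with reward +1; other steps give 0."""

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state += 1 if action == 1 else 0
        done = self.state == 2
        reward = 1.0 if done else 0.0
        return self.state, reward, done


class GoRightAgent:
    """Fixed policy standing in for a trained agent (with or without skills)."""

    def select_action(self, state):
        return 1


def evaluate_agent(agent, env, n_episodes):
    """Stand-in for the repo's evaluation method (hypothetical signature).
    Returns a list of undiscounted per-episode returns."""
    returns = []
    for _ in range(n_episodes):
        state, done, total = env.reset(), False, 0.0
        while not done:
            state, reward, done = env.step(agent.select_action(state))
            total += reward
        returns.append(total)
    return returns


def test_evaluation_returns_expected_outputs():
    returns = evaluate_agent(GoRightAgent(), TwoStateChain(), n_episodes=5)
    # One return per episode, and the deterministic MDP fixes each value.
    assert len(returns) == 5
    assert all(r == 1.0 for r in returns)
```

Running `pytest test_evaluation_output.py` exercises the whole loop in well under a second. A skills variant would follow the same pattern, swapping GoRightAgent for an agent whose policy invokes a skill, with the expected return adjusted accordingly.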
