Evaluation Results Differ from README Table #6
Hello,
I followed the steps in the README to download the pretrained model ("own model 2") and ran the following command to reproduce the results shown in the figure and table in the README:
```
python eval.py --path_to_checkpoints=./pretrained_models --path_to_data=./data
```
Note: I did not include the --grid_calibration_samples=True flag because it significantly increases the evaluation time.
However, the results I obtained differ from the Evaluation table in the README.
My evaluation results:
| Test Metric | DataLoader 0 |
|---|---|
| test/offset(k=0)/angular_error | 11.520730018615723 |
| test/offset(k=0)/loss | 0.024358397349715233 |
| test/offset(k=128)/mean_angular_error | 11.36042039937973 |
| test/offset(k=128)/std_angular_error | 0.04376071219296659 |
| test/offset(k=9)/mean_angular_error | 11.74582172794342 |
| test/offset(k=9)/std_angular_error | 0.43741419828264544 |
| test/offset(k=all)/angular_error | 11.332329750061035 |
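One pattern in the table above: the std of the angular error shrinks from ~0.44 at k=9 to ~0.04 at k=128, which is consistent with averaging over more calibration samples. A minimal, hypothetical sketch of that effect (synthetic errors and illustrative function names only, not code from this repo):

```python
import random
import statistics

def mean_std_over_trials(errors, k, trials=100, seed=0):
    """Estimate the mean and std of the average angular error when each
    trial averages k randomly drawn per-sample errors (illustrative only)."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.sample(errors, k)) for _ in range(trials)]
    return statistics.fmean(means), statistics.stdev(means)

# Synthetic per-sample angular errors centered near the ~11.5 degrees reported above.
rng = random.Random(42)
errors = [rng.gauss(11.5, 2.0) for _ in range(1000)]

m9, s9 = mean_std_over_trials(errors, k=9)
m128, s128 = mean_std_over_trials(errors, k=128)
print(s9 > s128)  # fewer calibration samples -> larger spread across trials
```

So some spread between runs is expected at small k; it does not by itself explain a constant offset from the README numbers, which is more likely a checkpoint or data/preprocessing mismatch.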
Expected results from the README:

Has anyone else encountered a similar issue? Is there a fix, or a step I might have missed?
Thank you very much!