This repository was archived by the owner on Jun 15, 2022. It is now read-only.
Hi, @hirotomusiker.
I'm back again. As the title says, I am confused about the train loss, size_average, and the performance. I have trained both the original darknet repo and this repo on my own dataset (3 classes), and I want to share the results here.
The params are the same: MAXITER: 6000, STEPS: (4800, 5400), IMGSIZE: 608 (for both train and test).
With darknet, I got an mAP@0.5 of 79.0, and the final loss was 0.76 (avg).
With this repo, the mAP@0.5 was 76.9, and the final loss was 4.7 (total).
It seems that with this repo, the loss is harder to converge. So I changed the params for this repo (MAXITER: 8000, STEPS: (6400, 7200)) and got an mAP@0.5 of 78.3, with a final loss of 8.2 (total).
So I have some questions.
The performance seems different; could this be caused by the shuffling of the dataset?
The loss of this repo is larger and harder to converge compared to darknet. What is the reason?
In #44, you talked about the size_average param and said that the loss of darknet is also high, didn't you?
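For what it's worth, part of the magnitude gap may simply be the loss reduction. This is a minimal sketch (not this repo's actual loss code) showing that in PyTorch, summing over elements (`reduction='sum'`, roughly the old `size_average=False`) reports a value larger than averaging (`reduction='mean'`) by exactly the element count, even though both describe the same fit:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical predictions/targets standing in for per-cell objectness scores.
pred = torch.sigmoid(torch.randn(4, 10))
target = torch.rand(4, 10)

# Same data, two reductions: 'sum' adds all element losses,
# 'mean' divides that sum by the number of elements.
loss_sum = nn.BCELoss(reduction='sum')(pred, target)
loss_mean = nn.BCELoss(reduction='mean')(pred, target)

ratio = (loss_sum / loss_mean).item()
print(ratio)  # equals the element count, 4 * 10 = 40
```

So a "total" loss of 4.7 here and an "avg" loss of 0.76 in darknet are not directly comparable unless they use the same normalization.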