First of all, thank you for your work!
First, I consulted ObjectDatasetsTools and made my custom dataset with a D435i.
1. I read the paper and learned that the regularization parameter needs to be set to 0 during pre-training. So is it correct to train on my dataset directly with the following command?

python train.py --datacfg cfg/myobj.data --modelcfg cfg/yolo-pose.cfg --initweightfile cfg/darknet19_448.conv.23 --pretrain_num_epochs 15

or with:

python train.py --datacfg cfg/myobj.data --modelcfg cfg/yolo-pose.cfg --initweightfile backup/duck/init.weights
What's the difference between yolo-pose.cfg and yolo-pose-pre.cfg? Is yolo-pose-pre.cfg the one used for pre-training on my target object?
2. After training, I ran valid.py to evaluate the model and got the following results:
The 3D transformation accuracy is significantly lower than the other two metrics. Is this a normal result?
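For context, my understanding (which may be wrong) is that the 3D transformation accuracy corresponds to the ADD metric, so I sanity-checked my numbers with a small script of my own. This is only a sketch under that assumption; model_pts and diameter are placeholders for my object's mesh vertices (in meters) and its diameter:

```python
import numpy as np

def add_correct(R_gt, t_gt, R_pr, t_pr, model_pts, diameter, threshold=0.10):
    """ADD-style check: transform the model points with the ground-truth and
    the predicted pose, and count the pose as correct when the mean
    point-to-point distance is below threshold * object diameter."""
    pts_gt = model_pts @ R_gt.T + t_gt.reshape(1, 3)
    pts_pr = model_pts @ R_pr.T + t_pr.reshape(1, 3)
    mean_dist = np.linalg.norm(pts_gt - pts_pr, axis=1).mean()
    return mean_dist < threshold * diameter
```

If valid.py computes this metric differently, please correct me.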
3. If the above steps are correct: I take a frame from the video stream and use it as the RGB input to get R_pr and t_pr. Which coordinate system of the camera are they expressed in? (I used a RealSense D435i.) I have been treating the resulting [R|t] as relative to the camera's optical frame, but when I sent it to the robot arm the grasp was incorrect.
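To make the question concrete, this is roughly what I am doing on the robot side. It is only a sketch of my own pipeline: I assume R_pr and t_pr describe the object pose in the camera's color optical frame (in meters), and T_base_cam is my hand-eye calibration from the robot base to that camera frame (both names are my placeholders, not from this repo):

```python
import numpy as np

def object_pose_in_base(R_pr, t_pr, T_base_cam):
    """Compose the predicted camera->object pose with the (assumed)
    base->camera extrinsics to get the 4x4 object pose in the robot base frame."""
    T_cam_obj = np.eye(4)
    T_cam_obj[:3, :3] = R_pr            # predicted rotation
    T_cam_obj[:3, 3] = np.ravel(t_pr)   # predicted translation (meters)
    return T_base_cam @ T_cam_obj       # object pose expressed in the base frame
```

If R_pr and t_pr are actually expressed in a different frame or axis convention than I assume, that would explain the wrong grasp, so I would like to confirm the convention.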
@btekin @snsinha Can you answer these questions for me? Looking forward to your quick answer!