Hi! Thanks for the great tool. I was able to use the package and track users successfully. However, I am having trouble calibrating the Kinect for my environment.
I usually do my extrinsic calibration by detecting a marker with both a camera on the robot and the Kinect, and estimating the transformation between the two viewpoints.
I was able to calibrate my Kinect v2 with the driver available at https://github.com/code-iai/iai_kinect2.
That transformation works for the point cloud, but not for the skeleton.
Do you know which reference frame the camera uses?
Alternatively, do you know how I can view the point cloud used by the tracker (in order to identify its viewpoint), and if/how I can apply a transformation to the cloud?
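For context, this is roughly how I apply my calibration result to the cloud (a minimal NumPy sketch; the matrix `T` below is just a placeholder standing in for the extrinsic matrix I estimate from the marker):

```python
import numpy as np

# Placeholder extrinsic transform T (normally estimated from marker
# detection): here, a 90-degree rotation about Z plus a translation.
T = np.array([
    [0.0, -1.0, 0.0, 0.5],
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 1.2],
    [0.0,  0.0, 0.0, 1.0],
])

# Example N x 3 point cloud in the Kinect's frame.
points = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 2.0, 0.0],
])

# Convert to homogeneous coordinates, apply T, and drop the w component.
homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
transformed = (T @ homogeneous.T).T[:, :3]
print(transformed)  # points expressed in the robot camera's frame
```

This works as expected on the cloud, which is why I suspect the skeleton joints are expressed in a different reference frame.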