Conversation
Signed-off-by: Charayaphan Nakorn Boon Han <charayaphan.nakorn.boon.han@gmail.com>
koonpeng left a comment
"Fix" DDS NAT traversal by changing DDS to use FastRTPS, which shared memory transport bypasses this
Prefer there to be a proper fix, rmf does not place restriction on the transport protocol used so the example should at least show a working deployment using the default (iirc cyclone in galactic?).
I don't immediately know of a good way to get around NAT for DDS apart from setting up some sort of VPN and using static discovery, unfortunately. Perhaps RMF (core) could be run inside the minikube as a Docker image?
I guess the problem is because minikube and all the pods run Docker-in-Docker in their own network namespace? So "host networking" for the pods would mean the minikube network, not the actual host network. I don't know if there is a way to make minikube itself use host networking; https://stackoverflow.com/questions/66378335/starting-minikube-with-docker-driver-and-bind-it-to-host-network discusses some options. But if the problem is due to NAT routing, then it could happen in an actual deployment as well, whenever rmf is on a different network or there is a firewall in between.
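For reference, one option that avoids the extra network namespace entirely is minikube's `none` driver, which runs the Kubernetes components directly on the host (and therefore on the host network). This is only a sketch of that idea, not necessarily what the linked answer recommends; it requires root and a Linux host:

```shell
# Sketch: run Kubernetes components directly on the host instead of
# inside a Docker network namespace. Requires root on a Linux machine.
sudo minikube start --driver=none
```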
Yes, from my experience it is exactly as you said. Running it without NAT risks interference from processes already running on the local network, but DDS NAT traversal is a real challenge. What I have done in the past is to avoid DDS over NAT in the first place; in cases where it was required, I used static discovery over a VPN. That feels like it complicates things for this case, but I am open to setting that mechanism up here instead of relying on FastDDS exclusively. What do you think?

For static discovery to work, we would need to set up a VPN server and configure VPN access for both the minikube network and the host network. We would then need a DDS-specific configuration file for each DDS implementation we want to support in this example, containing the VPN IP address of the host machine as a static discovery peer. This file has to be loaded as a ConfigMap in the Kubernetes containers and must also exist on the host machine.
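To make the static-discovery idea concrete, here is a sketch of what such a configuration could look like for CycloneDDS, mounted into the pods via a ConfigMap. The ConfigMap name and the VPN address `10.8.0.1` are hypothetical placeholders:

```yaml
# Hypothetical ConfigMap carrying a CycloneDDS profile that lists the
# host machine's VPN address as a static discovery peer.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cyclonedds-config        # hypothetical name
data:
  cyclonedds.xml: |
    <CycloneDDS>
      <Domain id="any">
        <Discovery>
          <!-- 10.8.0.1 stands in for the host machine's VPN IP -->
          <Peers>
            <Peer address="10.8.0.1"/>
          </Peers>
        </Discovery>
      </Domain>
    </CycloneDDS>
```

Containers would then point `CYCLONEDDS_URI` at the mounted file, and an equivalent profile would be needed for each DDS implementation the example wants to support.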
I can't review this yet, as I still have some issues with my network configuration.
I think we can use the default rmw and just add a notice mentioning that rmf should be deployed on the minikube network. How to do that is another tutorial in itself; deploying rmf is out of scope of this example. In the long run, I think we should move this to its own repository, especially since with tasksv2 the deployment of rmf and rmf-web has become more coupled (you need to set the …)
Update: I merged this branch into #588 in order to move forward with a working Kubernetes framework for identifying remaining issues with the reporting server.
LGTM, tested it on PR #588.
Closing via #588 |
Proposing some changes to fix the example-deployment build
Summary of changes:
- Use Galactic
- "Fix" DDS NAT traversal by switching the DDS implementation to FastRTPS, whose shared-memory transport bypasses the issue
- Modify the deployment to expose the rmf-server websocket service at port 8001
- Remove lerna references in the Dockerfiles
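As a sketch of how the FastRTPS switch might look in practice (exactly where this is set in the deployment is an assumption), the RMW implementation in ROS 2 is selected with an environment variable:

```shell
# Select FastRTPS (Fast DDS) as the RMW implementation so that nodes on
# the same host can use its shared-memory transport instead of UDP.
export RMW_IMPLEMENTATION=rmw_fastrtps_cpp
```

This would go in the container image or the pod spec's `env` section, so every ROS 2 process in the deployment picks the same middleware.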
Would appreciate a look through the README for bug checking.

This should also unblock any fixes on the reporting-server, which needs Fluentd on Kubernetes to work properly.