Hi @matlabbe, @Dekempsy4,
It's great to see more feature detection options being added to RTAB-Map, but the code has become somewhat messy and redundant. For example, there are already three different ways to use SuperPoint; counting the DepthAI integration, four. This is confusing for users, and since more learned features such as XFeat and LightGlue may be added later, it would be best to simplify the neural-network inference scheme. So I started implementing an earlier proposal: adding support for ONNX Runtime. Most models can now be exported to the ONNX intermediate representation, and with ONNX Runtime, different models and different inference backends can be supported in a uniform way. This would not only simplify the code but also give better multi-platform support and good inference performance.
I previously thought that installing ONNX Runtime required installing the .NET framework first, but it turns out it isn't that complicated: we just need to download the `*.tgz` file from onnxruntime/releases, which contains the header files, the compiled `.so` files, and the CMake configuration. The ONNX Runtime documentation does not specify an installation location, but judging from the bundled pkg-config file, the intended prefix is `/usr/local`.
Therefore, after extracting the tgz file, we can install it with the commands below. Of course, manually setting environment variables such as `LD_LIBRARY_PATH` is also an option.
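For reference, a minimal sketch of what that installation could look like (the release version in the file name is only a placeholder):

```sh
# Extract the release archive (version number is a placeholder).
tar -xzf onnxruntime-linux-x64-1.17.3.tgz
cd onnxruntime-linux-x64-1.17.3

# Copy headers and libraries into the /usr/local prefix.
sudo mkdir -p /usr/local/include/onnxruntime
sudo cp -r include/* /usr/local/include/onnxruntime/
sudo cp -r lib/* /usr/local/lib/

# Refresh the linker cache so LD_LIBRARY_PATH tweaks aren't needed.
sudo ldconfig
```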
I have already modified the compilation-related configuration in this PR draft. Wherever ONNX Runtime is needed, it is enough to `#include <onnxruntime/onnxruntime_cxx_api.h>` to begin using it. I haven't added any model inference yet because I want to see your thoughts first: using ONNX Runtime as a general inference middleware involves some code-structure adjustments and configuration combinations. How should this part be organized?
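To make the discussion concrete, here is a minimal sketch of what loading an exported model through the C++ API could look like; the model path and logger name are hypothetical placeholders, not something already in this draft:

```cpp
#include <onnxruntime/onnxruntime_cxx_api.h>
#include <iostream>

int main()
{
    // One Ort::Env per process; it owns logging and global state.
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "rtabmap_onnx_demo");

    // SessionOptions selects the execution provider; the default is CPU.
    Ort::SessionOptions options;
    options.SetIntraOpNumThreads(1);

    // "superpoint.onnx" is a placeholder for any exported detector model.
    Ort::Session session(env, "superpoint.onnx", options);

    // A generic wrapper can discover the model's inputs instead of hard-coding them.
    Ort::AllocatorWithDefaultOptions allocator;
    for(size_t i = 0; i < session.GetInputCount(); ++i)
    {
        std::cout << "input " << i << ": "
                  << session.GetInputNameAllocated(i, allocator).get() << std::endl;
    }
    return 0;
}
```

Switching backends would then just mean appending a different execution provider (CUDA, TensorRT, OpenVINO, ...) to the same `SessionOptions`, which is what would make a single code path per model feasible.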