Conversation

@borongyuan (Contributor)
Hi @matlabbe, @Dekempsy4,
It's great to see that you've added more feature detection options to RTAB-Map, but the code has become somewhat messy and redundant. For example, there are now three different ways to use SuperPoint, or four if we count the DepthAI integration, which is inconvenient for other users. Considering that more neural network features such as XFeat and LightGlue may be added later, it would be best to unify the neural network inference scheme. So I started implementing an earlier proposal: adding support for ONNX Runtime. Most models can now be converted to the ONNX intermediate representation, and with ONNX Runtime, different models and different inference backends can be supported in a uniform way. This would not only simplify the code but also give us better multi-platform support and strong inference performance on each platform.

I previously thought that installing ONNX Runtime required installing the .NET framework first, but it turns out to be much simpler: we just need to download the *.tgz archive from onnxruntime/releases, which contains the headers, the compiled .so files, and the CMake configuration. The ONNX Runtime documentation does not specify an installation location, but according to the bundled pkgconfig file, the intended installation prefix is /usr/local:

prefix=/usr/local
bindir=${prefix}/bin
mandir=${prefix}/share/man
docdir=${prefix}/share/doc/onnxruntime
libdir=${prefix}/lib64
includedir=${prefix}/include/onnxruntime
Name: onnxruntime
Description: ONNX runtime
URL: https://github.com/microsoft/onnxruntime
Version: 1.23.2
Libs: -L${libdir} -lonnxruntime
Cflags: -I${includedir}

Therefore, after extracting the tgz file, we can install it with the following commands. Of course, manually setting environment variables such as LD_LIBRARY_PATH instead is also an option.

# Headers go under /usr/local/include/onnxruntime, matching the pkgconfig above
sudo cp -a onnxruntime-linux-x64-1.23.2/include/ /usr/local/include/onnxruntime
sudo cp -a onnxruntime-linux-x64-1.23.2/lib/cmake/ /usr/local/lib/cmake
sudo cp -a onnxruntime-linux-x64-1.23.2/lib/pkgconfig/ /usr/local/lib/pkgconfig
# Shared libraries go to lib64, as declared by libdir in the pkgconfig
sudo mkdir -p /usr/local/lib64
sudo cp onnxruntime-linux-x64-1.23.2/lib/libonnxruntime* /usr/local/lib64
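
With the files in place, the CMake package config shipped in the archive can be consumed directly. A minimal sketch, assuming the package exports the onnxruntime::onnxruntime imported target (as recent release archives do; adjust if your version differs):

# Hypothetical CMakeLists.txt fragment for a consumer project
find_package(onnxruntime REQUIRED)  # resolved via /usr/local/lib/cmake/onnxruntime
add_executable(ort_smoke_test main.cpp)
target_link_libraries(ort_smoke_test PRIVATE onnxruntime::onnxruntime)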

I have already modified the compilation-related configuration in this PR draft. Wherever ONNX Runtime is needed, simply add #include <onnxruntime/onnxruntime_cxx_api.h> and start using it. I haven't added any model inference yet because I want to hear your thoughts first: using ONNX Runtime as general inference middleware involves some adjustments to the code structure and the configuration combinations. How do you think this part should be organized?
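
To illustrate what this looks like in practice, here is a minimal, self-contained sketch of opening a model with the C++ API. The model path superpoint.onnx is a placeholder, and the commented-out CUDA lines are just an example of swapping inference backends:

#include <onnxruntime/onnxruntime_cxx_api.h>
#include <iostream>

int main()
{
    // One Env per process; it owns logging and global state
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "rtabmap");

    Ort::SessionOptions options;
    options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);
    // Backend selection is just another session option, e.g. CUDA:
    // OrtCUDAProviderOptions cuda{};
    // options.AppendExecutionProvider_CUDA(cuda);

    // "superpoint.onnx" is a placeholder model path
    Ort::Session session(env, "superpoint.onnx", options);

    // Inspect the model's inputs, useful when wrapping arbitrary models
    Ort::AllocatorWithDefaultOptions allocator;
    for(size_t i = 0; i < session.GetInputCount(); ++i)
    {
        std::cout << "input " << i << ": "
                  << session.GetInputNameAllocated(i, allocator).get() << std::endl;
    }
    return 0;
}

The same session code runs regardless of which execution provider is appended, which is exactly what would let one code path replace the separate SuperPoint integrations.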
