Currently, ros2grapher uses Gemini for dynamic topic resolution via the --ai flag.
It would be useful to support additional AI providers so users can bring their own preferred model or use a locally hosted one.
Providers to consider:
- Anthropic Claude (via the Anthropic API)
- OpenAI GPT (gpt-4o, gpt-4o-mini)
- Ollama (local models, no API key needed)
- Mistral
- Any OpenAI-compatible endpoint
The implementation would involve abstracting the current Gemini-specific code in ai_resolver.py into a provider interface, then adding an implementation for each provider.
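A rough sketch of what that interface could look like follows. The class and method names are made up for illustration and don't reflect the current ai_resolver.py internals; the Ollama request format should also be double-checked against its docs:

```python
from abc import ABC, abstractmethod
import json
import urllib.request


class AIProvider(ABC):
    """Common interface each backend would implement."""

    @abstractmethod
    def resolve_topics(self, source_snippet: str) -> list[str]:
        """Return topic names the model infers from a source snippet."""


class GeminiProvider(AIProvider):
    """Wrapper for the existing Gemini logic."""

    def __init__(self, api_key: str | None = None, model: str = "gemini-1.5-flash"):
        self.api_key = api_key
        self.model = model

    def resolve_topics(self, source_snippet: str) -> list[str]:
        # The current Gemini-specific call in ai_resolver.py would move here.
        raise NotImplementedError


class OllamaProvider(AIProvider):
    """Local models via Ollama's HTTP API; no API key needed."""

    def __init__(self, model: str = "llama3", host: str = "http://localhost:11434"):
        self.model = model
        self.host = host

    def resolve_topics(self, source_snippet: str) -> list[str]:
        payload = {
            "model": self.model,
            "prompt": f"List the ROS 2 topic names used in:\n{source_snippet}",
            "stream": False,
        }
        req = urllib.request.Request(
            f"{self.host}/api/generate",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            text = json.load(resp)["response"]
        return [line.strip() for line in text.splitlines() if line.strip()]


# Registry the CLI could dispatch on; claude, openai, mistral and
# generic OpenAI-compatible endpoints would register the same way.
PROVIDERS: dict[str, type[AIProvider]] = {
    "gemini": GeminiProvider,
    "ollama": OllamaProvider,
}
```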
A good starting point would be adding a --ai-provider flag:
```
ros2grapher ./src --ai --ai-provider gemini   # default
ros2grapher ./src --ai --ai-provider claude
ros2grapher ./src --ai --ai-provider openai
ros2grapher ./src --ai --ai-provider ollama --ai-model llama3
```
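A hedged sketch of how the flags might be wired up with argparse is below. Only --ai is known from the current CLI; the positional argument name and the new options are assumptions for illustration:

```python
import argparse

# Hypothetical parser setup; ros2grapher's real CLI may structure this differently.
parser = argparse.ArgumentParser(prog="ros2grapher")
parser.add_argument("path", help="ROS 2 source tree to analyse")
parser.add_argument("--ai", action="store_true",
                    help="enable AI-assisted dynamic topic resolution")
parser.add_argument("--ai-provider", default="gemini",
                    choices=["gemini", "claude", "openai", "ollama", "mistral"],
                    help="AI backend to use (default: gemini)")
parser.add_argument("--ai-model",
                    help="override the provider's default model, e.g. llama3")

args = parser.parse_args(["./src", "--ai", "--ai-provider", "ollama", "--ai-model", "llama3"])
assert args.ai and args.ai_provider == "ollama" and args.ai_model == "llama3"
```

Each provider name would then map to one of the provider classes sketched above, keeping the default behaviour (Gemini) unchanged for existing users.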
Contributions welcome.