Maritime shipboard AI assistant with RAG-powered regulatory search, multi-mode chat, and pre-arrival form auto-fill using a tool-calling LLM agent.
Pre-Arrival form template with field placeholders:
Filled form after LLM agent auto-fill with agent log:
- Hybrid search over maritime regulations (semantic + BM25 + RRF + cross-encoder reranking; see the sketch after this list)
- Multi-mode chat: regulatory, medical, general, forms
- Collection management with multi-collection support
- PDF document upload and indexing via Qdrant vector database
- Configurable system prompts per mode
- Local-first LLM inference via Ollama (gemma3:12b)
- Excel/DOCX/PDF template processing with `{{placeholder}}` field detection
- Template upload, field extraction, manual fill, and download
- Tool-calling LLM agent for auto-fill (qwen2.5:14b with native Ollama tool support)
- Agent tools: `get_template_fields`, `get_vessel_profile`, `search_documents`, `read_source_document`, `extract_pdf_text`, `submit_field_values`
- Batch fill all templates via SSE with real-time progress streaming
- Download individual filled forms or batch .zip archive
- Ship's Knowledge Base: upload source documents (crew lists, stores declarations, certificates) for agent context
- Agent Log console with model name, tool call tracing, and progress tracking
- Instructions-first workflow: agent context required before filling
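A minimal sketch of the rank-fusion and reranking steps behind the hybrid search bullet (the real pipeline lives in `rag_engine.py` and `bm25_search.py`). It assumes the sentence-transformers `CrossEncoder` class for the reranker; the query, chunk ids, and passage texts are illustrative:

```python
# Minimal sketch: fuse semantic and BM25 result lists with Reciprocal Rank
# Fusion (RRF), then rescore the fused candidates with the cross-encoder.
# Query, chunk ids, and passage texts are illustrative.
from sentence_transformers import CrossEncoder

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Combine ranked id lists; k=60 is the conventional RRF constant."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

passages = {
    "solas_v_19_001": "Ships shall carry a voyage data recorder as specified...",
    "marpol_i_17_004": "Every oil tanker of 150 gross tonnage and above shall...",
    "isps_a_9_002": "Security-related information shall be provided prior to entry into port...",
}
semantic_hits = ["solas_v_19_001", "isps_a_9_002", "marpol_i_17_004"]
bm25_hits = ["isps_a_9_002", "marpol_i_17_004", "solas_v_19_001"]
candidates = rrf_fuse([semantic_hits, bm25_hits])

# Rerank the fused candidates against the user query with the cross-encoder.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
query = "pre-arrival security notification requirements"
scores = reranker.predict([(query, passages[cid]) for cid in candidates])
reranked = [cid for _, cid in sorted(zip(scores, candidates), reverse=True)]
```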
| Component | Technology |
|---|---|
| Backend | FastAPI, SQLAlchemy, Python |
| Frontend | Next.js, React, Tailwind CSS, FontAwesome |
| Vector DB | Qdrant |
| Chat/RAG LLM | Ollama (gemma3:12b) |
| Tool-calling LLM | Ollama (qwen2.5:14b) |
| Embeddings | Ollama (mxbai-embed-large, 1024-dim) |
| Reranking | cross-encoder/ms-marco-MiniLM-L-6-v2 |
| Database | SQLite |
| Documents | openpyxl, python-docx, pypdf |
| Model | Role | Details |
|---|---|---|
| gemma3:12b | Chat & RAG | General chat, regulatory search, answer generation |
| qwen2.5:14b | Tool-calling agent | Pre-arrival form auto-fill with native Ollama tool calling |
| mxbai-embed-large | Embeddings | 1024-dimensional vectors for semantic search |
| ms-marco-MiniLM-L-6-v2 | Reranking | Cross-encoder relevance scoring |
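A minimal sketch of how a PDF could be embedded with mxbai-embed-large and indexed into Qdrant, assuming the `pypdf`, `ollama`, and `qdrant-client` Python packages; the collection name, file name, and payload fields are illustrative and simpler than the project's actual chunking in `schema_manager.py`:

```python
# Minimal sketch: extract PDF pages, embed each with mxbai-embed-large
# (1024-dim vectors), and upsert them into a Qdrant collection.
import ollama
from pypdf import PdfReader
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")
client.create_collection(
    collection_name="regulations",  # illustrative collection name
    vectors_config=VectorParams(size=1024, distance=Distance.COSINE),
)

reader = PdfReader("solas_consolidated.pdf")  # illustrative file name
points = []
for page_no, page in enumerate(reader.pages):
    text = (page.extract_text() or "").strip()
    if not text:
        continue
    # mxbai-embed-large returns 1024-dimensional vectors, matching the collection.
    vector = ollama.embeddings(model="mxbai-embed-large", prompt=text)["embedding"]
    points.append(PointStruct(id=page_no, vector=vector, payload={"text": text, "page": page_no}))

client.upsert(collection_name="regulations", points=points)
```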
The form-filling agent uses a multi-turn tool-calling loop:
- Instructions + KB context are combined into a system prompt with the field schema
- qwen2.5:14b decides which tools to call (e.g., read vessel profile, search documents)
- Tool results are fed back to the model for the next iteration (up to 5 turns)
- The model calls `submit_field_values` with the completed field-value JSON
- Values are applied to the template and the filled document is saved
If tool calling fails, the system falls back to a single-shot JSON prompt using direct context injection.
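A minimal sketch of that loop, assuming the `ollama` Python package's native tool-calling support; the `get_vessel_profile` body and its return values are illustrative stand-ins for the real tools:

```python
# Minimal sketch of the multi-turn tool-calling loop (up to 5 turns).
# Tool bodies and returned values are illustrative placeholders.
import json
import ollama

def get_vessel_profile() -> dict:
    return {"vessel_name": "MV Example", "imo_number": "0000000"}  # placeholder data

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_vessel_profile",
        "description": "Return the stored vessel particulars as JSON.",
        "parameters": {"type": "object", "properties": {}, "required": []},
    },
}]
TOOL_IMPLS = {"get_vessel_profile": get_vessel_profile}

def run_agent(system_prompt: str, user_prompt: str, max_turns: int = 5):
    messages = [{"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        response = ollama.chat(model="qwen2.5:14b", messages=messages, tools=TOOLS)
        message = response["message"]
        messages.append(message)
        tool_calls = message.get("tool_calls") or []
        if not tool_calls:
            break  # model answered without requesting another tool
        for call in tool_calls:
            name = call["function"]["name"]
            args = call["function"]["arguments"] or {}
            result = TOOL_IMPLS[name](**args)
            # Feed the tool output back into the conversation for the next turn.
            messages.append({"role": "tool", "content": json.dumps(result)})
    return messages[-1]
```

In the real agent the loop ends when the model calls `submit_field_values`; this sketch simply stops once no further tools are requested.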
# Pull required Ollama models
ollama pull gemma3:12b
ollama pull qwen2.5:14b
ollama pull mxbai-embed-large
# Start all services
docker compose up -d

- Frontend: http://localhost:3005
- Backend: http://localhost:8005
- Qdrant: http://localhost:6333
marchat/
├── backend/
│ └── app/
│ ├── main.py # FastAPI app entry point
│ ├── config.py # Pydantic settings
│ ├── rag_engine.py # RAG pipeline
│ ├── bm25_search.py # BM25 keyword search
│ ├── schema_manager.py # Document chunking
│ ├── database.py # SQLAlchemy models
│ ├── llm_providers.py # Ollama LLM provider
│ ├── tools.py # Tool-calling agent loop + tool schemas
│ ├── document_engine.py # Excel/DOCX/PDF template processor
│ ├── ingestion_service.py # Document ingestion
│ ├── prompts.py # System prompts per mode
│ └── routers/ # API route modules
├── frontend/
│ ├── components/
│ │ ├── ModeNav.tsx # Navigation bar
│ │ ├── FileManager.tsx # File sidebar (KB, templates, filled)
│ │ ├── FillToolbar.tsx # Fill controls + instructions
│ │ ├── LogConsole.tsx # Agent log with tool call tracing
│ │ └── ExcelViewer.tsx # Spreadsheet preview with diff
│ ├── pages/
│ │ ├── forms.tsx # Pre-Arrival workspace
│ │ ├── rag.tsx # Compliance search
│ │ ├── chat.tsx # Multi-mode chat
│ │ ├── collections.tsx # Data ingestion
│ │ └── settings.tsx # Configuration
│ └── lib/api.ts # API client
├── screenshots/ # UI screenshots
├── docker-compose.yml
└── launch.sh # Dev mode launcher
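`document_engine.py` is where the `{{placeholder}}` field detection from the feature list lives; a minimal sketch of that kind of detection, assuming the `openpyxl` and `python-docx` packages (the file names and regex are illustrative):

```python
# Minimal sketch: collect {{placeholder}} field names from Excel and DOCX templates.
import re
from docx import Document
from openpyxl import load_workbook

PLACEHOLDER = re.compile(r"\{\{\s*([\w .-]+?)\s*\}\}")  # illustrative pattern

def fields_in_xlsx(path: str) -> set[str]:
    wb = load_workbook(path, data_only=False)
    found: set[str] = set()
    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                if isinstance(cell.value, str):
                    found.update(PLACEHOLDER.findall(cell.value))
    return found

def fields_in_docx(path: str) -> set[str]:
    doc = Document(path)
    found: set[str] = set()
    for para in doc.paragraphs:
        found.update(PLACEHOLDER.findall(para.text))
    return found

print(fields_in_xlsx("pre_arrival_template.xlsx"))  # e.g. {'vessel_name', 'eta', ...}
```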
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

