DevAI lets you index an entire codebase, ask natural-language questions about any part of it, and get context-aware answers backed by Azure OpenAI. A built-in Open WebUI front-end and interactive graph visualizations make the experience feel like IDE autocomplete on steroids. Finally, you can talk to an AI that knows your whole codebase: no more pasting multiple files into a chat window.
```
python -m devai --repo <path to your repo>
```

DevAI fires up a local FastAPI server on :8000, launches Open WebUI on :3000, and opens your browser.
```
┌──────────────┐
│  Python repo │
└──────┬───────┘
       │ parse + AST
┌──────▼──────────┐
│ Symbol objects  │  (funcs, classes, methods)
└──────┬──────────┘
       │ static call-graph (DAG)
┌──────┴───────────────────────┐
│ 1. Summaries (GPT-4o-mini)   │
│ 2. Embeddings (text-embed-3) │
└──────┬───────────────────────┘
       │ vectors
┌──────▼────┐
│   FAISS   │──── nearest-neighbour search
└──────┬────┘
       │ runtime
┌──────▼─────────────┐
│   Prompt builder   │  (selects a few relevant full sources +
└──────┬─────────────┘   surrounding summaries)
       │
Azure ChatCompletion (GPT-4o)
       │
natural-language answer + links + relevant code chunks
       │
Open WebUI chat ▸ browser
```
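The first stage (parse + AST → Symbol objects) can be sketched with Python's built-in `ast` module. The `Symbol` dataclass below is illustrative, not DevAI's actual class; nested functions are skipped for brevity:

```python
import ast
from dataclasses import dataclass

@dataclass
class Symbol:
    name: str    # qualified name, e.g. "Greeter.hello"
    kind: str    # "function", "class", or "method"
    lineno: int

def extract_symbols(source: str) -> list[Symbol]:
    """Collect top-level functions, classes, and their methods."""
    symbols = []
    tree = ast.parse(source)
    for node in tree.body:  # module top level only, for brevity
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            symbols.append(Symbol(node.name, "function", node.lineno))
        elif isinstance(node, ast.ClassDef):
            symbols.append(Symbol(node.name, "class", node.lineno))
            for child in node.body:  # methods live in the class body
                if isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    symbols.append(Symbol(f"{node.name}.{child.name}", "method", child.lineno))
    return symbols
```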
| Feature | What it gives you |
|---|---|
| One-command index | `devai --repo /path/to/repo` parses every `.py` file, builds a call graph, and writes summaries & embeddings. |
| Mixed-granularity RAG | Most context is 2-line summaries; only the N most relevant symbols are inlined with full source so prompts stay ≤ 8 k tokens. |
| Intent router | Simple similarity & keyword gate: if the question isn’t about the code, DevAI answers like a normal assistant—no weird dumps. |
| Interactive graph | Click a node to see its signature & summary; a collapsible sunburst + Sankey view shows both package structure and execution flow. |
| Open WebUI integration | Spins up automatically; you get a ChatGPT-style UI without extra setup. |
| Per-repo cache | Re-indexes only when files change; stored under `symbols_cache/<hash>/`. |
```
git clone https://github.com/yourname/devai.git
cd devai
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

export AZURE_OPEN_AI_KEY=...
export AZURE_OPEN_AI_ENDPOINT=https://<resource>.openai.azure.com
export AZURE_OPEN_AI_SUMMARY_DEPLOYMENT=gpt-4o-mini
export AZURE_OPEN_AI_ANSWER_DEPLOYMENT=gpt-4o

# index + serve + web UI
python -m devai --repo ~/projects/markitdown
```

Open http://localhost:3000, pick the model DevAI, and start chatting:
“Why does invoice_parser.py open a temp file?”
“Show me the high-level flow for the DOCX conversion pipeline.”
| Variable | Description |
|---|---|
| `AZURE_OPEN_AI_KEY` | Your Azure OpenAI key |
| `AZURE_OPEN_AI_ENDPOINT` | e.g. `https://myorg.openai.azure.com` |
| `AZURE_OPEN_AI_SUMMARY_DEPLOYMENT` | Deployment name for the quick summarisation model (cheap) |
| `AZURE_OPEN_AI_ANSWER_DEPLOYMENT` | Deployment name for the answer-synthesis model (quality) |
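A minimal loader for these variables might look like the following; the `load_azure_config` helper is illustrative, and DevAI's actual configuration code may differ:

```python
import os

REQUIRED = [
    "AZURE_OPEN_AI_KEY",
    "AZURE_OPEN_AI_ENDPOINT",
    "AZURE_OPEN_AI_SUMMARY_DEPLOYMENT",
    "AZURE_OPEN_AI_ANSWER_DEPLOYMENT",
]

def load_azure_config(env=os.environ) -> dict:
    """Fail fast with a clear message if any required variable is unset."""
    missing = [name for name in REQUIRED if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: env[name] for name in REQUIRED}
```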
Optional flags:
```
--refresh                 # force re-index even if cache exists
--visualize graph.html    # just export the graph, don't serve
```

Indexing and answering proceed through these stages:

- **AST walk**: functions, async functions, classes, and methods all become symbols.
- **Call graph**: add an edge A → B when A calls B (same-module only, for speed).
- **Tag & metrics**: tag each symbol as generic helper, orchestrator, etc.; the tag guides the summary prompt.
- **Summaries (`gpt-4o-mini`)**: one sentence for helpers, two for orchestrators; mentions dependencies if helpful.
- **Embeddings**: a `text-embedding-3-small` vector for each summary → FAISS.
- **Intent gate**: if the max cosine similarity is below 0.25 and the question contains no code keywords, skip RAG and answer generically.
- **Core-symbol selection**: score by similarity, literal mention, graph centrality, and call frequency.
- **Prompt builder**: insert full source for the top ≤ 5 symbols, add parent/child summaries, and keep the prompt under 8 k tokens.
- **Answer synthesis (`gpt-4o`)**: uses the mixed context; formatting enforced by `SYS_ANS` guidelines.
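The intent gate reduces to a cosine-similarity threshold plus a keyword check. A self-contained sketch with the 0.25 threshold described above; the keyword list here is a toy stand-in for DevAI's actual gate:

```python
import math

CODE_KEYWORDS = {"function", "class", "module", "import", "def", "bug"}  # illustrative

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors; 0.0 for zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def should_use_rag(query: str, query_vec: list[float],
                   summary_vecs: list[list[float]]) -> bool:
    """Skip RAG when the question looks unrelated to the indexed code."""
    best = max((cosine(query_vec, v) for v in summary_vecs), default=0.0)
    has_keyword = any(w in query.lower() for w in CODE_KEYWORDS)
    return best >= 0.25 or has_keyword
```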
FastAPI on :8000 exposes an OpenAI-compatible `/v1/chat/completions` endpoint; Open WebUI connects to it as if it were the real API.
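Because the endpoint is OpenAI-compatible, any OpenAI-style client can talk to it. A minimal sketch using only the standard library; the model name `DevAI` matches the one shown in the UI, but verify it against your setup:

```python
import json
import urllib.request

def build_request(question: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for the local server."""
    payload = {
        "model": "DevAI",
        "messages": [{"role": "user", "content": question}],
    }
    return urllib.request.Request(
        "http://localhost:8000/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Requires a running DevAI server on :8000.
    with urllib.request.urlopen(build_request("What does invoice_parser.py do?")) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```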
- **Can I use it without WebUI?** Yes: curl the `/v1/chat/completions` endpoint.
- **Does it support non-Python code?** Not yet; the parser is Python-specific.
- **GPU FAISS?** Swap `faiss-cpu` for `faiss-gpu` in `requirements.txt`.
Licensed under MIT.