Add Ollama-backed async article analysis (summary + keywords) #157
xenacode-art wants to merge 5 commits into m2b3:test
Conversation
This is a great initiative @xenacode-art! If possible, please attach some working examples (images/videos) of integrating it with the frontend. I have a few questions on this -
Community names containing special characters (e.g. '+', spaces) were being inserted raw into notification link paths, causing 404s when users clicked through. Apply urllib.parse.quote(..., safe='') to all in-app notification links that include the community name in the path. Fixes m2b3#119
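For reference, a minimal sketch of the encoding fix described in that commit; the helper name and URL shape are illustrative, not the project's actual code:

```python
from urllib.parse import quote

# Hypothetical helper showing the fix: percent-encode the community name
# before embedding it in a notification link path. safe='' means '/' and
# '+' are encoded too, so names like "c++ group" no longer produce 404s.
def community_notification_link(community_name: str, post_id: int) -> str:
    return f"/community/{quote(community_name, safe='')}/post/{post_id}"

# "c++ research & ML" -> "/community/c%2B%2B%20research%20%26%20ML/post/42"
print(community_notification_link("c++ research & ML", 42))
```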
Introduces two new components:
- myapp/services/ai_tasks.py: a Celery shared_task that sends an
article abstract to a locally running Ollama instance, retrieves
a 2-3 sentence summary and a JSON array of keywords, and returns
them as a structured result. The Ollama base URL and model are
configurable via OLLAMA_BASE_URL and OLLAMA_MODEL settings so no
external API keys are required.
- articles/ai_api.py: two endpoints wired into the articles router:
POST /{slug}/ai-summarize — queues the task, returns task_id
GET /ai-task/{task_id} — polls task state and result
Connection and timeout errors trigger automatic Celery retries so
transient Ollama unavailability does not surface as hard failures.
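For reviewers, here is a minimal sketch of how such a task could look, assuming the requests library; prompt wording, timeout values, and the exact result shape are illustrative rather than the PR's actual code:

```python
import json

import requests
from celery import shared_task
from django.conf import settings
from requests.exceptions import ConnectionError, Timeout


@shared_task(bind=True, autoretry_for=(ConnectionError, Timeout),
             retry_backoff=True, max_retries=3)
def analyse_article_task(self, article_id, abstract):
    base_url = getattr(settings, "OLLAMA_BASE_URL", "http://localhost:11434")
    model = getattr(settings, "OLLAMA_MODEL", "llama3.2")

    def generate(prompt):
        # Non-streaming call to Ollama's /api/generate endpoint.
        resp = requests.post(
            f"{base_url}/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    summary = generate(f"Summarize this abstract in 2-3 sentences:\n{abstract}")
    keywords = generate(
        f"Return only a JSON array of keywords for this abstract:\n{abstract}"
    )
    # A malformed keyword array raises here and surfaces in the task state,
    # matching the "other failures propagate" behaviour described above.
    return {
        "article_id": article_id,
        "summary": summary.strip(),
        "keywords": json.loads(keywords),
    }
```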
Branch force-pushed from 88b39cf to 95c6817
Thanks for the review @armanalam03! Happy to answer each of these.
The POST /articles/{slug}/ai-summarize endpoint does — it calls analyse_article_task.delay(article.id, article.abstract) which queues the job. The frontend (or any authenticated client) hits that endpoint to trigger it. The result is then polled via GET /articles/ai-task/{task_id} using the task ID returned.
Right now it's fully on-demand — nothing runs automatically. The next step I'd suggest is caching the result: add ai_summary and ai_keywords fields to the Article model so after the first run the result is stored and returned immediately on subsequent calls.
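A minimal sketch of that caching step, assuming Django's JSONField; the field names come from the suggestion above, everything else is illustrative:

```python
from django.db import models


class Article(models.Model):
    # ... existing fields (title, slug, abstract, ...) ...

    # Nullable so existing rows migrate cleanly and "not yet analysed"
    # stays distinguishable from an empty analysis result.
    ai_summary = models.TextField(null=True, blank=True)
    ai_keywords = models.JSONField(null=True, blank=True)
```

The summarize endpoint could then return these fields immediately when populated and only queue the task on a cache miss.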
Overview
This is a proof-of-concept for the AI-assisted literature discovery
feature discussed in the GSoC 2026 possibilities thread. It uses a
locally hosted Ollama model — no external API keys or costs required.
What it does
Given an article's abstract, the system:
- generates a 2-3 sentence summary
- extracts a JSON array of keywords

Both are returned as a Celery task result so the UI can poll
asynchronously without blocking the request.
New files
- myapp/services/ai_tasks.py
  - analyse_article_task — @shared_task that calls the Ollama /api/generate endpoint with two sequential prompts
  - OLLAMA_BASE_URL and OLLAMA_MODEL Django settings (defaults: localhost:11434, llama3.2)
  - ConnectionError and Timeout trigger Celery retries with backoff; other failures propagate so they're visible in the task state
- articles/ai_api.py
  - POST /articles/{slug}/ai-summarize — validates the article exists and has an abstract, queues the task, returns task_id
  - GET /articles/ai-task/{task_id} — polls task state and returns the result when complete
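As a usage sketch, the client-side flow against these two endpoints might look like this; the host, auth handling, and exact response field names are assumptions based on the description above:

```python
import time

import requests

BASE = "http://localhost:8000/articles"

# Queue the analysis; the endpoint returns the Celery task ID.
task_id = requests.post(f"{BASE}/my-article-slug/ai-summarize").json()["task_id"]

# Poll until the task reaches a terminal Celery state.
while True:
    status = requests.get(f"{BASE}/ai-task/{task_id}").json()
    if status["state"] in ("SUCCESS", "FAILURE"):
        print(status.get("result"))
        break
    time.sleep(2)
```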
Running locally