This app processes student essays and paragraphs, extracts metadata, runs GED (grammar error detection), applies selective LLM grammar corrections, generates a tracked-changes DOCX with language feedback appended, and emits explainability reports. Because the NLP pipeline runs entirely offline, student work never leaves the machine; the app saves teachers assessment time and produces explainability logs to help them understand the NLP outputs. Teachers can, and should, edit the generated feedback in each student's DOCX file.
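At a high level, the per-document flow described above can be sketched as follows. The function and field names here are illustrative only, not the app's actual API; the GED and LLM steps are stand-ins for the real models.

```python
# Illustrative sketch of the per-document flow; names are hypothetical,
# not the app's actual API.
from dataclasses import dataclass, field

@dataclass
class FeedbackResult:
    metadata: dict        # extracted metadata (e.g., word count)
    errors: list          # spans the GED step flagged as grammatical errors
    corrections: list     # selectively LLM-corrected versions of those spans
    log: list = field(default_factory=list)  # explainability trace

def process_essay(text: str) -> FeedbackResult:
    metadata = {"word_count": len(text.split())}
    # Stand-in for the GED model: flag sentences with a known error pattern.
    errors = [s for s in text.split(".") if " their is" in s]
    # Stand-in for the LLM correction step, applied only to flagged spans.
    corrections = [e.replace(" their is", " there is") for e in errors]
    log = [f"flagged {len(errors)} span(s); corrected {len(corrections)}"]
    return FeedbackResult(metadata, errors, corrections, log)
```

The real pipeline writes the corrections into a tracked-changes DOCX and the log into the explainability report.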
```shell
git clone https://github.com/TekneGram/AgentFeedback.git
cd AgentFeedback
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

The app expects the llama.cpp repo to be available as a subfolder of third_party/ so it can build llama-server locally.
Create the folder and clone the repo:
```shell
mkdir -p third_party
git clone https://github.com/ggerganov/llama.cpp third_party/llama.cpp
```

Build llama-server (macOS/Linux):

```shell
cmake -S third_party/llama.cpp -B .appdata/build/llama.cpp -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF
cmake --build .appdata/build/llama.cpp --config Release --target llama-server -j
```

On macOS, you may need Xcode Command Line Tools:
```shell
xcode-select --install
```

Download the spaCy model:

```shell
python -m spacy download en_core_web_sm
```

Run the app:

```shell
python3 provide_fb.py
```

Inputs are read from:
`Assessment/in/*.docx`
Outputs are written to:
- `Assessment/checked/*.docx` (tracked-changes output)
- `Assessment/explained/*.txt` (explainability logs)
Default config is in app/settings.py and includes:
- Input/output folder paths
- GED model
- LLM backend settings
- Run config (e.g., `max_llm_corrections`)
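A minimal sketch of what `app/settings.py` might contain. Only `max_llm_corrections` and the folder paths are confirmed above; every other field name and default value here is an illustrative guess.

```python
# Hypothetical sketch of app/settings.py; field names other than
# max_llm_corrections and the folder paths are illustrative guesses.
from dataclasses import dataclass
from pathlib import Path

@dataclass(frozen=True)
class Settings:
    in_dir: Path = Path("Assessment/in")
    checked_dir: Path = Path("Assessment/checked")
    explained_dir: Path = Path("Assessment/explained")
    ged_model: str = "ged-bert"                  # GED model identifier (guess)
    llm_base_url: str = "http://127.0.0.1:8080"  # llama-server endpoint (guess)
    max_llm_corrections: int = 10                # cap on LLM grammar fixes

SETTINGS = Settings()
```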
If the GGUF model or llama-server binary is missing, `bootstrap_llama` will download or build them into `.appdata/`.
On first run, a large language model (Llama‑3.1‑8B‑Instruct GGUF) will be downloaded if it is not already present.
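The first-run check that `bootstrap_llama` performs could look roughly like this. The file names and directory layout under `.appdata/` are assumptions, not the app's actual code.

```python
# Illustrative sketch of the first-run check; the paths under .appdata/
# are assumptions, not the app's actual layout.
from pathlib import Path

def needs_bootstrap(appdata: Path,
                    model_name: str = "Llama-3.1-8B-Instruct.gguf") -> bool:
    """Return True if the GGUF model or the llama-server binary is missing."""
    model = appdata / "models" / model_name
    server = appdata / "build" / "llama.cpp" / "bin" / "llama-server"
    return not (model.exists() and server.exists())
```

When this returns True, the bootstrap step downloads the model and/or builds llama-server before the pipeline starts.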
- Decide where the spaCy `en_core_web_sm` model should live inside the packaged app (resource path vs. app data).
- Decide how to bundle or fetch the third-party `llama.cpp` binaries for each OS.
Code diagram (Mermaid): architecture/04-code.md
Context
+-------------------------+
| Essay Feedback App |
| - GED + LLM + DOCX |
+-----------+-------------+
|
| HTTP
v
+-------------------------+
| llama-server (llama.cpp)|
+-------------------------+
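The HTTP link in the context diagram can be sketched as a call to llama-server's OpenAI-compatible `/v1/chat/completions` endpoint. The base URL, system prompt, and `correct` helper below are illustrative, not the app's actual LLM service.

```python
# Sketch of how the app might talk to llama-server over HTTP.
# llama-server exposes an OpenAI-compatible /v1/chat/completions endpoint;
# the URL, prompt, and function names here are illustrative.
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8080"  # assumed local llama-server address

def build_correction_request(sentence: str) -> dict:
    """Build a chat-completions payload asking for a grammar-only fix."""
    return {
        "messages": [
            {"role": "system",
             "content": "Correct grammatical errors only; change nothing else."},
            {"role": "user", "content": sentence},
        ],
        "temperature": 0.0,  # deterministic corrections
    }

def correct(sentence: str) -> str:
    """Send one sentence to llama-server and return the corrected text."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(build_correction_request(sentence)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```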
Containers
+------------------------------+
| CLI Runner |
| - build config |
| - build container |
+---------------+--------------+
|
v
+------------------------------+
| Feedback Pipeline |
| - Docx Loader |
| - GED Service |
| - LLM Service |
| - DOCX Output Service |
| - Explainability Recorder |
+---------------+--------------+
|
v
+------------------------------+
| Output Stores |
| - Assessment/checked/*.docx |
| - Assessment/explained/*.txt|
+------------------------------+
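The CLI runner's "build config / build container" step can be sketched as a small dependency-injection container that wires the pipeline services from one config object. Class and attribute names here are guesses, not the code in `app.container`.

```python
# Illustrative sketch of the "build config / build container" step;
# class and attribute names are guesses, not the code in app.container.
class Container:
    """Wires pipeline services together from a single config dict."""
    def __init__(self, config: dict):
        self.config = config
        self.services: dict = {}

    def register(self, name: str, factory):
        # Each factory receives the shared config and returns a service.
        self.services[name] = factory(self.config)
        return self

def build_container(config: dict) -> Container:
    return (Container(config)
            .register("ged", lambda c: f"GED[{c['ged_model']}]")
            .register("llm", lambda c: f"LLM[{c['llm_base_url']}]"))
```

In the real app, the factories would construct the GED, LLM, DOCX output, and explainability services that the pipeline consumes.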
Components
+------------------------------+
| app.pipeline |
| - header/body extraction |
| - GED scoring |
| - LLM metadata extraction |
| - LLM grammar correction |
| - DOCX output + logging |
+------------------------------+
Code (Key Modules)
+------------------------------+
| app.settings |
| app.container |
| app.pipeline |
| services/llm_service.py |
| services/ged_service.py |
| services/docx_output_service |
| services/explainability.py |
| nlp/llm/tasks/* |
| nlp/ged_bert.py |
| docx_tools/track_changes... |
+------------------------------+