Web app for:
- generating tests by subject/chapter/level,
- solving tests in the browser,
- grading answers and identifying knowledge gaps,
- all in Slovene.
Why Python + FastAPI:
- faster MVP iteration for AI workflows (prompting + JSON validation),
- a more mature ecosystem for LLM integrations,
- FastAPI enables a quick path from prototype to production.
- Create a virtual environment and install dependencies:

  ```
  python -m venv .venv
  .venv\Scripts\activate
  pip install -r requirements.txt
  ```

- Configure environment variables:

  ```
  copy .env.example .env
  ```

  Set `OPENAI_API_KEY` in `.env`.
The default model is gpt-5 (higher quality). If that model is not available for your account, the app automatically falls back to gpt-4.1.
- Configure HTTPS certificate and key paths (self-signed is fine for local/dev):

  ```
  mkdir certs
  ```

  Generate a local TLS certificate pair (OpenSSL):

  ```
  openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes -keyout certs/server.key -out certs/server.crt -subj "/CN=localhost" -addext "subjectAltName=DNS:localhost,IP:127.0.0.1"
  ```

  If OpenSSL is not available on Windows, use mkcert:

  ```
  mkcert -install
  mkcert -cert-file certs/server.crt -key-file certs/server.key localhost 127.0.0.1
  ```

  Set these in `.env`:

  ```
  SSL_CERTFILE=certs/server.crt
  SSL_KEYFILE=certs/server.key
  ```

  Note: SSH keys generated with `ssh-keygen` cannot be used as HTTPS TLS certificates.
- Start the HTTPS server:

  ```
  python -m app.main
  ```

- Open: https://127.0.0.1:8443
- API docs: https://127.0.0.1:8443/docs
Use this when deploying on a Linux server and you want the app to keep running after SSH disconnect.
- Make the script executable:

  ```
  chmod +x scripts/linux_run_detached.sh
  ```

- Start in the background (creates the venv, installs dependencies, runs the app):

  ```
  ./scripts/linux_run_detached.sh start
  ```

- Check status/logs:

  ```
  ./scripts/linux_run_detached.sh status
  ./scripts/linux_run_detached.sh logs
  ```

- Stop the server:

  ```
  ./scripts/linux_run_detached.sh stop
  ```

API endpoints:
- POST /api/tests/generate
- POST /api/tests/grade
- GET /api/progress
- GET /api/themes
POST /api/tests/grade supports:
- JSON (`test_id`, `answers`), or
- multipart/form-data (`test_id`, `answers_json`, `image_<question_id>` files).
GET /api/themes returns the local themes database (data/themes_database.json) used by the UI for topic/chapter suggestions.
Rate limits:
- POST /api/tests/generate: max 1 call per 60 seconds per session (`X-Session-Id`).
- POST /api/tests/grade: max 1 call per 60 seconds per session (`X-Session-Id`).
- If the limit is exceeded, the API returns 429 with a `Retry-After` header.
- POST /api/tests/generate and POST /api/tests/grade additionally require a UI security token (cookie + `X-UI-Token` header match), which helps block direct calls by crawlers/bots.
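The per-session throttle described above can be sketched as a small in-memory helper. This is a hypothetical illustration, not the app's actual implementation; the `SessionRateLimiter` name and injectable clock are assumptions for clarity and testability:

```python
import time

class SessionRateLimiter:
    """Allow at most one call per `window` seconds per (endpoint, session) pair."""

    def __init__(self, window: float = 60.0, clock=time.monotonic):
        self.window = window
        self.clock = clock  # injectable clock makes the limiter easy to test
        self._last: dict[tuple[str, str], float] = {}

    def check(self, endpoint: str, session_id: str) -> tuple[bool, float]:
        """Return (allowed, retry_after_seconds)."""
        now = self.clock()
        key = (endpoint, session_id)
        last = self._last.get(key)
        if last is not None and now - last < self.window:
            # Caller would respond with 429 and a Retry-After header
            return False, self.window - (now - last)
        self._last[key] = now
        return True, 0.0
```

In a FastAPI app this kind of check would typically live in a dependency that raises `HTTPException(status_code=429, headers={"Retry-After": ...})` when the call is rejected.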
Grading safety:
- A given `test_id` can be graded only once.
- Re-grading an already graded test returns 409.
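The grade-once rule above amounts to remembering which test IDs have already been graded. A minimal sketch, assuming an in-memory set (the `GradeOnceRegistry` name is an assumption, not the app's real code):

```python
class GradeOnceRegistry:
    """Track graded test_ids so each test can be graded exactly once."""

    def __init__(self):
        self._graded: set[str] = set()

    def try_grade(self, test_id: str) -> int:
        """Return an HTTP-style status: 200 on first grade, 409 on re-grade."""
        if test_id in self._graded:
            return 409  # already graded: conflict
        self._graded.add(test_id)
        return 200
```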
If OPENAI_API_KEY is not set, the app uses mock mode (so the frontend flow works immediately).
- The frontend uses a runtime `X-Session-Id` (only while the page is open).
- Within a session, the system:
- does not repeat previous questions,
- saves knowledge gaps after grading and emphasizes them in the next test.
- On reload or tab/browser close, the session is forgotten (fresh start).
- Changing subject/chapter/level automatically resets the session to avoid state mixing across topics.
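The reset-on-topic-change behavior can be sketched as a session store keyed by session ID that discards state whenever the (subject, chapter, level) tuple changes. All names here (`SessionStore`, `SessionState`) are hypothetical, for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class SessionState:
    topic_key: tuple  # (subject, chapter, level)
    seen_questions: set = field(default_factory=set)
    knowledge_gaps: list = field(default_factory=list)

class SessionStore:
    """In-memory per-session state; changing subject/chapter/level resets it."""

    def __init__(self):
        self._sessions: dict[str, SessionState] = {}

    def get(self, session_id: str, subject: str, chapter: str, level: str) -> SessionState:
        key = (subject, chapter, level)
        state = self._sessions.get(session_id)
        if state is None or state.topic_key != key:
            # Fresh start: avoids mixing seen questions/gaps across topics
            state = SessionState(topic_key=key)
            self._sessions[session_id] = state
        return state
```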
- Progress (attempt results) is saved to `data/progress.json`.
- An additional tabular grading log is saved to `data/progress.txt`.
- Each graded attempt now also stores `client_ip` in both files.
- Student identity is an anonymous `X-Student-Id` from `localStorage`. This allows progress history to persist across browser restarts.
- An API event audit log (JSONL) is saved to `data/api_events.jsonl`.
- A tabular API event log is saved to `data/api_events.txt`.
- Each event includes timestamp, endpoint, status, client IP, session ID, student ID, and request context (`topic`/`chapter`/`test_id` when available).
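One JSONL event per API call, with optional context fields included only when available, can be built like this. A sketch only: the function name and exact field set are assumptions based on the description above:

```python
import json
from datetime import datetime, timezone

def build_api_event(endpoint: str, status: int, client_ip: str,
                    session_id: str, student_id: str, **context) -> str:
    """Serialize one API event as a single JSONL line."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "endpoint": endpoint,
        "status": status,
        "client_ip": client_ip,
        "session_id": session_id,
        "student_id": student_id,
    }
    # Request context (e.g. topic/chapter/test_id) only when actually present
    event.update({k: v for k, v in context.items() if v is not None})
    return json.dumps(event, ensure_ascii=False)
```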
- Themes DB file: `data/themes_database.json`.
- Regenerate with:

  ```
  python scripts/regenerate_themes_db.py
  ```

- Fallback mode (no network):

  ```
  python scripts/regenerate_themes_db.py --no-fetch
  ```

- Official-only bucket (store only discovered official themes in `themes.official_all`):

  ```
  python scripts/regenerate_themes_db.py --official-only
  ```

- The file stores source URLs and any fetch errors under `source_urls` and `fetch_errors`.
- Exhaustive discovered candidates are stored under `themes_official_all`:
  - `themes_official_all.by_source`
  - `themes_official_all.all`
- Generation uses a variation marker and randomized approach, so new tests are not always identical.
- Similar questions are allowed, but exact repeats within a session are blocked.
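One simple way to block exact repeats while still allowing similar questions is to compare normalized question text against everything already seen in the session. This is an illustrative sketch; the helper names and the normalization rule (lowercase, collapsed whitespace) are assumptions:

```python
def _normalize(text: str) -> str:
    """Case- and whitespace-insensitive form used for exact-repeat detection."""
    return " ".join(text.lower().split())

def filter_repeats(candidates: list[str], seen: set[str]) -> list[str]:
    """Keep similar questions but drop exact repeats within a session."""
    fresh = []
    for question in candidates:
        key = _normalize(question)
        if key not in seen:
            seen.add(key)  # remember it for the rest of the session
            fresh.append(question)
    return fresh
```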
- You can add an image answer to each question.
- The app now shows a QR code with the current connect URL, so you can open the app quickly on your phone.
- On mobile, you now get explicit actions for Open camera and Choose from gallery.
- A preview and filename are shown before submit, and you can remove a selected image.
- If no image is attached, grading uses JSON; if at least one image is attached, grading uses multipart/form-data.
- Attached images are sent with answers and marked as labeled image answers for each question.
- The UI shows the currently used AI model after test generation/grading.
- If a fallback happens (for example from `gpt-5` to `gpt-4.1`), the indicator explicitly shows both models.
- For each question, the result now also includes Perfect answer (100%), showing an example ideal answer.
- The grading prompt is stricter and emphasizes factual correctness and consistent scoring.
- The final `total_score` is normalized from per-question scores for consistent grading output.
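Normalizing a total from per-question scores could look like the following. This is a plausible sketch under the assumption that each question is scored on a 0-100 scale and the total is their average; the actual normalization in the app may differ:

```python
def normalize_total_score(per_question: dict[str, float],
                          max_per_question: float = 100.0) -> float:
    """Average clamped per-question scores into a 0-100 total."""
    if not per_question:
        return 0.0
    # Clamp each score into [0, max] to guard against out-of-range model output
    clamped = [min(max(score, 0.0), max_per_question)
               for score in per_question.values()]
    return round(sum(clamped) / len(clamped) * (100.0 / max_per_question), 1)
```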
- On Windows, while the backend process is running, the app requests that the system does not sleep.
- When you stop the backend, that setting is automatically released.
Planned improvements:
- persistence (PostgreSQL),
- better prompts + a stricter JSON schema,
- a separate endpoint for generating study material from knowledge gaps,
- user authorization (student/teacher).
This project is licensed under the MIT License. See the LICENSE file.