fix: critical bugs, security issues, and missing backend endpoints#237
Muneerali199 wants to merge 1 commit into AOSSIE-Org:main
Conversation
Fixed:
- Critical: argparse.parse_args() called twice causing Flask crash
- Critical: Index route returning Response object causing 500 error
- Critical: Hardcoded relative DB paths breaking on wrong CWD
- Critical: ChromaDB relative path crash at import
- Critical: NameError in vector store except block
- Critical: SymptomsScreen null entry crash
- Critical: CalendarScreen broken inline styles
- Critical: Schema missing mood and sleep tables

Security:
- UPDATE profile with no WHERE clause (data corruption)
- Removed broken require_auth decorators (always 401)
- Fixed logout deleting all user data
- Removed mock profile data from schema
- Added .env validation in model.jsx

Missing Endpoints:
- Created mood.py routes (log_mood, get_mood_entries, update_mood, delete_mood)
- Created sleep.py routes (log_sleep, get_sleep_entries, update_sleep, delete_sleep)
- Created analytics.py routes (get_analytics, get_weight_entries, get_mood_entries, get_sleep_entries)

Wrong URLs:
- Fixed ActionExecutor: /create_appointment -> /add_appointment
- Fixed ActionExecutor: /medicine -> /set_medicine
- Fixed RAGService: /get_weight -> /weight
- Fixed RAGService: /get_all_medicine -> /get_medicine

GitHub Issues: AOSSIE-Org#197-236
📝 Walkthrough

This pull request introduces three new Flask blueprints for mood, sleep, and analytics endpoints; adds corresponding database tables; refactors backend path management to use a computed BASE_DIR instead of hardcoded paths; removes authentication from medicine routes; adds a logout endpoint; and updates frontend API endpoint paths and validation logic.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
Backend/db/db.py (1)
21-25: ⚠️ Potential issue | 🟠 Major — Existing databases still won't get the new tables.

`first_time_setup()` only runs on a missing or empty `database.db`. Deployments that already have a DB will never execute the new `CREATE TABLE IF NOT EXISTS mood_logs` / `sleep_logs` statements, so the added endpoints will still fail after upgrade. Add an incremental migration step instead of relying on first-run bootstrap.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Backend/db/db.py` around lines 21 - 25, first_time_setup currently only initializes a fresh DB and skips running new CREATE TABLE statements on existing databases; update it (or add a new run_migrations function called during startup) to apply incremental migrations for schema changes by executing idempotent DDL (e.g., CREATE TABLE IF NOT EXISTS mood_logs (...), CREATE TABLE IF NOT EXISTS sleep_logs (...)) or by using a simple migrations table to track applied migration IDs and run pending SQL scripts; locate the first_time_setup function and DATABASE/SCHEMA_FILE symbols and ensure startup invokes the migration runner so existing databases receive the new tables without requiring a fresh DB.

Frontend/src/model/model.jsx (1)
73-85: ⚠️ Potential issue | 🟠 Major — Allow cached models to load before validating the download source.

The new guard runs before the local-file fast path. If the model is already on device, a missing `HF_TO_GGUF` should not block `loadModel()` and offline use.

Suggested fix
 export const downloadModel = async (fileName, onProgress) => {
-  if (!HF_TO_GGUF) {
-    Alert.alert("Configuration Error", "HF_TO_GGUF not configured");
-    return null;
-  }
-
-  const downloadUrl = `https://huggingface.co/${HF_TO_GGUF}/resolve/main/${fileName}`;
   const destPath = `${RNFS.DocumentDirectoryPath}/${fileName}`;

   if (await RNFS.exists(destPath)) {
     const loaded = await loadModel(fileName);
     if (!loaded) return null;
     return destPath;
   }
+
+  if (!HF_TO_GGUF) {
+    Alert.alert("Configuration Error", "HF_TO_GGUF not configured");
+    return null;
+  }
+
+  const downloadUrl = `https://huggingface.co/${HF_TO_GGUF}/resolve/main/${fileName}`;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Frontend/src/model/model.jsx` around lines 73 - 85, The HF_TO_GGUF guard is checked before the local-file fast path in downloadModel, preventing offline loads; move the HF_TO_GGUF check so that RNFS.exists(destPath) and the loadModel(fileName) fast path run first and return destPath if successful, and only if the file does not exist then validate HF_TO_GGUF (show Alert and return null if missing) before constructing downloadUrl and proceeding with download; update references to downloadModel, HF_TO_GGUF, RNFS.exists, loadModel, downloadUrl, and destPath accordingly.

Backend/agent/vector_store.py (1)
48-95: ⚠️ Potential issue | 🟠 Major — Persist the new hash only after the collection update succeeds.

Line 60 records the new `guidelines.hash` before `guidelines_collection.add(...)` runs. Any failure after that leaves the store stale, and later refreshes will be skipped because the hash already matches.

Suggested fix
-    with open(hash_file, "w") as f:
-        f.write(current_hash)
-
     with open(guidelines_file, 'r', encoding='utf-8') as f:
         guidelines = json.load(f)
 @@
     guidelines_collection.add(
         documents=documents,
         metadatas=metadatas,
         ids=ids
     )
+    with open(hash_file, "w") as f:
+        f.write(current_hash)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Backend/agent/vector_store.py` around lines 48 - 95, The code writes the new guidelines.hash before updating the vector store, which can leave the hash advanced if guidelines_collection.add() fails; change the flow so that guidelines_collection.add(...) is executed inside a try/except and only upon successful completion write current_hash to hash_file (preferably atomically by writing to a temp file and renaming); update the block around guidelines_collection.add, guidelines_collection.delete, and the hash_file write so that guidelines_collection.add runs first and the file write of current_hash happens only on successful add (and log/report exceptions from add without updating guidelines.hash).

Frontend/src/services/ActionExecutor.js (1)
196-209: ⚠️ Potential issue | 🟠 Major — Return created IDs from the new create endpoints or stop marking these actions undoable.

Both `createAppointment()` and `createMedicine()` derive rollback ids from the response, but Backend/routes/appointments.py, Lines 29-44, and Backend/routes/medicine.py, Lines 23-61, only return status/message. These actions will be logged as undoable with an undefined id, so `undo_last` cannot reverse them.

Also applies to: 509-521
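One way the backend side of the first option could look. This is a hypothetical sketch using plain sqlite3 (table name and handler name are illustrative, not taken from the actual routes): the key point is returning `cur.lastrowid` so the frontend has an id to record for undo. In the real Flask handler, the returned dict would go through `jsonify(...)`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE appointments (id INTEGER PRIMARY KEY AUTOINCREMENT, title TEXT)")

def create_appointment(title):
    """Insert an appointment and return a payload that includes the created id."""
    cur = conn.execute("INSERT INTO appointments (title) VALUES (?)", (title,))
    conn.commit()
    # cur.lastrowid is the rowid of the inserted row; in the Flask handler
    # this dict would be returned via jsonify({...}), 201.
    return {"status": "ok", "id": cur.lastrowid}
```

With the id present in the response, ActionExecutor can safely store it as the rollback target; if the response lacks an id, the action should not be marked undoable.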
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Frontend/src/services/ActionExecutor.js` around lines 196 - 209, The frontend assumes create endpoints return created IDs (used by createAppointment and createMedicine to set appointmentId/medicineId and mark actions undoable), but Backend/routes/appointments.py and Backend/routes/medicine.py only return status/message so undo will get undefined IDs; either update those backend handlers to include the newly created resource ID in their JSON responses (e.g., return {"status":"ok","id": new_id} from the create endpoints referenced by createAppointment/createMedicine) or change the frontend ActionExecutor (the branches that set actionType 'create_appointment'/'create_medicine' and populate appointmentId/medicineId) to not mark the action undoable when the response lacks a valid id (i.e., verify result.id or result.appointment_id/result.medicine_id before setting undo metadata and set success without undo when missing).
🧹 Nitpick comments (7)
Backend/routes/profile.py (1)
123-125: Make the singleton-profile assumption explicit.

Updating by `profile['id']` is safer, but the guarantee that only one row exists still lives in `set_profile()`'s delete-then-insert flow. A schema note or stronger invariant would keep future write paths aligned. Based on learnings: the `profile` table is intentionally single-row and currently relies on delete-then-insert to enforce that invariant.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Backend/routes/profile.py` around lines 123 - 125, The UPDATE uses profile['id'] but relies on the implicit singleton-row invariant created by set_profile()'s delete-then-insert flow; make this explicit by adding a schema migration or invariant and runtime checks: update the database schema for the profile table to enforce a single row (e.g., a fixed primary key or a CHECK constraint/unique constraint that ensures only one row), add a brief comment in the schema/migration describing the singleton assumption, and harden Backend/routes/profile.py by asserting the singleton at runtime in set_profile() and before the UPDATE (e.g., verify SELECT COUNT(*) == 1 or verify the update affected exactly one row) so future write paths cannot violate the invariant; reference the profile table, the set_profile() function, and the UPDATE statement that uses profile['id'] when applying the changes.

Backend/routes/sleep.py (2)
40-40: Extract repeated `db_path` construction.

The same path construction appears three times. Extract to a module-level constant.
Extract db_path to constant
+DB_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "db", "database.db")
+
 @sleep_bp.route('/log_sleep', methods=['POST'])
 @handle_db_errors
 def add_sleep():
     # ... existing code ...
-    db_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), "db", "database.db")
-    agent = get_agent(db_path)
+    agent = get_agent(DB_PATH)
     agent.update_cache(data_type="sleep", operation="create")

Apply the same change to `update_sleep` and `delete_sleep`.

Also applies to: 92-92, 115-115
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Backend/routes/sleep.py` at line 40, Extract the repeated construction of db_path into a module-level constant (e.g., DB_PATH) so the join call using os.path.dirname(os.path.dirname(__file__)), "db", "database.db" appears once; add DB_PATH at the top of the module and replace the inline constructions in the functions get_sleep, update_sleep, and delete_sleep with that constant, ensuring os is imported and any tests or references still use the same path value.
4-5: Consider removing `sys.path` manipulation.

Directly manipulating `sys.path` is fragile. Since the Flask app likely sets up the Python path correctly, this may be unnecessary. If imports fail without it, consider restructuring the package or using relative imports.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Backend/routes/sleep.py` around lines 4 - 5, Remove the fragile sys.path manipulation in Backend/routes/sleep.py by deleting the two lines that import sys and call sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))); instead rely on proper package structure and use relative imports (e.g., from ..module import X) or absolute imports from the Flask app package so imports resolve without modifying sys.path, and if tests or entry points fail, adjust the package __init__.py or the application entry to ensure the package root is on PYTHONPATH.

Backend/routes/mood.py (2)
4-5: Consider removing `sys.path` manipulation.

Same concern as sleep.py - this pattern is fragile and may be unnecessary if the Flask app configures the Python path correctly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Backend/routes/mood.py` around lines 4 - 5, The import in mood.py is modifying sys.path via sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) which is fragile; remove that sys.path manipulation and instead ensure modules are imported using the app's package layout (use relative imports or import from the application package), e.g., replace the sys.path.append line by converting any subsequent imports in mood.py to proper package or relative imports (use from . import X or from myapp.module import Y) and rely on the Flask app entrypoint to configure PYTHONPATH/virtualenv rather than altering sys.path at runtime.
39-39: Extract repeated `db_path` construction.

Same issue as sleep.py - extract to a module-level constant to follow DRY.

+DB_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "db", "database.db")

Also applies to: 86-86, 109-109
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Backend/routes/mood.py` at line 39, The repeated construction of db_path in Backend/routes/mood.py should be extracted to a single module-level constant (e.g., DB_PATH or DATABASE_PATH) so all references reuse it; add that constant near the top of the module using the existing os.path.join(os.path.dirname(os.path.dirname(__file__)), "db", "database.db") expression and then replace the inline db_path constructions found in the mood route handlers (the occurrences around the areas referenced at lines ~39, ~86, ~109) to use the new DB_PATH constant; ensure any functions that previously shadowed db_path now reference the module-level symbol.

Backend/routes/analytics.py (2)
3-3: Remove unused import.

`datetime` and `timedelta` are imported but never used in this file.

-from datetime import datetime, timedelta

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Backend/routes/analytics.py` at line 3, Remove the unused imports by deleting "datetime" and "timedelta" from the import statement at the top of the file (the currently unused symbols datetime and timedelta), leaving only necessary imports; if any future code needs date utilities, import them at that time to avoid unused imports.
14-21: Static analysis flag is a false positive - query is safe.

Ruff flagged f-string SQL construction as potential injection. However, this is safe because:

- `week_filter` is only ever `''` or `'WHERE week_number = ?'` (hardcoded strings)
- User input (`week`) is passed through the parameterized `params` tuple
Optional: Extract helper to reduce duplication and improve clarity
def _fetch_with_optional_week_filter(db, table_name, week, order_by='week_number'):
    """Fetch rows with optional week filter using parameterized queries."""
    if week:
        query = f'SELECT * FROM {table_name} WHERE week_number = ? ORDER BY {order_by}'
        return db.execute(query, (week,)).fetchall()
    else:
        query = f'SELECT * FROM {table_name} ORDER BY {order_by}'
        return db.execute(query).fetchall()

Note: Table names are still hardcoded at call sites, maintaining safety.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Backend/routes/analytics.py` around lines 14 - 21, Ruff flagged the f-string SQL construction around week_filter/params used when building weight_data; although it's safe because week_filter is constrained to '' or 'WHERE week_number = ?' and week is passed via params, silence the warning or make intent explicit by extracting the logic into a small helper (e.g., _fetch_with_optional_week_filter) that builds the query strings without interpolating user input and always uses parameterized execution, or build the query with explicit conditionals (no f-string insertion of untrusted data) before calling db.execute; update the call site that assigns weight_data to use that helper or the safer conditional query construction so the code is both clear and free of static-analysis warnings.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Backend/agent/vector_store.py`:
- Around line 7-10: Top-level side effects (BASE_DIR,
os.makedirs(CHROMADB_PATH), and client = chromadb.PersistentClient(...)) perform
filesystem work and open a DB during import; move that logic into a
lazily-called initializer function (e.g., get_chroma_client or
init_chroma_client) that computes CHROMADB_PATH, creates the directory,
constructs chromadb.PersistentClient, and returns or caches the client; remove
os.makedirs and PersistentClient creation from module import and ensure the
Flask app factory or startup code calls the new initializer inside a try/except
so failures are handled gracefully and logged rather than occurring at import
time.
In `@Backend/routes/analytics.py`:
- Around line 55-65: The analytics blueprint defines duplicate endpoints
get_mood_entries and get_sleep_entries that collide with mood_bp and sleep_bp
(causing ordering differences because analytics_bp is registered last); remove
the duplicate route handlers from analytics.py (functions get_mood_entries and
get_sleep_entries) so the implementations from mood.py and sleep.py are used, or
alternatively register analytics_bp with a URL prefix via
app.register_blueprint(analytics_bp, url_prefix='/analytics') or rename the
analytics handlers to unique paths (e.g., analytics_get_mood_entries) to avoid
shadowing; update any callers to the new paths if you choose prefix/rename.
In `@Backend/routes/mood.py`:
- Around line 39-41: The cache update fails because "mood" is missing from the
valid_types check in Backend/agent/cache.py; add "mood" to the valid_types list
and implement the corresponding data retrieval branch in the
_get_specific_data(...) function so that when
Agent.update_cache(data_type="mood", ...) is called it fetches mood records
(similar to how "sleep" or "symptoms" are handled) and returns them for caching;
update any unit of work or DAO calls used by _get_specific_data to query mood
entries and ensure the returned structure matches the cache expectations so the
calls originating in routes/mood.py (and the other noted call sites) no longer
silently fail.
In `@Backend/routes/sleep.py`:
- Around line 40-42: Add "sleep" to the cache valid types and implement its
fetch logic so cache updates for sleep no longer silently fail: in
Backend/agent/cache.py update the valid_types list (the variable named
valid_types used by _cache_update_handler) to include "sleep", and extend the
_get_specific_data(data_type, ...) function to handle "sleep" (mirror how
"mood"/other types are fetched) so that calls to
_cache_update_handler/_get_specific_data (referenced around the existing checks
and branches near the blocks handling data types at the ~59-77, ~92-94, and
~115-117 areas) return and populate sleep data correctly. Ensure the new branch
returns the same shaped payload as other types so
agent.update_cache(data_type="sleep", ...) succeeds.
In `@Backend/schema.sql`:
- Around line 144-149: The mood_logs table schema lacks the intensity column
that the frontend sends; update the CREATE TABLE statement for mood_logs to
include an intensity column (e.g., intensity INTEGER) and provide a
migration/ALTER TABLE to add intensity to existing DBs (use ALTER TABLE
mood_logs ADD COLUMN intensity INTEGER DEFAULT 0 or NULL as appropriate) so
entries from Frontend/src/services/ActionExecutor.js and
Frontend/src/services/RAGService.js are persisted; ensure the new column name is
exactly intensity and adjust any INSERT statements that reference mood_logs to
include intensity.
- Around line 152-159: The sleep_logs table schema is missing fields sent by the
frontend (duration, bedtime, wake_time) and uses hours instead of the payload's
duration; update the schema for table sleep_logs to either rename hours to
duration or add a new duration REAL column and add bedtime TIMESTAMP and
wake_time TIMESTAMP (keep quality TEXT, note TEXT and created_at), then update
all DB writes/reads that reference sleep_logs (see code paths in
ActionExecutor.js and RAGService.js that currently send duration, bedtime,
wake_time, quality, note) to use the new/renamed columns (INSERT/SELECT/UPDATE
statements should reference duration, bedtime, wake_time instead of hours) so
the payload fields are persisted.
In `@Frontend/src/model/model.jsx`:
- Around line 38-41: The startup guard in model.jsx currently throws if
MODEL_NAME is missing even though fetchAvailableGGUFs() only needs HF_TO_GGUF;
remove the unconditional check that throws for MODEL_NAME (the two-line throw
using MODEL_NAME) or move that validation into the specific function that
actually requires it so GGUF discovery can run when HF_TO_GGUF is present; refer
to the existing symbols HF_TO_GGUF, MODEL_NAME and fetchAvailableGGUFs to locate
and adjust the validation logic.
---
Outside diff comments:
In `@Backend/agent/vector_store.py`:
- Around line 48-95: The code writes the new guidelines.hash before updating the
vector store, which can leave the hash advanced if guidelines_collection.add()
fails; change the flow so that guidelines_collection.add(...) is executed inside
a try/except and only upon successful completion write current_hash to hash_file
(preferably atomically by writing to a temp file and renaming); update the block
around guidelines_collection.add, guidelines_collection.delete, and the
hash_file write so that guidelines_collection.add runs first and the file write
of current_hash happens only on successful add (and log/report exceptions from
add without updating guidelines.hash).
In `@Backend/db/db.py`:
- Around line 21-25: first_time_setup currently only initializes a fresh DB and
skips running new CREATE TABLE statements on existing databases; update it (or
add a new run_migrations function called during startup) to apply incremental
migrations for schema changes by executing idempotent DDL (e.g., CREATE TABLE IF
NOT EXISTS mood_logs (...), CREATE TABLE IF NOT EXISTS sleep_logs (...)) or by
using a simple migrations table to track applied migration IDs and run pending
SQL scripts; locate the first_time_setup function and DATABASE/SCHEMA_FILE
symbols and ensure startup invokes the migration runner so existing databases
receive the new tables without requiring a fresh DB.
In `@Frontend/src/model/model.jsx`:
- Around line 73-85: The HF_TO_GGUF guard is checked before the local-file fast
path in downloadModel, preventing offline loads; move the HF_TO_GGUF check so
that RNFS.exists(destPath) and the loadModel(fileName) fast path run first and
return destPath if successful, and only if the file does not exist then validate
HF_TO_GGUF (show Alert and return null if missing) before constructing
downloadUrl and proceeding with download; update references to downloadModel,
HF_TO_GGUF, RNFS.exists, loadModel, downloadUrl, and destPath accordingly.
In `@Frontend/src/services/ActionExecutor.js`:
- Around line 196-209: The frontend assumes create endpoints return created IDs
(used by createAppointment and createMedicine to set appointmentId/medicineId
and mark actions undoable), but Backend/routes/appointments.py and
Backend/routes/medicine.py only return status/message so undo will get undefined
IDs; either update those backend handlers to include the newly created resource
ID in their JSON responses (e.g., return {"status":"ok","id": new_id} from the
create endpoints referenced by createAppointment/createMedicine) or change the
frontend ActionExecutor (the branches that set actionType
'create_appointment'/'create_medicine' and populate appointmentId/medicineId) to
not mark the action undoable when the response lacks a valid id (i.e., verify
result.id or result.appointment_id/result.medicine_id before setting undo
metadata and set success without undo when missing).
---
Nitpick comments:
In `@Backend/routes/analytics.py`:
- Line 3: Remove the unused imports by deleting "datetime" and "timedelta" from
the import statement at the top of the file (the currently unused symbols
datetime and timedelta), leaving only necessary imports; if any future code
needs date utilities, import them at that time to avoid unused imports.
- Around line 14-21: Ruff flagged the f-string SQL construction around
week_filter/params used when building weight_data; although it's safe because
week_filter is constrained to '' or 'WHERE week_number = ?' and week is passed
via params, silence the warning or make intent explicit by extracting the logic
into a small helper (e.g., _fetch_with_optional_week_filter) that builds the
query strings without interpolating user input and always uses parameterized
execution, or build the query with explicit conditionals (no f-string insertion
of untrusted data) before calling db.execute; update the call site that assigns
weight_data to use that helper or the safer conditional query construction so
the code is both clear and free of static-analysis warnings.
In `@Backend/routes/mood.py`:
- Around line 4-5: The import in mood.py is modifying sys.path via
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
which is fragile; remove that sys.path manipulation and instead ensure modules
are imported using the app's package layout (use relative imports or import from
the application package), e.g., replace the sys.path.append line by converting
any subsequent imports in mood.py to proper package or relative imports (use
from . import X or from myapp.module import Y) and rely on the Flask app
entrypoint to configure PYTHONPATH/virtualenv rather than altering sys.path at
runtime.
- Line 39: The repeated construction of db_path in Backend/routes/mood.py should
be extracted to a single module-level constant (e.g., DB_PATH or DATABASE_PATH)
so all references reuse it; add that constant near the top of the module using
the existing os.path.join(os.path.dirname(os.path.dirname(__file__)), "db",
"database.db") expression and then replace the inline db_path constructions
found in the mood route handlers (the occurrences around the areas referenced at
lines ~39, ~86, ~109) to use the new DB_PATH constant; ensure any functions that
previously shadowed db_path now reference the module-level symbol.
In `@Backend/routes/profile.py`:
- Around line 123-125: The UPDATE uses profile['id'] but relies on the implicit
singleton-row invariant created by set_profile()'s delete-then-insert flow; make
this explicit by adding a schema migration or invariant and runtime checks:
update the database schema for the profile table to enforce a single row (e.g.,
a fixed primary key or a CHECK constraint/unique constraint that ensures only
one row), add a brief comment in the schema/migration describing the singleton
assumption, and harden Backend/routes/profile.py by asserting the singleton at
runtime in set_profile() and before the UPDATE (e.g., verify SELECT COUNT(*) ==
1 or verify the update affected exactly one row) so future write paths cannot
violate the invariant; reference the profile table, the set_profile() function,
and the UPDATE statement that uses profile['id'] when applying the changes.
In `@Backend/routes/sleep.py`:
- Line 40: Extract the repeated construction of db_path into a module-level
constant (e.g., DB_PATH) so the join call using
os.path.dirname(os.path.dirname(__file__)), "db", "database.db" appears once;
add DB_PATH at the top of the module and replace the inline constructions in the
functions get_sleep, update_sleep, and delete_sleep with that constant, ensuring
os is imported and any tests or references still use the same path value.
- Around line 4-5: Remove the fragile sys.path manipulation in
Backend/routes/sleep.py by deleting the two lines that import sys and call
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))));
instead rely on proper package structure and use relative imports (e.g., from
..module import X) or absolute imports from the Flask app package so imports
resolve without modifying sys.path, and if tests or entry points fail, adjust
the package __init__.py or the application entry to ensure the package root is
on PYTHONPATH.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2c408165-237c-45b6-bc76-a5e109bf9400
📒 Files selected for processing (14)
- Backend/agent/vector_store.py
- Backend/app.py
- Backend/db/db.py
- Backend/routes/analytics.py
- Backend/routes/medicine.py
- Backend/routes/mood.py
- Backend/routes/profile.py
- Backend/routes/sleep.py
- Backend/schema.sql
- Frontend/src/Screens/CalendarScreen.jsx
- Frontend/src/Screens/SymptomsScreen.jsx
- Frontend/src/model/model.jsx
- Frontend/src/services/ActionExecutor.js
- Frontend/src/services/RAGService.js
💤 Files with no reviewable changes (2)
- Backend/routes/medicine.py
- Frontend/src/Screens/CalendarScreen.jsx
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
CHROMADB_PATH = os.path.join(BASE_DIR, "db", "chromadb")
os.makedirs(CHROMADB_PATH, exist_ok=True)
client = chromadb.PersistentClient(path=CHROMADB_PATH)
Keep Chroma initialization out of module import.
The absolute path fix helps, but Lines 7-10 still perform filesystem and PersistentClient(...) setup during import. A misconfigured or read-only environment can still fail before Flask finishes booting.
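The deferred-initialization pattern this comment asks for can be sketched generically: a small thread-safe holder that runs an arbitrary factory on first use. Under this sketch's assumptions, the `os.makedirs` and `chromadb.PersistentClient(...)` calls would move into the factory rather than running at module import (the class and names here are illustrative, not from the codebase).

```python
import threading

class LazyClient:
    """Defers expensive client construction until the first get() call."""

    def __init__(self, factory):
        # factory: zero-arg callable that e.g. creates the directory and
        # constructs chromadb.PersistentClient(path=...)
        self._factory = factory
        self._client = None
        self._lock = threading.Lock()

    def get(self):
        # Double-checked locking: cheap fast path once initialized.
        if self._client is None:
            with self._lock:
                if self._client is None:
                    self._client = self._factory()
        return self._client
```

The Flask app factory could then call `get()` inside a try/except at startup, so a misconfigured or read-only environment fails with a logged error instead of an import-time crash.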
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Backend/agent/vector_store.py` around lines 7 - 10, Top-level side effects
(BASE_DIR, os.makedirs(CHROMADB_PATH), and client =
chromadb.PersistentClient(...)) perform filesystem work and open a DB during
import; move that logic into a lazily-called initializer function (e.g.,
get_chroma_client or init_chroma_client) that computes CHROMADB_PATH, creates
the directory, constructs chromadb.PersistentClient, and returns or caches the
client; remove os.makedirs and PersistentClient creation from module import and
ensure the Flask app factory or startup code calls the new initializer inside a
try/except so failures are handled gracefully and logged rather than occurring
at import time.
@analytics_bp.route('/get_mood_entries', methods=['GET'])
def get_mood_entries():
    db = open_db()
    week = request.args.get('week')

    if week:
        rows = db.execute('SELECT * FROM mood_logs WHERE week_number = ? ORDER BY week_number', (week,)).fetchall()
    else:
        rows = db.execute('SELECT * FROM mood_logs ORDER BY week_number').fetchall()

    return jsonify([dict(row) for row in rows]), 200
Route collision with mood.py and sleep.py blueprints.
The routes /get_mood_entries and /get_sleep_entries are defined both here in analytics_bp and in mood_bp/sleep_bp. Since analytics_bp is registered last in app.py (line 43), these routes will shadow the ones in mood.py and sleep.py.
The implementations differ: mood.py and sleep.py order by created_at DESC, while analytics orders by week_number. This causes inconsistent behavior depending on which blueprint handles the request.
Options to fix:
- Remove duplicate routes from analytics.py and rely on mood.py/sleep.py implementations
- Use a URL prefix for analytics (e.g., `/analytics/get_mood_entries`)
- Rename analytics routes (e.g., `/analytics_mood_entries`)
Option 1: Remove duplicate routes from analytics.py
-@analytics_bp.route('/get_mood_entries', methods=['GET'])
-def get_mood_entries():
-    db = open_db()
-    week = request.args.get('week')
-
-    if week:
-        rows = db.execute('SELECT * FROM mood_logs WHERE week_number = ? ORDER BY week_number', (week,)).fetchall()
-    else:
-        rows = db.execute('SELECT * FROM mood_logs ORDER BY week_number').fetchall()
-
-    return jsonify([dict(row) for row in rows]), 200
-
-
-@analytics_bp.route('/get_sleep_entries', methods=['GET'])
-def get_sleep_entries():
-    db = open_db()
-    week = request.args.get('week')
-
-    if week:
-        rows = db.execute('SELECT * FROM sleep_logs WHERE week_number = ? ORDER BY week_number', (week,)).fetchall()
-    else:
-        rows = db.execute('SELECT * FROM sleep_logs ORDER BY week_number').fetchall()
-
-    return jsonify([dict(row) for row in rows]), 200

Option 2: Add URL prefix in app.py
# In Backend/app.py, change line 43:
app.register_blueprint(analytics_bp, url_prefix='/analytics')Also applies to: 68-78
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Backend/routes/analytics.py` around lines 55 - 65, The analytics blueprint
defines duplicate endpoints get_mood_entries and get_sleep_entries that collide
with mood_bp and sleep_bp (causing ordering differences because analytics_bp is
registered last); remove the duplicate route handlers from analytics.py
(functions get_mood_entries and get_sleep_entries) so the implementations from
mood.py and sleep.py are used, or alternatively register analytics_bp with a URL
prefix via app.register_blueprint(analytics_bp, url_prefix='/analytics') or
rename the analytics handlers to unique paths (e.g., analytics_get_mood_entries)
to avoid shadowing; update any callers to the new paths if you choose
prefix/rename.
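The prefix option can be sketched with a minimal Flask app. The blueprint and handler names below are illustrative stand-ins, not the project's actual code; the point is only that `url_prefix` moves the analytics routes to distinct URLs so nothing is shadowed.

```python
from flask import Flask, Blueprint, jsonify

# Hypothetical stand-ins for the project's blueprints.
mood_bp = Blueprint("mood", __name__)
analytics_bp = Blueprint("analytics", __name__)

@mood_bp.route("/get_mood_entries")
def mood_entries():
    return jsonify(source="mood_bp")  # stands in for ORDER BY created_at DESC

@analytics_bp.route("/get_mood_entries")
def analytics_mood_entries():
    return jsonify(source="analytics_bp")  # stands in for ORDER BY week_number

app = Flask(__name__)
app.register_blueprint(mood_bp)
# The prefix relocates the analytics route to /analytics/get_mood_entries,
# so the two blueprints no longer claim the same URL.
app.register_blueprint(analytics_bp, url_prefix="/analytics")

client = app.test_client()
print(client.get("/get_mood_entries").get_json())
print(client.get("/analytics/get_mood_entries").get_json())
```

With the prefix in place, each client unambiguously reaches the ordering it expects.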
```python
db_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), "db", "database.db")
agent = get_agent(db_path)
agent.update_cache(data_type="mood", operation="create")
```
Cache update silently fails - "mood" not in valid data types.
Same issue as sleep.py: the cache implementation only supports ['profile', 'weight', 'medicine', 'symptoms', 'blood_pressure', 'discharge']. Calls to agent.update_cache(data_type="mood", ...) will silently fail.
Update Backend/agent/cache.py to include "mood" in valid_types and add data fetch logic in _get_specific_data().
Also applies to: 86-88, 109-111
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Backend/routes/mood.py` around lines 39 - 41, The cache update fails because
"mood" is missing from the valid_types check in Backend/agent/cache.py; add
"mood" to the valid_types list and implement the corresponding data retrieval
branch in the _get_specific_data(...) function so that when
Agent.update_cache(data_type="mood", ...) is called it fetches mood records
(similar to how "sleep" or "symptoms" are handled) and returns them for caching;
update any unit of work or DAO calls used by _get_specific_data to query mood
entries and ensure the returned structure matches the cache expectations so the
calls originating in routes/mood.py (and the other noted call sites) no longer
silently fail.
```python
db_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), "db", "database.db")
agent = get_agent(db_path)
agent.update_cache(data_type="sleep", operation="create")
```
Cache update silently fails - "sleep" not in valid data types.
Based on the cache implementation in Backend/agent/cache.py (lines 59-77), the _cache_update_handler only supports these data types: ['profile', 'weight', 'medicine', 'symptoms', 'blood_pressure', 'discharge'].
Calling agent.update_cache(data_type="sleep", ...) will silently fail because "sleep" is not in valid_types. The cache will not be updated, but no error is raised or logged.
To fix, update Backend/agent/cache.py to include "sleep" in valid_types:
```python
# In Backend/agent/cache.py, line ~64
valid_types = ['profile', 'weight', 'medicine', 'symptoms', 'blood_pressure', 'discharge', 'mood', 'sleep']
```

And add corresponding data fetch logic in _get_specific_data().
Also applies to: 92-94, 115-117
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Backend/routes/sleep.py` around lines 40 - 42, Add "sleep" to the cache valid
types and implement its fetch logic so cache updates for sleep no longer
silently fail: in Backend/agent/cache.py update the valid_types list (the
variable named valid_types used by _cache_update_handler) to include "sleep",
and extend the _get_specific_data(data_type, ...) function to handle "sleep"
(mirror how "mood"/other types are fetched) so that calls to
_cache_update_handler/_get_specific_data (referenced around the existing checks
and branches near the blocks handling data types at the ~59-77, ~92-94, and
~115-117 areas) return and populate sleep data correctly. Ensure the new branch
returns the same shaped payload as other types so
agent.update_cache(data_type="sleep", ...) succeeds.
```sql
CREATE TABLE IF NOT EXISTS mood_logs (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    week_number INTEGER NOT NULL,
    mood TEXT NOT NULL,
    note TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
```
mood_logs is missing the intensity field the new clients send.
Frontend/src/services/ActionExecutor.js, Lines 362-367, and Frontend/src/services/RAGService.js, Lines 3051-3055, both submit intensity for mood entries. With the current table shape, that value is either dropped or the new mood routes have to diverge from the frontend contract.
Suggested schema update
```diff
 CREATE TABLE IF NOT EXISTS mood_logs (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
     week_number INTEGER NOT NULL,
     mood TEXT NOT NULL,
+    intensity TEXT,
     note TEXT,
     created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
 );
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Backend/schema.sql` around lines 144 - 149, The mood_logs table schema lacks
the intensity column that the frontend sends; update the CREATE TABLE statement
for mood_logs to include an intensity column (e.g., intensity INTEGER) and
provide a migration/ALTER TABLE to add intensity to existing DBs (use ALTER
TABLE mood_logs ADD COLUMN intensity INTEGER DEFAULT 0 or NULL as appropriate)
so entries from Frontend/src/services/ActionExecutor.js and
Frontend/src/services/RAGService.js are persisted; ensure the new column name is
exactly intensity and adjust any INSERT statements that reference mood_logs to
include intensity.
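A migration along these lines can be sketched against an in-memory SQLite database. Note the review's diff suggests `intensity TEXT` while the prompt suggests INTEGER; TEXT is used here, and the guard makes the migration safe to re-run.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE mood_logs (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        week_number INTEGER NOT NULL,
        mood TEXT NOT NULL,
        note TEXT,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")
db.execute("INSERT INTO mood_logs (week_number, mood) VALUES (12, 'calm')")

# Add the column only if an earlier migration has not already done so.
cols = [row[1] for row in db.execute("PRAGMA table_info(mood_logs)")]
if "intensity" not in cols:
    db.execute("ALTER TABLE mood_logs ADD COLUMN intensity TEXT")

db.execute("INSERT INTO mood_logs (week_number, mood, intensity) "
           "VALUES (13, 'anxious', 'high')")
# Existing rows survive with intensity = NULL; new rows carry the value.
print(db.execute("SELECT mood, intensity FROM mood_logs ORDER BY id").fetchall())
```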
```sql
CREATE TABLE IF NOT EXISTS sleep_logs (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    week_number INTEGER NOT NULL,
    hours REAL NOT NULL,
    quality TEXT,
    note TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```
sleep_logs can't persist the new sleep payload.
Frontend/src/services/ActionExecutor.js, Lines 408-415, and Frontend/src/services/RAGService.js, Lines 3087-3094, send duration, bedtime, wake_time, quality, and note. This table only stores hours, quality, and note, so bedtime/wake time are lost and duration has to be renamed everywhere.
Suggested schema update
```diff
 CREATE TABLE IF NOT EXISTS sleep_logs (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
     week_number INTEGER NOT NULL,
-    hours REAL NOT NULL,
+    duration REAL NOT NULL,
+    bedtime TEXT,
+    wake_time TEXT,
     quality TEXT,
     note TEXT,
     created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
 );
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Backend/schema.sql` around lines 152 - 159, The sleep_logs table schema is
missing fields sent by the frontend (duration, bedtime, wake_time) and uses
hours instead of the payload's duration; update the schema for table sleep_logs
to either rename hours to duration or add a new duration REAL column and add
bedtime TIMESTAMP and wake_time TIMESTAMP (keep quality TEXT, note TEXT and
created_at), then update all DB writes/reads that reference sleep_logs (see code
paths in ActionExecutor.js and RAGService.js that currently send duration,
bedtime, wake_time, quality, note) to use the new/renamed columns
(INSERT/SELECT/UPDATE statements should reference duration, bedtime, wake_time
instead of hours) so the payload fields are persisted.
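The sleep_logs change can be sketched the same way with an in-memory SQLite database. `RENAME COLUMN` requires SQLite 3.25 or later; the column names simply mirror the payload fields quoted above.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE sleep_logs (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        week_number INTEGER NOT NULL,
        hours REAL NOT NULL,
        quality TEXT,
        note TEXT,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")

# Align the table with the frontend payload: hours -> duration,
# plus the bedtime/wake_time fields that were previously dropped.
db.execute("ALTER TABLE sleep_logs RENAME COLUMN hours TO duration")
db.execute("ALTER TABLE sleep_logs ADD COLUMN bedtime TEXT")
db.execute("ALTER TABLE sleep_logs ADD COLUMN wake_time TEXT")

db.execute("INSERT INTO sleep_logs (week_number, duration, bedtime, wake_time, quality) "
           "VALUES (20, 7.5, '22:30', '06:00', 'good')")
cols = [row[1] for row in db.execute("PRAGMA table_info(sleep_logs)")]
print(cols)
```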
```jsx
if (!HF_TO_GGUF)
  throw new Error("HF_TO_GGUF not configured in environment variables");
if (!MODEL_NAME)
  throw new Error("MODEL_NAME not configured in environment variables");
```
Don't fail GGUF discovery on MODEL_NAME.
fetchAvailableGGUFs() only uses HF_TO_GGUF. Requiring MODEL_NAME here turns a missing-but-unused env var into an empty model list.
Suggested cleanup
```diff
-  if (!MODEL_NAME)
-    throw new Error("MODEL_NAME not configured in environment variables");
-
   const repoPath = HF_TO_GGUF;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Frontend/src/model/model.jsx` around lines 38 - 41, The startup guard in
model.jsx currently throws if MODEL_NAME is missing even though
fetchAvailableGGUFs() only needs HF_TO_GGUF; remove the unconditional check that
throws for MODEL_NAME (the two-line throw using MODEL_NAME) or move that
validation into the specific function that actually requires it so GGUF
discovery can run when HF_TO_GGUF is present; refer to the existing symbols
HF_TO_GGUF, MODEL_NAME and fetchAvailableGGUFs to locate and adjust the
validation logic.
Summary
Critical Bugs Fixed
- `argparse.parse_args()` called twice, causing a Flask crash on startup

Security Issues Fixed
New Backend Endpoints
- Mood: `/log_mood`, `/get_mood_entries`, `/update_mood`, `/delete_mood`
- Sleep: `/log_sleep`, `/get_sleep_entries`, `/update_sleep`, `/delete_sleep`
- Analytics: `/get_analytics`, `/get_weight_entries`, `/get_mood_entries`, `/get_sleep_entries`

Wrong URLs Fixed
- `/create_appointment` -> `/add_appointment`
- `/medicine` -> `/set_medicine`
- `/get_weight` -> `/weight`
- `/get_all_medicine` -> `/get_medicine`

GitHub Issues
Related to issues #197-236
Summary by CodeRabbit
New Features
Bug Fixes