feat: Added AI-Powered Learning Analytics & Smart Study Planner #997
Premkumar-2004 wants to merge 3 commits into alphaonelabs:main
Conversation
👀 Peer Review Required

Hi @Premkumar-2004! This pull request does not yet have a peer review. Before this PR can be merged, please request a review from one of your peers:

Thank you for contributing! 🎉
⚠️ Warning: Rate limit exceeded

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source, and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

📒 Files selected for processing (10)
Walkthrough

Adds a learning analytics and study-planning feature: new models for SubjectStrength, StudyPlan, and StudyPlanItem; AI-integrated analytics and plan generation; signals to update strengths on quiz completion; and admin, views, templates, URLs, and tests to support dashboards and study-plan workflows.

Changes
Sequence Diagram

sequenceDiagram
actor User
participant UI as Dashboard View
participant Reco as Recommendations (analytics)
participant DB as Database Models
participant AI as OpenAI API
participant Fallback as Rule-based Engine
User->>UI: Request study plan / analytics page
UI->>Reco: get_learning_analytics(user)
Reco->>DB: Query quizzes, sessions, subjects, strengths
DB-->>Reco: Return user performance data
Reco->>Reco: Compute analytics & weak subjects
Reco->>AI: _generate_ai_insights(student_profile) (if configured)
alt AI responds
AI-->>Reco: AI coaching & plan items
else AI fails or unavailable
Reco->>Fallback: _generate_fallback_plan_items(...)
Fallback-->>Reco: Deterministic plan items
end
Reco->>DB: create StudyPlan and StudyPlanItem records
UI->>DB: fetch StudyPlan for rendering
DB-->>UI: StudyPlan + items
UI->>User: Render analytics and study plan
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks: ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/test_trackers.py (1)
29-40: ⚠️ Potential issue | 🟠 Major

Uncomment tracker creation; this test currently cannot pass.
On Line 29, the POST request is inside a triple-quoted string, so the request never runs. The later assertions then query a tracker that was never created.
Proposed fix
```diff
     def test_create_tracker(self):
-        """response = self.client.post(reverse('create_tracker'), {
-            'title': 'New Tracker',
-            'description': 'New description',
-            'current_value': 10,
-            'target_value': 50,
-            'color': 'green-600',
-            'public': True
-        })"""
+        response = self.client.post(
+            reverse("create_tracker"),
+            {
+                "title": "New Tracker",
+                "description": "New description",
+                "current_value": 10,
+                "target_value": 50,
+                "color": "green-600",
+                "public": True,
+            },
+        )
+        self.assertEqual(response.status_code, 302)
         self.assertEqual(ProgressTracker.objects.count(), 2)
         new_tracker = ProgressTracker.objects.get(title="New Tracker")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_trackers.py` around lines 29 - 40, The test currently has the tracker creation POST wrapped in a triple-quoted string so the request never executes; remove the triple quotes around the self.client.post(reverse('create_tracker'), {...}) call so the POST actually runs, creating the ProgressTracker used by the subsequent assertions (ProgressTracker.objects.count(), ProgressTracker.objects.get(title="New Tracker"), new_tracker.current_value and new_tracker.percentage). Ensure the payload keys match the view's expected fields (title, description, current_value, target_value, color, public) and that reverse('create_tracker') resolves in the test context.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/test_trackers.py`:
- Around line 63-64: Replace the hardcoded password literals used in
User.objects.create_user and self.client.login by generating a non-hardcoded
test password and reusing it; for example, add a local variable password =
secrets.token_urlsafe(8) (or use django.utils.crypto.get_random_string) and pass
that variable to User.objects.create_user(...) and self.client.login(...), and
add the required import for secrets (or django.utils.crypto) at the top of the
test file so Ruff S106 is resolved for self.user, User.objects.create_user, and
self.client.login usages.
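The prompt above can be sketched as a minimal standalone snippet; only `secrets.token_urlsafe` comes from the prompt, and the Django fixture calls the value would feed are referenced in comments rather than executed here:

```python
import secrets

# Generate a throwaway credential instead of a hardcoded literal (resolves Ruff S106).
# In the actual test this value would be passed to both
# User.objects.create_user(username=..., password=password) and
# self.client.login(username=..., password=password).
password = secrets.token_urlsafe(8)  # 8 random bytes -> 11 URL-safe characters

print(password)
```

`django.utils.crypto.get_random_string` would work equally well; the point is that no literal password string appears in the test file.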
In `@web/models.py`:
- Around line 3289-3292: mark_complete currently overwrites completed_at on
every call; make it idempotent by checking existing completion state/field
before mutating: in mark_complete(), if self.is_completed is already True (or
self.completed_at is not None) return/do nothing, otherwise set
self.is_completed = True, set self.completed_at = timezone.now(), and then
save(); reference the mark_complete method and the is_completed and completed_at
fields when applying the change.
- Around line 3201-3212: The update_from_quiz method can produce strength_score
>100 and inconsistent totals when score is out of bounds; before computing
new_score or updating totals in update_from_quiz, clamp score to the range [0,
max_score] (or treat negative/over-max values appropriately), compute new_score
= (clamped_score / max_score) * 100 (and if desired also cap new_score at 100),
use the clamped_score when incrementing total_correct and max_score when
incrementing total_questions, then apply the existing EMA logic on
self.strength_score, increment self.total_quizzes, and call self.save();
reference update_from_quiz, strength_score, total_quizzes, total_correct, and
total_questions when making the changes.
- Around line 3283-3284: The Meta.ordering currently uses "-priority" which
sorts the CharField lexically instead of by business importance; add a numeric
field (e.g., priority_rank = IntegerField) to the same model, populate it for
existing rows (high=3, medium=2, low=1 or your mapping), update Meta.ordering to
use "-priority_rank" (and keep "order" and "due_date" as needed), and
create/update migration 0064_add_learning_analytics_models.py to add the new
column and backfill values so the DB and code stay in sync.
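A standalone sketch of why descending lexicographic order misorders string priorities, and what the numeric mapping suggested above (high=3, medium=2, low=1) fixes; the sort key is illustrative, not the model's actual ordering mechanism:

```python
# Numeric mapping from the review prompt (high=3, medium=2, low=1).
PRIORITY_RANK = {"high": 3, "medium": 2, "low": 1}

# Descending lexicographic sort puts "medium" first -- the bug being flagged.
lexical = sorted(["high", "low", "medium"], reverse=True)
print(lexical)  # ['medium', 'low', 'high']

# Sorting by the numeric rank yields the intended urgency order.
ranked = sorted(["high", "low", "medium"], key=lambda p: -PRIORITY_RANK[p])
print(ranked)  # ['high', 'medium', 'low']
```

Storing the rank in a dedicated integer column (and backfilling it in the migration) lets the database apply this ordering directly via `Meta.ordering`.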
In `@web/recommendations.py`:
- Line 113: The variables `ninety_days_ago` and `quiz_map` are assigned but
unused causing lint F841; remove these unused local assignments or use them
meaningfully. Locate the `ninety_days_ago = now - timedelta(days=90)` assignment
and either delete it or replace it with a direct use of `now -
timedelta(days=90)` where needed; locate the `quiz_map` assignment at its usage
site and either remove the `quiz_map` variable or refactor code to actually use
`quiz_map` (e.g., replace intermediate mapping with direct lookups). Ensure no
other references depend on these locals after removal to keep behavior
unchanged.
- Line 294: The new helper functions _generate_ai_insights,
_generate_fallback_recommendations, _generate_ai_study_plan, and
_generate_fallback_plan_items should have explicit type hints: add parameter and
return annotations (e.g., student_profile: Dict[str, Any] or a concrete
StudentProfile model if available, logger: logging.Logger) and specify precise
return types such as Dict[str, Any] or List[Dict[str, Any]] as appropriate;
import required typing names (from typing import Any, Dict, List, Optional) and
logging, update each function signature (e.g., def
_generate_ai_insights(student_profile: Dict[str, Any], logger: logging.Logger)
-> Dict[str, Any]:) and adjust any internal variable annotations if needed to
satisfy the type checker.
- Around line 530-535: The code is currently loading a Quiz by primary key from
item_data["quiz_id"] using Quiz.objects.get without validating it against the
quizzes passed into the function; instead, validate quiz_id against the existing
quiz_map (built from available_quizzes) before assigning: check if quiz_id is
present in quiz_map and, if so, set item_kwargs["quiz"] = quiz_map[quiz_id]; do
not call Quiz.objects.get or rely on a try/except for this mapping so
unapproved/private quizzes from the AI output are ignored.
- Around line 357-362: The OpenAI client is initialized without an explicit
timeout which can cause long hangs; update the OpenAI client construction(s)
used before the chat completion calls (the variable named client created near
where feed completions are invoked) to pass a timeout argument (e.g.,
timeout=20.0) so the calls from client.chat.completions.create(...) time out
predictably; apply this change to both client initializations that feed the
completions calls around the blocks that call
client.chat.completions.create(...) (the usages at ~lines 357-362 and ~622-627).
- Around line 211-212: Replace the bare "except Exception: pass" that surrounds
access to enrollment.progress.completed_sessions with a targeted "except
AttributeError as e" (or similar) and log the issue instead of swallowing it;
specifically, in the function or block where enrollment.progress and its
.completed_sessions attribute are read (look for references to
enrollment.progress and completed_sessions), catch AttributeError, emit a
warning/error via the module logger (e.g., logger.warning or logger.error)
including the enrollment id/context and the exception message, and allow the
code to continue with a safe default (e.g., completed_sessions = 0) so
downstream predictions remain informed by the logged anomaly.
- Around line 450-451: Wrap the entire study plan regeneration flow in a
database transaction using transaction.atomic() so pausing old plans and
creating the new StudyPlan happen atomically; inside that atomic block lock the
existing active plans with select_for_update() (e.g.,
StudyPlan.objects.select_for_update().filter(user=user,
status="active").update(status="paused")) before inserting the new plan to
prevent concurrent requests from creating a second active plan. Ensure all
subsequent operations that create or set the new StudyPlan to active occur
inside the same transaction.atomic() block so the lock and update protect the
whole critical section.
In `@web/signals.py`:
- Around line 72-100: The post_save handler
update_subject_strength_on_quiz_complete is double-counting when the same
UserQuiz is re-saved with completed=True; add an idempotency guard by recording
the previous completed state in a pre_save receiver (for UserQuiz) and then in
the post_save handler only proceed when created is True or when completed
changed from False→True (use the cached previous value), then call
SubjectStrength.objects.get_or_create and strength.update_from_quiz(...) as
before; ensure you reference UserQuiz in the pre_save receiver and keep the
existing update_subject_strength_on_quiz_complete logic but gated by the created
or transitioned-completed condition.
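The False-to-True transition guard described above can be simulated without Django; in the real code `on_pre_save`/`on_post_save` would be signal receivers for UserQuiz and the cache would be keyed by instance pk (all names here are hypothetical):

```python
# Framework-free simulation of the pre_save/post_save idempotency guard.
_previous_completed: dict[int, bool] = {}


def on_pre_save(pk: int, currently_completed: bool) -> None:
    # Cache the completed state as it was *before* this save.
    _previous_completed[pk] = currently_completed


def on_post_save(pk: int, completed: bool, created: bool) -> bool:
    # Proceed only on creation or on a False -> True transition,
    # gating the call to strength.update_from_quiz(...).
    was_completed = _previous_completed.pop(pk, False)
    return completed and (created or not was_completed)


# First completion should trigger a SubjectStrength update...
on_pre_save(1, currently_completed=False)
print(on_post_save(1, completed=True, created=False))  # True

# ...but re-saving an already-completed quiz should not double-count.
on_pre_save(1, currently_completed=True)
print(on_post_save(1, completed=True, created=False))  # False
```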
In `@web/templates/dashboard/learning_analytics.html`:
- Line 231: The progressbar divs rendering rc.progress lack accessible names;
update both progress elements (the divs with role="progressbar" and
data-width="{{ rc.progress }}") to include an accessible label by adding either
an aria-label that describes what the progress measures (e.g.,
aria-label="Course completion progress for {{ rc.name }}" or similar) or an
aria-labelledby that points to a nearby visible label element ID; ensure the
label text is unique and meaningful for each RC instance and apply the change to
both occurrences referenced (the div using rc.progress at the two locations).
- Around line 355-361: Clamp dataset width/height values before applying styles
to prevent overflow: inside the DOMContentLoaded listener where
document.querySelectorAll('[data-width]') and
document.querySelectorAll('[data-height]') are handled, parse each
bar.dataset.width / bar.dataset.height as a number, coerce invalid values to 0,
then clamp with Math.max(0, Math.min(value, 100)) and use that clamped value
when setting bar.style.width and bar.style.height (append '%' as before).
In `@web/templates/dashboard/student.html`:
- Around line 12-18: The "Create Study Group" anchor (href="{% url
'create_study_group' %}") uses non-approved colors and lacks explicit
focus-visible styling; update its class list to use the project's approved
button palette and include hover and focus-visible utility classes (matching the
"Learning Analytics" button pattern or project button component), ensuring
consistent bg/hover colors, text color, and an explicit focus-visible ring so
keyboard users get a visible focus state.
In `@web/templates/dashboard/study_plan.html`:
- Around line 39-40: The badge at the span showing weekly_goal_sessions uses
blue/purple Tailwind classes that don't follow the project's color palette;
update the span's class list (the element rendering {{ plan.weekly_goal_sessions
}} sessions/week) to use the prescribed classes (Primary: use bg-teal-300
text-teal-600 with corresponding dark: variants, and ensure other badges
referenced in the same template around the block at the later occurrence (lines
~78-80) follow the same mapping: Secondary->gray-600, Success->green-600,
Warning->yellow-600, Danger->red-600 with dark: prefixes as needed) so all
badges conform to the project's color scheme.
In `@web/views.py`:
- Around line 8878-8886: The view generate_study_plan_view calls
generate_study_plan(request.user) without handling exceptions; wrap the call in
a try/except inside generate_study_plan_view to catch any exceptions from
generate_study_plan, log the exception (use the module logger), call
messages.error(request, "Could not generate study plan. Please try again.") and
then redirect to "study_plan" so the user does not get a 500; keep the existing
messages.success flow for the success path.
- Around line 8852-8902: Add explicit type hints on the new view functions:
annotate the request parameter as django.http.HttpRequest and set return types
accordingly—learning_analytics_dashboard and study_plan_view should return
django.http.HttpResponse, generate_study_plan_view should return
django.http.HttpResponse (or HttpResponseRedirect) and complete_study_plan_item
should return django.http.JsonResponse; update/ add the necessary imports
(HttpRequest, HttpResponse, JsonResponse, etc.) at the top of the module and
apply the annotations to the functions get_learning_analytics, study_plan_view,
generate_study_plan_view, and complete_study_plan_item.
- Line 23: Replace the private import send_verification_email_for_user and its
calls with the documented public API: import EmailAddress from
allauth.account.models, lookup the EmailAddress instance for the target user
(EmailAddress.objects.get(user=user, email=user.email)) and call its
send_confirmation(request, signup=False); update the places that currently call
send_verification_email_for_user (the calls referenced at lines where
send_verification_email_for_user is used) to use
email_address.send_confirmation(request, signup=False) instead.
---
Outside diff comments:
In `@tests/test_trackers.py`:
- Around line 29-40: The test currently has the tracker creation POST wrapped in
a triple-quoted string so the request never executes; remove the triple quotes
around the self.client.post(reverse('create_tracker'), {...}) call so the POST
actually runs, creating the ProgressTracker used by the subsequent assertions
(ProgressTracker.objects.count(), ProgressTracker.objects.get(title="New
Tracker"), new_tracker.current_value and new_tracker.percentage). Ensure the
payload keys match the view's expected fields (title, description,
current_value, target_value, color, public) and that reverse('create_tracker')
resolves in the test context.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (13)
.env.sample
tests/test_trackers.py
web/admin.py
web/migrations/0064_add_learning_analytics_models.py
web/models.py
web/recommendations.py
web/settings.py
web/signals.py
web/templates/dashboard/learning_analytics.html
web/templates/dashboard/student.html
web/templates/dashboard/study_plan.html
web/urls.py
web/views.py
💬 Unresolved Review Conversations

Hi @Premkumar-2004! 👋 This pull request currently has 9 unresolved review conversations. Please address all review feedback and push a new commit to resolve them before this PR can be merged.

Steps to resolve:
Once all conversations are resolved, this notice will be removed automatically. Thank you! 🙏
Actionable comments posted: 3
♻️ Duplicate comments (14)
web/views.py (3)
8884-8886: ⚠️ Potential issue | 🟠 Major

Guard study-plan generation failures to avoid user-facing 500s.

`generate_study_plan(request.user)` is unhandled; failures in analytics/AI paths will bubble up and break the POST flow.

Suggested hardening:
```diff
 def generate_study_plan_view(request):
     """Generate a study plan."""
     from .recommendations import generate_study_plan

-    generate_study_plan(request.user)
-    messages.success(request, "Your study plan has been generated!")
+    try:
+        generate_study_plan(request.user)
+    except Exception:
+        logger.exception("Failed to generate study plan for user_id=%s", request.user.id)
+        messages.error(request, "Could not generate study plan. Please try again.")
+    else:
+        messages.success(request, "Your study plan has been generated!")
     return redirect("study_plan")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/views.py` around lines 8884 - 8886, Wrap the call to generate_study_plan(request.user) in a try/except to prevent exceptions from bubbling to the user: catch broad exceptions around generate_study_plan, log the exception (using existing logger or error reporting), show a friendly messages.error to the user instead of letting a 500 occur, and still return redirect("study_plan") so the POST flow completes; reference generate_study_plan, messages.success, messages.error and redirect in your change.
8853-8867: ⚠️ Potential issue | 🟡 Minor

Add explicit request/response type hints on the new views.
These newly added handlers still use untyped signatures.
Suggested typing update
```diff
 @login_required
-def learning_analytics_dashboard(request):
+def learning_analytics_dashboard(request: HttpRequest) -> HttpResponse:
@@
 @login_required
-def study_plan_view(request):
+def study_plan_view(request: HttpRequest) -> HttpResponse:
@@
 @login_required
 @require_POST
-def generate_study_plan_view(request):
+def generate_study_plan_view(request: HttpRequest) -> HttpResponse:
@@
 @login_required
 @require_POST
-def complete_study_plan_item(request, item_id):
+def complete_study_plan_item(request: HttpRequest, item_id: int) -> JsonResponse:
```

As per coding guidelines:
Use type hints in Python where appropriate.

Also applies to: 8880-8891
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/views.py` around lines 8853 - 8867, The new view functions lack explicit type hints; update the signatures for learning_analytics_dashboard and study_plan_view (and any other new handlers around 8880-8891) to use Django types (e.g., request: HttpRequest and return type HttpResponse or TemplateResponse), import HttpRequest/HttpResponse from django.http (or TemplateResponse from django.template.response) at top of the module if needed, and adjust any internal type-sensitive variables accordingly so the views have explicit request/response type annotations.
23-23: ⚠️ Potential issue | 🟠 Major

Replace private allauth flow import with supported confirmation API.

This uses `allauth.account.internal.*` and can break on allauth upgrades. Use `EmailAddress.send_confirmation(request, signup=False)` at both send sites.

Suggested fix:
```diff
-from allauth.account.internal.flows.email_verification import send_verification_email_for_user
@@
-        send_verification_email_for_user(request, user)
+        email_address, _ = EmailAddress.objects.get_or_create(
+            user=user,
+            email=email,
+            defaults={"primary": True, "verified": False},
+        )
+        email_address.send_confirmation(request, signup=False)
@@
-        EmailAddress.objects.create(user=user, email=email, primary=True, verified=False)
+        email_address = EmailAddress.objects.create(
+            user=user, email=email, primary=True, verified=False
+        )
@@
-        send_verification_email_for_user(request, user)
+        email_address.send_confirmation(request, signup=False)
```

Also applies to: 1338-1338, 1368-1368
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/views.py` at line 23, The code imports and uses the private function send_verification_email_for_user (from allauth.account.internal.flows.email_verification) which is unstable; replace any usage of send_verification_email_for_user with the supported API EmailAddress.send_confirmation(request, signup=False). Update imports to use EmailAddress from allauth.account.models and call EmailAddress.send_confirmation(request, signup=False) at each site where send_verification_email_for_user is currently invoked (e.g., the send sites referenced by send_verification_email_for_user) so you use the public stable API.

web/models.py (3)
3287-3290: ⚠️ Potential issue | 🟡 Minor

Make `mark_complete()` idempotent to preserve first completion timestamp.

Repeated calls currently overwrite `completed_at`.

Suggested change:
```diff
     def mark_complete(self):
-        self.is_completed = True
-        self.completed_at = timezone.now()
-        self.save()
+        if self.is_completed:
+            return
+        self.is_completed = True
+        self.completed_at = timezone.now()
+        self.save(update_fields=["is_completed", "completed_at"])
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/models.py` around lines 3287 - 3290, mark_complete currently overwrites completed_at on every call; make it idempotent by leaving completed_at as-is after the first completion: check if self.is_completed or self.completed_at is already set and only set self.is_completed = True and self.completed_at = timezone.now() (and then call save()) when not previously completed; ensure you still persist the change (or skip save if no changes) so repeated calls do not update the original completion timestamp in the mark_complete method.
3201-3212: ⚠️ Potential issue | 🟡 Minor

Clamp quiz scores before updating mastery totals.

Out-of-range `score` values can push `strength_score` above 100 and make totals inconsistent.

Suggested change:
```diff
     def update_from_quiz(self, score, max_score):
         """Update strength score from a quiz result."""
-        if max_score > 0:
-            new_score = (score / max_score) * 100
+        if max_score <= 0:
+            return
+        bounded_score = max(0, min(score, max_score))
+        new_score = (bounded_score / max_score) * 100
         if self.total_quizzes == 0:
             self.strength_score = new_score
         else:
             self.strength_score = (0.7 * self.strength_score) + (0.3 * new_score)
         self.total_quizzes += 1
-        self.total_correct += score
+        self.total_correct += bounded_score
         self.total_questions += max_score
         self.save()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/models.py` around lines 3201 - 3212, In update_from_quiz, clamp the incoming score to the valid range (0..max_score) and ensure max_score > 0 before computing new_score so out-of-range inputs can't inflate totals; use the clamped_score when incrementing total_correct and total_questions, compute new_score = (clamped_score / max_score) * 100, apply the existing smoothing into strength_score, then clamp strength_score to the 0–100 range before calling save(); update references: method update_from_quiz and fields strength_score, total_quizzes, total_correct, total_questions.
3281-3282: ⚠️ Potential issue | 🟠 Major

`-priority` sorting is lexicographic, not business-priority aware.

`high`/`medium`/`low` stored as strings will not sort in intended urgency order.

Suggested change:
```diff
 class StudyPlanItem(models.Model):
+    PRIORITY_RANK = {"high": 0, "medium": 1, "low": 2}
@@
     priority = models.CharField(max_length=10, choices=PRIORITY_CHOICES, default="medium")
+    priority_rank = models.PositiveSmallIntegerField(default=1, editable=False, db_index=True)
@@
     class Meta:
-        ordering = ["order", "-priority", "due_date"]
+        ordering = ["order", "priority_rank", "due_date"]
+
+    def save(self, *args, **kwargs):
+        self.priority_rank = self.PRIORITY_RANK.get(self.priority, 1)
+        super().save(*args, **kwargs)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/models.py` around lines 3281 - 3282, Meta.ordering currently uses the string field "priority" which sorts lexicographically (so "high","low","medium" won't produce business urgency order); change the model to represent priority as an integer-ranked choice and update ordering to use that numeric field (or add a computed integer field like "priority_rank" and order by "-priority_rank"). Specifically, modify the priority field definition (the "priority" attribute on the model) to use IntegerField with choices (e.g., HIGH=3, MEDIUM=2, LOW=1) or add a new IntegerField "priority_rank" populated from the existing string values, update Meta.ordering to reference the numeric field (e.g., ordering = ["order", "-priority_rank", "due_date"]), and add the corresponding data migration to convert existing string priorities to the new numeric values.

web/templates/dashboard/learning_analytics.html (2)
231-231: ⚠️ Potential issue | 🟡 Minor

Add accessible names to progress bars.

Line 231 and Line 306 expose `role="progressbar"` without an accessible label.

Suggested change:
```diff
-<div class="bg-yellow-500 h-2 rounded-full" role="progressbar" aria-valuenow="{{ rc.progress }}" aria-valuemin="0" aria-valuemax="100" data-width="{{ rc.progress }}"></div>
+<div class="bg-yellow-500 h-2 rounded-full" role="progressbar" aria-valuenow="{{ rc.progress }}" aria-valuemin="0" aria-valuemax="100" aria-label="{{ rc.course }} risk progress {{ rc.progress }}%" data-width="{{ rc.progress }}"></div>
@@
-<div class="bg-teal-500 h-2.5 rounded-full" role="progressbar" aria-valuenow="{{ pc.progress }}" aria-valuemin="0" aria-valuemax="100" data-width="{{ pc.progress }}"></div>
+<div class="bg-teal-500 h-2.5 rounded-full" role="progressbar" aria-valuenow="{{ pc.progress }}" aria-valuemin="0" aria-valuemax="100" aria-label="{{ pc.course }} completion progress {{ pc.progress }}%" data-width="{{ pc.progress }}"></div>
```

As per coding guidelines:
web/templates/**/*.html: Ensure proper HTML structure and accessibility in templates.

Also applies to: 306-306
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/templates/dashboard/learning_analytics.html` at line 231, The progressbar divs lack accessible labels; update the progress bar elements (the divs with role="progressbar" and data-width="{{ rc.progress }}" and the similar one using the same pattern) to include either an aria-label (e.g. aria-label="{{ rc.name }} progress: {{ rc.progress }} percent") or aria-labelledby pointing to a visually-hidden element that contains the readable label (create a <span id="{{ rc.id }}-label" class="sr-only">...</span> and reference it). Ensure aria-valuenow, aria-valuemin and aria-valuemax remain and that the label text includes the item name and percent so assistive tech can announce the progress correctly.
355-361: ⚠️ Potential issue | 🟠 Major

Clamp dynamic bar dimensions before applying styles.

Unbounded `data-width`/`data-height` values can overflow the chart layout.

Suggested change:
```diff
 document.addEventListener('DOMContentLoaded', function() {
     document.querySelectorAll('[data-width]').forEach(function(bar) {
-        bar.style.width = bar.dataset.width + '%';
+        const width = Math.max(0, Math.min(100, Number(bar.dataset.width) || 0));
+        bar.style.width = width + '%';
     });
     document.querySelectorAll('[data-height]').forEach(function(bar) {
-        bar.style.height = bar.dataset.height + '%';
+        const height = Math.max(0, Math.min(100, Number(bar.dataset.height) || 0));
+        bar.style.height = height + '%';
     });
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/templates/dashboard/learning_analytics.html` around lines 355 - 361, The DOMContentLoaded handler sets inline styles from unvalidated dataset values which can overflow the chart; update the listener that iterates over querySelectorAll('[data-width]') and querySelectorAll('[data-height]') to parse the dataset values (e.g., parseFloat(bar.dataset.width/height)), clamp them to a safe range (0–100), and then apply the clamped value to bar.style.width and bar.style.height respectively so invalid or out-of-range inputs cannot break layout.

web/recommendations.py (4)
108-108: ⚠️ Potential issue | 🟡 Minor

Remove unused `ninety_days_ago` local to keep lint clean.

This assignment is currently dead code and fails Ruff F841.
As per coding guidelines
**/*.py: Fix linting errors in code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/recommendations.py` at line 108, The local variable `ninety_days_ago` is assigned but never used (causing Ruff F841); remove the dead assignment `ninety_days_ago = now - timedelta(days=90)` from the function in recommendations.py (or replace it with a used expression if the 90-day cutoff is intended) so the unused symbol `ninety_days_ago` is eliminated and the lint error is resolved.
436-437: ⚠️ Potential issue | 🟠 Major

Wrap regeneration in an atomic transaction to prevent active-plan races.
Line 436 pauses old plans and later creates a new one without atomicity. Concurrent requests can interleave and create conflicting active plans.
Suggested change
```diff
 def generate_study_plan(user):
@@
+    from django.db import transaction
@@
-    # Deactivate old plans
-    StudyPlan.objects.filter(user=user, status="active").update(status="paused")
-
     analytics = get_learning_analytics(user)
@@
-    plan = StudyPlan.objects.create(
-        user=user,
-        title=f"Study Plan — {now.strftime('%B %d, %Y')}",
-        description=ai_plan_items.get("plan_description", "Plan based on your learning analytics."),
-        daily_goal_minutes=ai_plan_items.get("daily_goal_minutes", 30),
-        weekly_goal_sessions=ai_plan_items.get("weekly_goal_sessions", 5),
-    )
+    with transaction.atomic():
+        StudyPlan.objects.select_for_update().filter(user=user, status="active").update(status="paused")
+
+        plan = StudyPlan.objects.create(
+            user=user,
+            title=f"Study Plan — {now.strftime('%B %d, %Y')}",
+            description=ai_plan_items.get("plan_description", "Plan based on your learning analytics."),
+            daily_goal_minutes=ai_plan_items.get("daily_goal_minutes", 30),
+            weekly_goal_sessions=ai_plan_items.get("weekly_goal_sessions", 5),
+        )
@@
-    for idx, item_data in enumerate(items):
-        ...
-        StudyPlanItem.objects.create(**item_kwargs)
+        for idx, item_data in enumerate(items):
+            ...
+            StudyPlanItem.objects.create(**item_kwargs)
     return plan
```

Also applies to: 476-539
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/recommendations.py` around lines 436 - 437, Wrap the pause-and-create sequence in a Django DB transaction to avoid race conditions: import django.db.transaction and enclose the block that pauses old plans (StudyPlan.objects.filter(user=user, status="active").update(...)) and the subsequent creation of the new plan in a with transaction.atomic() block; additionally acquire row locks before updating by selecting the user's plans with StudyPlan.objects.select_for_update().filter(user=user, status="active") (or otherwise lock the relevant rows) so concurrent requests cannot interleave. Apply the same fix to the other regeneration block referenced (the code around lines 476-539).
493-493: ⚠️ Potential issue | 🟠 Major

Constrain AI-selected quizzes to the pre-approved quiz set.
The current `Quiz.objects.get(id=quiz_id)` trusts raw AI output and can link unrelated quizzes. Also, `quiz_map` is currently unused.

Suggested change
```diff
-    quiz_map = {q["id"]: q for q in available_quizzes}
+    quiz_ids = [q["id"] for q in available_quizzes]
+    quiz_map = {quiz.id: quiz for quiz in Quiz.objects.filter(id__in=quiz_ids)}
@@
-    if quiz_id:
-        try:
-            item_kwargs["quiz"] = Quiz.objects.get(id=quiz_id)
-        except Quiz.DoesNotExist:
-            pass
+    if quiz_id in quiz_map:
+        item_kwargs["quiz"] = quiz_map[quiz_id]
```

Also applies to: 516-521
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/recommendations.py` at line 493, The code currently trusts raw AI-supplied quiz IDs by calling Quiz.objects.get(id=quiz_id) and doesn't use the computed quiz_map; replace those unsafe lookups by constraining to the pre-approved available_quizzes: use the existing quiz_map (built from available_quizzes) to validate each quiz_id returned by the AI and only fetch or reference Quiz objects that exist in quiz_map (reject or skip IDs not present). Update all occurrences (the current Quiz.objects.get usage and the selection logic at the block around lines referenced, e.g., where quiz_id is used and lines ~516-521) to consult quiz_map first and only query the DB for validated IDs or reuse the mapped objects.
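The allow-list pattern above can be sketched framework-free. This is a minimal illustration, assuming `available_quizzes` is a list of dicts with an `"id"` key as the review diff implies; any AI-supplied ID not in the map is silently skipped.

```python
def select_approved_quizzes(ai_quiz_ids, available_quizzes):
    """Keep only AI-suggested quiz IDs that appear in the pre-approved list.

    Builds a lookup from the approved set once, then consults it for every
    ID the AI returns, so raw model output can never link unrelated quizzes.
    """
    quiz_map = {q["id"]: q for q in available_quizzes}
    return [quiz_map[quiz_id] for quiz_id in ai_quiz_ids if quiz_id in quiz_map]
```

In the Django version the map values would be model instances fetched once with `Quiz.objects.filter(id__in=...)`; the skip-unknown-IDs behavior is the same.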
203-207: ⚠️ Potential issue | 🟠 Major

Do not silently swallow progress access errors.
The bare `except Exception: pass` hides data anomalies and makes analytics debugging difficult.

Suggested change
```diff
     if hasattr(enrollment, "progress"):
         try:
             completed_sessions = enrollment.progress.completed_sessions.count()
-        except Exception:
-            pass
+        except AttributeError as exc:
+            logger.warning("Missing progress relation for enrollment_id=%s: %s", enrollment.id, exc)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/recommendations.py` around lines 203 - 207, The code currently swallows all exceptions when accessing enrollment.progress.completed_sessions.count(); replace the bare "except Exception: pass" with a targeted handler that logs the failure and preserves a safe fallback. Specifically, catch Exception as e (or narrower exceptions if appropriate), call your module/logger (e.g., logger.exception or process_logger.error) with context including enrollment id or repr(enrollment) and the exception, and then set completed_sessions to a sensible default (e.g., 0) so downstream code still has a value; update the block around enrollment.progress.completed_sessions.count() and the completed_sessions variable accordingly.

tests/test_trackers.py (1)
63-64: ⚠️ Potential issue | 🟠 Major

Replace hardcoded test passwords to resolve Ruff S106.
These literal password arguments will keep lint checks failing.
Suggested change pattern
```diff
+import secrets
@@
-        self.user = User.objects.create_user(username="analyticsuser", email="analytics@test.com", password="testpassword")
-        self.client.login(username="analyticsuser", password="testpassword")
+        self.password = secrets.token_urlsafe(12)
+        self.user = User.objects.create_user(
+            username="analyticsuser",
+            email="analytics@test.com",
+            password=self.password,
+        )
+        self.client.login(username="analyticsuser", password=self.password)
```

Apply the same pattern to the other setUp blocks and `other_user` creation.

As per coding guidelines
`**/*.py`: Fix linting errors in code.

Also applies to: 120-121, 181-182, 241-241, 271-271
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_trackers.py` around lines 63 - 64, Replace hardcoded password literals used in User.objects.create_user and self.client.login by reading a shared test password variable from the environment (e.g., TEST_PASSWORD = os.environ.get("TEST_PASSWORD")) and use that variable in the setUp blocks and other_user creation; update the usages in tests/test_trackers.py where User.objects.create_user(...) and self.client.login(...) are called (also apply the same change to the other setUp blocks and other_user creation referenced in the comment) so no literal password strings remain in the test code.

web/templates/dashboard/study_plan.html (1)
39-40: ⚠️ Potential issue | 🟡 Minor

Replace non-standard blue/purple badge colors with the approved palette.
Line 39 and Lines 78-80 still use blue/purple variants that don’t match the template color contract.
Suggested change
```diff
-                <span class="bg-blue-100 dark:bg-blue-900 text-blue-700 dark:text-blue-300 px-3 py-1 rounded-full">
+                <span class="bg-gray-100 dark:bg-gray-700 text-gray-700 dark:text-gray-300 px-3 py-1 rounded-full">
                     <i class="fas fa-calendar mr-1"></i> {{ plan.weekly_goal_sessions }} sessions/week
                 </span>
@@
-                {% if item.item_type == 'session' %}bg-blue-100 dark:bg-blue-900 text-blue-700 dark:text-blue-300
-                {% elif item.item_type == 'quiz' %}bg-purple-100 dark:bg-purple-900 text-purple-700 dark:text-purple-300
+                {% if item.item_type == 'session' %}bg-teal-100 dark:bg-teal-900 text-teal-700 dark:text-teal-300
+                {% elif item.item_type == 'quiz' %}bg-gray-100 dark:bg-gray-700 text-gray-700 dark:text-gray-300
                 {% elif item.item_type == 'review' %}bg-yellow-100 dark:bg-yellow-900 text-yellow-700 dark:text-yellow-300
                 {% elif item.item_type == 'practice' %}bg-green-100 dark:bg-green-900 text-green-700 dark:text-green-300
                 {% else %}bg-gray-100 dark:bg-gray-700 text-gray-700 dark:text-gray-300{% endif %}">
```

As per coding guidelines
Follow the project's color scheme using Tailwind's color classes (Primary: teal-300, Secondary: gray-600, Success: green-600, Warning: yellow-600, Danger: red-600, dark mode variants using the dark: prefix).

Also applies to: 78-80
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/templates/dashboard/study_plan.html` around lines 39 - 40, The badge span uses non-approved blue/purple Tailwind classes; update the span class attribute (the <span ...> with classes like bg-blue-100 dark:bg-blue-900 text-blue-700 dark:text-blue-300) to use the project's color contract—for a primary badge use bg-teal-300 dark:bg-teal-700 text-gray-600 dark:text-gray-300 px-3 py-1 rounded-full—and make the same replacement for the other badge occurrence (the similar span at the later block).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@web/models.py`:
- Around line 3233-3235: Add a DB-level conditional unique constraint on the
model Meta to ensure each user has at most one active study plan: import Q from
django.db.models and in the model's Meta (the class that currently defines
ordering = ["-created_at"]) add a constraints list with
models.UniqueConstraint(fields=["user"], condition=Q(is_active=True),
name="unique_active_studyplan_per_user"). Ensure you reference the exact Meta in
the StudyPlan model (or the model defining is_active and user) and add the
import for models if not present.
In `@web/templates/dashboard/learning_analytics.html`:
- Around line 91-93: Replace the hardcoded blue/purple Tailwind classes in the
overview card accent div and its icon with the project's palette primary colors:
update the div with class "bg-blue-100 dark:bg-blue-900 rounded-full p-3" to use
primary background classes (e.g., "bg-teal-300 dark:bg-teal-700 rounded-full
p-3") and update the icon classes "text-blue-500 dark:text-blue-300 text-xl" to
the primary text variants (e.g., "text-teal-600 dark:text-teal-300 text-xl");
apply the same replacements to the other occurrence that currently uses
blue/purple (the block analogous to the one above).
In `@web/templates/dashboard/study_plan.html`:
- Around line 46-48: Remove the inline style on the progress bar by replacing
style="width: {{ plan.completion_percentage }}%" with a data attribute (e.g.,
data-progress="{{ plan.completion_percentage }}") on the element with id
"plan-progress-bar", and add a small script that reads that data-progress value
and sets the element's width (element.style.width = value + '%') at runtime;
target the DOM element by id "plan-progress-bar" and use the template variable
plan.completion_percentage to populate the data attribute.
---
Duplicate comments:
In `@tests/test_trackers.py`:
- Around line 63-64: Replace hardcoded password literals used in
User.objects.create_user and self.client.login by reading a shared test password
variable from the environment (e.g., TEST_PASSWORD =
os.environ.get("TEST_PASSWORD")) and use that variable in the setUp blocks and
other_user creation; update the usages in tests/test_trackers.py where
User.objects.create_user(...) and self.client.login(...) are called (also apply
the same change to the other setUp blocks and other_user creation referenced in
the comment) so no literal password strings remain in the test code.
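The randomized-credential pattern suggested in the review diff can be sketched with the standard library alone; the helper name below is ours, not from the PR.

```python
import secrets


def make_test_password(entropy_bytes: int = 12) -> str:
    """Generate a throwaway password for test fixtures.

    Avoids hardcoded literals (Ruff S106) while keeping the credential
    available for client.login() by storing it on the test case instance.
    """
    return secrets.token_urlsafe(entropy_bytes)
```

In a Django `setUp` this would be assigned to `self.password`, passed to `create_user`, and reused in `self.client.login(...)` so no literal string appears twice.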
In `@web/models.py`:
- Around line 3287-3290: mark_complete currently overwrites completed_at on
every call; make it idempotent by leaving completed_at as-is after the first
completion: check if self.is_completed or self.completed_at is already set and
only set self.is_completed = True and self.completed_at = timezone.now() (and
then call save()) when not previously completed; ensure you still persist the
change (or skip save if no changes) so repeated calls do not update the original
completion timestamp in the mark_complete method.
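The idempotency requirement can be checked without Django. This plain-object stand-in mirrors the described `mark_complete` behavior (field names match the model, but it is only a sketch): the first call stamps `completed_at`, and every later call is a no-op.

```python
from datetime import datetime, timezone


class PlanItemStub:
    """Plain-Python stand-in for StudyPlanItem's completion logic."""

    def __init__(self):
        self.is_completed = False
        self.completed_at = None

    def mark_complete(self):
        # Idempotent: repeated calls preserve the original completion time
        # instead of overwriting it on every invocation.
        if self.is_completed:
            return
        self.is_completed = True
        self.completed_at = datetime.now(timezone.utc)
```

In the real model the guarded branch would also call `save()`, and the early return would skip the write entirely.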
- Around line 3201-3212: In update_from_quiz, clamp the incoming score to the
valid range (0..max_score) and ensure max_score > 0 before computing new_score
so out-of-range inputs can't inflate totals; use the clamped_score when
incrementing total_correct and total_questions, compute new_score =
(clamped_score / max_score) * 100, apply the existing smoothing into
strength_score, then clamp strength_score to the 0–100 range before calling
save(); update references: method update_from_quiz and fields strength_score,
total_quizzes, total_correct, total_questions.
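The clamping arithmetic is pure, so it can be sketched independently of the model. The smoothing factor of 0.3 here is an assumption for illustration; the review does not state the weight `update_from_quiz` actually uses.

```python
def smoothed_strength(current_strength, score, max_score, alpha=0.3):
    """Fold a new quiz result into a 0-100 strength score.

    The raw score is clamped to [0, max_score] before computing the
    percentage, max_score must be positive, and the smoothed result is
    clamped to [0, 100] so out-of-range inputs cannot inflate totals.
    """
    if max_score <= 0:
        return current_strength  # nothing meaningful to fold in
    clamped = min(max(score, 0), max_score)
    new_pct = (clamped / max_score) * 100
    updated = (1 - alpha) * current_strength + alpha * new_pct
    return min(max(updated, 0.0), 100.0)
```

A score of 150 out of 100 is treated as a perfect 100, and a negative score as 0, so the running average can never leave the valid band.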
- Around line 3281-3282: Meta.ordering currently uses the string field
"priority" which sorts lexicographically (so "high","low","medium" won't produce
business urgency order); change the model to represent priority as an
integer-ranked choice and update ordering to use that numeric field (or add a
computed integer field like "priority_rank" and order by "-priority_rank").
Specifically, modify the priority field definition (the "priority" attribute on
the model) to use IntegerField with choices (e.g., HIGH=3, MEDIUM=2, LOW=1) or
add a new IntegerField "priority_rank" populated from the existing string
values, update Meta.ordering to reference the numeric field (e.g., ordering =
["order", "-priority_rank", "due_date"]), and add the corresponding data
migration to convert existing string priorities to the new numeric values.
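The lexicographic-sort problem and its numeric-rank fix can be demonstrated in plain Python; the rank values mirror the suggestion above (HIGH=3, MEDIUM=2, LOW=1).

```python
PRIORITY_RANK = {"high": 3, "medium": 2, "low": 1}


def order_items(items):
    """Sort plan items by explicit order, then by descending priority rank.

    A plain string sort would give "high" < "low" < "medium" alphabetically,
    which is why the model needs a numeric rank for urgency ordering.
    """
    return sorted(items, key=lambda i: (i["order"], -PRIORITY_RANK[i["priority"]]))
```

In Django this maps to an `IntegerField` with choices (or a populated `priority_rank` column) referenced from `Meta.ordering`, plus a data migration for existing rows.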
In `@web/recommendations.py`:
- Line 108: The local variable `ninety_days_ago` is assigned but never used
(causing Ruff F841); remove the dead assignment `ninety_days_ago = now -
timedelta(days=90)` from the function in recommendations.py (or replace it with
a used expression if the 90-day cutoff is intended) so the unused symbol
`ninety_days_ago` is eliminated and the lint error is resolved.
- Around line 436-437: Wrap the pause-and-create sequence in a Django DB
transaction to avoid race conditions: import django.db.transaction and enclose
the block that pauses old plans (StudyPlan.objects.filter(user=user,
status="active").update(...)) and the subsequent creation of the new plan in a
with transaction.atomic() block; additionally acquire row locks before updating
by selecting the user's plans with
StudyPlan.objects.select_for_update().filter(user=user, status="active") (or
otherwise lock the relevant rows) so concurrent requests cannot interleave.
Apply the same fix to the other regeneration block referenced (the code around
lines 476-539).
- Line 493: The code currently trusts raw AI-supplied quiz IDs by calling
Quiz.objects.get(id=quiz_id) and doesn't use the computed quiz_map; replace
those unsafe lookups by constraining to the pre-approved available_quizzes: use
the existing quiz_map (built from available_quizzes) to validate each quiz_id
returned by the AI and only fetch or reference Quiz objects that exist in
quiz_map (reject or skip IDs not present). Update all occurrences (the current
Quiz.objects.get usage and the selection logic at the block around lines
referenced, e.g., where quiz_id is used and lines ~516-521) to consult quiz_map
first and only query the DB for validated IDs or reuse the mapped objects.
- Around line 203-207: The code currently swallows all exceptions when accessing
enrollment.progress.completed_sessions.count(); replace the bare "except
Exception: pass" with a targeted handler that logs the failure and preserves a
safe fallback. Specifically, catch Exception as e (or narrower exceptions if
appropriate), call your module/logger (e.g., logger.exception or
process_logger.error) with context including enrollment id or repr(enrollment)
and the exception, and then set completed_sessions to a sensible default (e.g.,
0) so downstream code still has a value; update the block around
enrollment.progress.completed_sessions.count() and the completed_sessions
variable accordingly.
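The log-and-fall-back pattern described above, sketched in plain Python (the logger name and the default of 0 are the review's suggestions, not code from the PR):

```python
import logging

logger = logging.getLogger(__name__)


def count_completed_sessions(enrollment):
    """Return the enrollment's completed-session count, defaulting to 0.

    Failures are logged with context instead of being silently swallowed,
    so data anomalies stay visible when debugging analytics.
    """
    try:
        return enrollment.progress.completed_sessions.count()
    except AttributeError:
        logger.warning("Missing progress relation for enrollment %r", enrollment)
        return 0
```

Downstream code always receives an integer, while the warning records which enrollment lacked a progress relation.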
In `@web/templates/dashboard/learning_analytics.html`:
- Line 231: The progressbar divs lack accessible labels; update the progress bar
elements (the divs with role="progressbar" and data-width="{{ rc.progress }}"
and the similar one using the same pattern) to include either an aria-label
(e.g. aria-label="{{ rc.name }} progress: {{ rc.progress }} percent") or
aria-labelledby pointing to a visually-hidden element that contains the readable
label (create a <span id="{{ rc.id }}-label" class="sr-only">...</span> and
reference it). Ensure aria-valuenow, aria-valuemin and aria-valuemax remain and
that the label text includes the item name and percent so assistive tech can
announce the progress correctly.
- Around line 355-361: The DOMContentLoaded handler sets inline styles from
unvalidated dataset values which can overflow the chart; update the listener
that iterates over querySelectorAll('[data-width]') and
querySelectorAll('[data-height]') to parse the dataset values (e.g.,
parseFloat(bar.dataset.width/height)), clamp them to a safe range (0–100), and
then apply the clamped value to bar.style.width and bar.style.height
respectively so invalid or out-of-range inputs cannot break layout.
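The parse-then-clamp idea lands in JavaScript in the template, but the logic is easiest to show in Python (the function name is ours; `float(...)` plays the role of `parseFloat`):

```python
def safe_percent(raw, lo=0.0, hi=100.0):
    """Parse a data-attribute value and clamp it to a safe 0-100 range.

    Non-numeric input collapses to the lower bound and out-of-range
    values are pinned to the boundaries, so bad data can never push a
    chart bar outside its container.
    """
    try:
        value = float(raw)
    except (TypeError, ValueError):
        return lo
    if value != value:  # NaN guard, mirroring isNaN in the JS version
        return lo
    return min(max(value, lo), hi)
```

The JS handler would do the same with `parseFloat(bar.dataset.width)` before assigning `bar.style.width`.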
In `@web/templates/dashboard/study_plan.html`:
- Around line 39-40: The badge span uses non-approved blue/purple Tailwind
classes; update the span class attribute (the <span ...> with classes like
bg-blue-100 dark:bg-blue-900 text-blue-700 dark:text-blue-300) to use the
project's color contract—for a primary badge use bg-teal-300 dark:bg-teal-700
text-gray-600 dark:text-gray-300 px-3 py-1 rounded-full—and make the same
replacement for the other badge occurrence (the similar span at the later
block).
In `@web/views.py`:
- Around line 8884-8886: Wrap the call to generate_study_plan(request.user) in a
try/except to prevent exceptions from bubbling to the user: catch broad
exceptions around generate_study_plan, log the exception (using existing logger
or error reporting), show a friendly messages.error to the user instead of
letting a 500 occur, and still return redirect("study_plan") so the POST flow
completes; reference generate_study_plan, messages.success, messages.error and
redirect in your change.
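A framework-agnostic sketch of the guarded POST flow described above: `generate` stands in for `generate_study_plan` and `notify` for Django's `messages` framework, so the shape of the fix is testable without a request cycle.

```python
import logging

logger = logging.getLogger(__name__)


def regenerate_plan(user, generate, notify):
    """Run plan generation, reporting failure without raising.

    On success the user gets a success message; on any exception the
    error is logged and a friendly message shown instead of a 500.
    Either way the caller proceeds to the same redirect.
    """
    try:
        generate(user)
        notify("success", "Your study plan has been regenerated.")
    except Exception:
        logger.exception("Study plan generation failed for %r", user)
        notify("error", "We couldn't regenerate your plan. Please try again later.")
    return "redirect:study_plan"
```

In the real view the return value would be `redirect("study_plan")` and `notify` would be `messages.success` / `messages.error` on the request.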
- Around line 8853-8867: The new view functions lack explicit type hints; update
the signatures for learning_analytics_dashboard and study_plan_view (and any
other new handlers around 8880-8891) to use Django types (e.g., request:
HttpRequest and return type HttpResponse or TemplateResponse), import
HttpRequest/HttpResponse from django.http (or TemplateResponse from
django.template.response) at top of the module if needed, and adjust any
internal type-sensitive variables accordingly so the views have explicit
request/response type annotations.
- Line 23: The code imports and uses the private function
send_verification_email_for_user (from
allauth.account.internal.flows.email_verification) which is unstable; replace
any usage of send_verification_email_for_user with the supported API
EmailAddress.send_confirmation(request, signup=False). Update imports to use
EmailAddress from allauth.account.models and call
EmailAddress.send_confirmation(request, signup=False) at each site where
send_verification_email_for_user is currently invoked (e.g., the send sites
referenced by send_verification_email_for_user) so you use the public stable
API.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (7)
tests/test_trackers.py
web/models.py
web/recommendations.py
web/signals.py
web/templates/dashboard/learning_analytics.html
web/templates/dashboard/study_plan.html
web/views.py
@A1L13N could you please review the PR and merge it
Implemented AI-Powered Learning Analytics & Smart Study Planner. The system auto-tracks per-subject mastery from quiz scores using weighted moving averages, monitors attendance trends, detects at-risk courses, and generates personalized 2-week study plans. Works fully with rule-based algorithms, with optional OpenAI integration for personalized coaching messages.
Checklist
Summary by CodeRabbit
New Features
Admin
Tests