feat: Added AI-Powered Learning Analytics & Smart Study Planner #997

Open
Premkumar-2004 wants to merge 3 commits into alphaonelabs:main from Premkumar-2004:child2
Conversation


@Premkumar-2004 Premkumar-2004 commented Mar 2, 2026

Implemented AI-Powered Learning Analytics & Smart Study Planner. The system auto-tracks per-subject mastery from quiz scores using weighted moving averages, monitors attendance trends, detects at-risk courses, and generates personalized 2-week study plans. Works fully with rule-based algorithms, with optional OpenAI integration for personalized coaching messages.
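The weighted-moving-average mastery update described above can be sketched as a pure function. This is an illustrative sketch only: the 0.7/0.3 EMA weights and the 0–100 scale mirror the update_from_quiz logic discussed in the review comments below, and the function name update_mastery is hypothetical.

```python
# Illustrative sketch of the per-subject mastery update (assumed weights/scale).
def update_mastery(current_score: float, total_quizzes: int,
                   quiz_score: float, quiz_max: float) -> float:
    """Blend a new quiz result into a running 0-100 mastery score."""
    if quiz_max <= 0:
        return current_score  # ignore invalid quizzes
    bounded = max(0.0, min(quiz_score, quiz_max))  # clamp out-of-range scores
    new_score = (bounded / quiz_max) * 100
    if total_quizzes == 0:
        return new_score  # first quiz seeds the mastery score directly
    # Exponential moving average: old mastery dominates, new result nudges it.
    return 0.7 * current_score + 0.3 * new_score


print(update_mastery(80.0, 5, 9, 10))  # 0.7*80 + 0.3*90, i.e. about 83.0
```

Because the new result only carries 30% weight, a single outlier quiz cannot swing the mastery score drastically, which matches the "auto-tracks per-subject mastery" behaviour described above.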

Checklist

  • Did you run the pre-commit? (If not, your PR will most likely not pass — please ensure it passes pre-commit)
  • Did you test the change? (Ensure you didn’t just prompt the AI and blindly commit — test the code and confirm it works)
  • Added screenshots to the PR description (if applicable)
(Screenshots attached: 2026-03-02 at 3:03:27 PM, 3:04:13 PM, and 3:04:53 PM)

Summary by CodeRabbit

  • New Features

    • Learning Analytics dashboard with metrics, subject mastery, recommendations, and predicted completion
    • AI-powered study plan generation and in-app study plan UI with progress, item list, and complete/undo actions
    • Subject strength tracking that updates from quiz results
    • Dashboard action buttons linking to analytics and study plan
  • Admin

    • Admin interfaces for managing study plans, plan items, and subject strengths
  • Tests

    • Comprehensive tests for analytics, study plans, and subject strength tracking

@github-actions bot added the files-changed: 13 label (PR changes 13 files) on Mar 2, 2026

github-actions bot commented Mar 2, 2026

👀 Peer Review Required

Hi @Premkumar-2004! This pull request does not yet have a peer review.

Before this PR can be merged, please request a review from one of your peers:

  • Go to the PR page and click "Reviewers" on the right sidebar.
  • Select a team member or contributor to review your changes.
  • Once they approve, this reminder will be automatically removed.

Thank you for contributing! 🎉


coderabbitai bot commented Mar 2, 2026

Warning

Rate limit exceeded

@Premkumar-2004 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 2 minutes and 43 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 0a7a5c8 and bd95f93.

📒 Files selected for processing (10)
  • tests/test_trackers.py
  • web/migrations/0065_studyplanitem_priority_rank.py
  • web/migrations/0066_studyplan_unique_active_constraint.py
  • web/models.py
  • web/recommendations.py
  • web/signals.py
  • web/templates/dashboard/learning_analytics.html
  • web/templates/dashboard/student.html
  • web/templates/dashboard/study_plan.html
  • web/views.py

Walkthrough

Adds a learning analytics and study-planning feature: new models for SubjectStrength, StudyPlan, and StudyPlanItem; AI-integrated analytics and plan generation; signals to update strengths on quiz completion; admin, views, templates, URLs, and tests to support dashboards and study-plan workflows.

Changes

Cohort / File(s) Summary
Configuration & Environment
/.env.sample, web/settings.py
Added OpenAI configuration variables OPENAI_API_KEY and AI_MODEL.
Database Models & Migration
web/models.py, web/migrations/0064_add_learning_analytics_models.py
Added SubjectStrength, StudyPlan, and StudyPlanItem models with fields, ordering, unique constraint, and methods (update_from_quiz, completion_percentage, mark_complete).
Admin Interface
web/admin.py
Registered new models and added admin classes: SubjectStrengthAdmin, StudyPlanAdmin (with StudyPlanItemInline), and StudyPlanItemAdmin including list displays, filters, search, and raw_id_fields.
Signals
web/signals.py
Added post_save receiver update_subject_strength_on_quiz_complete to create/update SubjectStrength when UserQuiz is completed (validates user, subject, and max_score).
Analytics & Plan Generation Logic
web/recommendations.py
Added get_learning_analytics(user) and study-plan generation: AI helpers (_generate_ai_insights, _generate_ai_study_plan), deterministic fallbacks (_generate_fallback_recommendations, _generate_fallback_plan_items), and generate_study_plan(user). Includes AI error handling and fallbacks.
Views & URLs
web/views.py, web/urls.py
Added views: learning_analytics_dashboard, study_plan_view, generate_study_plan_view, complete_study_plan_item; registered URLs for analytics and study-plan endpoints.
Templates
web/templates/dashboard/learning_analytics.html, web/templates/dashboard/study_plan.html, web/templates/dashboard/student.html
New Learning Analytics and Study Plan templates (visualizations, AI tips, plan item completion JS). Student dashboard updated to include Learning Analytics button.
Tests
tests/test_trackers.py
Added extensive tests: SubjectStrengthTests, LearningAnalyticsTests, StudyPlanTests, SubjectStrengthSignalTests covering models, views, signals, analytics payloads, plan creation/regeneration, and item completion.

Sequence Diagram

sequenceDiagram
    actor User
    participant UI as Dashboard View
    participant Reco as Recommendations (analytics)
    participant DB as Database Models
    participant AI as OpenAI API
    participant Fallback as Rule-based Engine

    User->>UI: Request study plan / analytics page
    UI->>Reco: get_learning_analytics(user)
    Reco->>DB: Query quizzes, sessions, subjects, strengths
    DB-->>Reco: Return user performance data
    Reco->>Reco: Compute analytics & weak subjects
    Reco->>AI: _generate_ai_insights(student_profile) (if configured)
    alt AI responds
        AI-->>Reco: AI coaching & plan items
    else AI fails or unavailable
        Reco->>Fallback: _generate_fallback_plan_items(...)
        Fallback-->>Reco: Deterministic plan items
    end
    Reco->>DB: create StudyPlan and StudyPlanItem records
    UI->>DB: fetch StudyPlan for rendering
    DB-->>UI: StudyPlan + items
    UI->>User: Render analytics and study plan

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 33.33%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the pull request title clearly summarizes the main change: adding AI-powered learning analytics and a smart study planner, the primary focus of the changeset.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 18

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/test_trackers.py (1)

29-40: ⚠️ Potential issue | 🟠 Major

Uncomment tracker creation; this test currently cannot pass.

On Line 29, the POST request is inside a triple-quoted string, so the request never runs. The later assertions then query a tracker that was never created.

Proposed fix
     def test_create_tracker(self):
-        """response = self.client.post(reverse('create_tracker'), {
-            'title': 'New Tracker',
-            'description': 'New description',
-            'current_value': 10,
-            'target_value': 50,
-            'color': 'green-600',
-            'public': True
-        })"""
+        response = self.client.post(
+            reverse("create_tracker"),
+            {
+                "title": "New Tracker",
+                "description": "New description",
+                "current_value": 10,
+                "target_value": 50,
+                "color": "green-600",
+                "public": True,
+            },
+        )
+        self.assertEqual(response.status_code, 302)
         self.assertEqual(ProgressTracker.objects.count(), 2)
         new_tracker = ProgressTracker.objects.get(title="New Tracker")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_trackers.py` around lines 29-40: the test currently has the
tracker creation POST wrapped in a triple-quoted string so the request never
executes; remove the triple quotes around the
self.client.post(reverse('create_tracker'), {...}) call so the POST actually
runs, creating the ProgressTracker used by the subsequent assertions
(ProgressTracker.objects.count(), ProgressTracker.objects.get(title="New
Tracker"), new_tracker.current_value and new_tracker.percentage). Ensure the
payload keys match the view's expected fields (title, description,
current_value, target_value, color, public) and that reverse('create_tracker')
resolves in the test context.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/test_trackers.py`:
- Around line 63-64: Replace the hardcoded password literals used in
User.objects.create_user and self.client.login by generating a non-hardcoded
test password and reusing it; for example, add a local variable password =
secrets.token_urlsafe(8) (or use django.utils.crypto.get_random_string) and pass
that variable to User.objects.create_user(...) and self.client.login(...), and
add the required import for secrets (or django.utils.crypto) at the top of the
test file so Ruff S106 is resolved for self.user, User.objects.create_user, and
self.client.login usages.

In `@web/models.py`:
- Around line 3289-3292: mark_complete currently overwrites completed_at on
every call; make it idempotent by checking existing completion state/field
before mutating: in mark_complete(), if self.is_completed is already True (or
self.completed_at is not None) return/do nothing, otherwise set
self.is_completed = True, set self.completed_at = timezone.now(), and then
save(); reference the mark_complete method and the is_completed and completed_at
fields when applying the change.
- Around line 3201-3212: The update_from_quiz method can produce strength_score
>100 and inconsistent totals when score is out of bounds; before computing
new_score or updating totals in update_from_quiz, clamp score to the range [0,
max_score] (or treat negative/over-max values appropriately), compute new_score
= (clamped_score / max_score) * 100 (and if desired also cap new_score at 100),
use the clamped_score when incrementing total_correct and max_score when
incrementing total_questions, then apply the existing EMA logic on
self.strength_score, increment self.total_quizzes, and call self.save();
reference update_from_quiz, strength_score, total_quizzes, total_correct, and
total_questions when making the changes.
- Around line 3283-3284: The Meta.ordering currently uses "-priority" which
sorts the CharField lexically instead of by business importance; add a numeric
field (e.g., priority_rank = IntegerField) to the same model, populate it for
existing rows (high=3, medium=2, low=1 or your mapping), update Meta.ordering to
use "-priority_rank" (and keep "order" and "due_date" as needed), and
create/update migration 0064_add_learning_analytics_models.py to add the new
column and backfill values so the DB and code stay in sync.

In `@web/recommendations.py`:
- Line 113: The variables `ninety_days_ago` and `quiz_map` are assigned but
unused causing lint F841; remove these unused local assignments or use them
meaningfully. Locate the `ninety_days_ago = now - timedelta(days=90)` assignment
and either delete it or replace it with a direct use of `now -
timedelta(days=90)` where needed; locate the `quiz_map` assignment at its usage
site and either remove the `quiz_map` variable or refactor code to actually use
`quiz_map` (e.g., replace intermediate mapping with direct lookups). Ensure no
other references depend on these locals after removal to keep behavior
unchanged.
- Line 294: The new helper functions _generate_ai_insights,
_generate_fallback_recommendations, _generate_ai_study_plan, and
_generate_fallback_plan_items should have explicit type hints: add parameter and
return annotations (e.g., student_profile: Dict[str, Any] or a concrete
StudentProfile model if available, logger: logging.Logger) and specify precise
return types such as Dict[str, Any] or List[Dict[str, Any]] as appropriate;
import required typing names (from typing import Any, Dict, List, Optional) and
logging, update each function signature (e.g., def
_generate_ai_insights(student_profile: Dict[str, Any], logger: logging.Logger)
-> Dict[str, Any]:) and adjust any internal variable annotations if needed to
satisfy the type checker.
- Around line 530-535: The code is currently loading a Quiz by primary key from
item_data["quiz_id"] using Quiz.objects.get without validating it against the
quizzes passed into the function; instead, validate quiz_id against the existing
quiz_map (built from available_quizzes) before assigning: check if quiz_id is
present in quiz_map and, if so, set item_kwargs["quiz"] = quiz_map[quiz_id]; do
not call Quiz.objects.get or rely on a try/except for this mapping so
unapproved/private quizzes from the AI output are ignored.
- Around line 357-362: The OpenAI client is initialized without an explicit
timeout which can cause long hangs; update the OpenAI client construction(s)
used before the chat completion calls (the variable named client created near
where feed completions are invoked) to pass a timeout argument (e.g.,
timeout=20.0) so the calls from client.chat.completions.create(...) time out
predictably; apply this change to both client initializations that feed the
completions calls around the blocks that call
client.chat.completions.create(...) (the usages at ~lines 357-362 and ~622-627).
- Around line 211-212: Replace the bare "except Exception: pass" that surrounds
access to enrollment.progress.completed_sessions with a targeted "except
AttributeError as e" (or similar) and log the issue instead of swallowing it;
specifically, in the function or block where enrollment.progress and its
.completed_sessions attribute are read (look for references to
enrollment.progress and completed_sessions), catch AttributeError, emit a
warning/error via the module logger (e.g., logger.warning or logger.error)
including the enrollment id/context and the exception message, and allow the
code to continue with a safe default (e.g., completed_sessions = 0) so
downstream predictions remain informed by the logged anomaly.
- Around line 450-451: Wrap the entire study plan regeneration flow in a
database transaction using transaction.atomic() so pausing old plans and
creating the new StudyPlan happen atomically; inside that atomic block lock the
existing active plans with select_for_update() (e.g.,
StudyPlan.objects.select_for_update().filter(user=user,
status="active").update(status="paused")) before inserting the new plan to
prevent concurrent requests from creating a second active plan. Ensure all
subsequent operations that create or set the new StudyPlan to active occur
inside the same transaction.atomic() block so the lock and update protect the
whole critical section.

In `@web/signals.py`:
- Around line 72-100: The post_save handler
update_subject_strength_on_quiz_complete is double-counting when the same
UserQuiz is re-saved with completed=True; add an idempotency guard by recording
the previous completed state in a pre_save receiver (for UserQuiz) and then in
the post_save handler only proceed when created is True or when completed
changed from False→True (use the cached previous value), then call
SubjectStrength.objects.get_or_create and strength.update_from_quiz(...) as
before; ensure you reference UserQuiz in the pre_save receiver and keep the
existing update_subject_strength_on_quiz_complete logic but gated by the created
or transitioned-completed condition.
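The idempotency guard described above reduces to a small predicate: fire the strength update only when the quiz is created already completed, or when completed transitions from False to True on a re-save. A minimal Django-independent sketch; the helper name should_update_strength is hypothetical, while the created/completed semantics come from the review comment above.

```python
# Hypothetical predicate for the signal guard: update subject strength only
# on creation or on a False -> True transition of `completed`.
def should_update_strength(created: bool, was_completed: bool,
                           now_completed: bool) -> bool:
    if not now_completed:
        return False          # quiz not completed: nothing to record
    if created:
        return True           # new row saved as already completed
    return not was_completed  # only the False -> True transition counts


# Re-saving an already-completed quiz must not double-count:
assert should_update_strength(created=False, was_completed=True, now_completed=True) is False
assert should_update_strength(created=False, was_completed=False, now_completed=True) is True
```

In the actual receiver, was_completed would come from the value cached by the pre_save handler, and only when this predicate is true would SubjectStrength.objects.get_or_create and update_from_quiz run.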

In `@web/templates/dashboard/learning_analytics.html`:
- Line 231: The progressbar divs rendering rc.progress lack accessible names;
update both progress elements (the divs with role="progressbar" and
data-width="{{ rc.progress }}") to include an accessible label by adding either
an aria-label that describes what the progress measures (e.g.,
aria-label="Course completion progress for {{ rc.name }}" or similar) or an
aria-labelledby that points to a nearby visible label element ID; ensure the
label text is unique and meaningful for each RC instance and apply the change to
both occurrences referenced (the div using rc.progress at the two locations).
- Around line 355-361: Clamp dataset width/height values before applying styles
to prevent overflow: inside the DOMContentLoaded listener where
document.querySelectorAll('[data-width]') and
document.querySelectorAll('[data-height]') are handled, parse each
bar.dataset.width / bar.dataset.height as a number, coerce invalid values to 0,
then clamp with Math.max(0, Math.min(value, 100)) and use that clamped value
when setting bar.style.width and bar.style.height (append '%' as before).

In `@web/templates/dashboard/student.html`:
- Around line 12-18: The "Create Study Group" anchor (href="{% url
'create_study_group' %}") uses non-approved colors and lacks explicit
focus-visible styling; update its class list to use the project's approved
button palette and include hover and focus-visible utility classes (matching the
"Learning Analytics" button pattern or project button component), ensuring
consistent bg/hover colors, text color, and an explicit focus-visible ring so
keyboard users get a visible focus state.

In `@web/templates/dashboard/study_plan.html`:
- Around line 39-40: The badge at the span showing weekly_goal_sessions uses
blue/purple Tailwind classes that don't follow the project's color palette;
update the span's class list (the element rendering {{ plan.weekly_goal_sessions
}} sessions/week) to use the prescribed classes (Primary: use bg-teal-300
text-teal-600 with corresponding dark: variants, and ensure other badges
referenced in the same template around the block at the later occurrence (lines
~78-80) follow the same mapping: Secondary->gray-600, Success->green-600,
Warning->yellow-600, Danger->red-600 with dark: prefixes as needed) so all
badges conform to the project's color scheme.

In `@web/views.py`:
- Around line 8878-8886: The view generate_study_plan_view calls
generate_study_plan(request.user) without handling exceptions; wrap the call in
a try/except inside generate_study_plan_view to catch any exceptions from
generate_study_plan, log the exception (use the module logger), call
messages.error(request, "Could not generate study plan. Please try again.") and
then redirect to "study_plan" so the user does not get a 500; keep the existing
messages.success flow for the success path.
- Around line 8852-8902: Add explicit type hints on the new view functions:
annotate the request parameter as django.http.HttpRequest and set return types
accordingly—learning_analytics_dashboard and study_plan_view should return
django.http.HttpResponse, generate_study_plan_view should return
django.http.HttpResponse (or HttpResponseRedirect) and complete_study_plan_item
should return django.http.JsonResponse; update/ add the necessary imports
(HttpRequest, HttpResponse, JsonResponse, etc.) at the top of the module and
apply the annotations to the functions get_learning_analytics, study_plan_view,
generate_study_plan_view, and complete_study_plan_item.
- Line 23: Replace the private import send_verification_email_for_user and its
calls with the documented public API: import EmailAddress from
allauth.account.models, lookup the EmailAddress instance for the target user
(EmailAddress.objects.get(user=user, email=user.email)) and call its
send_confirmation(request, signup=False); update the places that currently call
send_verification_email_for_user (the calls referenced at lines where
send_verification_email_for_user is used) to use
email_address.send_confirmation(request, signup=False) instead.

---

Outside diff comments:
In `@tests/test_trackers.py`:
- Around line 29-40: The test currently has the tracker creation POST wrapped in
a triple-quoted string so the request never executes; remove the triple quotes
around the self.client.post(reverse('create_tracker'), {...}) call so the POST
actually runs, creating the ProgressTracker used by the subsequent assertions
(ProgressTracker.objects.count(), ProgressTracker.objects.get(title="New
Tracker"), new_tracker.current_value and new_tracker.percentage). Ensure the
payload keys match the view's expected fields (title, description,
current_value, target_value, color, public) and that reverse('create_tracker')
resolves in the test context.

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e694ce8 and b8d4f21.

📒 Files selected for processing (13)
  • .env.sample
  • tests/test_trackers.py
  • web/admin.py
  • web/migrations/0064_add_learning_analytics_models.py
  • web/models.py
  • web/recommendations.py
  • web/settings.py
  • web/signals.py
  • web/templates/dashboard/learning_analytics.html
  • web/templates/dashboard/student.html
  • web/templates/dashboard/study_plan.html
  • web/urls.py
  • web/views.py


github-actions bot commented Mar 2, 2026

💬 Unresolved Review Conversations

Hi @Premkumar-2004! 👋

This pull request currently has 9 unresolved review conversations.

Please address all review feedback and push a new commit to resolve them before this PR can be merged.

Steps to resolve:

  1. Review each comment thread in the "Files changed" tab.
  2. Make the necessary changes to your code.
  3. Reply to each conversation to explain your changes or ask for clarification.
  4. Click "Resolve conversation" once the feedback has been addressed.
  5. Push a new commit with your changes.

Once all conversations are resolved, this notice will be removed automatically. Thank you! 🙏


@github-actions github-actions bot left a comment


This PR has 18 unresolved review conversations. Please resolve them before this PR can be merged.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (14)
web/views.py (3)

8884-8886: ⚠️ Potential issue | 🟠 Major

Guard study-plan generation failures to avoid user-facing 500s.

generate_study_plan(request.user) is unhandled; failures in analytics/AI paths will bubble up and break the POST flow.

Suggested hardening
 def generate_study_plan_view(request):
     """Generate a study plan."""
     from .recommendations import generate_study_plan
 
-    generate_study_plan(request.user)
-    messages.success(request, "Your study plan has been generated!")
+    try:
+        generate_study_plan(request.user)
+    except Exception:
+        logger.exception("Failed to generate study plan for user_id=%s", request.user.id)
+        messages.error(request, "Could not generate study plan. Please try again.")
+    else:
+        messages.success(request, "Your study plan has been generated!")
     return redirect("study_plan")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/views.py` around lines 8884-8886: wrap the call to
generate_study_plan(request.user) in a try/except to prevent exceptions from
bubbling to the user: catch broad exceptions around generate_study_plan, log the
exception (using existing logger or error reporting), show a friendly
messages.error to the user instead of letting a 500 occur, and still return
redirect("study_plan") so the POST flow completes; reference
generate_study_plan, messages.success, messages.error and redirect in your
change.

8853-8867: ⚠️ Potential issue | 🟡 Minor

Add explicit request/response type hints on the new views.

These newly added handlers still use untyped signatures.

Suggested typing update
 @login_required
-def learning_analytics_dashboard(request):
+def learning_analytics_dashboard(request: HttpRequest) -> HttpResponse:
@@
 @login_required
-def study_plan_view(request):
+def study_plan_view(request: HttpRequest) -> HttpResponse:
@@
 @login_required
 @require_POST
-def generate_study_plan_view(request):
+def generate_study_plan_view(request: HttpRequest) -> HttpResponse:
@@
 @login_required
 @require_POST
-def complete_study_plan_item(request, item_id):
+def complete_study_plan_item(request: HttpRequest, item_id: int) -> JsonResponse:

As per coding guidelines Use type hints in Python where appropriate.

Also applies to: 8880-8891

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/views.py` around lines 8853-8867: the new view functions lack explicit
type hints; update the signatures for learning_analytics_dashboard and
study_plan_view (and any other new handlers around 8880-8891) to use Django
types (e.g., request: HttpRequest and return type HttpResponse or
TemplateResponse), import HttpRequest/HttpResponse from django.http (or
TemplateResponse from django.template.response) at top of the module if needed,
and adjust any internal type-sensitive variables accordingly so the views have
explicit request/response type annotations.

23-23: ⚠️ Potential issue | 🟠 Major

Replace private allauth flow import with supported confirmation API.

This uses allauth.account.internal.* and can break on allauth upgrades. Use EmailAddress.send_confirmation(request, signup=False) at both send sites.

Suggested fix
-from allauth.account.internal.flows.email_verification import send_verification_email_for_user
@@
-                        send_verification_email_for_user(request, user)
+                        email_address, _ = EmailAddress.objects.get_or_create(
+                            user=user,
+                            email=email,
+                            defaults={"primary": True, "verified": False},
+                        )
+                        email_address.send_confirmation(request, signup=False)
@@
-                        EmailAddress.objects.create(user=user, email=email, primary=True, verified=False)
+                        email_address = EmailAddress.objects.create(
+                            user=user, email=email, primary=True, verified=False
+                        )
@@
-                        send_verification_email_for_user(request, user)
+                        email_address.send_confirmation(request, signup=False)

Also applies to: 1338-1338, 1368-1368

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/views.py` at line 23: the code imports and uses the private function
send_verification_email_for_user (from
allauth.account.internal.flows.email_verification) which is unstable; replace
any usage of send_verification_email_for_user with the supported API
EmailAddress.send_confirmation(request, signup=False). Update imports to use
EmailAddress from allauth.account.models and call
EmailAddress.send_confirmation(request, signup=False) at each site where
send_verification_email_for_user is currently invoked (e.g., the send sites
referenced by send_verification_email_for_user) so you use the public stable
API.
web/models.py (3)

3287-3290: ⚠️ Potential issue | 🟡 Minor

Make mark_complete() idempotent to preserve first completion timestamp.

Repeated calls currently overwrite completed_at.

Suggested change
     def mark_complete(self):
-        self.is_completed = True
-        self.completed_at = timezone.now()
-        self.save()
+        if self.is_completed:
+            return
+        self.is_completed = True
+        self.completed_at = timezone.now()
+        self.save(update_fields=["is_completed", "completed_at"])
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/models.py` around lines 3287 - 3290, mark_complete currently overwrites
completed_at on every call; make it idempotent by leaving completed_at as-is
after the first completion: check if self.is_completed or self.completed_at is
already set and only set self.is_completed = True and self.completed_at =
timezone.now() (and then call save()) when not previously completed; ensure you
still persist the change (or skip save if no changes) so repeated calls do not
update the original completion timestamp in the mark_complete method.

3201-3212: ⚠️ Potential issue | 🟡 Minor

Clamp quiz scores before updating mastery totals.

Out-of-range score values can push strength_score above 100 and make totals inconsistent.

Suggested change
     def update_from_quiz(self, score, max_score):
         """Update strength score from a quiz result."""
-        if max_score > 0:
-            new_score = (score / max_score) * 100
+        if max_score <= 0:
+            return
+        bounded_score = max(0, min(score, max_score))
+        new_score = (bounded_score / max_score) * 100
             if self.total_quizzes == 0:
                 self.strength_score = new_score
             else:
                 self.strength_score = (0.7 * self.strength_score) + (0.3 * new_score)
             self.total_quizzes += 1
-            self.total_correct += score
+            self.total_correct += bounded_score
             self.total_questions += max_score
             self.save()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/models.py` around lines 3201-3212: in update_from_quiz, clamp the
incoming score to the valid range (0..max_score) and ensure max_score > 0 before
computing new_score so out-of-range inputs can't inflate totals; use the
clamped_score when incrementing total_correct and total_questions, compute
new_score = (clamped_score / max_score) * 100, apply the existing smoothing into
strength_score, then clamp strength_score to the 0–100 range before calling
save(); update references: method update_from_quiz and fields strength_score,
total_quizzes, total_correct, total_questions.

3281-3282: ⚠️ Potential issue | 🟠 Major

-priority sorting is lexicographic, not business-priority aware.

high/medium/low stored as strings will not sort in intended urgency order.

Suggested change
 class StudyPlanItem(models.Model):
+    PRIORITY_RANK = {"high": 0, "medium": 1, "low": 2}
@@
     priority = models.CharField(max_length=10, choices=PRIORITY_CHOICES, default="medium")
+    priority_rank = models.PositiveSmallIntegerField(default=1, editable=False, db_index=True)
@@
     class Meta:
-        ordering = ["order", "-priority", "due_date"]
+        ordering = ["order", "priority_rank", "due_date"]
+
+    def save(self, *args, **kwargs):
+        self.priority_rank = self.PRIORITY_RANK.get(self.priority, 1)
+        super().save(*args, **kwargs)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/models.py` around lines 3281 - 3282, Meta.ordering currently uses the
string field "priority" which sorts lexicographically (so "high","low","medium"
won't produce business urgency order); change the model to represent priority as
an integer-ranked choice and update ordering to use that numeric field (or add a
computed integer field like "priority_rank" and order by "-priority_rank").
Specifically, modify the priority field definition (the "priority" attribute on
the model) to use IntegerField with choices (e.g., HIGH=3, MEDIUM=2, LOW=1) or
add a new IntegerField "priority_rank" populated from the existing string
values, update Meta.ordering to reference the numeric field (e.g., ordering =
["order", "-priority_rank", "due_date"]), and add the corresponding data
migration to convert existing string priorities to the new numeric values.
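The underlying issue is easy to reproduce: strings compare alphabetically (`"high" < "low" < "medium"`), so neither ascending nor descending string order matches urgency. A quick demonstration of the broken ordering versus the rank-map approach from the suggested change:

```python
priorities = ["low", "high", "medium"]

# What ordering = ["-priority"] effectively does: descending alphabetical.
lexicographic = sorted(priorities, reverse=True)

# What the suggested priority_rank field gives: explicit business order.
PRIORITY_RANK = {"high": 0, "medium": 1, "low": 2}
by_rank = sorted(priorities, key=PRIORITY_RANK.get)
```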
web/templates/dashboard/learning_analytics.html (2)

231-231: ⚠️ Potential issue | 🟡 Minor

Add accessible names to progress bars.

Line 231 and Line 306 expose role="progressbar" without an accessible label.

Suggested change
-<div class="bg-yellow-500 h-2 rounded-full" role="progressbar" aria-valuenow="{{ rc.progress }}" aria-valuemin="0" aria-valuemax="100" data-width="{{ rc.progress }}"></div>
+<div class="bg-yellow-500 h-2 rounded-full" role="progressbar" aria-valuenow="{{ rc.progress }}" aria-valuemin="0" aria-valuemax="100" aria-label="{{ rc.course }} risk progress {{ rc.progress }}%" data-width="{{ rc.progress }}"></div>
@@
-<div class="bg-teal-500 h-2.5 rounded-full" role="progressbar" aria-valuenow="{{ pc.progress }}" aria-valuemin="0" aria-valuemax="100" data-width="{{ pc.progress }}"></div>
+<div class="bg-teal-500 h-2.5 rounded-full" role="progressbar" aria-valuenow="{{ pc.progress }}" aria-valuemin="0" aria-valuemax="100" aria-label="{{ pc.course }} completion progress {{ pc.progress }}%" data-width="{{ pc.progress }}"></div>

As per coding guidelines web/templates/**/*.html: Ensure proper HTML structure and accessibility in templates.

Also applies to: 306-306

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/templates/dashboard/learning_analytics.html` at line 231, The progressbar
divs lack accessible labels; update the progress bar elements (the divs with
role="progressbar" and data-width="{{ rc.progress }}" and the similar one using
the same pattern) to include either an aria-label (e.g. aria-label="{{ rc.name
}} progress: {{ rc.progress }} percent") or aria-labelledby pointing to a
visually-hidden element that contains the readable label (create a <span id="{{
rc.id }}-label" class="sr-only">...</span> and reference it). Ensure
aria-valuenow, aria-valuemin and aria-valuemax remain and that the label text
includes the item name and percent so assistive tech can announce the progress
correctly.

355-361: ⚠️ Potential issue | 🟠 Major

Clamp dynamic bar dimensions before applying styles.

Unbounded data-width / data-height values can overflow the chart layout.

Suggested change
 document.addEventListener('DOMContentLoaded', function() {
   document.querySelectorAll('[data-width]').forEach(function(bar) {
-    bar.style.width = bar.dataset.width + '%';
+    const width = Math.max(0, Math.min(100, Number(bar.dataset.width) || 0));
+    bar.style.width = width + '%';
   });
   document.querySelectorAll('[data-height]').forEach(function(bar) {
-    bar.style.height = bar.dataset.height + '%';
+    const height = Math.max(0, Math.min(100, Number(bar.dataset.height) || 0));
+    bar.style.height = height + '%';
   });
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/templates/dashboard/learning_analytics.html` around lines 355 - 361, The
DOMContentLoaded handler sets inline styles from unvalidated dataset values
which can overflow the chart; update the listener that iterates over
querySelectorAll('[data-width]') and querySelectorAll('[data-height]') to parse
the dataset values (e.g., parseFloat(bar.dataset.width/height)), clamp them to a
safe range (0–100), and then apply the clamped value to bar.style.width and
bar.style.height respectively so invalid or out-of-range inputs cannot break
layout.
web/recommendations.py (4)

108-108: ⚠️ Potential issue | 🟡 Minor

Remove unused ninety_days_ago local to keep lint clean.

This assignment is currently dead code and fails Ruff F841.

As per coding guidelines **/*.py: Fix linting errors in code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/recommendations.py` at line 108, The local variable `ninety_days_ago` is
assigned but never used (causing Ruff F841); remove the dead assignment
`ninety_days_ago = now - timedelta(days=90)` from the function in
recommendations.py (or replace it with a used expression if the 90-day cutoff is
intended) so the unused symbol `ninety_days_ago` is eliminated and the lint
error is resolved.

436-437: ⚠️ Potential issue | 🟠 Major

Wrap regeneration in an atomic transaction to prevent active-plan races.

Line 436 pauses old plans and later creates a new one without atomicity. Concurrent requests can interleave and create conflicting active plans.

Suggested change
 def generate_study_plan(user):
@@
+    from django.db import transaction
@@
-    # Deactivate old plans
-    StudyPlan.objects.filter(user=user, status="active").update(status="paused")
-
     analytics = get_learning_analytics(user)
@@
-    plan = StudyPlan.objects.create(
-        user=user,
-        title=f"Study Plan — {now.strftime('%B %d, %Y')}",
-        description=ai_plan_items.get("plan_description", "Plan based on your learning analytics."),
-        daily_goal_minutes=ai_plan_items.get("daily_goal_minutes", 30),
-        weekly_goal_sessions=ai_plan_items.get("weekly_goal_sessions", 5),
-    )
+    with transaction.atomic():
+        StudyPlan.objects.select_for_update().filter(user=user, status="active").update(status="paused")
+
+        plan = StudyPlan.objects.create(
+            user=user,
+            title=f"Study Plan — {now.strftime('%B %d, %Y')}",
+            description=ai_plan_items.get("plan_description", "Plan based on your learning analytics."),
+            daily_goal_minutes=ai_plan_items.get("daily_goal_minutes", 30),
+            weekly_goal_sessions=ai_plan_items.get("weekly_goal_sessions", 5),
+        )
@@
-    for idx, item_data in enumerate(items):
+        for idx, item_data in enumerate(items):
             ...
-        StudyPlanItem.objects.create(**item_kwargs)
+            StudyPlanItem.objects.create(**item_kwargs)
 
     return plan

Also applies to: 476-539

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/recommendations.py` around lines 436 - 437, Wrap the pause-and-create
sequence in a Django DB transaction to avoid race conditions: import
django.db.transaction and enclose the block that pauses old plans
(StudyPlan.objects.filter(user=user, status="active").update(...)) and the
subsequent creation of the new plan in a with transaction.atomic() block;
additionally acquire row locks before updating by selecting the user's plans
with StudyPlan.objects.select_for_update().filter(user=user, status="active")
(or otherwise lock the relevant rows) so concurrent requests cannot interleave.
Apply the same fix to the other regeneration block referenced (the code around
lines 476-539).

493-493: ⚠️ Potential issue | 🟠 Major

Constrain AI-selected quizzes to the pre-approved quiz set.

The current Quiz.objects.get(id=quiz_id) trusts raw AI output and can link unrelated quizzes. Also, quiz_map is currently unused.

Suggested change
-    quiz_map = {q["id"]: q for q in available_quizzes}
+    quiz_ids = [q["id"] for q in available_quizzes]
+    quiz_map = {quiz.id: quiz for quiz in Quiz.objects.filter(id__in=quiz_ids)}
@@
-        if quiz_id:
-            try:
-                item_kwargs["quiz"] = Quiz.objects.get(id=quiz_id)
-            except Quiz.DoesNotExist:
-                pass
+        if quiz_id in quiz_map:
+            item_kwargs["quiz"] = quiz_map[quiz_id]

Also applies to: 516-521

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/recommendations.py` at line 493, The code currently trusts raw
AI-supplied quiz IDs by calling Quiz.objects.get(id=quiz_id) and doesn't use the
computed quiz_map; replace those unsafe lookups by constraining to the
pre-approved available_quizzes: use the existing quiz_map (built from
available_quizzes) to validate each quiz_id returned by the AI and only fetch or
reference Quiz objects that exist in quiz_map (reject or skip IDs not present).
Update all occurrences (the current Quiz.objects.get usage and the selection
logic at the block around lines referenced, e.g., where quiz_id is used and
lines ~516-521) to consult quiz_map first and only query the DB for validated
IDs or reuse the mapped objects.

203-207: ⚠️ Potential issue | 🟠 Major

Do not silently swallow progress access errors.

The bare except Exception: pass hides data anomalies and makes analytics debugging difficult.

Suggested change
         if hasattr(enrollment, "progress"):
             try:
                 completed_sessions = enrollment.progress.completed_sessions.count()
-            except Exception:
-                pass
+            except AttributeError as exc:
+                logger.warning("Missing progress relation for enrollment_id=%s: %s", enrollment.id, exc)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/recommendations.py` around lines 203 - 207, The code currently swallows
all exceptions when accessing enrollment.progress.completed_sessions.count();
replace the bare "except Exception: pass" with a targeted handler that logs the
failure and preserves a safe fallback. Specifically, catch Exception as e (or
narrower exceptions if appropriate), call your module/logger (e.g.,
logger.exception or process_logger.error) with context including enrollment id
or repr(enrollment) and the exception, and then set completed_sessions to a
sensible default (e.g., 0) so downstream code still has a value; update the
block around enrollment.progress.completed_sessions.count() and the
completed_sessions variable accordingly.
tests/test_trackers.py (1)

63-64: ⚠️ Potential issue | 🟠 Major

Replace hardcoded test passwords to resolve Ruff S106.

These literal password arguments will keep lint checks failing.

Suggested change pattern
+import secrets
@@
-        self.user = User.objects.create_user(username="analyticsuser", email="analytics@test.com", password="testpassword")
-        self.client.login(username="analyticsuser", password="testpassword")
+        self.password = secrets.token_urlsafe(12)
+        self.user = User.objects.create_user(
+            username="analyticsuser",
+            email="analytics@test.com",
+            password=self.password,
+        )
+        self.client.login(username="analyticsuser", password=self.password)

Apply the same pattern to the other setUp blocks and other_user creation.

As per coding guidelines **/*.py: Fix linting errors in code.

Also applies to: 120-121, 181-182, 241-241, 271-271

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_trackers.py` around lines 63 - 64, Replace hardcoded password
literals used in User.objects.create_user and self.client.login by reading a
shared test password variable from the environment (e.g., TEST_PASSWORD =
os.environ.get("TEST_PASSWORD")) and use that variable in the setUp blocks and
other_user creation; update the usages in tests/test_trackers.py where
User.objects.create_user(...) and self.client.login(...) are called (also apply
the same change to the other setUp blocks and other_user creation referenced in
the comment) so no literal password strings remain in the test code.
web/templates/dashboard/study_plan.html (1)

39-40: ⚠️ Potential issue | 🟡 Minor

Replace non-standard blue/purple badge colors with the approved palette.

Line 39 and Lines 78-80 still use blue/purple variants that don’t match the template color contract.

Suggested change
-          <span class="bg-blue-100 dark:bg-blue-900 text-blue-700 dark:text-blue-300 px-3 py-1 rounded-full">
+          <span class="bg-gray-100 dark:bg-gray-700 text-gray-700 dark:text-gray-300 px-3 py-1 rounded-full">
             <i class="fas fa-calendar mr-1"></i> {{ plan.weekly_goal_sessions }} sessions/week
           </span>
@@
-                {% if item.item_type == 'session' %}bg-blue-100 dark:bg-blue-900 text-blue-700 dark:text-blue-300
-                {% elif item.item_type == 'quiz' %}bg-purple-100 dark:bg-purple-900 text-purple-700 dark:text-purple-300
+                {% if item.item_type == 'session' %}bg-teal-100 dark:bg-teal-900 text-teal-700 dark:text-teal-300
+                {% elif item.item_type == 'quiz' %}bg-gray-100 dark:bg-gray-700 text-gray-700 dark:text-gray-300
                 {% elif item.item_type == 'review' %}bg-yellow-100 dark:bg-yellow-900 text-yellow-700 dark:text-yellow-300
                 {% elif item.item_type == 'practice' %}bg-green-100 dark:bg-green-900 text-green-700 dark:text-green-300
                 {% else %}bg-gray-100 dark:bg-gray-700 text-gray-700 dark:text-gray-300{% endif %}">

As per coding guidelines Follow the project's color scheme using Tailwind's color classes (Primary: teal-300, Secondary: gray-600, Success: green-600, Warning: yellow-600, Danger: red-600, Dark mode variants using dark: prefix).

Also applies to: 78-80

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/templates/dashboard/study_plan.html` around lines 39 - 40, The badge span
uses non-approved blue/purple Tailwind classes; update the span class attribute
(the <span ...> with classes like bg-blue-100 dark:bg-blue-900 text-blue-700
dark:text-blue-300) to use the project's color contract—for a primary badge use
bg-teal-300 dark:bg-teal-700 text-gray-600 dark:text-gray-300 px-3 py-1
rounded-full—and make the same replacement for the other badge occurrence (the
similar span at the later block).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@web/models.py`:
- Around line 3233-3235: Add a DB-level conditional unique constraint on the
model Meta to ensure each user has at most one active study plan: import Q from
django.db.models and in the model's Meta (the class that currently defines
ordering = ["-created_at"]) add a constraints list with
models.UniqueConstraint(fields=["user"], condition=Q(is_active=True),
name="unique_active_studyplan_per_user"). Ensure you reference the exact Meta in
the StudyPlan model (or the model defining is_active and user) and add the
import for models if not present.
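At the database level, that conditional `UniqueConstraint` maps to a partial unique index. A runnable sqlite3 sketch of the same invariant (table and column names are illustrative, not the project's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE studyplan (id INTEGER PRIMARY KEY, user_id INTEGER, is_active INTEGER)"
)
# Equivalent of UniqueConstraint(fields=["user"], condition=Q(is_active=True), ...):
# uniqueness is enforced only over rows where is_active = 1.
conn.execute(
    "CREATE UNIQUE INDEX unique_active_studyplan_per_user "
    "ON studyplan (user_id) WHERE is_active = 1"
)
conn.execute("INSERT INTO studyplan (user_id, is_active) VALUES (1, 1)")
conn.execute("INSERT INTO studyplan (user_id, is_active) VALUES (1, 0)")  # paused: allowed


def can_insert_second_active(connection):
    """True if a second active plan for user 1 slips past the constraint."""
    try:
        connection.execute("INSERT INTO studyplan (user_id, is_active) VALUES (1, 1)")
        return True
    except sqlite3.IntegrityError:
        return False
```

Any number of paused plans per user coexist; a second active plan is rejected by the index, not by application code.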

In `@web/templates/dashboard/learning_analytics.html`:
- Around line 91-93: Replace the hardcoded blue/purple Tailwind classes in the
overview card accent div and its icon with the project's palette primary colors:
update the div with class "bg-blue-100 dark:bg-blue-900 rounded-full p-3" to use
primary background classes (e.g., "bg-teal-300 dark:bg-teal-700 rounded-full
p-3") and update the icon classes "text-blue-500 dark:text-blue-300 text-xl" to
the primary text variants (e.g., "text-teal-600 dark:text-teal-300 text-xl");
apply the same replacements to the other occurrence that currently uses
blue/purple (the block analogous to the one above).

In `@web/templates/dashboard/study_plan.html`:
- Around line 46-48: Remove the inline style on the progress bar by replacing
style="width: {{ plan.completion_percentage }}%" with a data attribute (e.g.,
data-progress="{{ plan.completion_percentage }}") on the element with id
"plan-progress-bar", and add a small script that reads that data-progress value
and sets the element's width (element.style.width = value + '%') at runtime;
target the DOM element by id "plan-progress-bar" and use the template variable
plan.completion_percentage to populate the data attribute.
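A sketch of that runtime width assignment, with the DOM element stubbed as a plain object so the logic can be exercised outside a browser (the function name and element shape are illustrative; in the template the element would come from document.getElementById('plan-progress-bar')):

```javascript
// Reads data-progress from an element and applies it as a clamped % width.
function applyProgress(el) {
  const raw = Number(el.dataset.progress);
  const pct = Math.max(0, Math.min(100, Number.isFinite(raw) ? raw : 0));
  el.style.width = pct + '%';
  return pct;
}
```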

---

Duplicate comments:
In `@tests/test_trackers.py`:
- Around line 63-64: Replace hardcoded password literals used in
User.objects.create_user and self.client.login by reading a shared test password
variable from the environment (e.g., TEST_PASSWORD =
os.environ.get("TEST_PASSWORD")) and use that variable in the setUp blocks and
other_user creation; update the usages in tests/test_trackers.py where
User.objects.create_user(...) and self.client.login(...) are called (also apply
the same change to the other setUp blocks and other_user creation referenced in
the comment) so no literal password strings remain in the test code.

In `@web/models.py`:
- Around line 3287-3290: mark_complete currently overwrites completed_at on
every call; make it idempotent by leaving completed_at as-is after the first
completion: check if self.is_completed or self.completed_at is already set and
only set self.is_completed = True and self.completed_at = timezone.now() (and
then call save()) when not previously completed; ensure you still persist the
change (or skip save if no changes) so repeated calls do not update the original
completion timestamp in the mark_complete method.
- Around line 3201-3212: In update_from_quiz, clamp the incoming score to the
valid range (0..max_score) and ensure max_score > 0 before computing new_score
so out-of-range inputs can't inflate totals; use the clamped_score when
incrementing total_correct and total_questions, compute new_score =
(clamped_score / max_score) * 100, apply the existing smoothing into
strength_score, then clamp strength_score to the 0–100 range before calling
save(); update references: method update_from_quiz and fields strength_score,
total_quizzes, total_correct, total_questions.
- Around line 3281-3282: Meta.ordering currently uses the string field
"priority" which sorts lexicographically (so "high","low","medium" won't produce
business urgency order); change the model to represent priority as an
integer-ranked choice and update ordering to use that numeric field (or add a
computed integer field like "priority_rank" and order by "-priority_rank").
Specifically, modify the priority field definition (the "priority" attribute on
the model) to use IntegerField with choices (e.g., HIGH=3, MEDIUM=2, LOW=1) or
add a new IntegerField "priority_rank" populated from the existing string
values, update Meta.ordering to reference the numeric field (e.g., ordering =
["order", "-priority_rank", "due_date"]), and add the corresponding data
migration to convert existing string priorities to the new numeric values.

In `@web/recommendations.py`:
- Line 108: The local variable `ninety_days_ago` is assigned but never used
(causing Ruff F841); remove the dead assignment `ninety_days_ago = now -
timedelta(days=90)` from the function in recommendations.py (or replace it with
a used expression if the 90-day cutoff is intended) so the unused symbol
`ninety_days_ago` is eliminated and the lint error is resolved.
- Around line 436-437: Wrap the pause-and-create sequence in a Django DB
transaction to avoid race conditions: import django.db.transaction and enclose
the block that pauses old plans (StudyPlan.objects.filter(user=user,
status="active").update(...)) and the subsequent creation of the new plan in a
with transaction.atomic() block; additionally acquire row locks before updating
by selecting the user's plans with
StudyPlan.objects.select_for_update().filter(user=user, status="active") (or
otherwise lock the relevant rows) so concurrent requests cannot interleave.
Apply the same fix to the other regeneration block referenced (the code around
lines 476-539).
- Line 493: The code currently trusts raw AI-supplied quiz IDs by calling
Quiz.objects.get(id=quiz_id) and doesn't use the computed quiz_map; replace
those unsafe lookups by constraining to the pre-approved available_quizzes: use
the existing quiz_map (built from available_quizzes) to validate each quiz_id
returned by the AI and only fetch or reference Quiz objects that exist in
quiz_map (reject or skip IDs not present). Update all occurrences (the current
Quiz.objects.get usage and the selection logic at the block around lines
referenced, e.g., where quiz_id is used and lines ~516-521) to consult quiz_map
first and only query the DB for validated IDs or reuse the mapped objects.
- Around line 203-207: The code currently swallows all exceptions when accessing
enrollment.progress.completed_sessions.count(); replace the bare "except
Exception: pass" with a targeted handler that logs the failure and preserves a
safe fallback. Specifically, catch Exception as e (or narrower exceptions if
appropriate), call your module/logger (e.g., logger.exception or
process_logger.error) with context including enrollment id or repr(enrollment)
and the exception, and then set completed_sessions to a sensible default (e.g.,
0) so downstream code still has a value; update the block around
enrollment.progress.completed_sessions.count() and the completed_sessions
variable accordingly.

In `@web/templates/dashboard/learning_analytics.html`:
- Line 231: The progressbar divs lack accessible labels; update the progress bar
elements (the divs with role="progressbar" and data-width="{{ rc.progress }}"
and the similar one using the same pattern) to include either an aria-label
(e.g. aria-label="{{ rc.name }} progress: {{ rc.progress }} percent") or
aria-labelledby pointing to a visually-hidden element that contains the readable
label (create a <span id="{{ rc.id }}-label" class="sr-only">...</span> and
reference it). Ensure aria-valuenow, aria-valuemin and aria-valuemax remain and
that the label text includes the item name and percent so assistive tech can
announce the progress correctly.
- Around line 355-361: The DOMContentLoaded handler sets inline styles from
unvalidated dataset values which can overflow the chart; update the listener
that iterates over querySelectorAll('[data-width]') and
querySelectorAll('[data-height]') to parse the dataset values (e.g.,
parseFloat(bar.dataset.width/height)), clamp them to a safe range (0–100), and
then apply the clamped value to bar.style.width and bar.style.height
respectively so invalid or out-of-range inputs cannot break layout.

In `@web/templates/dashboard/study_plan.html`:
- Around line 39-40: The badge span uses non-approved blue/purple Tailwind
classes; update the span class attribute (the <span ...> with classes like
bg-blue-100 dark:bg-blue-900 text-blue-700 dark:text-blue-300) to use the
project's color contract—for a primary badge use bg-teal-300 dark:bg-teal-700
text-gray-600 dark:text-gray-300 px-3 py-1 rounded-full—and make the same
replacement for the other badge occurrence (the similar span at the later
block).

In `@web/views.py`:
- Around line 8884-8886: Wrap the call to generate_study_plan(request.user) in a
try/except to prevent exceptions from bubbling to the user: catch broad
exceptions around generate_study_plan, log the exception (using existing logger
or error reporting), show a friendly messages.error to the user instead of
letting a 500 occur, and still return redirect("study_plan") so the POST flow
completes; reference generate_study_plan, messages.success, messages.error and
redirect in your change.
- Around line 8853-8867: The new view functions lack explicit type hints; update
the signatures for learning_analytics_dashboard and study_plan_view (and any
other new handlers around 8880-8891) to use Django types (e.g., request:
HttpRequest and return type HttpResponse or TemplateResponse), import
HttpRequest/HttpResponse from django.http (or TemplateResponse from
django.template.response) at top of the module if needed, and adjust any
internal type-sensitive variables accordingly so the views have explicit
request/response type annotations.
- Line 23: The code imports and uses the private function
send_verification_email_for_user (from
allauth.account.internal.flows.email_verification) which is unstable; replace
any usage of send_verification_email_for_user with the supported API
EmailAddress.send_confirmation(request, signup=False). Update imports to use
EmailAddress from allauth.account.models and call
EmailAddress.send_confirmation(request, signup=False) at each site where
send_verification_email_for_user is currently invoked (e.g., the send sites
referenced by send_verification_email_for_user) so you use the public stable
API.
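The try/except wrapper described for the regeneration view above can be sketched framework-free (the helper name and message strings are illustrative; in the actual view the notifiers would be `messages.success`/`messages.error` on the request and the caller would still return `redirect("study_plan")`):

```python
import logging

logger = logging.getLogger(__name__)


def regenerate_plan_safely(generate, notify_success, notify_error):
    """Run a plan generator, logging failures instead of surfacing a 500."""
    try:
        plan = generate()
    except Exception:
        # Log the full traceback for operators, show a friendly message to the user.
        logger.exception("Study plan generation failed")
        notify_error("We couldn't generate your study plan. Please try again.")
        return None
    notify_success("Your study plan is ready.")
    return plan
```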

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b8d4f21 and 0a7a5c8.

📒 Files selected for processing (7)
  • tests/test_trackers.py
  • web/models.py
  • web/recommendations.py
  • web/signals.py
  • web/templates/dashboard/learning_analytics.html
  • web/templates/dashboard/study_plan.html
  • web/views.py

@github-actions github-actions bot added files-changed: 15 PR changes 15 files and removed files-changed: 13 PR changes 13 files labels Mar 2, 2026
@Premkumar-2004
Author

@A1L13N could you please review the pr and merge it
