88 changes: 88 additions & 0 deletions ami/users/management/commands/update_roles.py
@@ -0,0 +1,88 @@
import logging

from django.core.management.base import BaseCommand
from django.db import transaction

from ami.main.models import Project
from ami.users.models import RoleSchemaVersion
from ami.users.roles import create_roles_for_project

logger = logging.getLogger(__name__)


class Command(BaseCommand):
    help = "Update roles and permissions for all projects or a specific project"

    def add_arguments(self, parser):
        parser.add_argument(
            "--project-id",
            type=int,
            help="Update roles for a specific project by ID",
        )
        parser.add_argument(
            "--dry-run",
            action="store_true",
            help="Preview changes without applying them",
        )
        parser.add_argument(
            "--force",
            action="store_true",
            default=True,
            help="Force update even if groups already exist (default: True)",
Comment on lines +28 to +31
Copilot AI Jan 22, 2026

The --force argument uses action="store_true" with default=True, which is incorrect. With store_true, the default is False, and passing the flag sets it to True. Having default=True means it's always True regardless of whether the flag is passed. To make force default to True but allow users to disable it, use action="store_false" with dest="force" and default=True, or change the flag to --no-force with store_false action.

Suggested change:
-            "--force",
-            action="store_true",
-            default=True,
-            help="Force update even if groups already exist (default: True)",
+            "--no-force",
+            action="store_false",
+            dest="force",
+            default=True,
+            help="Do not force update if groups already exist (default: force=True)",

        )

    def handle(self, *args, **options):
        project_id = options.get("project_id")
        dry_run = options.get("dry_run", False)
        force = options.get("force", True)

        if dry_run:
            self.stdout.write(self.style.WARNING("DRY RUN MODE - No changes will be made"))

        # Get projects to update
        if project_id:
            try:
                projects = [Project.objects.get(pk=project_id)]
                self.stdout.write(f"Updating roles for project {project_id}")
            except Project.DoesNotExist:
                self.stderr.write(self.style.ERROR(f"Project with ID {project_id} does not exist"))
                return
        else:
            projects = Project.objects.all()
            project_count = projects.count()
            self.stdout.write(f"Updating roles for {project_count} projects")

        success = 0
        failed = 0

        for project in projects:
            try:
                if dry_run:
                    self.stdout.write(f" Would update roles for project {project.pk} ({project.name})")
                else:
                    with transaction.atomic():
                        create_roles_for_project(project, force_update=force)
                    self.stdout.write(
                        self.style.SUCCESS(f" ✓ Updated roles for project {project.pk} ({project.name})")
                    )
                success += 1
            except Exception as e:
                self.stderr.write(self.style.ERROR(f" ✗ Failed to update project {project.pk} ({project.name}): {e}"))
                failed += 1
                logger.exception(f"Error updating roles for project {project.pk}")
Comment on lines +58 to +72
Copilot AI Jan 22, 2026

Similar to the signal, the management command iterates through all projects without batching. For better memory efficiency with large numbers of projects, consider using iterator() or processing in batches.

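For reference, a minimal sketch of the batched iteration this comment suggests, using Django's QuerySet.iterator(); the helper name and chunk_size value are illustrative assumptions, not part of the PR:

from ami.main.models import Project
from ami.users.roles import create_roles_for_project


def update_roles_in_batches(force: bool = True, chunk_size: int = 500) -> int:
    """Iterate projects with a server-side cursor instead of loading them all into memory."""
    updated = 0
    for project in Project.objects.all().iterator(chunk_size=chunk_size):
        create_roles_for_project(project, force_update=force)
        updated += 1
    return updated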

        # Summary
        self.stdout.write("\n" + "=" * 50)
        if dry_run:
            self.stdout.write(self.style.WARNING(f"DRY RUN COMPLETE: Would update {success} projects"))
        else:
            self.stdout.write(self.style.SUCCESS(f"Successfully updated: {success} projects"))

        if failed > 0:
            self.stdout.write(self.style.ERROR(f"Failed: {failed} projects"))

        # Update schema version if successful and not dry run
        if success > 0 and not project_id:
            RoleSchemaVersion.mark_updated(description="Manual update via management command")
            current_version = RoleSchemaVersion.get_current_version()
            self.stdout.write(f"Schema version updated to: {current_version}")
Copilot AI Jan 22, 2026

When updating a specific project with --project-id, the schema version is not updated (line 85 condition checks "not project_id"). This means if you manually update roles for a specific project when the schema has changed, the schema version won't be marked as updated, and the next migration will still try to update all projects. Consider either always updating the schema version when successful, or adding a warning that single-project updates don't affect schema version tracking.

Suggested change:
             self.stdout.write(f"Schema version updated to: {current_version}")
+        elif success > 0 and project_id:
+            # When updating a single project, the global role schema version is not advanced.
+            # This means future migrations that rely on RoleSchemaVersion may still run for all projects.
+            self.stdout.write(
+                self.style.WARNING(
+                    "Roles were updated for the specified project, but the global role schema "
+                    "version was NOT updated because --project-id was used. "
+                    "Run this command without --project-id to advance the global schema version "
+                    "after all projects have been updated."
+                )
+            )

Comment on lines +13 to +88
Copilot AI Jan 22, 2026

The new update_roles management command lacks test coverage. Tests should verify the command works with different options (--project-id, --dry-run, --force), handles errors gracefully, and properly updates the schema version.

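A sketch of the kind of tests this comment asks for, driven through call_command; the Project.objects.create(name=...) call is an assumption about the ami Project model, while the output assertions reuse strings from this diff:

from io import StringIO

from django.core.management import call_command
from django.test import TestCase

from ami.main.models import Project


class UpdateRolesCommandTest(TestCase):
    def test_dry_run_reports_without_applying(self):
        Project.objects.create(name="Test Project")  # assumed minimal project creation
        out = StringIO()
        call_command("update_roles", "--dry-run", stdout=out)
        self.assertIn("DRY RUN", out.getvalue())

    def test_unknown_project_id_writes_error(self):
        err = StringIO()
        call_command("update_roles", "--project-id", "999999", stderr=err)
        self.assertIn("does not exist", err.getvalue())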
24 changes: 24 additions & 0 deletions ami/users/migrations/0004_roleschemaversion.py
@@ -0,0 +1,24 @@
# Generated by Django 4.2.10 on 2026-01-22 03:36

from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ("users", "0003_lowercase_existing_emails"),
    ]

    operations = [
        migrations.CreateModel(
            name="RoleSchemaVersion",
            fields=[
                ("id", models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
                ("version", models.CharField(max_length=100, unique=True)),
                ("description", models.TextField()),
                ("updated_at", models.DateTimeField(auto_now=True)),
            ],
            options={
                "ordering": ["-updated_at"],
            },
        ),
    ]
49 changes: 49 additions & 0 deletions ami/users/models.py
@@ -42,3 +42,52 @@ def get_absolute_url(self) -> str:
"""
# @TODO return frontend URL, not API URL
return reverse("api:user-detail", kwargs={"id": self.pk})


class RoleSchemaVersion(models.Model):
    """
    Tracks the current role/permission schema version.
    Updated when Role classes or Project.Permissions change.
    """

    version = models.CharField(max_length=100, unique=True)
    description = models.TextField()
    updated_at = models.DateTimeField(auto_now=True)

    class Meta:
        ordering = ["-updated_at"]

    def __str__(self):
        return f"RoleSchemaVersion {self.version}"

    @classmethod
    def get_current_version(cls):
        """Get the current schema version from code."""
        import hashlib

        from ami.users.roles import Role

        role_data = []
        for role_class in sorted(Role.__subclasses__(), key=lambda r: r.__name__):
            perms = sorted(role_class.permissions)
            role_data.append(f"{role_class.__name__}:{','.join(perms)}")

        schema_str = "|".join(role_data)
        return hashlib.md5(schema_str.encode()).hexdigest()[:16]
Copilot AI Jan 22, 2026

The MD5 hash is truncated to 16 characters, which could lead to collisions. While MD5 collisions are rare for legitimate data, a 16-character truncation (64 bits) significantly increases the collision probability. Consider using the full MD5 hash (32 characters) or switching to a more collision-resistant approach like SHA-256. The max_length of 100 on the version field can accommodate this.

Suggested change:
-        return hashlib.md5(schema_str.encode()).hexdigest()[:16]
+        return hashlib.sha256(schema_str.encode()).hexdigest()


    @classmethod
    def needs_update(cls):
        """Check if roles need updating based on schema version."""
        current = cls.get_current_version()
        try:
            latest = cls.objects.first()
            return latest is None or latest.version != current
        except Exception:
            # Table doesn't exist yet (first migration)
            return False
Copilot AI Jan 22, 2026

When schema version tracking is first introduced, needs_update() will return False during the initial migration because the table doesn't exist yet (line 87 returns False on exception). This means the first migration won't create roles for existing projects. The PR description mentions "Initial migration updates all projects" but the code logic prevents this from happening. The schema update will only run after the table is created and on subsequent deployments when the schema actually changes.

Suggested change:
-            return False
+            return True

Comment on lines +82 to +87
Copilot AI Jan 22, 2026

The exception handling here catches all exceptions generically and returns False. This means if the table doesn't exist (expected during first migration), the schema update will be skipped. However, this also silently catches and ignores other database errors or unexpected exceptions. Consider being more specific about which exception to catch (e.g., OperationalError, ProgrammingError for table not existing) and logging unexpected exceptions.

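A sketch of the narrower exception handling this comment describes, catching only Django's database errors for a missing table and logging anything unexpected; this is an illustrative rewrite, not the PR's code, and it assumes a module-level logger in models.py:

import logging

from django.db import OperationalError, ProgrammingError

logger = logging.getLogger(__name__)


# Illustrative replacement for RoleSchemaVersion.needs_update (belongs inside the class)
@classmethod
def needs_update(cls):
    """Check if roles need updating based on schema version."""
    current = cls.get_current_version()
    try:
        latest = cls.objects.first()
    except (OperationalError, ProgrammingError):
        # Table doesn't exist yet (first migration) -- skip quietly.
        return False
    except Exception:
        logger.exception("Unexpected error while checking role schema version")
        return False
    return latest is None or latest.version != current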

    @classmethod
    def mark_updated(cls, description="Schema updated"):
        """Mark schema as updated to current version."""
        current = cls.get_current_version()
        cls.objects.create(version=current, description=description)
Comment on lines +89 to +93
Contributor

⚠️ Potential issue | 🟡 Minor

Potential IntegrityError if mark_updated is called concurrently with same version.

The version field has a unique=True constraint. If mark_updated is called twice with the same schema version (e.g., concurrent migrations or re-runs), it will raise an IntegrityError. Consider using get_or_create or handling the conflict.

🐛 Proposed fix
     @classmethod
     def mark_updated(cls, description="Schema updated"):
         """Mark schema as updated to current version."""
         current = cls.get_current_version()
-        cls.objects.create(version=current, description=description)
+        cls.objects.get_or_create(
+            version=current,
+            defaults={"description": description}
+        )
📝 Committable suggestion


Suggested change:
     @classmethod
     def mark_updated(cls, description="Schema updated"):
         """Mark schema as updated to current version."""
         current = cls.get_current_version()
-        cls.objects.create(version=current, description=description)
+        cls.objects.get_or_create(
+            version=current,
+            defaults={"description": description}
+        )
🤖 Prompt for AI Agents
In `@ami/users/models.py` around lines 89 - 93, The mark_updated classmethod can
raise IntegrityError because version is unique; change it to atomically ensure a
single row per version by using cls.objects.get_or_create(version=current,
defaults={'description': description}) after computing current via
cls.get_current_version(), or wrap the cls.objects.create call in a transaction
and catch IntegrityError to ignore duplicate inserts; update mark_updated to use
cls.objects.get_or_create(...) (or handle IntegrityError) so concurrent calls
for the same version do not crash.

Comment on lines +47 to +93
Copilot AI Jan 22, 2026

The new RoleSchemaVersion model and its methods (get_current_version, needs_update, mark_updated) lack test coverage. Given that this is core functionality for the optimization and affects when role updates occur, comprehensive tests should be added to verify version detection, schema change detection, and the update marking behavior.

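A sketch of tests for the version-tracking behaviour, relying only on the methods shown in this diff:

from django.test import TestCase

from ami.users.models import RoleSchemaVersion


class RoleSchemaVersionTest(TestCase):
    def test_needs_update_when_nothing_recorded(self):
        RoleSchemaVersion.objects.all().delete()
        self.assertTrue(RoleSchemaVersion.needs_update())

    def test_mark_updated_records_current_version(self):
        RoleSchemaVersion.mark_updated(description="test")
        latest = RoleSchemaVersion.objects.first()
        self.assertEqual(latest.version, RoleSchemaVersion.get_current_version())
        self.assertFalse(RoleSchemaVersion.needs_update())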
Comment on lines +89 to +93
Copilot AI Jan 22, 2026

There's a potential race condition in mark_updated where multiple processes could call this concurrently. Since version has unique=True, if two processes both detect the schema needs updating and both try to create the same version record, one will fail with an IntegrityError. Consider using get_or_create instead of create, or wrapping in try-except to handle the IntegrityError gracefully.

74 changes: 56 additions & 18 deletions ami/users/roles.py
Expand Up @@ -2,7 +2,6 @@

from django.contrib.auth.models import Group, Permission
from django.contrib.contenttypes.models import ContentType
from guardian.shortcuts import assign_perm, get_perms, remove_perm

from ami.main.models import Project

@@ -194,29 +193,68 @@ class ProjectManager(Role):
)


def create_roles_for_project(project):
    """Creates role-based permission groups for a given project."""
def create_roles_for_project(project, force_update=False):
    """
    Creates role-based permission groups for a given project.

    Args:
        project: The project to create roles for
        force_update: If False, skip updates for existing groups (default: False)
            If True, always update permissions even if group exists
Comment on lines +198 to +203
Copilot AI Jan 22, 2026

The function now has a force_update parameter with specific behavior, but the post_save signal in ami/main/signals.py (line 18) calls create_roles_for_project without this parameter. The docstring states the default is False, which means for new projects, if groups somehow already exist (e.g., from a previous failed creation), they won't be updated. Consider documenting this behavior more clearly or adding a note about when force_update should be True vs False.

Suggested change:
-    Creates role-based permission groups for a given project.
-
-    Args:
-        project: The project to create roles for
-        force_update: If False, skip updates for existing groups (default: False)
-            If True, always update permissions even if group exists
+    Create or update role-based permission groups for a given project.
+    This function is typically called from the project ``post_save`` signal
+    without passing ``force_update`` explicitly, which means the default
+    ``force_update=False`` behavior is used.
+    When ``force_update`` is ``False`` (the default), groups are created if
+    they do not already exist, but existing groups and their permissions are
+    left unchanged. This is appropriate for normal project creation, where
+    it is assumed that no stale groups already exist.
+    When ``force_update`` is ``True``, the permissions for each role group
+    are updated even if the group already exists. This should be used in
+    maintenance or recovery contexts, for example:
+    * Re-running role creation for a project after a previous failure where
+      some groups were created but not fully configured.
+    * After changing the role permission definitions in this module and
+      needing to resync existing groups.
+    Args:
+        project: The project to create roles for.
+        force_update: Whether to always update permissions for existing groups.
+            - ``False`` (default): Skip updates for existing groups; only create
+              missing groups and permissions.
+            - ``True``: Always update permissions even if the group already exists.

"""
from guardian.models import GroupObjectPermission

project_ct = ContentType.objects.get_for_model(Project)

# Pre-fetch all permissions we might need (single query)
all_perm_codenames = set()
for role_class in Role.__subclasses__():
all_perm_codenames.update(role_class.permissions)

existing_perms = {
perm.codename: perm
for perm in Permission.objects.filter(codename__in=all_perm_codenames, content_type=project_ct)
}

# Create any missing permissions
missing_perms = []
for codename in all_perm_codenames:
if codename not in existing_perms:
missing_perms.append(
Permission(codename=codename, content_type=project_ct, name=f"Can {codename.replace('_', ' ')}")
)

if missing_perms:
Permission.objects.bulk_create(missing_perms, ignore_conflicts=True)
# Refresh existing_perms dict after bulk create
existing_perms = {
perm.codename: perm
for perm in Permission.objects.filter(codename__in=all_perm_codenames, content_type=project_ct)
}

Comment on lines +219 to +234
Copilot AI Jan 22, 2026

When creating missing permissions with bulk_create, if there's a collision due to ignore_conflicts=True, the newly created permission objects won't have their IDs set. This could cause a KeyError on line 256 when trying to access existing_perms[codename] if the permission was created by another concurrent process but not yet in the refreshed existing_perms dict. Consider using get_or_create for each missing permission, or handle potential KeyErrors when accessing existing_perms.

Suggested change:
-    # Create any missing permissions
-    missing_perms = []
-    for codename in all_perm_codenames:
-        if codename not in existing_perms:
-            missing_perms.append(
-                Permission(codename=codename, content_type=project_ct, name=f"Can {codename.replace('_', ' ')}")
-            )
-    if missing_perms:
-        Permission.objects.bulk_create(missing_perms, ignore_conflicts=True)
-        # Refresh existing_perms dict after bulk create
-        existing_perms = {
-            perm.codename: perm
-            for perm in Permission.objects.filter(codename__in=all_perm_codenames, content_type=project_ct)
-        }
+    # Ensure all required permissions exist, handling concurrency safely
+    for codename in all_perm_codenames:
+        if codename not in existing_perms:
+            perm, _ = Permission.objects.get_or_create(
+                codename=codename,
+                content_type=project_ct,
+                defaults={"name": f"Can {codename.replace('_', ' ')}"},
+            )
+            existing_perms[codename] = perm

    for role_class in Role.__subclasses__():
        role_name = f"{project.pk}_{project.name}_{role_class.__name__}"
        permissions = role_class.permissions
        group, created = Group.objects.get_or_create(name=role_name)

        if created:
            logger.debug(f"Role created {role_class} for project {project}")
        else:
            # Reset permissions to make sure permissions are updated
            # every time we call this function
            group.permissions.clear()
            assigned_perms = get_perms(group, project)
            for perm_codename in assigned_perms:
                remove_perm(perm_codename, group, project)
        for perm_codename in permissions:
            permission, perm_created = Permission.objects.get_or_create(
                codename=perm_codename,
                content_type=project_ct,
                defaults={"name": f"Can {perm_codename.replace('_', ' ')}"},
            )
        elif not force_update:
            # Skip updates for existing groups unless force_update=True
            continue

            group.permissions.add(permission)  # Assign the permission group to the project
            assign_perm(perm_codename, group, project)
        # Use set() instead of clear() + add() loop (single query)
        role_perm_objects = [existing_perms[codename] for codename in permissions]
Copilot AI Jan 22, 2026

If a permission codename exists in role_class.permissions but is not in existing_perms dictionary (due to a race condition or if bulk_create with ignore_conflicts didn't properly populate it), this line will raise a KeyError. Add error handling or validation to ensure all required permissions are available before attempting to access them.

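One possible guard against the KeyError described here, falling back to get_or_create for any codename missing from the prefetched dict; this is only an illustration, not code from the PR:

        # Illustrative defensive lookup instead of existing_perms[codename]
        role_perm_objects = []
        for codename in permissions:
            permission = existing_perms.get(codename)
            if permission is None:
                permission, _ = Permission.objects.get_or_create(
                    codename=codename,
                    content_type=project_ct,
                    defaults={"name": f"Can {codename.replace('_', ' ')}"},
                )
                existing_perms[codename] = permission
            role_perm_objects.append(permission)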
        group.permissions.set(role_perm_objects)

        # Bulk update Guardian object permissions
        # Remove all existing, then bulk create new ones
        GroupObjectPermission.objects.filter(group=group, content_type=project_ct, object_pk=project.pk).delete()

        group_obj_perms = [
            GroupObjectPermission(
                group=group, permission=existing_perms[codename], content_type=project_ct, object_pk=project.pk
Comment on lines +254 to +256
Copilot AI Jan 22, 2026

Similar to line 247, if a permission codename is not in existing_perms, this will raise a KeyError when building the GroupObjectPermission objects. This could happen due to race conditions or if the permission creation failed silently.

            )
            for codename in permissions
        ]
        GroupObjectPermission.objects.bulk_create(group_obj_perms)
Comment on lines +250 to +260
Contributor

⚠️ Potential issue | 🟡 Minor

Delete-then-bulk-create pattern could leave inconsistent state on failure.

The delete operation at line 252 followed by bulk_create at line 260 is not atomic within this function. If bulk_create fails (e.g., database error), the existing permissions will already be deleted, leaving the group without object permissions.

The management command wraps each call in a transaction, but the signal handler processes each project without any transaction around the delete+create.

🐛 Consider wrapping in a savepoint or documenting caller responsibility
+        # Note: Caller should wrap in transaction.atomic() for consistency
         # Bulk update Guardian object permissions
         # Remove all existing, then bulk create new ones
         GroupObjectPermission.objects.filter(group=group, content_type=project_ct, object_pk=project.pk).delete()

         group_obj_perms = [
             GroupObjectPermission(
                 group=group, permission=existing_perms[codename], content_type=project_ct, object_pk=project.pk
             )
             for codename in permissions
         ]
         GroupObjectPermission.objects.bulk_create(group_obj_perms)

Alternatively, wrap the critical section in a savepoint:

from django.db import transaction

with transaction.atomic():
    GroupObjectPermission.objects.filter(...).delete()
    GroupObjectPermission.objects.bulk_create(group_obj_perms)
📝 Committable suggestion


Suggested change:
+        # Note: Caller should wrap in transaction.atomic() for consistency
         # Bulk update Guardian object permissions
         # Remove all existing, then bulk create new ones
         GroupObjectPermission.objects.filter(group=group, content_type=project_ct, object_pk=project.pk).delete()

         group_obj_perms = [
             GroupObjectPermission(
                 group=group, permission=existing_perms[codename], content_type=project_ct, object_pk=project.pk
             )
             for codename in permissions
         ]
         GroupObjectPermission.objects.bulk_create(group_obj_perms)
🤖 Prompt for AI Agents
In `@ami/users/roles.py` around lines 250 - 260, The delete-then-bulk_create
sequence for GroupObjectPermission can leave permissions missing if bulk_create
fails; wrap the critical section in an atomic transaction (using
transaction.atomic) around the deletion and bulk_create so they succeed or roll
back together, e.g., in the function that builds group_obj_perms and calls
GroupObjectPermission.objects.filter(...).delete() and
GroupObjectPermission.objects.bulk_create(group_obj_perms); ensure to import
django.db.transaction and apply the atomic context there (or add a
savepoint/atomic in the signal handler that invokes this code) so the operation
is executed as one transactional unit.

Comment on lines +196 to +260
Copilot AI Jan 22, 2026

The bulk operations optimization in create_roles_for_project lacks test coverage. Tests should verify that bulk_create works correctly with ignore_conflicts, that permissions are properly assigned to groups, and that the force_update parameter behaves as expected.

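A sketch of a test along the lines this comment asks for; the Project.objects.create(name=...) call is an assumption about the ami Project model, while the group-name pattern mirrors the f-string in this diff:

from django.contrib.auth.models import Group
from django.test import TestCase

from ami.main.models import Project
from ami.users.roles import ProjectManager, create_roles_for_project


class CreateRolesForProjectTest(TestCase):
    def test_creates_groups_and_skips_without_force_update(self):
        project = Project.objects.create(name="Test Project")  # assumed minimal creation
        create_roles_for_project(project, force_update=True)

        group_name = f"{project.pk}_{project.name}_{ProjectManager.__name__}"
        group = Group.objects.get(name=group_name)
        self.assertTrue(group.permissions.exists())

        # A second call without force_update should leave existing groups untouched.
        before = set(group.permissions.values_list("codename", flat=True))
        create_roles_for_project(project, force_update=False)
        after = set(Group.objects.get(name=group_name).permissions.values_list("codename", flat=True))
        self.assertEqual(before, after)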
Comment on lines +252 to +260
Copilot AI Jan 22, 2026

The delete operation on line 252 removes all GroupObjectPermission records for the group/project combination before bulk creating new ones. If an error occurs between the delete and bulk_create (e.g., KeyError from missing permission), the group will be left with no object-level permissions. Consider either using a transaction (which is handled by the caller in the management command but not in the signal) or using a delete-after-create pattern to avoid this window of vulnerability.

33 changes: 27 additions & 6 deletions ami/users/signals.py
@@ -12,20 +12,41 @@


def create_roles(sender, **kwargs):
    """Creates predefined roles with specific permissions ."""
    """
    Creates predefined roles with specific permissions.
    Only runs when role schema version has changed.
    """
    from ami.users.models import RoleSchemaVersion

    # Quick check - does schema need updating?
    if not RoleSchemaVersion.needs_update():
        logger.debug("Role schema is up to date, skipping role creation")
        return

    logger.info("Role schema version changed - updating roles for all projects")
    project_count = Project.objects.count()

    if project_count > 100:
        logger.warning(
            f"Updating roles for {project_count} projects. "
            f"This may take a while. Consider running 'python manage.py update_roles' "
            f"separately for better control."
        )

    logger.info("Creating roles for all projects")
    try:
        for project in Project.objects.all():
            try:
                create_roles_for_project(project)
                create_roles_for_project(project, force_update=True)
            except Exception as e:
                logger.warning(f"Failed to create roles for project {project.pk} ({project.name}): {e}")
                continue
Comment on lines 37 to 42
Copilot AI Jan 22, 2026

The signal iterates through all projects without batching (line 37). For deployments with thousands of projects, this could cause memory issues as Project.objects.all() loads all project instances into memory. Consider using iterator() method or batch processing with chunks to reduce memory footprint during large-scale updates.


        # Mark schema as updated
        RoleSchemaVersion.mark_updated(description="Post-migration role update")
        logger.info(f"Successfully updated roles for {project_count} projects")
Comment on lines 36 to +46
Copilot AI Jan 22, 2026

The create_roles signal doesn't use a transaction to wrap the entire update process. If the loop fails partway through updating projects, some projects will have updated roles while others won't, but RoleSchemaVersion.mark_updated() on line 45 will still execute (it's outside the inner try-except). This could leave the system in an inconsistent state where the schema version is marked as updated but not all projects have the new roles. Consider wrapping the entire operation in a transaction.atomic() block.

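A sketch of the transaction wrapping this comment suggests. Note that it drops the per-project try/except, so one failing project rolls back the whole batch; whether that trade-off is acceptable is a design decision, and this is only an illustration of the idea:

from django.db import transaction

from ami.main.models import Project
from ami.users.roles import create_roles_for_project


def create_roles(sender, **kwargs):
    from ami.users.models import RoleSchemaVersion

    if not RoleSchemaVersion.needs_update():
        return

    # All-or-nothing: either every project gets the new roles and the
    # schema version is recorded, or nothing changes.
    with transaction.atomic():
        for project in Project.objects.all():
            create_roles_for_project(project, force_update=True)
        RoleSchemaVersion.mark_updated(description="Post-migration role update")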

    except Exception as e:
        logger.warning(
            f"Failed to create roles during migration: {e}. This can be run manually via management command."
        )
        logger.error(f"Failed to create roles during migration: {e}")
Comment on lines +43 to +49
Contributor

⚠️ Potential issue | 🟡 Minor

Schema marked as updated even when some projects fail.

If some projects fail during the loop (lines 40-41), mark_updated is still called at line 45. This means subsequent migrations won't retry the failed projects. Consider either:

  1. Not marking as updated if any failures occurred
  2. Or documenting that failed projects require manual intervention via the management command
🐛 Suggested approach
+        failed_count = 0
         for project in Project.objects.all():
             try:
                 create_roles_for_project(project, force_update=True)
             except Exception as e:
                 logger.warning(f"Failed to create roles for project {project.pk} ({project.name}): {e}")
+                failed_count += 1
                 continue

-        # Mark schema as updated
-        RoleSchemaVersion.mark_updated(description="Post-migration role update")
-        logger.info(f"Successfully updated roles for {project_count} projects")
+        # Mark schema as updated only if all projects succeeded
+        if failed_count == 0:
+            RoleSchemaVersion.mark_updated(description="Post-migration role update")
+            logger.info(f"Successfully updated roles for {project_count} projects")
+        else:
+            logger.warning(
+                f"Roles updated for {project_count - failed_count}/{project_count} projects. "
+                f"Run 'manage.py update_roles' to retry failed projects."
+            )
🧰 Tools
🪛 Ruff (0.14.13)

48-48: Do not catch blind exception: Exception

(BLE001)


49-49: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

🤖 Prompt for AI Agents
In `@ami/users/signals.py` around lines 43 - 49, The code currently calls
RoleSchemaVersion.mark_updated() unconditionally even if some projects failed;
change the migration flow in the function handling the project loop to track
failures (e.g., a failed_projects list or a boolean flag updated_failed) while
iterating projects, and only call
RoleSchemaVersion.mark_updated(description="Post-migration role update") when no
failures were recorded; if failures exist, log the failed project ids via
logger.error/info and skip calling mark_updated (or re-raise/return a non-zero
status) so failed projects can be retried or handled manually. Ensure you
reference the existing logger and RoleSchemaVersion.mark_updated symbols and
preserve the existing exception logging behavior for per-project errors.


Comment on lines 14 to 50
Copilot AI Jan 22, 2026

The updated create_roles signal with schema version checking lacks test coverage. Tests should verify that roles are only created when the schema changes, that the schema version is properly marked after updates, and that the signal behaves correctly when the RoleSchemaVersion table doesn't exist yet.

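A sketch of one such test, calling the signal handler directly and mocking the version check; the patch target for create_roles_for_project assumes it is imported at module level in signals.py:

from unittest import mock

from django.test import TestCase

from ami.users.signals import create_roles


class CreateRolesSignalTest(TestCase):
    def test_skips_project_updates_when_schema_is_current(self):
        with mock.patch("ami.users.models.RoleSchemaVersion.needs_update", return_value=False):
            with mock.patch("ami.users.signals.create_roles_for_project") as mocked:
                create_roles(sender=None)
        mocked.assert_not_called()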

@receiver(m2m_changed, sender=Group.user_set.through)