Skip to content
Open
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
4 changes: 3 additions & 1 deletion backend/priority_engine.py
Original file line number Diff line number Diff line change
Expand Up @@ -125,7 +125,9 @@ def _calculate_urgency(self, text: str, severity_score: int):
# Pre-extract literal keywords for fast substring pre-filtering
# Only apply this optimization if the pattern is a simple list of words like \b(word1|word2)\b
keywords = []
if re.fullmatch(r'\\b\([a-zA-Z0-9\s|]+\)\\b', pattern):
# Optimization: Extract literal keywords from simple regex strings like "\b(word1|word2)\b"
# This allows us to use a fast substring check (`in text`) before executing the regex engine.
if pattern.startswith('\\b(') and pattern.endswith(')\\b') and not any(c in pattern[3:-3] for c in ['.', '*', '+', '?', '^', '$', '[', ']', '{', '}']):
Copy link
Contributor

@cubic-dev-ai cubic-dev-ai bot Mar 22, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2: The new “simple regex” detection is too permissive and can skip valid regex matches, causing urgency false negatives.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At backend/priority_engine.py, line 130:

<comment>The new “simple regex” detection is too permissive and can skip valid regex matches, causing urgency false negatives.</comment>

<file context>
@@ -125,7 +125,9 @@ def _calculate_urgency(self, text: str, severity_score: int):
-                if re.fullmatch(r'\\b\([a-zA-Z0-9\s|]+\)\\b', pattern):
+                # Optimization: Extract literal keywords from simple regex strings like "\b(word1|word2)\b"
+                # This allows us to use a fast substring check (`in text`) before executing the regex engine.
+                if pattern.startswith('\\b(') and pattern.endswith(')\\b') and not any(c in pattern[3:-3] for c in ['.', '*', '+', '?', '^', '$', '[', ']', '{', '}']):
                     clean_pattern = pattern.replace('\\b', '').replace('(', '').replace(')', '')
                     keywords = [k.strip() for k in clean_pattern.split('|') if k.strip()]
</file context>
Suggested change
if pattern.startswith('\\b(') and pattern.endswith(')\\b') and not any(c in pattern[3:-3] for c in ['.', '*', '+', '?', '^', '$', '[', ']', '{', '}']):
if re.fullmatch(r'\\b\([a-zA-Z0-9\s|]+\)\\b', pattern):
Fix with Cubic

clean_pattern = pattern.replace('\\b', '').replace('(', '').replace(')', '')
keywords = [k.strip() for k in clean_pattern.split('|') if k.strip()]
self._regex_cache.append((re.compile(pattern), weight, pattern, keywords))
Comment on lines +128 to 133
Copy link

Copilot AI Mar 22, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The PR description says it only removes a duplicate cache clear and fixes a SyntaxError, but this change also modifies the urgency keyword-extraction optimization logic. If this behavioral change is intentional, it should be called out in the PR description (or split into a separate PR) so reviewers can evaluate the runtime impact and correctness independently.

Copilot uses AI. Check for mistakes.
Expand Down
3 changes: 1 addition & 2 deletions backend/routers/issues.py
Original file line number Diff line number Diff line change
Expand Up @@ -236,8 +236,7 @@ async def create_issue(
# Invalidate cache so new issue appears
try:
recent_issues_cache.clear()
recent_issues_cache.clear()
user_issues_cache.clear()
user_issues_cache.clear()
except Exception as e:
logger.error(f"Error clearing cache: {e}")

Expand Down
102 changes: 102 additions & 0 deletions backend/tests/benchmark_closure_status.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,102 @@
import time
from sqlalchemy.orm import Session
from sqlalchemy import func, create_engine
from backend.database import Base
from backend.models import Grievance, GrievanceFollower, ClosureConfirmation, Issue, Jurisdiction, JurisdictionLevel, SeverityLevel
from sqlalchemy import case, distinct
import datetime

# Create a temporary in-memory database for testing
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(bind=engine)
SessionLocal = Session(bind=engine)

Comment on lines +9 to +13
Copy link

Copilot AI Mar 22, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This benchmark module performs DB setup at import time (create_engine, create_all, and creating a Session instance). Even though it’s a benchmark script, keeping heavy side effects at module import makes accidental imports expensive and surprising. Consider moving setup into the __main__ block and using a sessionmaker/context-managed session created inside main() so resources are properly closed.

Copilot uses AI. Check for mistakes.
def populate_db(db: Session, grievance_id: int):
# Add Jurisdiction
j = Jurisdiction(id=1, level=JurisdictionLevel.STATE, geographic_coverage={"states": ["Maharashtra"]}, responsible_authority="PWD", default_sla_hours=48)
db.add(j)

# Add Grievance
g = Grievance(
id=grievance_id,
current_jurisdiction_id=1,
sla_deadline=datetime.datetime.now(datetime.timezone.utc),
status="open",
category="Road",
unique_id="123",
severity=SeverityLevel.LOW,
assigned_authority="PWD"
)
db.add(g)

# Add Followers
for i in range(50):
db.add(GrievanceFollower(grievance_id=grievance_id, user_email=f"user{i}@test.com"))

# Add Confirmations
for i in range(30):
db.add(ClosureConfirmation(grievance_id=grievance_id, user_email=f"conf_user{i}@test.com", confirmation_type="confirmed"))
for i in range(10):
db.add(ClosureConfirmation(grievance_id=grievance_id, user_email=f"disp_user{i}@test.com", confirmation_type="disputed"))

db.commit()

def benchmark_old(db: Session, grievance_id: int, iterations=1000):
start = time.perf_counter()
for _ in range(iterations):
total_followers = db.query(func.count(GrievanceFollower.id)).filter(
GrievanceFollower.grievance_id == grievance_id
).scalar()

counts = db.query(
ClosureConfirmation.confirmation_type,
func.count(ClosureConfirmation.id)
).filter(ClosureConfirmation.grievance_id == grievance_id).group_by(ClosureConfirmation.confirmation_type).all()

counts_dict = {ctype: count for ctype, count in counts}
confirmations_count = counts_dict.get("confirmed", 0)
disputes_count = counts_dict.get("disputed", 0)
end = time.perf_counter()
if iterations > 10:
print(f"Old approach ({iterations} iters): {end - start:.4f}s")
return total_followers, confirmations_count, disputes_count

def benchmark_new_agg(db: Session, grievance_id: int, iterations=1000):
start = time.perf_counter()
for _ in range(iterations):
total_followers = db.query(func.count(GrievanceFollower.id)).filter(
GrievanceFollower.grievance_id == grievance_id
).scalar()

# Optimize the two counts into one aggregate without group_by
stats = db.query(
func.sum(case((ClosureConfirmation.confirmation_type == 'confirmed', 1), else_=0)).label('confirmed'),
func.sum(case((ClosureConfirmation.confirmation_type == 'disputed', 1), else_=0)).label('disputed')
).filter(ClosureConfirmation.grievance_id == grievance_id).first()

confirmations_count = stats.confirmed or 0
disputes_count = stats.disputed or 0
end = time.perf_counter()
if iterations > 10:
print(f"New approach (Agg) ({iterations} iters): {end - start:.4f}s")
return total_followers, confirmations_count, disputes_count

if __name__ == "__main__":
db = SessionLocal
populate_db(db, 1)

# Warm up
benchmark_old(db, 1, 10)
benchmark_new_agg(db, 1, 10)

res_old = benchmark_old(db, 1)
res_agg = benchmark_new_agg(db, 1)

print(f"Old Results: {res_old}")
print(f"New Agg Results: {res_agg}")
def benchmark_new_single(db: Session, grievance_id: int, iterations=1000):
start = time.perf_counter()
for _ in range(iterations):
# We can't easily join them perfectly without cross product, but what if we do subqueries?
# Actually it's probably better to just leave it. Let's look for N+1 queries instead.
pass
Comment on lines +97 to +102
Copy link

Copilot AI Mar 22, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

There is an unused/incomplete stub (benchmark_new_single) defined after the __main__ block with only pass. Keeping dead stubs in backend/tests makes it harder to tell what is runnable/maintained and can confuse future readers. Either remove it, or implement it and call it from __main__ if it’s intended to be part of the benchmark.

Suggested change
def benchmark_new_single(db: Session, grievance_id: int, iterations=1000):
start = time.perf_counter()
for _ in range(iterations):
# We can't easily join them perfectly without cross product, but what if we do subqueries?
# Actually it's probably better to just leave it. Let's look for N+1 queries instead.
pass

Copilot uses AI. Check for mistakes.
51 changes: 51 additions & 0 deletions backend/tests/benchmark_urgency.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,51 @@
import time
from backend.priority_engine import priority_engine
import cProfile
import pstats
import io

# We create a sample text that does not contain any of the urgency keywords
# but is long enough to simulate a real-world scenario.
sample_text = (
"There is a small pothole on the corner of 5th and Main. "
"It has been there for a few days and is causing some inconvenience to the drivers. "
"Please send someone to look at it when possible. "
"The road condition is generally poor in this area and needs attention. "
"We have noticed an increase in traffic recently, which might be contributing to the wear and tear. "
"No one has been injured, but we would like to avoid any accidents."
) * 10 # Make it reasonably long

def benchmark(iterations=10000):
start_time = time.perf_counter()
for _ in range(iterations):
# We only benchmark _calculate_urgency. We give it a base severity of 10.
priority_engine._calculate_urgency(sample_text, 10)
end_time = time.perf_counter()

total_time = end_time - start_time
avg_time_ms = (total_time / iterations) * 1000

print(f"Benchmark: _calculate_urgency")
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat -n backend/tests/benchmark_urgency.py | sed -n '20,35p'

Repository: RohanExploit/VishwaGuru

Length of output: 736


Remove unnecessary f-string prefix.

Line 28 contains a non-interpolated string and should not use the f prefix.

Minimal fix
-    print(f"Benchmark: _calculate_urgency")
+    print("Benchmark: _calculate_urgency")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
print(f"Benchmark: _calculate_urgency")
print("Benchmark: _calculate_urgency")
🧰 Tools
🪛 Ruff (0.15.6)

[error] 28-28: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/benchmark_urgency.py` at line 28, The print call in
backend/tests/benchmark_urgency.py is using an unnecessary f-string for a static
message ("Benchmark: _calculate_urgency"); change the statement to use a regular
string literal (remove the leading f) so it becomes print("Benchmark:
_calculate_urgency") to avoid an unused interpolation marker.

print(f"Iterations: {iterations}")
print(f"Total time: {total_time:.4f} seconds")
print(f"Average time per call: {avg_time_ms:.4f} ms")
return avg_time_ms

if __name__ == "__main__":
# Warm up
priority_engine._calculate_urgency(sample_text, 10)

print("--- Running Benchmark ---")
benchmark()

# Profile to show where time is spent
print("\n--- Running Profiler ---")
pr = cProfile.Profile()
pr.enable()
for _ in range(5000):
priority_engine._calculate_urgency(sample_text, 10)
pr.disable()
s = io.StringIO()
ps = pstats.Stats(pr, stream=s).sort_stats('cumulative')
ps.print_stats(15)
print(s.getvalue())
58 changes: 58 additions & 0 deletions backend/tests/benchmark_urgency_unoptimized.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,58 @@
import time
from backend.priority_engine import priority_engine
import cProfile
import pstats
import io
import re

# We create a sample text that does not contain any of the urgency keywords
# but is long enough to simulate a real-world scenario.
sample_text = (
"There is a small pothole on the corner of 5th and Main. "
"It has been there for a few days and is causing some inconvenience to the drivers. "
"Please send someone to look at it when possible. "
"The road condition is generally poor in this area and needs attention. "
"We have noticed an increase in traffic recently, which might be contributing to the wear and tear. "
"No one has been injured, but we would like to avoid any accidents."
) * 10 # Make it reasonably long

def benchmark(iterations=10000):
start_time = time.perf_counter()
for _ in range(iterations):
priority_engine._calculate_urgency(sample_text, 10)
end_time = time.perf_counter()

total_time = end_time - start_time
avg_time_ms = (total_time / iterations) * 1000

print(f"Benchmark: _calculate_urgency")
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Find and read the file
find . -name "benchmark_urgency_unoptimized.py" -type f

Repository: RohanExploit/VishwaGuru

Length of output: 115


🏁 Script executed:

# Read the file to check line 28
cat -n ./backend/tests/benchmark_urgency_unoptimized.py | sed -n '20,35p'

Repository: RohanExploit/VishwaGuru

Length of output: 761


Remove unnecessary f-string prefix.

Line 28 is a plain string and triggers Ruff F541.

Minimal fix
-    print(f"Benchmark: _calculate_urgency")
+    print("Benchmark: _calculate_urgency")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
print(f"Benchmark: _calculate_urgency")
print("Benchmark: _calculate_urgency")
🧰 Tools
🪛 Ruff (0.15.6)

[error] 28-28: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/benchmark_urgency_unoptimized.py` at line 28, The print
statement uses an unnecessary f-string: replace the expression
print(f"Benchmark: _calculate_urgency") with a plain string (e.g.,
print("Benchmark: _calculate_urgency")) to remove the f-prefix and resolve Ruff
F541; locate the statement in the benchmark test that references
_calculate_urgency and update it accordingly.

print(f"Iterations: {iterations}")
print(f"Total time: {total_time:.4f} seconds")
print(f"Average time per call: {avg_time_ms:.4f} ms")
return avg_time_ms

if __name__ == "__main__":
# Force the engine to clear its cache and simulate the old unoptimized behavior
# where the keywords list is empty and regex.search is always called.
from backend.adaptive_weights import adaptive_weights
priority_engine._regex_cache = []
Copy link
Contributor

@cubic-dev-ai cubic-dev-ai bot Mar 22, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2: The benchmark setup is ineffective: _calculate_urgency immediately overwrites your manually injected unoptimized cache because _last_reload_count is left stale.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At backend/tests/benchmark_urgency_unoptimized.py, line 38:

<comment>The benchmark setup is ineffective: `_calculate_urgency` immediately overwrites your manually injected unoptimized cache because `_last_reload_count` is left stale.</comment>

<file context>
@@ -0,0 +1,58 @@
+    # Force the engine to clear its cache and simulate the old unoptimized behavior
+    # where the keywords list is empty and regex.search is always called.
+    from backend.adaptive_weights import adaptive_weights
+    priority_engine._regex_cache = []
+    for pattern, weight in adaptive_weights.get_urgency_patterns():
+        priority_engine._regex_cache.append((re.compile(pattern), weight, pattern, []))
</file context>
Fix with Cubic

for pattern, weight in adaptive_weights.get_urgency_patterns():
priority_engine._regex_cache.append((re.compile(pattern), weight, pattern, []))

# Warm up
priority_engine._calculate_urgency(sample_text, 10)
Comment on lines +35 to +43
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

“Unoptimized” cache is being overwritten before measurement.

Line [43] calls _calculate_urgency, which can rebuild _regex_cache immediately because _last_reload_count is not synchronized after manual cache seeding (Line [38]-Line [40]). This makes the unoptimized benchmark results unreliable.

Fix to preserve unoptimized cache during benchmark
 if __name__ == "__main__":
@@
     from backend.adaptive_weights import adaptive_weights
     priority_engine._regex_cache = []
     for pattern, weight in adaptive_weights.get_urgency_patterns():
         priority_engine._regex_cache.append((re.compile(pattern), weight, pattern, []))
+    priority_engine._last_reload_count = adaptive_weights.reload_count
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/benchmark_urgency_unoptimized.py` around lines 35 - 43, You
seed priority_engine._regex_cache manually but don't update
priority_engine._last_reload_count, so calling
priority_engine._calculate_urgency() immediately can rebuild the cache and ruin
the unoptimized benchmark; after populating _regex_cache (using
adaptive_weights.get_urgency_patterns()), set priority_engine._last_reload_count
to match adaptive_weights' current reload counter (e.g.,
adaptive_weights._reload_count or via a provided reload-count accessor) so
_calculate_urgency will not consider the cache stale and will measure the
intended unoptimized state.


print("--- Running Unoptimized Benchmark ---")
benchmark()

# Profile to show where time is spent
print("\n--- Running Profiler ---")
pr = cProfile.Profile()
pr.enable()
for _ in range(5000):
priority_engine._calculate_urgency(sample_text, 10)
pr.disable()
s = io.StringIO()
ps = pstats.Stats(pr, stream=s).sort_stats('cumulative')
ps.print_stats(15)
print(s.getvalue())
26 changes: 26 additions & 0 deletions test_grievances_opt.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,26 @@
import time
from backend.database import SessionLocal
from backend.models import Grievance, GrievanceFollower, ClosureConfirmation
from backend.routers.grievances import get_closure_status
from sqlalchemy import func

def bench():
db = SessionLocal()
Copy link
Contributor

@cubic-dev-ai cubic-dev-ai bot Mar 22, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2: Close the SQLAlchemy session (or use a context manager) to prevent connection/resource leaks.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At test_grievances_opt.py, line 8:

<comment>Close the SQLAlchemy session (or use a context manager) to prevent connection/resource leaks.</comment>

<file context>
@@ -0,0 +1,26 @@
+from sqlalchemy import func
+
+def bench():
+    db = SessionLocal()
+    start = time.perf_counter()
+    for _ in range(100):
</file context>
Fix with Cubic

start = time.perf_counter()
for _ in range(100):
total_followers = db.query(func.count(GrievanceFollower.id)).filter(
GrievanceFollower.grievance_id == 1
).scalar()

counts = db.query(
ClosureConfirmation.confirmation_type,
func.count(ClosureConfirmation.id)
).filter(ClosureConfirmation.grievance_id == 1).group_by(ClosureConfirmation.confirmation_type).all()
print(f"Old approach: {time.perf_counter() - start}")

start = time.perf_counter()
for _ in range(100):
# Instead of two queries, we could potentially do this in one, or just measure DB hits
pass
Comment on lines +22 to +24
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

The “new approach” benchmark does no work.

Line [22]-Line [24] uses pass, so this timing is not comparable to the old path and the result is misleading.

Use the intended code path in the loop
     start = time.perf_counter()
     for _ in range(100):
-         # Instead of two queries, we could potentially do this in one, or just measure DB hits
-         pass
+        get_closure_status(grievance_id=1, db=db)
+    print(f"New approach: {time.perf_counter() - start}")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test_grievances_opt.py` around lines 22 - 24, The benchmark loop in
test_grievances_opt.py (the for _ in range(100) block) contains only pass, so
the "new approach" isn't exercised; replace pass with the actual new-path call
(or the same query sequence used in the old benchmark) so each iteration invokes
the function/method under test (e.g., call the new implementation function or
the same query logic used by the old path), using the same inputs and DB setup
so the timings are comparable and DB hits can be measured.


bench()
Comment on lines +9 to +26
Copy link

Copilot AI Mar 22, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

bench() is executed at import time (bench() at the bottom of the module). Because the file name matches pytest’s default discovery pattern (test_*.py), pytest collection will import this module and run the benchmark, causing DB access/side effects and potentially hanging/failing the test suite. Move this benchmark to a non-test location/name (e.g., scripts/ or backend/tests/benchmark_*.py) and guard execution with if __name__ == "__main__":; also ensure the SQLAlchemy session is closed.

Suggested change
start = time.perf_counter()
for _ in range(100):
total_followers = db.query(func.count(GrievanceFollower.id)).filter(
GrievanceFollower.grievance_id == 1
).scalar()
counts = db.query(
ClosureConfirmation.confirmation_type,
func.count(ClosureConfirmation.id)
).filter(ClosureConfirmation.grievance_id == 1).group_by(ClosureConfirmation.confirmation_type).all()
print(f"Old approach: {time.perf_counter() - start}")
start = time.perf_counter()
for _ in range(100):
# Instead of two queries, we could potentially do this in one, or just measure DB hits
pass
bench()
try:
start = time.perf_counter()
for _ in range(100):
total_followers = db.query(func.count(GrievanceFollower.id)).filter(
GrievanceFollower.grievance_id == 1
).scalar()
counts = db.query(
ClosureConfirmation.confirmation_type,
func.count(ClosureConfirmation.id)
).filter(ClosureConfirmation.grievance_id == 1).group_by(ClosureConfirmation.confirmation_type).all()
print(f"Old approach: {time.perf_counter() - start}")
start = time.perf_counter()
for _ in range(100):
# Instead of two queries, we could potentially do this in one, or just measure DB hits
pass
finally:
db.close()
if __name__ == "__main__":
bench()

Copilot uses AI. Check for mistakes.
Copy link
Contributor

@cubic-dev-ai cubic-dev-ai bot Mar 22, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1: Avoid executing benchmark code at module import time in a test_*.py file; it will run during pytest collection.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At test_grievances_opt.py, line 26:

<comment>Avoid executing benchmark code at module import time in a `test_*.py` file; it will run during pytest collection.</comment>

<file context>
@@ -0,0 +1,26 @@
+         # Instead of two queries, we could potentially do this in one, or just measure DB hits
+         pass
+
+bench()
</file context>
Fix with Cubic

Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Avoid executing benchmarks at import time in a test module.

Line [26] runs bench() immediately on import. In a test_*.py file, this can execute during pytest collection and cause unwanted DB/network/runtime side effects.

Guard execution
-bench()
+if __name__ == "__main__":
+    bench()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
bench()
if __name__ == "__main__":
bench()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test_grievances_opt.py` at line 26, The test module currently calls bench()
at import time which triggers side effects during pytest collection; change this
so bench() is only executed intentionally — either remove the top-level call and
invoke bench() from a dedicated test function or script, or wrap the call in an
if __name__ == "__main__": guard, or convert it into a pytest test/fixture
(e.g., def test_bench(): bench()) so execution only happens when explicitly run;
locate the standalone bench() invocation and apply one of these guards or moves
to prevent import-time execution.

Loading