Dev#110

Merged
AB-Law merged 43 commits into main from dev
Apr 18, 2026

Conversation

@AB-Law
Owner

@AB-Law AB-Law commented Apr 18, 2026

Summary

Changes

Testing

  • dotnet test
  • pytest -m unit
  • npm test
  • npm run test:e2e
  • Not applicable

Screenshots

Risks / Notes

Checklist

  • PR targets dev (or main for release PRs)
  • Branch name follows project conventions (feature/<name> or fix/<name>)
  • Relevant tests pass for the areas changed
  • No local.settings.json files are committed
  • No credentials, keys, or secrets were added to code or comments
  • New environment variables are documented in the relevant local.settings.json.example

Summary by CodeRabbit

  • Refactor
    • Optimized database querying and data partitioning strategy for the scraper data source system to improve performance.

AB-Law and others added 30 commits March 15, 2026 21:08
- Rewrite QUICKSTART.md with Azurite + Cosmos emulator setup, container
  creation steps, port reference, and 5-tab run guide
- Add CONTRIBUTING.md: branch strategy, tech stack, per-layer conventions,
  PR checklist, and secrets policy
- Add local.settings.json.example for both PluckIt.Functions and
  PluckIt.Processor pre-filled with emulator connection strings and
  placeholder values for Azure OpenAI credentials

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…in permissions

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* feat: integrate ESLint for improved code quality

- Added ESLint configuration to the project for TypeScript and Angular files.
- Updated `angular.json` to include linting options.
- Installed necessary ESLint packages for Angular and TypeScript support.
- Refactored code to replace `any` with `unknown` in several service files for better type safety.
- Enhanced unit tests to align with the new type definitions and ensure consistency.

* refactor: remove unused page

* fix: correct Cosmos emulator key (invalid base64 Tg== -> g==)

* fix: update Cosmos DB keys and enhance Azurite compatibility for local development

* fix: enhance UUID generation and improve error handling in BlobSasService and setup-local-cosmos.py

* chore: update CI/CD workflows to include 'dev' branch for push and pull request triggers

* fix: improve UUID generation logic in DashboardComponent and update CI to disable watch mode for tests
* feat: enhance wardrobe loading logic
- Updated the `_load_user_wardrobe` function to prefetch item IDs and limit the number of items returned based on wear count.
- Enhanced unit tests for `_load_user_wardrobe` to verify correct behavior and query execution.

* test: enhance user wardrobe loading tests to verify wearCount conditions
- Added assertions to check for the presence of "IS_DEFINED(c.wearCount)" and "c.wearCount > 0" in the SQL query for loading user wardrobe items.
* feat: Enhance deduplication logic in RunDeduplicator by implementing prefix-based pHash storage. Introduced methods for registering pHashes by prefix and identifying candidate buckets for Hamming distance checks. Updated tests to validate detection of duplicates with nearby prefix variations.

* refactor: Replace candidate bucket method with a cached version for improved performance in deduplication logic. The new method optimizes pHash prefix bucket retrieval by caching results, enhancing efficiency during duplicate checks.

* refactor: Simplify candidate bucket retrieval by removing max value check in deduplicator. Added new tests to validate deduplication behavior at threshold boundaries and just under threshold conditions.
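The prefix-bucketed deduplication described in these commits can be sketched roughly as follows. This is an illustrative reconstruction, not the project's actual `RunDeduplicator`; the prefix width and Hamming threshold are assumptions.

```python
_PREFIX_BITS = 16        # assumed: bucket by the top 16 bits of a 64-bit pHash
_HAMMING_THRESHOLD = 6   # assumed: max bit distance for two images to match

class RunDeduplicator:
    def __init__(self):
        self._buckets: dict[int, list[int]] = {}

    @staticmethod
    def _prefix(phash: int) -> int:
        return phash >> (64 - _PREFIX_BITS)

    def _candidate_hashes(self, phash: int):
        # Nearby prefixes can still hold near-duplicates (a flipped high bit
        # shifts the bucket), so scan prefix - 1, prefix, and prefix + 1.
        p = self._prefix(phash)
        for candidate in (p - 1, p, p + 1):
            yield from self._buckets.get(candidate, [])

    def is_duplicate(self, phash: int) -> bool:
        # Hamming distance check only against the candidate buckets,
        # instead of against every registered pHash.
        for seen in self._candidate_hashes(phash):
            if bin(phash ^ seen).count("1") <= _HAMMING_THRESHOLD:
                return True
        return False

    def register(self, phash: int) -> None:
        self._buckets.setdefault(self._prefix(phash), []).append(phash)
```

Bucketing trades a little memory for skipping the full pairwise scan; the ±1 prefix scan is what the "nearby prefix variations" tests above would exercise.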
* Refactor wardrobe item loading and enhance query limits

- Updated `_load_user_wardrobe` to first fetch item IDs and then retrieve top scored items with a limit of 50.
- Introduced a constant `_WARDROBE_SCAN_LIMIT` to cap the number of items fetched in wardrobe queries to 500.
- Enhanced `_load_wardrobe_items` in both `gaps.py` and `wear_patterns.py` to include the new limit and ensure recent wear events are capped.
- Added unit tests to verify the new loading logic and query limits for wardrobe items.
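The two-phase load these commits describe might look like the sketch below. The `query` callable stands in for `container.query_items`, and the SQL shapes are assumptions based on the commit messages; only the two limits come from the text above.

```python
_WARDROBE_SCAN_LIMIT = 500  # cap on ids scanned per user (from the commits)
_TOP_ITEMS_LIMIT = 50       # cap on scored items returned (from the commits)

def load_user_wardrobe(query, user_id: str) -> list[dict]:
    """Phase 1: a cheap id-only projection, bounded by the scan limit;
    Phase 2: fetch full documents only for the top-scored subset."""
    ids = [row["id"] for row in query(
        f"SELECT TOP {_WARDROBE_SCAN_LIMIT} c.id FROM c "
        "WHERE c.userId = @uid AND IS_DEFINED(c.wearCount) AND c.wearCount > 0",
        {"@uid": user_id},
    )]
    if not ids:
        return []
    return list(query(
        f"SELECT TOP {_TOP_ITEMS_LIMIT} * FROM c "
        "WHERE ARRAY_CONTAINS(@ids, c.id) ORDER BY c.wearCount DESC",
        {"@ids": ids},
    ))
```

Fetching ids first keeps the expensive full-document read bounded even when a user has far more items than the display limit.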

* refactor: Improve the item-loading query in `_load_user_wardrobe`

* fix: remove duplicate sorting in `get_wear_patterns` function

* test: Update assertions in `test_load_user_wardrobe_fetches_ids_then_top_scored_items` to validate query parameters and limits

- Refactored test to extract query parameters from the second call of `sync_wardrobe.query_items`.
- Updated assertions to check for the correct limit parameter in the SQL query.
- Introduced a new Cosmos DB container for image cleanup indexing, enhancing the wardrobe management system.
- Updated the `WardrobeRepository` to support syncing and deleting entries in the image cleanup index.
- Modified `CleanupFunctions` to utilize the new index for identifying known item IDs.
- Enhanced local settings and infrastructure scripts to accommodate the new container.
- Added unit tests to ensure correct behavior of the image cleanup index operations.
- Removed the `_normalise_query_text` and `_get_cache_scope` functions to simplify the codebase.
- Updated `_expand_query_cached` to use a shared cache key based on normalized terms, improving cache efficiency across users and sessions.
- Adjusted unit tests to reflect changes in caching behavior and ensure correct functionality with normalized queries.
- Added in-memory caching to the GenerateSasUrl method to improve performance by returning cached SAS URLs for repeated requests within a validity window.
- Introduced MemoryCache for managing cached SAS URLs and defined cache skew and minimum cache duration constants.
- Updated unit tests to verify caching behavior for allowed containers and validity windows, ensuring correct functionality under various scenarios.
- Included Microsoft.Extensions.Caching.Memory package for caching support.
- Implemented Redis caching for SAS URL generation in BlobSasService to enhance performance and scalability.
- Updated local.settings.json.example and QUICKSTART.md to include configuration for Redis cache.
- Modified Program.cs to conditionally use Redis or in-memory caching based on configuration settings.
- Enhanced unit tests to validate caching behavior with Redis and ensure correct functionality.
- Updated Terraform scripts to support new SAS cache settings.
- Added `enable_cross_partition_query=True` to various query items in `function_app.py` and `scraper_runner.py` to enhance data retrieval across partitions.
- Updated relevant functions to ensure efficient querying of active and global scraper sources, improving overall performance and reliability.
…seconds for improved queue processing efficiency
…inclusion

- Added a new robots.txt file to allow all user agents and specify the sitemap location.
- Updated staticwebapp.config.json to exclude robots.txt and well-known paths from navigation fallback.
Limit wardrobe and wear-event scans for vault insights and extend cache TTL to reduce RU usage while preserving bounded analytics freshness.
Prevent unbounded profile reads during weekly digest startup by switching
from list(read_all_items()) to a paginated query on user profile id only.
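The bounded startup read might be sketched as below: instead of `list(read_all_items())` materialising every full profile document, an id-only query is streamed page by page. `container` is any object exposing a Cosmos-style `query_items`; the page size is an assumption.

```python
PAGE_SIZE = 100  # assumed page size, not the project's actual value

def iter_profile_ids(container, page_size: int = PAGE_SIZE):
    # max_item_count bounds each service round-trip; iterating the result
    # keeps only one page in memory at a time instead of the whole user base.
    for row in container.query_items(
        query="SELECT VALUE c.id FROM c",
        max_item_count=page_size,
        enable_cross_partition_query=True,
    ):
        yield row
```

The digest startup can then fetch each profile lazily per id rather than holding every document up front.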
AB-Law and others added 11 commits March 16, 2026 22:43
Canonicalized mood names are now re-embedded in one Azure OpenAI batch call
instead of one request per item, with a safe fallback to existing embeddings
when batch re-embedding fails.
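The batch-with-fallback pattern described here can be sketched as follows. Names and the `embed_batch` callable are assumptions (the real code calls Azure OpenAI); only the one-request-per-batch idea and the fallback behavior come from the text above.

```python
def reembed_moods(embed_batch, moods: list[dict]) -> list[dict]:
    names = [m["canonicalName"] for m in moods]
    try:
        # One embeddings request for the whole batch instead of one per item.
        vectors = embed_batch(names)
    except Exception:
        # Safe fallback: each item keeps the embedding it already had.
        return moods
    return [{**m, "embedding": v} for m, v in zip(moods, vectors)]
```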
…mprove error logging in digest agent. Added partition key to user data fetch in vault insights. Updated tests for mood processor and vault insights to reflect changes.
…digest_agent (#103)

- Added support for extracting trace ID from headers in `_metadata_request_id_from_headers`.
- Updated `_set_trace_identifier_kwargs` to accept additional metadata and conditionally include it in the kwargs.
- Modified `_build_langfuse_callbacks` to accept metadata for improved context in callback generation.
- Enhanced `run_digest_now` to forward trace ID from the request headers.
- Introduced `_build_digest_langfuse_callbacks` in `digest_agent` for consistent metadata handling.
- Updated tests to verify trace ID propagation and metadata inclusion in LLM calls.
Propagate load errors from run_global_scrapers so scraper jobs fail visibly and can be retried.
* Fix: reuse digest LLM instance across batch runs

Digest generation now keeps a single Azure OpenAI client in-process to avoid
creating a fresh client per user while preserving prompt and save behavior.

* Fix test mock for digest llm invoke config path
…n_app.py and update ScraperSources partition key in setup-local-cosmos.py

- Removed `enable_cross_partition_query=True` from several query items in `function_app.py` to streamline query execution.
- Updated the partition key for the `ScraperSources` container in `setup-local-cosmos.py` from `/id` to `/sourceType` for improved data organization.
Copilot AI review requested due to automatic review settings April 18, 2026 18:35
@coderabbitai

coderabbitai bot commented Apr 18, 2026

Warning

Rate limit exceeded

@AB-Law has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 53 minutes and 58 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 53 minutes and 58 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5a36c4b5-164d-49c3-b6e5-e54335553e3a

📥 Commits

Reviewing files that changed from the base of the PR and between f5eb8e4 and 2fb2b2e.

📒 Files selected for processing (2)
  • PluckIt.Processor/function_app.py
  • PluckIt.Processor/tests/unit/test_auth_contract.py
📝 Walkthrough


Updates Cosmos DB partition key strategy from "/id" to "/sourceType" in the ScraperSources container, while removing explicit cross-partition query enablement flags from four query calls in function_app.py and addressing a formatting issue in scraper_runner.py.

Changes

Cohort / File(s) | Summary

  • Cross-Partition Query Argument Removal — PluckIt.Processor/function_app.py: Removed the enable_cross_partition_query=True parameter from query calls in list_scraper_sources, _get_user_subscriptions, acquire_scraper_lease, and _get_source_and_verify_sub (4 instances total).
  • Partition Key Configuration — scripts/setup-local-cosmos.py: Changed the ScraperSources container partition key from "/id" to "/sourceType" in the CONTAINERS definition.
  • Formatting Issue — PluckIt.Processor/agents/scraper_runner.py: Whitespace change to the enable_cross_partition_query=True argument in the sources_container.query_items() call within run_global_scrapers(), potentially affecting parameter binding.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 A shuffle through partitions, /sourceType now leads the way,
Cross-partition flags removed, simplified the query's day,
Small formatting tweak in the runner's domain,
Cosmos DB shifts its partition reign! 🔑✨

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Description check — ⚠️ Warning: The PR description contains only the empty template structure, with no actual content in the Summary or Changes sections, making it impossible to understand the intent or scope of the changes. Resolution: fill in the Summary section with what changed and why, and the Changes section with a bullet-point list of the actual modifications made.
  • Title check — ❓ Inconclusive: The title "Dev" is vague and does not meaningfully describe the actual changes in the pull request, which involve Cosmos DB partition key updates, query argument removals, and configuration changes. Resolution: use a more descriptive title that captures the primary change, such as "Update Cosmos DB partition key and query configuration".
✅ Passed checks (1 passed)
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which meets the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


- Updated `test_refresh_session_rotates_tokens_and_replaces_previous` and `test_refresh_session_rejects_expired_refresh_token` to include a mock for the current UTC time, improving the accuracy of token expiration handling in tests.
- Refactored the context manager for patching to streamline the mocking process.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `PluckIt.Processor/agents/scraper_runner.py`:
- Around line 268-271: The enable_cross_partition_query argument in the call to
sources_container.query_items is misaligned; fix the indentation so
enable_cross_partition_query=True lines up with the other keyword args (e.g.,
align it with query=query,) inside the sources =
list(sources_container.query_items(...)) call to avoid visual confusion and
formatter churn; locate the query_items invocation in scraper_runner.py and
adjust the indentation for enable_cross_partition_query accordingly.

In `scripts/setup-local-cosmos.py`:
- Line 44: The change of the ScraperSources partition key from "/id" to
"/sourceType" in the local setup will diverge local from deployed containers
because Cosmos DB partition keys are immutable; ensure you add a migration plan
instead of only changing the local script: document and implement creating a new
ScraperSources container partitioned on "/sourceType", backfilling data from the
old container, and performing a safe swap (or IaC update) for dev/staging/prod.
Also verify the query behavior in function_app.py — the synchronous sync call
(sync method around line ~1689) must keep enable_cross_partition_query=True if
it relies on cross-partition scans, while the async calls (async queries around
lines ~2815, ~2959, ~3057) require no flag because the async client performs
cross-partition queries automatically; update docs/comments to reflect this and
ensure tests or runbooks show how to migrate production data.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0e3ed3f7-f9f8-4620-88c3-2c516d56a742

📥 Commits

Reviewing files that changed from the base of the PR and between 854d10f and f5eb8e4.

📒 Files selected for processing (3)
  • PluckIt.Processor/agents/scraper_runner.py
  • PluckIt.Processor/function_app.py
  • scripts/setup-local-cosmos.py
💤 Files with no reviewable changes (1)
  • PluckIt.Processor/function_app.py

Comment thread scripts/setup-local-cosmos.py
("WearEvents", _PK_USER),
("StylingActivity", _PK_USER),
- ("ScraperSources", "/id"),
+ ("ScraperSources", "/sourceType"),

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Inspect every query_items call against ScraperSources-like usage to confirm they either
# filter by sourceType or explicitly enable cross-partition queries.
rg -nP -C5 'query_items\s*\(' --type=py
# Specifically look at function_app.py calls and check WHERE clauses / flag presence.
fd -a 'function_app.py' | xargs -I{} rg -nP -C10 'ScraperSources|sources_container|query_items' {}
# Check the pinned azure-cosmos version — behavior of enable_cross_partition_query default
# differs across releases.
fd -a 'requirements*.txt|pyproject.toml|poetry.lock' | xargs rg -nP 'azure-cosmos'

Repository: AB-Law/Pluck-It

Length of output: 50371


🌐 Web query:

azure-cosmos python SDK enable_cross_partition_query default value deprecated

💡 Result:

In the Azure Cosmos DB Python SDK (azure-cosmos), the enable_cross_partition_query parameter for query_items (sync client) has a default value of None. When None, the SDK determines cross-partition behavior based on whether a partition key is provided: no partition key means cross-partition query (equivalent to True); partition key provided means single-partition (equivalent to False). Explicitly setting it to False when a partition key is provided can cause issues like errors with continuation tokens, as the service sees the flag and treats it as cross-partition regardless. It is not deprecated in the sync client; examples and docs still use it (e.g., set to True for cross-partition without partition key). For the async client (azure.cosmos.aio), it is unsupported/not needed: cross-partition happens automatically without partition key, and passing the parameter raises errors (as of v4.4+; previously undocumented behavior). No changelog entries indicate deprecation.
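The behavior summarised above can be restated as a small decision helper for clarity (a paraphrase of the documented SDK behavior, not SDK code):

```python
def effective_cross_partition(is_async: bool,
                              partition_key=None,
                              enable_cross_partition_query=None) -> bool:
    """Return whether a query_items call runs cross-partition."""
    if is_async:
        # The async client (azure.cosmos.aio) rejects the flag and infers
        # cross-partition behavior from partition-key presence alone.
        if enable_cross_partition_query is not None:
            raise TypeError("async query_items does not accept this flag")
        return partition_key is None
    if enable_cross_partition_query is None:
        # Sync default (None): no partition key means cross-partition.
        return partition_key is None
    return bool(enable_cross_partition_query)
```

This is why removing the flag is harmless on the async calls but changes behavior on the sync one.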

🏁 Script executed:

# Confirm container assignment around the problematic lines
rg -n -B5 'query="SELECT.*c.isActive = true"' PluckIt.Processor/function_app.py | head -30
rg -n -B5 'query="SELECT.*c.id = @id"' PluckIt.Processor/function_app.py | grep -A5 -B5 'get_scraper_sources_container'
# Check SDK version
fd -a 'requirements*.txt' | xargs cat 2>/dev/null

Repository: AB-Law/Pluck-It

Length of output: 1632


🏁 Script executed:

# Get full requirements files to find azure-cosmos version
find PluckIt.Processor -name 'requirements*.txt' -o -name 'pyproject.toml' | xargs -I {} sh -c 'echo "=== {} ===" && cat {}'

Repository: AB-Law/Pluck-It

Length of output: 2843


Partition key change — production migration is critical, but downstream query risk is lower than stated.

Two concerns with flipping ScraperSources from /id to /sourceType:

  1. Immutable partition keys: Cosmos DB does not allow partition-key changes on existing containers. This script recreates the local emulator cleanly, but any deployed ScraperSources container (dev/staging/prod) partitioned on /id will need an explicit migration (create a new container on /sourceType, backfill, swap). If that migration/IaC change isn't part of this PR, shipping only the local-setup change will silently diverge local from deployed environments.

  2. Downstream queries: The query risk is less severe than initially assessed. The sync call in function_app.py line 1689 querying ScraperSources retains enable_cross_partition_query=True. The three other ScraperSources queries in function_app.py (lines 2815, 2959, 3057) are async calls without the flag — this is correct and expected, as the async client automatically performs cross-partition queries when no partition_key is provided (the flag is not supported/needed for async). They will query across all sourceType partitions without issue.

Verify production migration plan for ScraperSources container recreation.
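A minimal sketch of the backfill step that migration would need, assuming stand-in container clients (the function name and the "copy then swap" wiring are assumptions; only the immutable-key constraint comes from the review):

```python
def backfill_scraper_sources(old_container, new_container) -> int:
    """Copy every document from the /id-partitioned container into a new
    container partitioned on /sourceType.

    upsert_item is idempotent on document id, so the backfill is safe to
    re-run if interrupted. The final swap (repointing app config or an IaC
    change) must happen separately: Cosmos has no container rename.
    """
    copied = 0
    for doc in old_container.query_items(
        query="SELECT * FROM c", enable_cross_partition_query=True
    ):
        new_container.upsert_item(doc)
        copied += 1
    return copied
```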

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `scripts/setup-local-cosmos.py` at line 44, the change of the ScraperSources
partition key from "/id" to "/sourceType" in the local setup will diverge local
from deployed containers because Cosmos DB partition keys are immutable; ensure
you add a migration plan instead of only changing the local script: document and
implement creating a new ScraperSources container partitioned on "/sourceType",
backfilling data from the old container, and performing a safe swap (or IaC
update) for dev/staging/prod. Also verify the query behavior in function_app.py
— the synchronous sync call (sync method around line ~1689) must keep
enable_cross_partition_query=True if it relies on cross-partition scans, while
the async calls (async queries around lines ~2815, ~2959, ~3057) require no flag
because the async client performs cross-partition queries automatically; update
docs/comments to reflect this and ensure tests or runbooks show how to migrate
production data.


Copilot AI left a comment


Pull request overview

Updates local Cosmos container configuration and adjusts ScraperSources query behavior in the Processor service, likely to align local dev setup with the provisioned (Terraform) partitioning scheme.

Changes:

  • Update local Cosmos emulator container definition for ScraperSources to use partition key /sourceType.
  • Modify several Cosmos query_items calls in function_app.py by removing enable_cross_partition_query.
  • Minor formatting change around enable_cross_partition_query in scraper_runner.py.

Reviewed changes

Copilot reviewed 3 out of 4 changed files in this pull request and generated 6 comments.

File | Description

  • scripts/setup-local-cosmos.py — Changes the local Cosmos container PK for ScraperSources to /sourceType to match infrastructure.
  • PluckIt.Processor/function_app.py — Removes cross-partition query enabling on several ScraperSources/subscriptions queries.
  • PluckIt.Processor/agents/scraper_runner.py — Adjusts formatting of a cross-partition query argument when loading global scraper sources.


Comment on lines 43 to 45
("StylingActivity", _PK_USER),
- ("ScraperSources", "/id"),
+ ("ScraperSources", "/sourceType"),
("ScrapedItems", "/sourceId"),

Copilot AI Apr 18, 2026

Changing ScraperSources partition key to /sourceType won’t take effect for developers who already have the container created locally: on 409 this script just prints “already exists” and does not recreate the container. Since Cosmos partition keys are immutable, consider detecting an existing container with a different partition key and either (a) warning clearly that the local DB/container must be deleted/recreated, or (b) offering an opt-in flag to drop and recreate the container.

try:
sources = []
async for doc in sources_container.query_items(
query="SELECT c.id, c.name, c.sourceType, c.isGlobal, c.lastScrapedAt, c.config, c.leaseExpiresAt FROM c WHERE c.isActive = true",

Copilot AI Apr 18, 2026

This query spans all active ScraperSources, but ScraperSources is partitioned by /sourceType (see infra). Without either a specific partition_key or enable_cross_partition_query=True, the Cosmos SDK will reject the query at runtime. Re-add enable_cross_partition_query=True for this query (or refactor to query per-partition if you want to avoid cross-partition reads).

Suggested change
- query="SELECT c.id, c.name, c.sourceType, c.isGlobal, c.lastScrapedAt, c.config, c.leaseExpiresAt FROM c WHERE c.isActive = true",
+ query="SELECT c.id, c.name, c.sourceType, c.isGlobal, c.lastScrapedAt, c.config, c.leaseExpiresAt FROM c WHERE c.isActive = true",
+ enable_cross_partition_query=True,

@@ -2836,7 +2835,6 @@
async for sub in container.query_items(
query="SELECT c.sourceId FROM c WHERE c.userId = @uid AND c.isActive = true",
parameters=[{"name": _DB_USER_ID_PARAM, "value": user_id}],

Copilot AI Apr 18, 2026

UserSourceSubscriptions is partitioned by /userId. This query filters by userId but does not pass partition_key=user_id, and enable_cross_partition_query was removed. That can fail at runtime and is less efficient than a single-partition query. Pass partition_key=user_id here (preferred), or re-enable enable_cross_partition_query=True if you intend a cross-partition query.

Suggested change
- parameters=[{"name": _DB_USER_ID_PARAM, "value": user_id}],
+ parameters=[{"name": _DB_USER_ID_PARAM, "value": user_id}],
+ partition_key=user_id,

# Use a cross-partition query to find the source.
items = [i async for i in container.query_items(
query="SELECT * FROM c WHERE c.id = @id",
parameters=[{"name": "@id", "value": source_id}],

Copilot AI Apr 18, 2026

This endpoint queries ScraperSources by id only, and the code comment notes the container is partitioned by /sourceType. Removing enable_cross_partition_query=True will cause the query to fail unless a partition_key is provided (which it isn’t). Re-add enable_cross_partition_query=True (or change the API to also supply sourceType so you can do a point read).

Suggested change
- parameters=[{"name": "@id", "value": source_id}],
+ parameters=[{"name": "@id", "value": source_id}],
+ enable_cross_partition_query=True,

try:
items = [i async for i in container.query_items(
query="SELECT * FROM c WHERE c.id = @id",
parameters=[{"name": "@id", "value": source_id}],

Copilot AI Apr 18, 2026

Same issue as acquire_scraper_lease: ScraperSources is partitioned by /sourceType, and this query only provides id. Without enable_cross_partition_query=True (or a partition_key), this will fail at runtime. Re-add enable_cross_partition_query=True here, or refactor to carry sourceType alongside source_id so you can do a point read.

Suggested change
- parameters=[{"name": "@id", "value": source_id}],
+ parameters=[{"name": "@id", "value": source_id}],
+ enable_cross_partition_query=True,

sources = list(sources_container.query_items(
query=query,
enable_cross_partition_query=True,
enable_cross_partition_query=True,

Copilot AI Apr 18, 2026

The newly formatted enable_cross_partition_query argument is misindented relative to the other keyword args in this call, which makes the block harder to read and easy to mis-edit later. Align the indentation of enable_cross_partition_query=True with query=query (and keep the closing parentheses aligned) for consistency with the rest of the file.

Suggested change
- enable_cross_partition_query=True,
- enable_cross_partition_query=True,
+ enable_cross_partition_query=True,

- Added `enable_cross_partition_query=True` back to several query items in `function_app.py` to improve data retrieval across partitions.
- Updated the partition key for user subscriptions to enhance query efficiency and ensure accurate data fetching.
@AB-Law AB-Law merged commit 35d85b1 into main Apr 18, 2026
14 checks passed