feat: sheets backfill frontend #8651
Conversation
…Spaces types

This update introduces the oauthStale field to the IntegrationGoogle and IntegrationGrowthSpaces types in the GraphQL schema, indicating whether OAuth credentials are stale. Additionally, the field is integrated into the relevant resolvers and service logic to manage OAuth token refresh states effectively.
This update introduces the StaleOAuthError class to manage scenarios where Google integration OAuth credentials are stale, requiring re-authorization. The error is integrated into the getTeamGoogleAccessToken and getIntegrationGoogleAccessToken functions, preventing token refresh attempts when the integration is marked as stale. Additionally, the appendEventToGoogleSheets function is updated to skip syncing silently when a StaleOAuthError is encountered.
…stale-google-integration
…logic

This update removes the StaleOAuthError class and its associated handling from the Google authentication functions. The logic for marking integrations as stale and throwing errors has been eliminated, simplifying the getTeamGoogleAccessToken and getIntegrationGoogleAccessToken functions. The appendEventToGoogleSheets function has also been adjusted to directly retrieve the access token without handling stale integration scenarios.
…upport

This update introduces the exportOrder property to journey blocks and modifies the export logic to use blockId for data mapping. The changes ensure that data is aligned correctly in exports, preserving the order of columns based on exportOrder. Additionally, the implementation includes updates to the CSV stringifier to handle blockId-based column matching, improving the overall export process for Google Sheets and CSV files.
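For illustration only, a minimal sketch of what exportOrder-based column alignment can look like; the `ExportColumn` shape and the `alignRow` helper are assumptions drawn from this description, not the actual implementation:

```typescript
// Hypothetical column shape: key is `${blockId}-${label}`, exportOrder is
// the persisted position (null for blocks that have never been exported).
interface ExportColumn {
  key: string
  blockId: string
  exportOrder: number | null
}

// Sort columns by exportOrder (unordered blocks go last), then align each
// row's values to that column order via the column key.
function alignRow(
  columns: ExportColumn[],
  row: Record<string, string>
): string[] {
  const ordered = [...columns].sort(
    (a, b) =>
      (a.exportOrder ?? Number.MAX_SAFE_INTEGER) -
      (b.exportOrder ?? Number.MAX_SAFE_INTEGER)
  )
  return ordered.map((col) => row[col.key] ?? '')
}
```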
…multiple schemas

This commit removes the oauthStale field from the IntegrationGoogle type in various GraphQL schema files, including api-gateway, api-journeys, and api-journeys-modern. The change ensures consistency across the integration types and simplifies the integration model.
This commit enhances the journey visitor export functionality by building a lookup map from the row data, ensuring that values are aligned correctly with the final header order. This change simplifies the process of matching column values and improves the overall export process for Google Sheets.
This commit introduces error logging for the block export order updates in the journeyVisitorExportToGoogleSheet mutation. If an error occurs during the update process, it logs the error details without rethrowing, ensuring that the mutation can still return success after the sheet export. This enhances the robustness of the export functionality.
…tion

This update introduces a job queue for syncing events to Google Sheets, ensuring that events are processed sequentially to avoid race conditions. The appendEventToGoogleSheets function has been modified to fetch sync configurations upfront and add jobs to the queue with retry logic. Additionally, the server's run function now supports concurrency settings, specifically set to 1 for the Google Sheets sync worker to maintain data integrity during processing.
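As a rough sketch of the wiring this describes (the queue name, payload shape, and Redis settings are assumptions; the concurrency-of-1 and retry behavior come from the commit message):

```typescript
import { Queue, Worker } from 'bullmq'

const connection = { host: process.env.REDIS_HOST ?? 'localhost', port: 6379 }
const queue = new Queue('googleSheetsSync', { connection })

// Producer: enqueue an append job with retry logic.
export async function enqueueAppend(
  journeyId: string,
  teamId: string,
  row: (string | number | null)[]
): Promise<void> {
  await queue.add(
    'append',
    { type: 'append', journeyId, teamId, row },
    { attempts: 3, backoff: { type: 'exponential', delay: 5000 } }
  )
}

// Worker: concurrency 1 means jobs are processed one at a time, so rows are
// appended sequentially and race conditions between events are avoided.
export const worker = new Worker(
  'googleSheetsSync',
  async (job) => {
    // ...append job.data.row to each synced spreadsheet
  },
  { connection, concurrency: 1 }
)
```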
…' into 00-00-MA-refactor-sheets-bull-queue
This commit refactors the event utility tests to incorporate a new Google Sheets sync queue. It replaces the previous email queue mock with a dedicated Google Sheets sync queue and updates the test cases to ensure proper setup and assertions for both email and Google Sheets sync functionalities. Additionally, it introduces checks for null queue scenarios to enhance robustness in the testing framework.
…leSheet mutation This commit refactors the column mapping logic in the journeyVisitorExportToGoogleSheet mutation by removing the unnecessary index parameter from the map function. This change enhances code readability and maintains the functionality of identifying column positions for export.
…com/JesusFilm/core into 00-00-MA-refactor-sheets-bull-queue
This commit introduces a new clearSheet function that allows for clearing all data from a specified Google Sheets tab while preserving the sheet itself. This functionality is particularly useful for backfill operations that require replacing existing content. Additionally, the Google Sheets sync job types have been expanded to include backfill and create operations, enhancing the overall sync capabilities.
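A minimal sketch of such a clear operation against the Sheets REST API (the function name and signature here are illustrative, not the actual clearSheet implementation):

```typescript
// Clears all values from one tab while preserving the sheet itself, using
// the spreadsheets.values.clear endpoint.
async function clearSheet(
  accessToken: string,
  spreadsheetId: string,
  sheetName: string
): Promise<void> {
  const range = encodeURIComponent(sheetName)
  const response = await fetch(
    `https://sheets.googleapis.com/v4/spreadsheets/${spreadsheetId}/values/${range}:clear`,
    {
      method: 'POST',
      headers: { Authorization: `Bearer ${accessToken}` }
    }
  )
  if (!response.ok) {
    throw new Error(`Failed to clear sheet: ${response.status}`)
  }
}
```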
…mezone and simplify event handling

This commit modifies the journeyVisitorExportToGoogleSheet mutation to accept a timezone parameter, allowing for better handling of date and time during exports. Additionally, it removes unused filter and select arguments, streamlining the mutation's functionality. The test cases have been updated accordingly to reflect these changes.
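A timezone-aware date formatter along these lines would support this (a sketch of the general technique; the project's actual helper, formatDateYmdInTimeZone, is not shown in this PR excerpt):

```typescript
// Formats a Date as YYYY-MM-DD in a given IANA timezone. The en-CA locale
// is used because it renders dates in YYYY-MM-DD order.
function formatDateYmd(date: Date, timeZone: string): string {
  return new Intl.DateTimeFormat('en-CA', {
    timeZone,
    year: 'numeric',
    month: '2-digit',
    day: '2-digit'
  }).format(date)
}

// e.g. formatDateYmd(new Date(), 'Pacific/Auckland')
```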
This commit introduces a new mutation, googleSheetsSyncBackfill, which triggers a backfill of the Google Sheets sync. It clears existing data and re-exports all events, enhancing the data management capabilities. Additionally, the journeyVisitorExportToGoogleSheet mutation has been updated to remove unused filter and select arguments, streamlining its functionality. The GoogleSheetsSyncDialog component has been modified to include a backfill action, allowing users to initiate this process directly from the UI.
Walkthrough

Adds asynchronous Google Sheets sync: new backfill mutation, enqueues export/create/append jobs to a BullMQ queue, implements worker services (create/backfill/append), updates export ordering and CSV alignment, shifts runtime export logic to workers, and updates UI, tests, and DB schema accordingly.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant UI
    participant API
    participant Queue
    participant Worker
    participant Sheets
    User->>UI: Trigger export or backfill
    UI->>API: GraphQL mutation (export/backfill)
    API->>API: validate permissions & inputs
    API->>Queue: enqueue job (create/backfill/append) with sync metadata
    API-->>UI: return sync record / success
    Note right of Queue: Job persisted in BullMQ
    Worker->>Queue: fetch job
    Queue-->>Worker: job payload
    Worker->>API: load journey, sync, blocks (DB)
    Worker->>Sheets: get access token (integration)
    Worker->>Sheets: ensure sheet exists
    Worker->>Sheets: clear/create headers
    Worker->>Sheets: write rows (append or replace)
    Sheets-->>Worker: operation result
    Worker->>API: update sync status / exportOrder
    Worker-->>Queue: job completion
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
View your CI Pipeline Execution ↗ for commit 98848e3
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@apis/api-journeys-modern/src/schema/journeyVisitor/journeyVisitor.ts`:
- Around line 409-416: The export alignment currently uses row[col.key] which
relies on the label part of keys and will drop values if labels change; update
the columns.forEach block that populates alignedRowObj to map by blockId
instead: for each col (in the columns.forEach), derive the value by locating a
key in the row whose blockId segment matches col.blockId (e.g. find a key where
key.split('-')[0] === col.blockId) and use that row value (or '' if not found)
rather than row[col.key]; keep the same alignedRowObj assignment but replace the
direct col.key lookup with this blockId-based lookup.
In `@apis/api-journeys-modern/src/workers/googleSheetsSync/service/backfill.ts`:
- Around line 97-122: The backfill currently keys event columns by event.blockId
+ event.label which can drop values if labels changed; build or receive a
blockId→headerLabel mapping (e.g., from finalHeader or by passing a map into
getJourneyVisitors) and use that header label when constructing keys instead of
the raw event.label: when iterating journeyVisitor.events in backfill.ts replace
the normalizedLabel with headerLabel = headerMap.get(event.blockId) ??
normalizedLabel and then use `${event.blockId}-${headerLabel}` for the key so
row keys match finalHeader and no values are omitted.
In `@apis/api-journeys-modern/src/workers/googleSheetsSync/service/service.ts`:
- Around line 176-238: The appendService currently grabs a single team-level
token via getTeamGoogleAccessToken(teamId) and uses it for all syncs; change
appendService to include integrationId when querying syncs (select
integrationId) and, inside the syncs.map handler (the async function passed to
Promise.allSettled), call getIntegrationGoogleAccessToken(sync.integrationId) to
obtain the per-sync Google token before writing to that sync's spreadsheet
(replace usage of accessToken from top-level getTeamGoogleAccessToken). Ensure
you update any variable names where the token is used so each sync uses its own
integration token.
In `@apps/journeys-admin/src/components/JourneyVisitorsList/FilterDrawer/GoogleSheetsSyncDialog/GoogleSheetsSyncDialog.tsx`:
- Around line 632-641: The IconButton used for delete in the mobile card lacks
an accessible label; update the IconButton (the one rendering Trash2Icon and
invoking handleRequestDeleteSync(sync.id) with disabled based on
isDeleting/backfillingSyncId) to include an aria-label attribute with a
descriptive value (e.g., "Delete sync" or `Delete sync ${sync.id}`) so screen
readers can announce the action, leaving existing onClick, disabled logic, and
conditional CircularProgress/Trash2Icon rendering unchanged.
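A sketch of that last suggested change (the handler, state flags, and icon come from the component described above; the exact disabled/loading expressions are assumptions):

```tsx
<IconButton
  aria-label={`Delete sync ${sync.id}`}
  onClick={() => handleRequestDeleteSync(sync.id)}
  disabled={isDeleting || backfillingSyncId != null}
>
  {isDeleting ? <CircularProgress size={16} /> : <Trash2Icon />}
</IconButton>
```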
🧹 Nitpick comments (7)
apis/api-journeys-modern/src/workers/googleSheetsSync/service/create.ts (3)
43-125: Consider cursor-based pagination for large datasets. The offset-based pagination (`skip: offset`) can lead to missed or duplicate records if records are inserted or deleted during iteration, since `orderBy: { createdAt: 'desc' }` combined with offset can shift results. For a backfill operation this is likely acceptable since it's processing historical data, but cursor-based pagination using `createdAt` + `id` would be more robust.
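A sketch of the cursor-based alternative, assuming a Prisma client in scope (the model and field names here are illustrative):

```typescript
// Cursor-based iteration keyed on (createdAt, id): stable under concurrent
// inserts/deletes because it resumes from the last-seen row, not an offset.
async function* iterateVisitors(journeyId: string, batchSize = 1000) {
  let cursor: { id: string } | undefined
  for (;;) {
    const batch = await prisma.journeyVisitor.findMany({
      where: { journeyId },
      orderBy: [{ createdAt: 'desc' }, { id: 'asc' }],
      take: batchSize,
      ...(cursor != null ? { cursor, skip: 1 } : {})
    })
    if (batch.length === 0) break
    yield* batch
    cursor = { id: batch[batch.length - 1].id }
  }
}
```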
239-240: Type casting with `as any` reduces type safety. The `as any` casts on `idToBlock` and `journeyBlocks` suggest a type mismatch between the selected fields and what `getCardHeadingHelper` expects. Consider aligning the types or creating a proper type adapter.
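For example, a narrow adapter instead of `as any` (the `HelperBlock` shape is a hypothetical placeholder for whatever `getCardHeadingHelper` actually expects):

```typescript
interface HelperBlock {
  id: string
  typename: string
  parentBlockId: string | null
}

// Accepts any superset of the required fields and returns exactly the
// shape the helper needs, so the compiler checks each field instead of
// bypassing the types.
function toHelperBlock<T extends HelperBlock>(b: T): HelperBlock {
  return { id: b.id, typename: b.typename, parentBlockId: b.parentBlockId }
}
```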
321-337: Consider using a transaction for batch updates. Multiple parallel `prisma.block.update` calls via `Promise.all` could be more efficient and atomic as a single transaction. This also avoids partial updates if some fail.

♻️ Suggested refactor using transaction

```diff
 if (blocksToUpdate.length > 0) {
   try {
-    await Promise.all(
-      blocksToUpdate.map(({ blockId, exportOrder }) =>
-        prisma.block.update({
-          where: { id: blockId },
-          data: { exportOrder }
-        })
-      )
-    )
+    await prisma.$transaction(
+      blocksToUpdate.map(({ blockId, exportOrder }) =>
+        prisma.block.update({
+          where: { id: blockId },
+          data: { exportOrder }
+        })
+      )
+    )
   } catch (error) {
```

apis/api-journeys-modern/src/schema/journeyVisitor/journeyVisitorExportToGoogleSheet.mutation.spec.ts (1)
115-194: Consider adding queue assertion for completeness. The test description says "should create new spreadsheet and enqueue export job," but there's no assertion that `googleSheetsSyncQueue.add` was called. While the mutation implementation does call the queue, the test doesn't verify this behavior since the queue is lazily loaded and skipped in the test environment by default.

If the intent is to verify the enqueue behavior, consider either:

- Injecting a mock queue via `__setGoogleSheetsSyncQueueForExportTests` and asserting on it
- Updating the test description to reflect what's actually being tested (e.g., "should create new spreadsheet and googleSheetsSync record")
apis/api-journeys-modern/src/schema/journeyVisitor/journeyVisitorExportToGoogleSheet.mutation.ts (2)
11-21: Empty catch block silently swallows errors. The empty `catch` block on lines 19-21 will silently swallow any errors during queue import, including legitimate issues like Redis connection failures or module resolution errors in non-test environments. Consider logging the error for observability.

♻️ Proposed fix to add error logging

```diff
 try {
   if (process.env.NODE_ENV !== 'test') {
     // eslint-disable-next-line @typescript-eslint/no-var-requires
     googleSheetsSyncQueue = require('../../workers/googleSheetsSync/queue').queue
   }
-} catch {
+} catch (error) {
+  console.error('Failed to load googleSheetsSync queue:', error)
   googleSheetsSyncQueue = null
 }
```
127-143: Consider reordering integration validation. The `getIntegrationGoogleAccessToken` call (line 128) occurs before validating that the `integrationRecord` exists (line 136). While `getIntegrationGoogleAccessToken` likely throws if the integration doesn't exist, it would be cleaner to validate the record first to provide consistent error messaging and avoid potentially unnecessary external calls.

♻️ Proposed reordering

```diff
-// Validate integration
-const accessToken = (await getIntegrationGoogleAccessToken(integrationId))
-  .accessToken
-
 const integrationRecord = await prisma.integration.findUnique({
   where: { id: integrationId },
   select: { id: true, accountEmail: true }
 })

 if (integrationRecord == null) {
   throw new GraphQLError('Integration not found', {
     extensions: { code: 'NOT_FOUND' }
   })
 }

+// Validate integration and get access token
+const accessToken = (await getIntegrationGoogleAccessToken(integrationId))
+  .accessToken
+const integrationIdUsed = integrationRecord.id
```

apis/api-journeys-modern/src/workers/googleSheetsSync/queue.ts (1)
71-75: Type guard handles legacy data not represented in the union type. The `isAppendJob` function handles legacy jobs that don't have a `type` field (`!('type' in data)`), but the `GoogleSheetsSyncJobData` union requires all variants to have a `type` field. This creates a type-safety gap where the function can return `true` for data that doesn't match the declared interface.

Consider adding a legacy type to the union to accurately represent this case:

♻️ Proposed fix for type safety

```diff
+/**
+ * Legacy job data format (before type discriminator was added).
+ * Treated as append jobs for backward compatibility.
+ */
+export interface GoogleSheetsSyncLegacyJobData {
+  journeyId: string
+  teamId: string
+  row: (string | number | null)[]
+  sheetName?: string
+  syncs: GoogleSheetsSyncRecord[]
+}
+
 /**
  * Union type for all Google Sheets sync job types.
  * Use discriminated union with 'type' field to determine the job type.
  */
 export type GoogleSheetsSyncJobData =
   | GoogleSheetsSyncAppendJobData
   | GoogleSheetsSyncBackfillJobData
   | GoogleSheetsSyncCreateJobData
+  | GoogleSheetsSyncLegacyJobData
```
```typescript
// Align row data to columns using column key
// Column order is determined by exportOrder, matching uses key
const alignedRowObj: Record<string, string> = {}
columns.forEach((col, index) => {
  const colKey = `col_${index}`
  // Match row data by column key (includes blockId-label)
  alignedRowObj[colKey] = row[col.key] ?? ''
})
```
Potential data drop when labels change (blockId/label key mismatch).

Columns are now deduped by `blockId`, but rows are still keyed by `blockId-label`. If a block's label changes over time, `row[col.key]` can be empty, causing lost values in the export.

A safer approach is to key row values by `blockId` and map via `col.blockId` during alignment. Example:
🛠️ Proposed fix (align by blockId)

```diff
-      const eventValuesByKey = new Map<string, string[]>()
+      const eventValuesByKey = new Map<string, string[]>()
       journeyVisitor.events.forEach((event) => {
         if (event.blockId) {
-          const normalizedLabel = (event.label ?? '')
-            .replace(/\s+/g, ' ')
-            .trim()
-          const key = `${event.blockId}-${normalizedLabel}`
+          const key = event.blockId
           const eventValue = event.value ?? ''
           if (!eventValuesByKey.has(key)) {
             eventValuesByKey.set(key, [])
           }
           eventValuesByKey.get(key)!.push(eventValue)
         }
       })
       eventValuesByKey.forEach((values, key) => {
         const sanitizedValues = values.map((value) => sanitizeCSVCell(value))
         row[key] = sanitizedValues.join('; ')
       })
```

```diff
-      columns.forEach((col, index) => {
+      columns.forEach((col, index) => {
         const colKey = `col_${index}`
-        alignedRowObj[colKey] = row[col.key] ?? ''
+        const rowKey = col.blockId ?? col.key
+        alignedRowObj[colKey] = row[rowKey] ?? ''
       })
```
```typescript
for (const journeyVisitor of journeyVisitors) {
  const date = formatDateYmdInTimeZone(journeyVisitor.createdAt, timezone)
  const row: JourneyVisitorExportRow = {
    visitorId: journeyVisitor.visitor.id,
    date
  }
  const eventValuesByKey = new Map<string, string[]>()
  journeyVisitor.events.forEach((event) => {
    if (event.blockId) {
      const normalizedLabel = (event.label ?? '')
        .replace(/\s+/g, ' ')
        .trim()
      const key = `${event.blockId}-${normalizedLabel}`
      const eventValue = event.value ?? ''
      if (!eventValuesByKey.has(key)) {
        eventValuesByKey.set(key, [])
      }
      eventValuesByKey.get(key)!.push(eventValue)
    }
  })
  eventValuesByKey.forEach((values, key) => {
    const sanitizedValues = values.map((value) =>
      sanitizeGoogleSheetsCell(value)
    )
    row[key] = sanitizedValues.join('; ')
  })
```
Backfill can drop values when block labels change.

Rows are keyed by each event's label, but headers are deduped by `blockId` (first label wins). If a block's label changes, the row keys won't exist in `finalHeader`, so those values are omitted. Map events to the header label for that block (or pass a blockId→label map into `getJourneyVisitors`).
-async function* getJourneyVisitors(
- journeyId: string,
- eventWhere: Prisma.EventWhereInput,
- timezone: string,
- batchSize: number = 1000
-): AsyncGenerator<JourneyVisitorExportRow> {
+async function* getJourneyVisitors(
+ journeyId: string,
+ eventWhere: Prisma.EventWhereInput,
+ timezone: string,
+ batchSize: number = 1000,
+ headerLabelByBlockId?: Map<string, string>
+): AsyncGenerator<JourneyVisitorExportRow> {
@@
- const key = `${event.blockId}-${normalizedLabel}`
+ const headerLabel =
+ headerLabelByBlockId?.get(event.blockId) ?? normalizedLabel
+ const key = `${event.blockId}-${headerLabel}`- for await (const row of getJourneyVisitors(
- journeyId,
- eventWhere,
- userTimezone
- )) {
+ const headerLabelByBlockId = new Map(
+ normalizedBlockHeaders.map(({ blockId, label }) => [blockId, label])
+ )
+ for await (const row of getJourneyVisitors(
+ journeyId,
+ eventWhere,
+ userTimezone,
+ 1000,
+ headerLabelByBlockId
+ )) {🤖 Prompt for AI Agents
In `@apis/api-journeys-modern/src/workers/googleSheetsSync/service/backfill.ts`
around lines 97 - 122, The backfill currently keys event columns by
event.blockId + event.label which can drop values if labels changed; build or
receive a blockId→headerLabel mapping (e.g., from finalHeader or by passing a
map into getJourneyVisitors) and use that header label when constructing keys
instead of the raw event.label: when iterating journeyVisitor.events in
backfill.ts replace the normalizedLabel with headerLabel =
headerMap.get(event.blockId) ?? normalizedLabel and then use
`${event.blockId}-${headerLabel}` for the key so row keys match finalHeader and
no values are omitted.
```typescript
const { accessToken } = await getTeamGoogleAccessToken(teamId)

const safe = (value: string | number | null | undefined): string =>
  value == null ? '' : String(value)
const visitorId = safe(row[0])
const createdAt = safe(row[1])
const dynamicKey = safe(row[5])
const dynamicValue = safe(row[6])

// Extract blockId and find the column key for this event
// The column key is used for matching row data to columns
let eventColumnKey: string | null = null
if (dynamicKey !== '') {
  // Longest-prefix match to avoid prefix collisions (order-independent)
  // e.g., if we have block IDs "block-1" and "block-1-extended", we need
  // to match the longest one that fits
  const matchedBlock = journeyBlocks
    .filter((b) => dynamicKey === b.id || dynamicKey.startsWith(`${b.id}-`))
    .sort((a, b) => b.id.length - a.id.length)[0]

  if (matchedBlock != null) {
    // Check if there's already a header entry for this blockId
    const existingHeader = headerMap.get(matchedBlock.id)

    if (existingHeader != null) {
      // Use the existing column key for this blockId
      eventColumnKey = `${existingHeader.blockId}-${existingHeader.label}`
    } else {
      // No existing column for this block - normalize and add to headerMap
      const prefix = `${matchedBlock.id}-`
      const rawLabel = dynamicKey.startsWith(prefix)
        ? dynamicKey.substring(prefix.length)
        : ''
      // Normalize label to match how historical headers are normalized
      const normalizedLabel = rawLabel.replace(/\s+/g, ' ').trim()
      eventColumnKey = `${matchedBlock.id}-${normalizedLabel}`

      if (connectedBlockIds.has(matchedBlock.id)) {
        headerMap.set(matchedBlock.id, {
          blockId: matchedBlock.id,
          label: normalizedLabel
        })
      }
    }
  }
}

// Rebuild columns with the potentially updated headerMap
const updatedBlockHeaders = Array.from(headerMap.values())
const updatedColumns = buildJourneyExportColumns({
  baseColumns,
  blockHeaders: updatedBlockHeaders,
  journeyBlocks,
  orderIndex
})
const updatedFinalHeader = updatedColumns.map((column) => column.key)

// Update all synced sheets - use allSettled so one failure doesn't abort others
const results = await Promise.allSettled(
  syncs.map(async (sync) => {
    // Use sync-specific timezone for header and data formatting
    const syncTimezone = sync.timezone ?? 'UTC'
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Inspect how team-level vs integration-level tokens are derived
echo "=== Searching for token functions ==="
fd -a "googleAuth.ts" apis -x rg -n "getTeamGoogleAccessToken|getIntegrationGoogleAccessToken" {}
echo ""
echo "=== Checking GoogleSheetsSync payloads for integrationId ==="
rg -n "GoogleSheetsSync.*integrationId|integrationId" apis/api-journeys-modern/src/workers/googleSheetsSync -g "*.ts"
```

Length of output: 950

🏁 Script executed:

```bash
#!/bin/bash
# Get the full function signature and context around line 176 in service.ts
echo "=== Function signature and context in service.ts ==="
head -n 180 apis/api-journeys-modern/src/workers/googleSheetsSync/service/service.ts | tail -n 50
echo ""
echo "=== Check what syncs object contains ==="
rg -B 10 "appendService" apis/api-journeys-modern/src/workers/googleSheetsSync/service/service.ts | head -n 30
echo ""
echo "=== Check the sync object structure ==="
rg -n "interface.*GoogleSheetsSync|type.*GoogleSheetsSync" apis/api-journeys-modern -A 20
```

Length of output: 22409

🏁 Script executed:

```bash
#!/bin/bash
# Check the full appendService function signature and how syncs are loaded
echo "=== Full appendService function signature ==="
rg -A 30 "^async function appendService" apis/api-journeys-modern/src/workers/googleSheetsSync/service/service.ts
echo ""
echo "=== Check if syncs are fetched from the job or from database ==="
rg -B 5 -A 10 "syncs\.map|syncs\.forEach|for.*syncs" apis/api-journeys-modern/src/workers/googleSheetsSync/service/service.ts | head -n 40
```

Length of output: 986

🏁 Script executed:

```bash
#!/bin/bash
# Find where GoogleSheetsSyncAppendJobData is created with syncs
echo "=== Where are syncs populated for append jobs ==="
rg -B 10 -A 5 "type.*append|syncs.*map|syncs.*=.*\[|GoogleSheetsQueue|\.add\(" apis/api-journeys-modern/src -g "*.ts" | grep -A 10 -B 10 "syncs"
echo ""
echo "=== Search for event creation that triggers syncs ==="
rg -n "GoogleSheetsQueue|enqueueSyncs|appendJob" apis/api-journeys-modern/src -g "*.ts" | head -n 30
```

Length of output: 4765

🏁 Script executed:

```bash
#!/bin/bash
# Get the full appendEventToGoogleSheets function including the syncs query
echo "=== Full appendEventToGoogleSheets function ==="
rg -B 5 -A 50 "async function appendEventToGoogleSheets" apis/api-journeys-modern/src/schema/event/utils.ts
```

Length of output: 1315
Pass integration-specific access tokens for each sync instead of a single team-level token.

`appendService` queries all syncs by `journeyId` and `teamId`, but doesn't include `integrationId` in the selected fields. It then uses a single `getTeamGoogleAccessToken(teamId)` call to write to all syncs' spreadsheets. If a team has multiple Google integrations, this token will only have access to the first integration's account, and syncs belonging to other integrations will fail.

Unlike `appendService`, both `createService` and `backfillService` correctly pass and use `integrationId` with `getIntegrationGoogleAccessToken(integrationId)`. Update `appendService` to follow the same pattern: include `integrationId` in the syncs query and fetch the appropriate token per sync before writing to each spreadsheet.
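A sketch of that per-sync pattern (the Prisma model name and the `appendRowToSheet` helper are assumptions for illustration; `getIntegrationGoogleAccessToken` is the function named in this review):

```typescript
// Select integrationId alongside the other sync fields...
const syncs = await prisma.googleSheetsSync.findMany({
  where: { journeyId, teamId },
  select: {
    id: true,
    spreadsheetId: true,
    sheetName: true,
    integrationId: true
  }
})

// ...then fetch a token per sync so each write is authorized by the
// integration that owns that spreadsheet.
const results = await Promise.allSettled(
  syncs.map(async (sync) => {
    const { accessToken } = await getIntegrationGoogleAccessToken(
      sync.integrationId
    )
    await appendRowToSheet(accessToken, sync.spreadsheetId, sync.sheetName, row)
  })
)
```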