# feat: Support Otel logging #4981
Status: Open. shrutip90 wants to merge 1 commit into `main` from `sp/tel`.
New file (66 lines added):

```ts
/**
 * Copyright 2024 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import { extendZodWithOpenApi } from '@asteasolutions/zod-to-openapi';
import * as z from 'zod';

import { InstrumentationLibrarySchema } from './trace';

extendZodWithOpenApi(z);

/**
 * Zod schema for log record data.
 */
export const LogRecordSchema = z
  .object({
    logId: z.string().optional(), // Server-generated if omitted
    traceId: z.string().optional(),
    spanId: z.string().optional(),
    timestamp: z
      .number()
      .describe('log recorded time in milliseconds since the epoch'),
    severityNumber: z.number().optional(),
    severityText: z.string().optional(),
    // Represents any value: string, number, boolean, array, or object.
    body: z.unknown().optional(),
    attributes: z.record(z.string(), z.unknown()).optional(),
    instrumentationLibrary: InstrumentationLibrarySchema.optional(),
  })
  .openapi('LogRecordData');

export type LogRecordData = z.infer<typeof LogRecordSchema>;

/**
 * Log query parameters.
 */
export const LogQuerySchema = z.object({
  limit: z.coerce.number().optional(),
  continuationToken: z.string().optional(),
  traceId: z.string().optional(),
  spanId: z.string().optional(),
});

export type LogQuery = z.infer<typeof LogQuerySchema>;

export interface LogQueryResponse {
  logs: LogRecordData[];
  continuationToken?: string;
}

export interface LogStore {
  init(): Promise<void>;
  save(logs: LogRecordData[]): Promise<void>;
  list(query?: LogQuery): Promise<LogQueryResponse>;
}
```
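For illustration, here is a minimal sketch (not from the PR) of what a record conforming to the schema above looks like. The `isLogRecord` helper is a hypothetical, hand-rolled stand-in for `LogRecordSchema.parse`, and all sample field values are made up:

```typescript
// Mirror of the fields declared in LogRecordSchema (illustrative only).
interface LogRecordData {
  logId?: string;
  traceId?: string;
  spanId?: string;
  timestamp: number; // milliseconds since the epoch
  severityNumber?: number;
  severityText?: string;
  body?: unknown;
  attributes?: Record<string, unknown>;
}

// Hypothetical stand-in for LogRecordSchema.parse: checks the basic
// object shape and the one required field (timestamp).
function isLogRecord(value: unknown): value is LogRecordData {
  if (typeof value !== 'object' || value === null) return false;
  return typeof (value as Record<string, unknown>).timestamp === 'number';
}

// A sample record; every value here is invented for the example.
const sample = {
  timestamp: 1700000000000,
  severityText: 'INFO',
  body: 'model call completed',
  attributes: { 'example/attr': 'value' },
};
```

Note that `logId` can be omitted by the caller; the store fills it in on save.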
New file (263 lines added):

```ts
/**
 * Copyright 2024 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import {
  type LogQuery,
  type LogQueryResponse,
  type LogRecordData,
  type LogStore,
} from '@genkit-ai/tools-common';
import { logger } from '@genkit-ai/tools-common/utils';
import { Mutex } from 'async-mutex';
import fs from 'fs';
import lockfile from 'lockfile';
import path from 'path';

function lockFile(file: string) {
  return `${file}.lock`;
}

export class LocalFileLogStore implements LogStore {
  private readonly storeRoot: string;
  private readonly indexRoot: string;
  private readonly index: LogIndex;
  private mutex = new Mutex();

  constructor(options: { storeRoot: string; indexRoot: string }) {
    this.storeRoot = path.resolve(options.storeRoot, '.genkit/logs');
    fs.mkdirSync(this.storeRoot, { recursive: true });
    this.indexRoot = path.resolve(options.indexRoot, '.genkit/logs_idx');
    fs.mkdirSync(this.indexRoot, { recursive: true });
    logger.debug(
      `[Telemetry Server] initialized local file log store at root: ${this.storeRoot}`
    );
    this.index = new LogIndex(this.indexRoot);
  }

  async init(): Promise<void> {
    // Indexes and store are append-only; nothing to initialize.
  }

  private getCurrentLogFile(): string {
    const now = new Date();
    // Format: YYYY-MM-DD-HH, so logs roll over to a new file hourly.
    const pad = (n: number) => n.toString().padStart(2, '0');
    const dateStr = `${now.getFullYear()}-${pad(now.getMonth() + 1)}-${pad(now.getDate())}-${pad(now.getHours())}`;
    return path.resolve(this.storeRoot, `logs-${dateStr}.jsonl`);
  }

  async save(logs: LogRecordData[]): Promise<void> {
    if (logs.length === 0) return;

    await this.mutex.runExclusive(() => {
      const logFile = this.getCurrentLogFile();
      const relativeFileName = path.basename(logFile);

      let currentOffset = 0;
      if (fs.existsSync(logFile)) {
        currentOffset = fs.statSync(logFile).size;
      }

      const indexEntries: LogIndexEntry[] = [];
      let offsetTracker = currentOffset;

      const linesToAppend = logs.map((log) => {
        // Generate a log id if one was not provided.
        if (!log.logId && log.traceId && log.spanId) {
          log.logId = `${log.traceId}-${log.spanId}-${log.timestamp}-${Math.random().toString(36).substring(7)}`;
        } else if (!log.logId) {
          log.logId = `${log.timestamp}-${Math.random().toString(36).substring(7)}`;
        }
        const line = JSON.stringify(log) + '\n';
        const length = Buffer.byteLength(line, 'utf8');

        indexEntries.push({
          traceId: log.traceId,
          spanId: log.spanId,
          timestamp: log.timestamp,
          severityText: log.severityText,
          severityNumber: log.severityNumber,
          file: relativeFileName,
          offset: offsetTracker,
          length: length,
        });

        offsetTracker += length;
        return line;
      });

      fs.appendFileSync(logFile, linesToAppend.join(''));
      this.index.add(indexEntries);
    });
  }

  async list(query?: LogQuery): Promise<LogQueryResponse> {
    const startFromIndex = query?.continuationToken
      ? Number.parseInt(query.continuationToken)
      : 0;
    const limit = query?.limit ?? 100;

    const searchResult = this.index.search({
      limit,
      startFromIndex,
      traceId: query?.traceId,
      spanId: query?.spanId,
    });

    const logs: LogRecordData[] = [];

    for (const entry of searchResult.entries) {
      const logFile = path.resolve(this.storeRoot, entry.file);
      if (fs.existsSync(logFile)) {
        // Read exactly one record back via a positioned read.
        const buffer = Buffer.alloc(entry.length);
        const fd = fs.openSync(logFile, 'r');
        fs.readSync(fd, buffer, 0, entry.length, entry.offset);
        fs.closeSync(fd);
        try {
          logs.push(
            JSON.parse(buffer.toString('utf8').trim()) as LogRecordData
          );
        } catch (e) {
          logger.error(
            `Error parsing log at ${entry.file}:${entry.offset}: ${e}`
          );
        }
      }
    }

    return {
      logs,
      continuationToken: searchResult.pageLastIndex
        ? `${searchResult.pageLastIndex}`
        : undefined,
    };
  }
}
```
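The core idea in `save()` is that each record becomes one JSONL line, and the index stores the byte `offset` and `length` of that line so `list()` can later read the record back with a single positioned read. Here is a dependency-free sketch of that offset bookkeeping; the `indexEntriesFor` helper is illustrative and not part of the PR:

```typescript
interface IndexEntry {
  offset: number; // byte offset of the line within the log file
  length: number; // byte length of the line, including the trailing newline
}

// Compute (offset, length) entries for records appended starting at
// `startOffset`, mirroring the map loop inside save().
function indexEntriesFor(
  records: object[],
  startOffset: number
): { lines: string; entries: IndexEntry[] } {
  let offset = startOffset;
  const entries: IndexEntry[] = [];
  const lines = records
    .map((r) => {
      const line = JSON.stringify(r) + '\n';
      const length = Buffer.byteLength(line, 'utf8');
      entries.push({ offset, length });
      offset += length; // next record starts where this one ends
      return line;
    })
    .join('');
  return { lines, entries };
}

const { lines, entries } = indexEntriesFor([{ a: 1 }, { b: 22 }], 100);
```

Using byte lengths (not string lengths) matters here, since multi-byte UTF-8 characters would otherwise corrupt the offsets.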
The on-disk index used by the store (same file, continued):

```ts
export interface LogIndexEntry {
  traceId?: string;
  spanId?: string;
  timestamp: number;
  severityText?: string;
  severityNumber?: number;
  file: string;
  offset: number;
  length: number;
}

export interface LogIndexSearchResult {
  pageLastIndex?: number;
  entries: LogIndexEntry[];
}

export class LogIndex {
  private currentIndexFile: string;

  constructor(private indexRoot: string) {
    this.currentIndexFile = path.resolve(
      this.indexRoot,
      this.newIndexFileName()
    );
    fs.mkdirSync(this.indexRoot, { recursive: true });
  }

  private newIndexFileName() {
    return `idx_${(Date.now() + '').padStart(17, '0')}.jsonl`;
  }

  listIndexFiles() {
    return fs.readdirSync(this.indexRoot).filter((f) => f.startsWith('idx_'));
  }

  add(entries: LogIndexEntry[]) {
    if (entries.length === 0) return;
    try {
      lockfile.lockSync(lockFile(this.currentIndexFile));
      const lines = entries.map((e) => JSON.stringify(e) + '\n').join('');
      fs.appendFileSync(this.currentIndexFile, lines);
    } catch (err) {
      logger.error(
        `Failed to lock log index file ${this.currentIndexFile}: ${err}`
      );
    } finally {
      if (fs.existsSync(lockFile(this.currentIndexFile))) {
        lockfile.unlockSync(lockFile(this.currentIndexFile));
      }
    }
  }

  search(query: {
    limit: number;
    startFromIndex: number;
    traceId?: string;
    spanId?: string;
  }): LogIndexSearchResult {
    // Newest index files first (names sort lexicographically by timestamp).
    const indexFiles = this.listIndexFiles().sort().reverse();

    let skipped = 0;
    const entries: LogIndexEntry[] = [];
    let hasMore = false;

    for (const idxFile of indexFiles) {
      if (hasMore) break;

      const idxTxt = fs.readFileSync(
        path.resolve(this.indexRoot, idxFile),
        'utf8'
      );
      const fileData = idxTxt
        .split('\n')
        .filter((l) => l.trim().length > 0)
        .map((l) => {
          try {
            return JSON.parse(l) as LogIndexEntry;
          } catch {
            return undefined;
          }
        })
        .filter((d): d is LogIndexEntry => {
          if (!d) return false;
          if (query.traceId && d.traceId !== query.traceId) return false;
          if (query.spanId && d.spanId !== query.spanId) return false;
          return true;
        });

      fileData.sort((a, b) => b.timestamp - a.timestamp); // Newest first

      for (const entry of fileData) {
        if (skipped < query.startFromIndex) {
          skipped++;
          continue;
        }

        if (entries.length < query.limit) {
          entries.push(entry);
        } else {
          hasMore = true;
          break;
        }
      }
    }

    const result: LogIndexSearchResult = {
      entries,
    };

    if (hasMore) {
      result.pageLastIndex = query.startFromIndex + query.limit;
    }

    return result;
  }
}
```
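`search()` effectively pages through entries sorted newest-first: skip `startFromIndex` entries, take up to `limit`, and set `pageLastIndex = startFromIndex + limit` only when at least one further entry exists (which `list()` then returns as the `continuationToken`). A simplified, array-based sketch of that pagination logic; the `paginate` helper is illustrative, not part of the PR:

```typescript
interface Page<T> {
  entries: T[];
  pageLastIndex?: number; // becomes the continuationToken when more remain
}

// Mirror of LogIndex.search() pagination over an already-sorted array.
function paginate<T>(
  sorted: T[],
  startFromIndex: number,
  limit: number
): Page<T> {
  const entries = sorted.slice(startFromIndex, startFromIndex + limit);
  // Only emit a token if there is at least one entry beyond this page.
  const hasMore = sorted.length > startFromIndex + limit;
  return hasMore
    ? { entries, pageLastIndex: startFromIndex + limit }
    : { entries };
}

// Entries already sorted newest-first, as in search().
const data = [5, 4, 3, 2, 1];
const p1 = paginate(data, 0, 2); // first page, token points at index 2
const p2 = paginate(data, 2, 2); // second page, token points at index 4
const p3 = paginate(data, 4, 2); // final partial page, no token
```

One consequence of this design: a page that ends exactly on the last entry produces no continuation token, so the client knows it is done without an extra empty request.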
**Review comment:**
What is the source for `LogRecord`? In particular, `eventName` is missing, and I think in OTel 2.0 at least `instrumentationLibrary` has been supplanted by `instrumentationScope`. That said, the OTel docs and various libraries also seem to be a bit out of sync with each other. :-/
See also:
If we can align on the current format from OTel, and do any translation internally (if needed), I think that's preferable.
Alternatively if this is low effort to change (assuming the log store is pretty "ephemeral") that is OK too.
**Reply:**
`eventName` in particular is important, because it allows us to know whether a given log adheres to an OTel semantic convention (i.e. is a structured log) or not.
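If the reviewers' suggestion to align with current OTel naming is adopted, one low-effort option is a small normalization step when records are ingested. A hypothetical sketch (the field shapes and `normalizeScope` helper are assumptions, not part of the PR):

```typescript
interface IncomingRecord {
  instrumentationLibrary?: { name: string; version?: string };
  instrumentationScope?: { name: string; version?: string };
  [key: string]: unknown;
}

// Normalize to the newer `instrumentationScope` key, accepting
// either spelling; an existing scope always wins.
function normalizeScope(rec: IncomingRecord): IncomingRecord {
  if (rec.instrumentationScope || !rec.instrumentationLibrary) return rec;
  const { instrumentationLibrary, ...rest } = rec;
  return { ...rest, instrumentationScope: instrumentationLibrary };
}

const migrated = normalizeScope({
  instrumentationLibrary: { name: 'genkit' },
});
const passthrough = normalizeScope({
  instrumentationScope: { name: 'x' },
});
```

Translating internally like this would let the store accept records from libraries that are still out of sync with each other, while persisting a single canonical shape.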