Conversation


91pavan (Contributor) commented Dec 5, 2025

  • Add OpenLit translator; it closely follows what we did in the traceloop translator.

Example trace dump of the resulting spans after running the example:

2999ad5b4013 :) select * from otel_traces\G;

SELECT *
FROM otel_traces

Query id: 95b25a41-4017-4384-aa7e-8ef05cfc02e4

Row 1:
──────
Timestamp:          2025-12-09 10:55:14.151148000
TraceId:            5eed9f9fea5522d88d8395446bb6ea9a
SpanId:             a88c55195a21c30b
ParentSpanId:       e6928e260ddd73b0
TraceState:
SpanName:           POST
SpanKind:           Client
ServiceName:        default
ResourceAttributes: {'deployment.environment':'default','service.name':'default','telemetry.sdk.language':'python','telemetry.sdk.name':'openlit','telemetry.sdk.version':'1.39.0'}
ScopeName:          opentelemetry.instrumentation.httpx
ScopeVersion:       0.60b0
SpanAttributes:     {'http.method':'POST','http.status_code':'200','http.url':'https://api.openai.com/v1/chat/completions'}
Duration:           2103346000 -- 2.10 billion
StatusCode:         Unset
StatusMessage:
Events.Timestamp:   []
Events.Name:        []
Events.Attributes:  []
Links.TraceId:      []
Links.SpanId:       []
Links.TraceState:   []
Links.Attributes:   []

Row 2:
──────
Timestamp:          2025-12-09 10:55:14.130954000
TraceId:            5eed9f9fea5522d88d8395446bb6ea9a
SpanId:             e6928e260ddd73b0
ParentSpanId:
TraceState:
SpanName:           genai.chat
SpanKind:           Client
ServiceName:        default
ResourceAttributes: {'deployment.environment':'default','service.name':'default','telemetry.sdk.language':'python','telemetry.sdk.name':'openlit','telemetry.sdk.version':'1.39.0'}
ScopeName:          opentelemetry.util.genai.handler
ScopeVersion:       0.1.4
SpanAttributes:     {'_openlit_processed':'true','deployment.environment':'default','gen_ai.client.token.usage':'124','gen_ai.evaluation.sampled':'true','gen_ai.input.messages':'[{"role": "user", "parts": [{"type": "text", "content": "user: What is LLM Observability?"}]}]','gen_ai.operation.name':'chat','gen_ai.output.messages':'[{"role": "assistant", "parts": [{"type": "text", "content": "LLM Observability stands for Logs, Metrics, and Traces observability. It refers to the practice of monitoring and measuring the performance and behavior of software systems by collecting and analyzing logs, metrics, and traces. Logs provide detailed records of events and activities within a system, metrics track key performance indicators and statistics, while traces enable understanding of the flow of requests and interactions between different components of a system. By leveraging LLM observability, organizations can gain insights into the health and performance of their systems, troubleshoot issues, and optimize resource utilization."}], "finish_reason": "stop"}]','gen_ai.output.type':'text','gen_ai.request.frequency_penalty':'0','gen_ai.request.is_stream':'false','gen_ai.request.max_tokens':'-1','gen_ai.request.model':'gpt-3.5-turbo','gen_ai.request.presence_penalty':'0','gen_ai.request.seed':'','gen_ai.request.stop_sequences':'[]','gen_ai.request.temperature':'1','gen_ai.request.top_p':'1','gen_ai.request.user':'','gen_ai.response.finish_reasons':'["stop"]','gen_ai.response.id':'chatcmpl-Ckpf9lY6sTPKKHqxKYj395bb355ym','gen_ai.response.model':'gpt-3.5-turbo-0125','gen_ai.sdk.version':'1.109.1','gen_ai.server.time_per_output_token':'0','gen_ai.server.time_to_first_token':'2.1309502124786377','gen_ai.service.tier':'default','gen_ai.system':'openai','gen_ai.usage.cost':'0.000172','gen_ai.usage.input_tokens':'14','gen_ai.usage.output_tokens':'110','server.address':'api.openai.com','server.port':'443','service.name':'default','telemetry.sdk.name':'openlit'}
Duration:           2131456000 -- 2.13 billion
StatusCode:         Ok
StatusMessage:
Events.Timestamp:   ['2025-12-09 10:55:16.262128000','2025-12-09 10:55:16.262137000']
Events.Name:        ['gen_ai.content.prompt','gen_ai.content.completion']
Events.Attributes:  [{'gen_ai.prompt':'user: What is LLM Observability?'},{'gen_ai.completion':'LLM Observability stands for Logs, Metrics, and Traces observability. It refers to the practice of monitoring and measuring the performance and behavior of software systems by collecting and analyzing logs, metrics, and traces. Logs provide detailed records of events and activities within a system, metrics track key performance indicators and statistics, while traces enable understanding of the flow of requests and interactions between different components of a system. By leveraging LLM observability, organizations can gain insights into the health and performance of their systems, troubleshoot issues, and optimize resource utilization.'}]
Links.TraceId:      []
Links.SpanId:       []
Links.TraceState:   []
Links.Attributes:   []

2 rows in set. Elapsed: 0.015 sec.
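
To make the translation concrete: Row 2 shows OpenLit's flat prompt/completion content surfaced as the structured gen_ai.input.messages / gen_ai.output.messages attributes. A minimal sketch of that mapping, assuming a standalone helper (the function name, the popped attribute keys, and the hard-coded finish reason are illustrative, not the PR's actual code):

import json

def translate_openlit_attributes(attrs: dict) -> dict:
    """Rewrite OpenLit-style flat content attributes into structured messages."""
    out = dict(attrs)
    prompt = out.pop("gen_ai.prompt", None)
    if prompt is not None:
        out["gen_ai.input.messages"] = json.dumps(
            [{"role": "user", "parts": [{"type": "text", "content": prompt}]}]
        )
    completion = out.pop("gen_ai.completion", None)
    if completion is not None:
        out["gen_ai.output.messages"] = json.dumps(
            [{
                "role": "assistant",
                "parts": [{"type": "text", "content": completion}],
                # "stop" is a placeholder; real code would read the span's
                # gen_ai.response.finish_reasons attribute.
                "finish_reason": "stop",
            }]
        )
    return out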

Signed-off-by: Pavan Sudheendra <pavan0591@gmail.com>
91pavan requested review from a team as code owners December 5, 2025 13:46
91pavan marked this pull request as draft December 5, 2025 13:46
91pavan marked this pull request as ready for review December 9, 2025 10:56

_DEFAULT_ATTR_TRANSFORMATIONS = {
    "rename": {
        # OpenLit uses indexed content format, OTel uses structured messages
        "gen_ai.completion.0.content": "gen_ai.output.messages",

Reviewer commented:

Are we sure there will only be one completion response all the time? I mean, could there be gen_ai.completion.1.content and so on?

91pavan (author) replied:

Good point. Usually it's one, but there could be more; I'll update the code to handle it.
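
For illustration, one way the updated code might fold every indexed completion into the structured output, assuming OpenLit emits gen_ai.completion.<i>.content for i = 0, 1, and so on; the helper name and output shape here are assumptions, not the final implementation:

import json
import re

_COMPLETION_RE = re.compile(r"^gen_ai\.completion\.(\d+)\.content$")

def collect_completions(attrs: dict) -> str:
    """Gather all gen_ai.completion.<i>.content values, in index order."""
    indexed = sorted(
        (int(m.group(1)), value)
        for key, value in attrs.items()
        if (m := _COMPLETION_RE.match(key))
    )
    return json.dumps(
        [{"role": "assistant", "parts": [{"type": "text", "content": content}]}
         for _, content in indexed]
    )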

    return result

# Install the wrapper
trace.set_tracer_provider = wrapped_set_tracer_provider

Reviewer commented:

I am concerned that if both translator packages are installed, one will override the other. I am not sure whether both translators would ever be installed at the same time, but if they are, then this is a problem.

91pavan (author) replied:

The underlying framework would be either traceloop or openlit, but not both, right? So for a single app, I doubt both will be installed. We could exit, or warn the user about the presence of the other translator package.

Thoughts?

Reviewer replied:

Maybe I'm overthinking this, but the execution environment could have both packages installed, which might lead to unexpected results. Maybe emit a warning, or handle the wrapping in a thread-safe manner using wraps from functools?
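
A minimal sketch of what that guarded, thread-safe wrapping could look like; the lock, the _translator_wrapped marker attribute, and the function name are assumptions:

import functools
import threading
import warnings

from opentelemetry import trace

_wrap_lock = threading.Lock()

def install_set_tracer_provider_wrapper() -> None:
    with _wrap_lock:
        original = trace.set_tracer_provider
        if getattr(original, "_translator_wrapped", False):
            # Another translator already wrapped the function; warn instead
            # of silently overriding its hook.
            warnings.warn("set_tracer_provider is already wrapped by a translator")
            return

        @functools.wraps(original)
        def wrapped_set_tracer_provider(tracer_provider):
            # ... attach the translating span processor to the provider here ...
            return original(tracer_provider)

        wrapped_set_tracer_provider._translator_wrapped = True
        trace.set_tracer_provider = wrapped_set_tracer_provider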

rules_spec = data.get("rules") if isinstance(data, dict) else None
if not isinstance(rules_spec, list):
    logging.warning(
        "[TL_PROCESSOR] %s must contain a 'rules' list", _ENV_RULES
    )

Reviewer commented:

nit: OpenLit_PROCESSOR?

91pavan (author) replied:

fixed
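
For context, the parsing under discussion might look like this end to end once the prefix is renamed; the env-var value and the function boundary are assumptions, not the PR's actual code:

import json
import logging
import os

# Assumed value; the PR defines _ENV_RULES elsewhere.
_ENV_RULES = "OPENLIT_PROCESSOR_RULES"

def load_rules() -> list:
    """Parse the rules document from the environment, returning [] on bad input."""
    raw = os.environ.get(_ENV_RULES, "")
    if not raw:
        return []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        logging.warning("[OPENLIT_PROCESSOR] %s is not valid JSON", _ENV_RULES)
        return []
    rules_spec = data.get("rules") if isinstance(data, dict) else None
    if not isinstance(rules_spec, list):
        logging.warning(
            "[OPENLIT_PROCESSOR] %s must contain a 'rules' list", _ENV_RULES
        )
        return []
    return rules_spec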

pradystar commented:

Some test file names have to be updated: replace traceloop with openlit? For example, test_nested_traceloop_reconstruction.py.
