feat(ollama): add meter STTG to ollama instrumentation #3053
nirga merged 23 commits into traceloop:main from
Conversation
Caution
Changes requested ❌
Reviewed everything up to e96286b in 1 minute and 15 seconds.
- Reviewed 395 lines of code in 4 files
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments; shown below
1. packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py:291
- Draft comment:
Consider capturing the final response explicitly instead of relying on the if 'res' in locals() check for obtaining the model attribute. This would improve clarity and maintainability.
- Reason this comment was not posted:
Decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence 0% vs. threshold 50%). The comment identifies a real code-clarity issue: using locals() to check variable existence is hacky. However, the suggested fix is wrong, because simply removing the check would cause errors when the loop never executes, since 'res' would not be defined. The code needs the safety check, and the current implementation, while not ideal, is functionally correct and handles that edge case. The comment should be deleted because its suggested fix would introduce bugs, even though it correctly identifies a clarity issue.
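A minimal sketch of the pattern under discussion (hypothetical helpers, not the actual instrumentation code): why the 'res' in locals() guard is needed after a loop over an empty stream, and a clearer alternative that initializes the variable up front instead of introspecting locals().

```python
# Hypothetical illustration of the "'res' in locals()" pattern discussed above.

def model_from_stream_unsafe(chunks):
    for res in chunks:
        pass  # process each streamed chunk
    # If `chunks` is empty, `res` was never bound; without this guard the
    # attribute lookup below would raise NameError.
    if "res" in locals():
        return res.get("model")
    return None


def model_from_stream_clear(chunks):
    # Initializing before the loop removes the need for locals() introspection
    # while handling the empty-stream case just as safely.
    res = None
    for res in chunks:
        pass  # process each streamed chunk
    return res.get("model") if res is not None else None
```

Both functions behave identically; the second simply makes the empty-stream case explicit.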
2. packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py:11
- Draft comment:
Verify that the updated metric name 'llm.chat_completions.streaming_time_to_generate' aligns with the intended semantic conventions for Ollama instrumentation. The prefix change from 'llm.openai...' may be intentional but warrants confirmation.
- Reason this comment was not posted:
Comment did not seem useful (usefulness confidence 0% <= threshold 50%). The comment asks the PR author to confirm their intention regarding the metric name change, which violates the rule against asking for confirmation of intent, and it neither provides a specific code suggestion nor asks for a specific test to be written.
packages/opentelemetry-instrumentation-ollama/tests/test_ollama_metrics.py (outdated; resolved)
e96286b to 0a21005
Bug: Metric Dependency Issue in Streaming Code
The streaming_time_to_generate metric is not recorded when streaming_time_to_first_token is disabled. This occurs because first_token_time, which is required for streaming_time_to_generate, is only set if streaming_time_to_first_token is enabled and start_time is not None. This creates an unintended dependency between these two independent streaming metrics, affecting both synchronous and asynchronous code paths. first_token_time should be set if either metric is enabled.
packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py#L277-L303
packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py#L335-L342
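A hedged sketch of the proposed fix. The names (first_token_time, the enable flags, the record callbacks) follow the bug report; the function shape and parameters here are stand-ins for illustration, not the real instrumentation API. The key change is capturing the first-token timestamp whenever either metric needs it:

```python
import time

# Illustrative accumulator: first_token_time is set if EITHER streaming
# metric is enabled, breaking the unintended dependency described above.
def accumulate_stream(chunks, ttft_enabled, ttg_enabled,
                      record_ttft, record_ttg):
    start_time = time.time()
    first_token_time = None
    for i, _chunk in enumerate(chunks):
        if i == 0 and (ttft_enabled or ttg_enabled):
            # Previously this timestamp was taken only when ttft_enabled,
            # silently disabling time_to_generate as a side effect.
            first_token_time = time.time()
            if ttft_enabled:
                record_ttft(first_token_time - start_time)
    if ttg_enabled and first_token_time is not None:
        record_ttg(time.time() - first_token_time)
```

With this shape, disabling streaming_time_to_first_token no longer suppresses streaming_time_to_generate, and an empty stream records neither metric.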
Bug: Empty Response Data Not Processed
The if response_data: condition prevents _set_response_attributes from being called when response_data is an empty dictionary {}. This affects both synchronous and asynchronous paths. Since empty dictionaries are falsy, valid empty responses are not processed, resulting in missing span attributes. The condition should be if response_data is not None:.
packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py#L309-L311
packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py#L367-L369
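A minimal illustration of the falsy-empty-dict bug described above; the helper names and the attrs side effect are hypothetical stand-ins for _set_response_attributes, not the actual instrumentation code:

```python
# Buggy form: an empty dict {} is falsy, so valid empty responses are skipped.
def set_attrs_buggy(response_data, attrs):
    if response_data:
        attrs["processed"] = True


# Fixed form: only a genuinely missing response (None) is skipped.
def set_attrs_fixed(response_data, attrs):
    if response_data is not None:
        attrs["processed"] = True
```

The explicit None check preserves the guard against a missing response while still processing empty-but-valid response dictionaries.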
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>
feat(instrumentation): ... or fix(instrumentation): ...
Fixes #3048
Important
Adds streaming_time_to_generate metric to Ollama instrumentation for measuring time from first token to completion in streaming responses, with corresponding tests and updates.
- Adds streaming_time_to_generate metric to measure time from first token to completion in streaming responses in __init__.py.
- Updates _accumulate_streaming_response() and _aaccumulate_streaming_response() to record streaming_time_to_generate.
- Updates _wrap() and _awrap() to handle the new metric.
- Adds test_ollama_streaming_time_to_generate_metrics() in test_ollama_metrics.py to verify the new metric.
- Adds test_ollama_streaming_time_to_generate_metrics.yaml for HTTP interaction recording.
- Updates Meters.LLM_STREAMING_TIME_TO_GENERATE in semconv_ai/__init__.py to reflect the new metric naming.