Suggestion to refactor types.py #27
base: genai-utils-e2e-dev
Conversation
* cherry pick changes from previous PR
* move span utils to new file
* remove span state, use otel context for parent/child
* flatten LLMInvocation to use attributes instead of dict keys
* helper function and docstrings
* refactor: store span and context token in LLMInvocation instead of SpanGenerator
* refactor: rename prompts/chat_generations to input_messages/output_messages for clarity
* refactor: simplify TelemetryHandler API by moving invocation data management to LLMInvocation class
* refactor: update relative imports to absolute imports
* Update handler to use a context manager instead of start_llm and stop_llm
* resolve tox -e doc failure
* safeguard against empty request-model
* fix tox typecheck errors for utils
* refactor: move tracer to generator, clean up dead code
* remove unused linting hint
* back off stricter request-model requirements
* reintroduce manual start/stop for langchain callback flow
* Fix typecheck in langchain instrumentation (open-telemetry#3773)
  * fix typecheck
  * fix ruff and added changelog
  * added lambda list
  * Update instrumentation-genai/opentelemetry-instrumentation-langchain/CHANGELOG.md (Co-authored-by: Riccardo Magliocchetti <[email protected]>)
* botocore: Add support for AWS Secrets Manager semantic convention attribute (open-telemetry#3765)
  AWS Secrets Manager defines the semantic convention attribute `AWS_SECRETSMANAGER_SECRET_ARN: Final = "aws.secretsmanager.secret.arn"` (https://github.com/open-telemetry/semantic-conventions/blob/main/docs/registry/attributes/aws.md#amazon-secrets-manager-attributes). This attribute was previously not set by the botocore instrumentation library; this change adds support by extracting the value from both Request and Response objects. Tests: new unit tests added (passing), verified with `tox -e py312-test-instrumentation-botocore`, `tox -e spellcheck`, `tox -e lint-instrumentation-botocore`, and `tox -e ruff`. Backward compatibility: only adds instrumentation for additional AWS resources and does not modify existing behavior in the auto-instrumentation library.
  * add ChangeLog
  * Update instrumentation/opentelemetry-instrumentation-botocore/src/opentelemetry/instrumentation/botocore/extensions/secretsmanager.py (Co-authored-by: Tammy Baylis <[email protected]>)
  * Update instrumentation/opentelemetry-instrumentation-botocore/tests/test_botocore_secretsmanager.py (Co-authored-by: Tammy Baylis <[email protected]>, Emídio Neto <[email protected]>, Riccardo Magliocchetti <[email protected]>)
* clean up context handler, clarify unit tests
* remove generator concept

Co-authored-by: wrisa <[email protected]>
Co-authored-by: Riccardo Magliocchetti <[email protected]>
Co-authored-by: Luke (GuangHui) Zhang <[email protected]>
Co-authored-by: Tammy Baylis <[email protected]>
Co-authored-by: Emídio Neto <[email protected]>
Co-authored-by: Aaron Abbott <[email protected]>
* Rename UploadHook -> CompletionHook (open-telemetry#3780)
* Add opentelemetry-util-genai to the package release workflow (open-telemetry#3781)
* Fix package release workflows version.py finding (open-telemetry#3782)
  Looking at the files in this repo, the version file is always called version.py (and it should be). Tested the find command locally:

  ```shell
  $ for f in $(git ls-files '*version*.py'); do basename $f; done | sort -u
  test_version.py
  version.py
  $ find util/opentelemetry-util-genai/ -type f -path "**/version.py"
  util/opentelemetry-util-genai/src/opentelemetry/util/genai/version.py
  ```
* Adjust opentelemetry-instrumentation-vertexai dependency on opentelemetry-genai-util (open-telemetry#3785)
  This fixes the CI failure on the release PRs for opentelemetry-util-genai: open-telemetry#3784 (needs cherry pick) and open-telemetry#3783.
* Fix exception handling for JSON decoding (open-telemetry#3787)
* Add rstcheck in pre-commit (open-telemetry#3777)
  * Fix a bunch of rstcheck warnings
  * Add rstcheck to pre-commit
  * Ignore automodule
  * Update changelog and contributing
  * tox -e ruff -> tox -e precommit (but keep the old name for compat)
* update token types
* Update opentelemetry-util-genai version to v0.2b0 (open-telemetry#3783) (Co-authored-by: otelbot <[email protected]>, Aaron Abbott <[email protected]>)

Co-authored-by: Charlie Jonas <[email protected]>
Co-authored-by: otelbot[bot] <197425009+otelbot[bot]@users.noreply.github.com>
Co-authored-by: otelbot <[email protected]>
* agent support
* cleanup
* correct inference event and remove gen_ai.choice; fix content capture mode
* commented agent, workflow and task event emission
* updated output
* sample apps
* add operation to LLMInvocation and fixed operation value
* fix hardcoded span name for llm; fix content capture on spans
Move from composition to inheritance
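The commit above moves the type hierarchy from composition to inheritance. A minimal sketch of the difference, assuming simplified stand-in classes (`GenAIBase`, `LLMInvocationComposed`, and the field names are illustrative, not the PR's actual definitions):

```python
from dataclasses import dataclass, field
from typing import Any

# Composition: each entity *holds* a shared record as a member.
@dataclass
class GenAIBase:
    attributes: dict[str, Any] = field(default_factory=dict)

@dataclass
class LLMInvocationComposed:
    base: GenAIBase = field(default_factory=GenAIBase)
    request_model: str = ""

# Inheritance: shared fields come directly from a base dataclass.
@dataclass
class GenAI:
    attributes: dict[str, Any] = field(default_factory=dict)

@dataclass
class LLMInvocation(GenAI):
    request_model: str = ""

composed = LLMInvocationComposed(request_model="gpt-4")
inherited = LLMInvocation(request_model="gpt-4")
print(composed.base.attributes)  # shared state needs an extra hop
print(inherited.attributes)      # flat attribute access
```

With inheritance the shared telemetry fields are reachable directly on every entity, at the cost of coupling subclass constructor signatures to the base (which is exactly what the review comments below discuss).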
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
    @dataclass
    class GenAI:
        """Base type for all GenAI telemetry entities."""
Preserve keyword-only init for GenAI subclasses
Dropping kw_only=True from the GenAI dataclass changes the generated __init__ signature so all base fields become positional and are inserted before subclass fields. Previously subclasses like LLMInvocation or ToolCall could be instantiated positionally (LLMInvocation("gpt-4")) because the base attributes were keyword-only. After this change the same call now binds the argument to context_token and leaves request_model at its default, silently emitting spans with missing model information. This is a backward‑incompatible break for any consumer using positional arguments and will produce incorrect telemetry. Consider retaining kw_only=True or otherwise preserving the previous constructor signature.
@codex review
✅ Why This Solution Works Perfectly
Critical Discovery: Keyword-Only Usage Pattern
After reviewing actual usage in the codebase (particularly callback_handler.py), we found that 100% of instantiations use keyword arguments:
Pattern 1: LLM Invocation (lines 1055-1084)
    llm_kwargs: dict[str, Any] = {
        "request_model": request_model,
        "provider": provider_name,
        "framework": "langchain",
        "input_messages": input_messages,
        "request_functions": request_functions,
        "attributes": attributes,
    }
    # Conditionally add more kwargs...
    inv = UtilLLMInvocation(**llm_kwargs)  # ✅ Keyword arguments

Pattern 2: Agent Invocation (lines 604-616)
    agent = UtilAgent(
        name=name,
        operation=operation,
        agent_type=agent_type,
        description=description,
        framework=framework,
        model=model,
        tools=tools,
        system_instructions=system_instructions,
        attributes=attributes,
        run_id=run_id,
        parent_run_id=parent_run_id,
    )  # ✅ All keyword arguments

Pattern 3: Input Messages (lines 829-832)
    result.append(
        UtilInputMessage(
            role=role,
            parts=[UtilText(content=str(content))]
        )
    )  # ✅ Keyword arguments

Why Positional Arguments Were Never a Risk

- Codebase Convention: The entire codebase uses a consistent pattern of keyword arguments
- Builder Pattern: Most invocations build a kwargs dictionary first, then use `**kwargs` unpacking
- Explicit Fields: All calls explicitly name the parameters they're passing
- No Positional Usage: We found zero instances of positional instantiation like `LLMInvocation("gpt-4")`
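Under those conventions the signature change is benign. A minimal sketch, using hypothetical stand-in classes (not the real opentelemetry-util-genai definitions), of why kwargs-builder call sites are unaffected by whether base fields are keyword-only:

```python
from dataclasses import dataclass
from typing import Any

# Stand-ins: base fields positional (i.e. kw_only dropped).
@dataclass
class GenAI:
    context_token: object = None

@dataclass
class LLMInvocation(GenAI):
    request_model: str = ""
    provider: str = ""

# Builder pattern as used in callback_handler.py: every field is named,
# so parameter ORDER in __init__ never matters.
llm_kwargs: dict[str, Any] = {
    "request_model": "gpt-4",
    "provider": "openai",
}
inv = LLMInvocation(**llm_kwargs)
print(inv.request_model)   # bound by name, regardless of field order
print(inv.context_token)   # untouched base field keeps its default
```

Keyword unpacking binds by name, not position, so the reordering that breaks `LLMInvocation("gpt-4")` cannot misbind any of these call sites.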
Codex Review: Didn't find any major issues. Bravo.
Can we split the architecture recommendation from the Python 3.9 fixes?
@keith-decker Here's the PR with just changes for This PR was built on top of the one for just
a8d3a8c to 90e5276
Description
`types_redesign.py` and the document is `ARCHITECTURE_RECOMMENDATION.md`

Fixes # (issue)
Type of change
Please delete options that are not relevant.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.
Does This PR Require a Core Repo Change?
Checklist:
See contributing.md for styleguide, changelog guidelines, and more.