
Requests fail with an error after setting OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT to true #105

@chenjn268-sudo

Description

Describe your environment

The project is based on langchain.
Key project dependencies:
a2a-sdk 0.3.8
ag-ui-langgraph 0.0.17
ag-ui-protocol 0.1.9
langchain 1.1.0
langchain-anthropic 0.3.22
langchain-classic 1.0.0
langchain-community 0.4.1
langchain-core 1.1.0
langchain-deepseek 1.0.1
langchain-experimental 0.4.0
langchain-mcp-adapters 0.1.14
langchain-milvus 0.2.1
langchain-mongodb 0.7.1
langchain-openai 1.1.0
langchain-text-splitters 1.0.0
langgraph 1.0.4
langgraph-checkpoint 2.1.2
langgraph-checkpoint-mongodb 0.2.1
langgraph-prebuilt 1.0.2
langgraph-sdk 0.2.9
langsmith 0.4.41
openai 1.109.1
opencv-contrib-python 4.10.0.84
openpyxl 3.1.5
opentelemetry-api 1.40.0.dev0
opentelemetry-distro 0.60b1
opentelemetry-exporter-otlp 1.39.1
opentelemetry-exporter-otlp-proto-common 1.39.1
opentelemetry-exporter-otlp-proto-grpc 1.39.1
opentelemetry-exporter-otlp-proto-http 1.39.1
opentelemetry-instrumentation 0.61b0.dev0 D:\project\ai\cursorProject\tcmAgent\loongsuite-python-agent\opentelemetry-instrumentation
opentelemetry-instrumentation-aiohttp-client 0.60b1
opentelemetry-instrumentation-aiohttp-server 0.60b1
opentelemetry-instrumentation-asgi 0.60b1
opentelemetry-instrumentation-asyncio 0.60b1
opentelemetry-instrumentation-click 0.60b1
opentelemetry-instrumentation-dbapi 0.60b1
opentelemetry-instrumentation-fastapi 0.60b1
opentelemetry-instrumentation-grpc 0.60b1
opentelemetry-instrumentation-httpx 0.60b1
opentelemetry-instrumentation-jinja2 0.60b1
opentelemetry-instrumentation-langchain 2.0b0.dev0
opentelemetry-instrumentation-logging 0.60b1
opentelemetry-instrumentation-openai-v2 2.3b0
opentelemetry-instrumentation-pymongo 0.60b1
opentelemetry-instrumentation-redis 0.60b1
opentelemetry-instrumentation-requests 0.60b1
opentelemetry-instrumentation-sqlalchemy 0.60b1
opentelemetry-instrumentation-sqlite3 0.60b1
opentelemetry-instrumentation-starlette 0.60b1
opentelemetry-instrumentation-system-metrics 0.60b1
opentelemetry-instrumentation-threading 0.60b1
opentelemetry-instrumentation-tornado 0.60b1
opentelemetry-instrumentation-tortoiseorm 0.60b1
opentelemetry-instrumentation-urllib 0.60b1
opentelemetry-instrumentation-urllib3 0.60b1
opentelemetry-instrumentation-wsgi 0.60b1
opentelemetry-proto 1.39.1
opentelemetry-sdk 1.39.1
opentelemetry-semantic-conventions 0.61b0.dev0
opentelemetry-semantic-conventions-ai 0.4.13
opentelemetry-util-http 0.60b1
pydantic 2.12.3
pydantic-core 2.41.4
pydantic-settings 2.11.0

What happened?

After setting OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true, requests fail with an error. Without this setting the service runs normally and trace data is collected.

(base) (.venv) PS D:\project\ai\cursorProject\tcmAgent> $env:OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT = "true"
(base) (.venv) PS D:\project\ai\cursorProject\tcmAgent> loongsuite-instrument --exporter_otlp_protocol grpc --traces_exporter otlp --metrics_exporter otlp --exporter_otlp_insecure true --exporter_otlp_endpoint 10.57.10.40:11810 --service_name opentelemetry-python-langchain-zero-code uvicorn main:app --host 0.0.0.0 --port 8000
(base) (.venv) PS D:\project\ai\cursorProject\tcmAgent> Attempting to instrument while already instrumented
2026-01-21 16:34:22 - root - INFO - logging_config.py:41 - Logging system initialized - level: INFO
2026-01-21 16:34:23 - src.tools.tcm_llm_tools - INFO - tcm_llm_tools.py:35 - 龙印 AI Agent list query failed: None - None
D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
2026-01-21 16:34:30 - src.api.file_qa_api - INFO - file_qa_api.py:26 - Chunk upload directory ensured: D:\project\ai\cursorProject\tcmAgent\uploads\chunks
INFO: Started server process [38844]
INFO: Waiting for application startup.
2026-01-21 16:34:30 - main - INFO - main.py:31 - Starting application...
2026-01-21 16:34:30 - apscheduler.scheduler - INFO - base.py:507 - Adding job tentatively -- it will be properly scheduled when the scheduler starts
2026-01-21 16:34:30 - apscheduler.scheduler - INFO - base.py:1090 - Added job "clean up expired chunk uploads" to job store "default"
2026-01-21 16:34:30 - apscheduler.scheduler - INFO - base.py:214 - Scheduler started
2026-01-21 16:34:30 - src.scheduler.chunk_cleanup - INFO - chunk_cleanup.py:101 - Chunk cleanup scheduler started: runs every hour
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: 172.16.55.150:61963 - "POST /tcmAgent/send HTTP/1.1" 200 OK
<opentelemetry.instrumentation.openai_v2.patch.ToolCallBuffer object at 0x0000022024688C20>
<opentelemetry.instrumentation.openai_v2.patch.ToolCallBuffer object at 0x0000022024688C20>
ERROR: Exception in ASGI application

  • Exception Group Traceback (most recent call last):
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\_utils.py", line 79, in collapse_excgroups
    | yield
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\responses.py", line 270, in __call__
    | async with anyio.create_task_group() as task_group:
    | ~~~~~~~~~~~~~~~~~~~~~~~^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 781, in __aexit__
    | raise BaseExceptionGroup(
    | "unhandled errors in a TaskGroup", self.exceptions
    | ) from None
    | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
    +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 409, in run_asgi
    | result = await app( # type: ignore[func-returns-value]
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | self.scope, self.receive, self.send
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | )
    | ^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 60, in __call__
    | return await self.app(scope, receive, send)
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\fastapi\applications.py", line 1134, in __call__
    | await super().__call__(scope, receive, send)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\applications.py", line 113, in __call__
    | await self.middleware_stack(scope, receive, send)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
    | raise exc
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
    | await self.app(scope, receive, send)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\opentelemetry\instrumentation\asgi\__init__.py", line 810, in __call__
    | await self.app(scope, otel_receive, otel_send)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
    | raise exc
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
    | await self.app(scope, receive, send)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\opentelemetry\instrumentation\fastapi\__init__.py", line 307, in __call__
    | await self.app(scope, receive, send)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\middleware\exceptions.py", line 63, in __call__
    | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    | raise exc
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\_exception_handler.py", line 42, in wrapped_app
    | await app(scope, receive, sender)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    | await self.app(scope, receive, send)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\routing.py", line 716, in __call__
    | await self.middleware_stack(scope, receive, send)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\routing.py", line 736, in app
    | await route.handle(scope, receive, send)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\routing.py", line 290, in handle
    | await self.app(scope, receive, send)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\fastapi\routing.py", line 125, in app
    | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    | raise exc
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\_exception_handler.py", line 42, in wrapped_app
    | await app(scope, receive, sender)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\fastapi\routing.py", line 112, in app
    | await response(scope, receive, send)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\responses.py", line 269, in __call__
    | with collapse_excgroups():
    | ~~~~~~~~~~~~~~~~~~^^
    | File "D:\devEnv\python3137\Lib\contextlib.py", line 162, in __exit__
    | self.gen.throw(value)
    | ~~~~~~~~~~~~~~^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\_utils.py", line 85, in collapse_excgroups
    | raise exc
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\opentelemetry\instrumentation\asyncio\__init__.py", line 299, in trace_coroutine
    | return await coro
    | ^^^^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\responses.py", line 273, in wrap
    | await func()
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\starlette\responses.py", line 253, in stream_response
    | async for chunk in self.body_iterator:
    | ...<2 lines>...
    | await send({"type": "http.response.body", "body": chunk, "more_body": True})
    | File "D:\project\ai\cursorProject\tcmAgent\src\ag_ui_langgraph\endpoint.py", line 36, in event_generator
    | async for event in agent.run(input_data):
    | yield encoder.encode(event)
    | File "D:\project\ai\cursorProject\tcmAgent\src\ag_ui_langgraph\agent.py", line 117, in run
    | async for event_str in self._handle_stream_events(input.copy(update={"forwarded_props": forwarded_props})):
    | yield event_str
    | File "D:\project\ai\cursorProject\tcmAgent\src\ag_ui_langgraph\agent.py", line 174, in _handle_stream_events
    | async for event in stream:
    | ...<50 lines>...
    | yield single_event
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\runnables\base.py", line 1506, in astream_events
    | async for event in event_stream:
    | yield event
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\tracers\event_stream.py", line 1077, in astream_events_implementation_v2
    | await task
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\opentelemetry\instrumentation\asyncio\__init__.py", line 299, in trace_coroutine
    | return await coro
    | ^^^^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\tracers\event_stream.py", line 1032, in consume_astream
    | async for _ in event_streamer.tap_output_aiter(run_id, stream):
    | # All the content will be picked up
    | pass
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\tracers\event_stream.py", line 191, in tap_output_aiter
    | first = await py_anext(output, default=sentinel)
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\utils\aiter.py", line 76, in anext_impl
    | return await anext(iterator)
    | ^^^^^^^^^^^^^^^^^^^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langgraph\pregel\main.py", line 2971, in astream
    | async for _ in runner.atick(
    | ...<13 lines>...
    | yield o
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langgraph\pregel\_runner.py", line 304, in atick
    | await arun_with_retry(
    | ...<15 lines>...
    | )
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langgraph\pregel\_retry.py", line 132, in arun_with_retry
    | async for _ in task.proc.astream(task.input, config):
    | pass
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langgraph\_internal\_runnable.py", line 839, in astream
    | output = await asyncio.create_task(
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^
    | consume_aiter(aiterator), context=context
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | )
    | ^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\opentelemetry\instrumentation\asyncio\__init__.py", line 299, in trace_coroutine
    | return await coro
    | ^^^^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langgraph\_internal\_runnable.py", line 904, in consume_aiter
    | async for chunk in it:
    | ...<8 lines>...
    | output = chunk
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\tracers\event_stream.py", line 191, in tap_output_aiter
    | first = await py_anext(output, default=sentinel)
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\utils\aiter.py", line 76, in anext_impl
    | return await anext(iterator)
    | ^^^^^^^^^^^^^^^^^^^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\runnables\base.py", line 1579, in atransform
    | async for ichunk in input:
    | ...<14 lines>...
    | final = ichunk
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\runnables\base.py", line 1166, in astream
    | yield await self.ainvoke(input, config, **kwargs)
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langgraph\_internal\_runnable.py", line 473, in ainvoke
    | ret = await self.afunc(*args, **kwargs)
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\runnables\config.py", line 603, in run_in_executor
    | return await asyncio.get_running_loop().run_in_executor(
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | ...<2 lines>...
    | )
    | ^
    | File "D:\devEnv\python3137\Lib\concurrent\futures\thread.py", line 59, in run
    | result = self.fn(*self.args, **self.kwargs)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\opentelemetry\instrumentation\threading\__init__.py", line 171, in wrapped_func
    | return original_func(*func_args, **func_kwargs)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\runnables\config.py", line 594, in wrapper
    | return func(*args, **kwargs)
    | File "D:\project\ai\cursorProject\tcmAgent\src\agents\supervisor_agent.py", line 113, in supervisor_node
    | response = agent.invoke({
    | "messages": messages
    | })
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langgraph\pregel\main.py", line 3068, in invoke
    | for chunk in self.stream(
    | ~~~~~~~~~~~^
    | input,
    | ^^^^^^
    | ...<10 lines>...
    | **kwargs,
    | ^^^^^^^^^
    | ):
    | ^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langgraph\pregel\main.py", line 2643, in stream
    | for _ in runner.tick(
    | ~~~~~~~~~~~^
    | [t for t in loop.tasks.values() if not t.writes],
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | ...<2 lines>...
    | schedule_task=loop.accept_push,
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | ):
    | ^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langgraph\pregel\_runner.py", line 167, in tick
    | run_with_retry(
    | ~~~~~~~~~~~~~~^
    | t,
    | ^^
    | ...<10 lines>...
    | },
    | ^^
    | )
    | ^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langgraph\pregel\_retry.py", line 42, in run_with_retry
    | return task.proc.invoke(task.input, config)
    | ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langgraph\_internal\_runnable.py", line 656, in invoke
    | input = context.run(step.invoke, input, config, **kwargs)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langgraph\_internal\_runnable.py", line 400, in invoke
    | ret = self.func(*args, **kwargs)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain\agents\factory.py", line 1129, in model_node
    | response = _execute_model_sync(request)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain\agents\factory.py", line 1102, in _execute_model_sync
    | output = model.invoke(messages)
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\runnables\base.py", line 5534, in invoke
    | return self.bound.invoke(
    | ~~~~~~~~~~~~~~~~~^
    | input,
    | ^^^^^^
    | self._merge_configs(config),
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | **{**self.kwargs, **kwargs},
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | )
    | ^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 398, in invoke
    | self.generate_prompt(
    | ~~~~~~~~~~~~~~~~~~~~^
    | [self._convert_input(input)],
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | ...<6 lines>...
    | **kwargs,
    | ^^^^^^^^^
    | ).generations[0][0],
    | ^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 1117, in generate_prompt
    | return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
    | ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 927, in generate
    | self._generate_with_cache(
    | ~~~~~~~~~~~~~~~~~~~~~~~~~^
    | m,
    | ^^
    | ...<2 lines>...
    | **kwargs,
    | ^^^^^^^^^
    | )
    | ^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 1178, in _generate_with_cache
    | for chunk in self._stream(messages, stop=stop, **kwargs):
    | ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\langchain_openai\chat_models\base.py", line 1273, in _stream
    | with context_manager as response:
    | ^^^^^^^^^^^^^^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\opentelemetry\instrumentation\openai_v2\patch.py", line 593, in __exit__
    | self.cleanup()
    | ~~~~~~~~~~~~^^
    | File "D:\project\ai\cursorProject\tcmAgent\.venv\Lib\site-packages\opentelemetry\instrumentation\openai_v2\patch.py", line 551, in cleanup
    | function["arguments"] = "".join(
    | ~~~~~~~^
    | tool_call.arguments
    | ^^^^^^^^^^^^^^^^^^^
    | )
    | ^
    | TypeError: sequence item 3: expected str instance, NoneType found
    | During task with name 'model' and id 'c433e6e4-f00e-f82f-17c7-e69c5e0b08cb'
    | During task with name 'supervisor' and id 'b3a85ba6-a6b1-49d3-2872-bec6873e9100'
    +------------------------------------

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
(identical to the traceback above, re-raised outside the ExceptionGroup)
TypeError: sequence item 3: expected str instance, NoneType found
During task with name 'model' and id 'c433e6e4-f00e-f82f-17c7-e69c5e0b08cb'
During task with name 'supervisor' and id 'b3a85ba6-a6b1-49d3-2872-bec6873e9100'
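
The failing frame is cleanup() in opentelemetry/instrumentation/openai_v2/patch.py, which joins the tool-call argument fragments buffered while the response streamed. A minimal sketch of the failure and a defensive fix, assuming (not verified against patch.py) that ToolCallBuffer accumulates the raw arguments string of each stream delta and that deltas carrying only a tool name or index contribute None:

import json

# Illustrative stream-delta fragments for one tool call; index 3 is None to
# match "sequence item 3" in the error. Values are hypothetical.
fragments = ['{"city"', ': ', '"Hangzhou"}', None]

try:
    "".join(fragments)  # reproduces the crash
except TypeError as e:
    print(e)  # sequence item 3: expected str instance, NoneType found

# A defensive join that drops None fragments avoids the crash and still
# yields the complete JSON string for the captured span attribute:
arguments = "".join(f for f in fragments if f is not None)
print(json.loads(arguments))  # {'city': 'Hangzhou'}

If that assumption holds, filtering None (or coalescing it to "") wherever the buffer is flushed should be sufficient.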

Steps to Reproduce

Set OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT = "true", then send an ordinary LLM chat request; no tool was explicitly invoked. A standalone sketch follows below.
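
A standalone reproduction sketch stripped of the LangChain stack; the model name, credentials, and tool schema are placeholders. It assumes the crash needs a streamed response containing tool-call deltas; the ToolCallBuffer objects printed in the log above suggest such deltas were buffered even though no tool was explicitly requested.

import os

# The trigger: enable content capture before instrumenting.
os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"

from openai import OpenAI
from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

OpenAIInstrumentor().instrument()

client = OpenAI()  # assumes OPENAI_API_KEY / OPENAI_BASE_URL are set
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "What is the weather in Hangzhou?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",  # placeholder tool
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        },
    }],
    stream=True,
)
for _ in stream:  # the instrumented context manager's __exit__ runs cleanup()
    pass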

Expected Result

The request completes normally.

Actual Result

The request fails with the TypeError shown above.

Additional context

No response

Would you like to implement a fix?

None

Metadata

Assignees: No one assigned
Labels: bug (Something isn't working)
Milestone: none