Commit 1d22a1f

fix: get llm token usage add result type

1 parent 334428e commit 1d22a1f

File tree

1 file changed: +2 −1 lines changed

src/backend/bisheng/llm/domain/utils.py

Lines changed: 2 additions & 1 deletion

@@ -64,7 +64,8 @@ def parse_token_usage(result: Any) -> tuple[int, int, int, int]:
         cache_token += tmp3
         total_token += tmp4
     elif isinstance(result, ChatGenerationChunk):
-        token_usage = result.message.response_metadata.get('token_usage', {})
+        token_usage = result.message.response_metadata.get('token_usage', {}) or result.generation_info.get(
+            'token_usage', {})
         input_token, output_token, cache_token, total_token = get_token_from_usage(token_usage)
     else:
         logger.warning(f'unknown result type: {type(result)}')
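The change adds a fallback source for token usage: if `response_metadata` has no `token_usage` entry, the chunk's `generation_info` is consulted instead. A minimal sketch of that fallback, using plain dicts as hypothetical stand-ins for the real `ChatGenerationChunk` attributes (the helper name `pick_token_usage` is illustrative, not from the source):

```python
# Sketch of the fallback this commit introduces: prefer token usage from
# response_metadata, fall back to generation_info when it is absent.
# Plain dicts stand in for ChatGenerationChunk's real attributes.

def pick_token_usage(response_metadata: dict, generation_info: dict) -> dict:
    # `dict.get('token_usage', {})` yields {} (falsy) when the key is
    # missing, so `or` falls through to the second source.
    return response_metadata.get('token_usage', {}) or generation_info.get(
        'token_usage', {})

# Present in response_metadata: used directly.
print(pick_token_usage({'token_usage': {'total_tokens': 5}}, {}))
# Missing there: generation_info supplies it.
print(pick_token_usage({}, {'token_usage': {'total_tokens': 7}}))
# Missing in both: an empty dict, which get_token_from_usage can
# presumably handle by reporting zero tokens.
print(pick_token_usage({}, {}))
```

One caveat of the `or` idiom: a usage dict that is present but empty is treated the same as a missing one, which is the desired behavior here.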
