Commit d233bc4 (parent 478e7e2)
fix(gemini): gemini input token calculation when implicit cache is hit using langchain (#1451)
fix: gemini caching token calculation when using langchain
Currently:
When the input tokens are reported under `input_modality_1`, the `input` token count is 0.
The cached-token logic only subtracts the cached tokens from `input`, but they should also be subtracted from `input_modality_1`.
Proposed fix:
Subtract `cache_tokens_details` from the corresponding `input_modality` bucket in addition to subtracting it from `input`.
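The adjustment can be sketched roughly as follows. This is an illustrative Python sketch, not the library's actual code: the flat `usage` dict, the `input_<modality>` key format, the `subtract_cached_tokens` helper, and the clamping at zero are assumptions made for the example; only the `input`, `input_modality_1`, and `cache_tokens_details` names come from the description above.

```python
# Hypothetical sketch of the corrected bookkeeping (not the library's code).
# Assumes `usage` is a flat dict with an aggregate "input" count plus
# per-modality keys such as "input_modality_1", and that
# `cache_tokens_details` maps a modality name to its cached token count.

def subtract_cached_tokens(usage: dict, cache_tokens_details: dict) -> dict:
    adjusted = dict(usage)
    for modality, cached in cache_tokens_details.items():
        # Old behaviour: only the aggregate "input" bucket was reduced, so
        # cached tokens reported under a modality key were never removed.
        adjusted["input"] = max(adjusted.get("input", 0) - cached, 0)
        # Fix: also reduce the matching per-modality counter.
        modality_key = f"input_{modality}"  # e.g. "input_modality_1" (assumed key format)
        if modality_key in adjusted:
            adjusted[modality_key] = max(adjusted[modality_key] - cached, 0)
    return adjusted


# Example mirroring the scenario above: tokens sit under the modality bucket
# while the aggregate "input" count is 0.
usage = {"input": 0, "input_modality_1": 1200}
print(subtract_cached_tokens(usage, {"modality_1": 800}))
# -> {'input': 0, 'input_modality_1': 400}
```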
1 file changed: +3 −0 (three lines inserted after line 1177 of the modified file, at new lines 1178–1180).