This repository was archived by the owner on Jul 22, 2025. It is now read-only.

Conversation

@nattsw (Contributor) commented Jun 23, 2025

In discourse/discourse-translator#249 we introduced splitting content (post.raw) before sending it for translation, since we were using a sync API.

Now that we're streaming responses thanks to #1424, we'll chunk content based on the LlmModel's max_output_tokens instead.
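The chunking described above could look something like the following sketch. This is illustrative only, not the actual plugin code: the `ContentSplitter` class name, the characters-per-token heuristic, and the paragraph-boundary strategy are all assumptions; the real implementation would use the model's tokenizer and the `LlmModel.max_output_tokens` value.

```ruby
# Hypothetical sketch: split raw post content into chunks sized against a
# model's max_output_tokens budget, so each translated chunk fits in one
# streamed response. Not the actual discourse-ai API.
class ContentSplitter
  # Rough heuristic: ~4 characters per token for English-like text.
  # A real implementation would count tokens with the model's tokenizer.
  CHARS_PER_TOKEN = 4

  def self.split(text, max_output_tokens)
    max_chars = max_output_tokens * CHARS_PER_TOKEN
    chunks = []
    buffer = +""

    # Split on paragraph boundaries so chunks stay coherent for translation.
    text.split("\n\n").each do |paragraph|
      candidate = buffer.empty? ? paragraph : "#{buffer}\n\n#{paragraph}"
      if candidate.length <= max_chars
        buffer = candidate
      else
        chunks << buffer unless buffer.empty?
        # Note: a single paragraph longer than max_chars still becomes its
        # own oversized chunk; a real splitter would subdivide it further.
        buffer = paragraph
      end
    end
    chunks << buffer unless buffer.empty?
    chunks
  end
end
```

With streaming, each chunk can be translated independently and the results concatenated, so the chunk size only needs to respect the model's output budget rather than a hard sync-API limit.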

martin-brennan previously approved these changes Jun 23, 2025
@nattsw nattsw marked this pull request as draft June 23, 2025 07:29
@nattsw nattsw changed the title from "DEV: No need to split content as we're streaming responses" to "DEV: Split content based on llmmodel's max_output_tokens" June 23, 2025
@nattsw nattsw marked this pull request as ready for review June 23, 2025 08:59
@nattsw nattsw requested review from martin-brennan and removed request for martin-brennan June 23, 2025 09:01
@nattsw nattsw dismissed martin-brennan’s stale review June 23, 2025 09:05

PR content changed

@nattsw nattsw merged commit 683bb57 into main Jun 23, 2025
6 checks passed
@nattsw nattsw deleted the dont-split branch June 23, 2025 13:11


4 participants