Description:
When using DeepAgent to generate code, I frequently hit a point where generation stops abruptly because the LLM reaches its output token limit. This often happens before the code is complete, leaving partial or unusable results.
I've tried adding prompts instructing the agent to generate code in segments/batches, but the issue persists.
Has anyone experienced this or found a workaround? Some potential solutions I'm considering:
- Automatic resumption when output is truncated - detect incomplete code and continue generation
- Built-in chunking strategy for large code generation tasks
- Configurable token limit awareness in the agent
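As a rough illustration of the first idea (automatic resumption), here is a minimal sketch of a retry loop. It assumes a hypothetical `generate(prompt)` callable that returns the generated text plus a flag indicating whether the output was truncated (e.g. `finish_reason == "length"` in OpenAI-style APIs); the function name and prompt wording are illustrative, not part of any DeepAgent API:

```python
def generate_until_complete(generate, prompt, max_rounds=5):
    """Repeatedly call `generate`, feeding the partial output back in,
    until the model signals completion or `max_rounds` is reached.

    `generate(prompt)` is assumed to return a tuple `(text, truncated)`,
    where `truncated` is True when the output hit the token limit.
    """
    parts = []
    for _ in range(max_rounds):
        text, truncated = generate(prompt)
        parts.append(text)
        if not truncated:
            break
        # Ask the model to pick up exactly where the output stopped.
        prompt = (
            "Continue the following code exactly where it stops, "
            "without repeating anything:\n" + "".join(parts)
        )
    return "".join(parts)
```

In practice the continuation prompt needs care (the model may repeat or drift), but even this naive loop recovers many truncated generations.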
Questions:
- Is there a recommended way to handle long code generation?
- Are there any built-in features I might be missing to mitigate this?
- Would it be possible to add support for streaming/chunked code generation?
Any guidance would be appreciated!