Replies: 1 comment 1 reply
-
Huh, that is quite a lot of tokens - 251k. Tbh, I've never seen an LLM conversation that big because, usually, once it gets over 8k you get an error, so you really can't reach a number that large. Can you check what the conversation looks like? That is in the AgentConvo class, just before it calls the llm_connection function to stream the GPT data. Maybe it's fetching files that should be ignored? What language is the app being written in?
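One quick way to do that check is to print a per-message token breakdown just before the llm_connection call. A minimal sketch (the `messages` list-of-dicts shape follows the OpenAI chat format; the ~4-characters-per-token heuristic is a rough approximation - for exact counts you'd use the tiktoken library instead):

```python
# Rough per-message token estimate for an OpenAI-style chat history.
# Useful for spotting a single huge message (e.g. an accidentally
# included file) that blows up the total token count.

def estimate_tokens(text: str) -> int:
    # Common rule of thumb: roughly 4 characters per token for English text
    return max(1, len(text) // 4)

def message_breakdown(messages):
    # messages: list of {"role": ..., "content": ...} dicts (assumed format)
    return [(i, m["role"], estimate_tokens(m["content"]))
            for i, m in enumerate(messages)]

messages = [
    {"role": "system", "content": "You are a coding assistant."},
    {"role": "user", "content": "line of a large pasted file\n" * 10000},
]
for i, role, n in message_breakdown(messages):
    print(i, role, n)
```

If one message dominates the total, that's usually where unwanted file content is sneaking in.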
-
I think this project is really clever and a much better approach than its contemporaries.
I'm having an issue using it, though. I've gotten about halfway through the app I want to build and am now getting this error:
Traceback (most recent call last):
File "/Users/samuelbrent/Code/gpt-pilot/pilot/main.py", line 35, in <module>
project.start()
File "/Users/samuelbrent/Code/gpt-pilot/pilot/helpers/Project.py", line 81, in start
self.developer.start_coding()
File "/Users/samuelbrent/Code/gpt-pilot/pilot/helpers/agents/Developer.py", line 32, in start_coding
self.implement_task()
File "/Users/samuelbrent/Code/gpt-pilot/pilot/helpers/agents/Developer.py", line 53, in implement_task
self.execute_task(convo_dev_task, task_steps, continue_development=True)
File "/Users/samuelbrent/Code/gpt-pilot/pilot/helpers/agents/Developer.py", line 116, in execute_task
self.continue_development(convo)
File "/Users/samuelbrent/Code/gpt-pilot/pilot/helpers/agents/Developer.py", line 130, in continue_development
iteration_convo.send_message('development/iteration.prompt', {
File "/Users/samuelbrent/Code/gpt-pilot/pilot/helpers/AgentConvo.py", line 51, in send_message
response = create_gpt_chat_completion(self.messages, self.high_level_step, function_calls=function_calls)
File "/Users/samuelbrent/Code/gpt-pilot/pilot/utils/llm_connection.py", line 94, in create_gpt_chat_completion
raise ValueError(f'Too many tokens in messages: {tokens_in_messages}. Please try a different test.')
ValueError: Too many tokens in messages: 251802. Please try a different test.
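From the traceback, the guard in llm_connection.py appears to fail fast when the message history exceeds the model's context window. A sketch of what such a check presumably looks like (this is not gpt-pilot's actual code; the `MAX_GPT_MODEL_TOKENS` constant and function name are assumptions, only the error message is taken from the traceback):

```python
MAX_GPT_MODEL_TOKENS = 8192  # assumed context limit for the model in use

def check_token_limit(tokens_in_messages: int) -> None:
    # Fail fast before making the API call, mirroring the error
    # seen in the traceback above
    if tokens_in_messages > MAX_GPT_MODEL_TOKENS:
        raise ValueError(
            f'Too many tokens in messages: {tokens_in_messages}. '
            'Please try a different test.'
        )
```

With 251,802 tokens against an ~8k limit, the history is roughly 30x over budget, which is why trimming or ignoring files (rather than shortening a prompt slightly) is the likely fix.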