maximum context length #247

@aydinmyilmaz

Description

Hi there,

Is it possible to add an option that limits the token or word count via a parameter, or to simply skip a request that hits this error instead of breaking the whole evaluation process?

```
evaluating with [answer_relevancy]

InvalidRequestError                       Traceback (most recent call last)
in <cell line: 63>()
     62
     63 for col in columns_to_evaluate:
---> 64     evaluate_column_and_save(df_9_response, col, evaluate)

22 frames
/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py in _interpret_response_line(self, rbody, rcode, rheaders, stream)
    773         stream_error = stream and "error" in resp.data
    774         if stream_error or not 200 <= rcode < 300:
--> 775             raise self.handle_error_response(
    776                 rbody, rcode, resp.data, rheaders, stream_error=stream_error
    777             )

InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4218 tokens. Please reduce the length of the messages.
```
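Until a built-in option exists, two workarounds may help: pre-truncating long inputs with tiktoken, or catching the error per column so one oversized request doesn't abort the whole run. A minimal sketch, assuming the user-defined `evaluate_column_and_save`, `columns_to_evaluate`, `df_9_response`, and `evaluate` names from the snippet above, and the pre-1.0 `openai` SDK shown in the traceback:

```python
import tiktoken
from openai.error import InvalidRequestError  # openai<1.0, matching the traceback

# Assumed budget: leaves headroom under the 4097-token window for the
# metric's own prompt and the completion. Tune to your setup.
MAX_PROMPT_TOKENS = 3500

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def truncate_to_budget(text: str, budget: int = MAX_PROMPT_TOKENS) -> str:
    """Clip text to at most `budget` tokens so the request fits the window."""
    tokens = enc.encode(text)
    return text if len(tokens) <= budget else enc.decode(tokens[:budget])

# Option 1: pre-truncate long rows before evaluating.
for col in columns_to_evaluate:
    df_9_response[col] = df_9_response[col].astype(str).map(truncate_to_budget)

# Option 2: skip a column whose request still overflows, instead of aborting.
for col in columns_to_evaluate:
    try:
        evaluate_column_and_save(df_9_response, col, evaluate)
    except InvalidRequestError as exc:
        print(f"skipping {col}: {exc}")
```

Option 1 changes what the metric sees (clipped text can lower answer_relevancy scores), while Option 2 keeps inputs intact but drops the offending columns, so which trade-off is acceptable depends on the evaluation.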

Labels

answered, bug
