Description
Hi there,
Would it be possible to add a parameter that limits the token or word count, or to skip a request that raises this error instead of aborting the entire evaluation run?
```
evaluating with [answer_relevancy]

InvalidRequestError                       Traceback (most recent call last)
in <cell line: 63>()
     62
     63 for col in columns_to_evaluate:
---> 64     evaluate_column_and_save(df_9_response, col, evaluate)

22 frames
/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py in _interpret_response_line(self, rbody, rcode, rheaders, stream)
    773         stream_error = stream and "error" in resp.data
    774         if stream_error or not 200 <= rcode < 300:
--> 775             raise self.handle_error_response(
    776                 rbody, rcode, resp.data, rheaders, stream_error=stream_error
    777             )

InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4218 tokens. Please reduce the length of the messages.
```
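As a workaround until such an option exists, the skip-instead-of-crash behaviour can be approximated in user code by catching the context-length error per row and continuing. The sketch below is a minimal, self-contained illustration of the pattern only: `ContextLengthError` stands in for `openai.InvalidRequestError`, and `evaluate` is a hypothetical evaluator that fails on over-long inputs, not the real ragas `evaluate` function.

```python
class ContextLengthError(Exception):
    """Stand-in for openai.InvalidRequestError on context overflow."""

def evaluate(text):
    # Hypothetical evaluator: rejects long inputs, mimicking the
    # 4097-token context limit reported in the traceback above.
    if len(text.split()) > 5:
        raise ContextLengthError("maximum context length exceeded")
    return {"answer_relevancy": 1.0}

def evaluate_all(rows):
    """Evaluate each row; skip context-length failures instead of aborting."""
    results, skipped = [], []
    for i, text in enumerate(rows):
        try:
            results.append(evaluate(text))
        except ContextLengthError:
            skipped.append(i)  # record the row index and keep going
    return results, skipped
```

With the real API the same structure applies: wrap the per-row call in `try/except` on the client's request error and collect the indices of skipped rows so they can be reported or retried with truncated inputs afterwards.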