
Add Faraday retry support for the OpenAI LLM #1025

@SpiffyStores

Description


Currently there is no support for retrying OpenAI requests that fail due to network glitches or other transient errors.

When processing a very large number of embed! requests, the process can crash fairly often due to temporary network issues. When that happens, updates to the vector store may be lost, because it's difficult to determine exactly where the failure occurred, and re-running a very large batch of embedding requests is often impractical since it can take days to process everything.

There's a very simple fix available using the Faraday retry middleware.

Here's the initialization method with retry middleware added.

require "faraday/retry" # the :retry middleware (shipped separately since Faraday 2.x)

retry_options = {
  max: 6,                                    # retry up to 6 times
  interval: 0.05,                            # start with a 50 ms pause between attempts
  interval_randomness: 0.5,                  # add up to 50% jitter to the interval
  backoff_factor: 2,                         # double the interval on each retry
  retry_statuses: [429, 500, 502, 503, 504], # retry rate limits and transient server errors
  methods: %i[get post],                     # POST isn't retried by default, so list it explicitly
  exceptions: Faraday::Retry::Middleware::DEFAULT_EXCEPTIONS + [Faraday::ConnectionFailed] # also retry dropped connections
}

@client = ::OpenAI::Client.new(access_token: api_key, **llm_options) do |f|
  f.response :logger, Langchain.logger, {headers: true, bodies: true, errors: true}
  f.request :retry, retry_options
end
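
This assumes the faraday-retry gem is available; since Faraday 2.x the :retry middleware is no longer bundled with Faraday itself, so the gemspec (or an application's Gemfile) would need something along the lines of:

gem "faraday-retry" # provides Faraday::Retry::Middleware, registered as the :retry request middleware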

So far, these parameters have worked well against the OpenAI API.
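
For anyone who wants to sanity-check the behaviour without hitting the real API, here's a minimal, self-contained sketch (not part of the proposed change; the endpoint path and failure counts are purely illustrative) using Faraday's built-in test adapter. A stub returns 503 twice before succeeding, and the :retry middleware absorbs the failures transparently:

require "faraday"
require "faraday/adapter/test" # Faraday's built-in test adapter
require "faraday/retry"

attempts = 0

# Stubbed endpoint: fail twice with 503, then succeed.
stubs = Faraday::Adapter::Test::Stubs.new do |stub|
  stub.post("/v1/embeddings") do
    attempts += 1
    attempts < 3 ? [503, {}, "service unavailable"] : [200, {}, '{"data": []}']
  end
end

conn = Faraday.new do |f|
  f.request :retry, {
    max: 6, interval: 0.05, interval_randomness: 0.5, backoff_factor: 2,
    retry_statuses: [429, 500, 502, 503, 504], methods: %i[get post]
  }
  f.adapter :test, stubs
end

response = conn.post("/v1/embeddings")
puts attempts        # => 3 (two transparent retries, then success)
puts response.status # => 200

A client configured as in the snippet above would get the same behaviour: transient 5xx responses and rate limits stop bubbling up as crashes, as long as they resolve within the retry budget.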


Labels: enhancement (New feature or request)
