10 changes: 9 additions & 1 deletion errors.go
@@ -3,6 +3,7 @@ package fantasy
import (
"errors"
"fmt"
"io"
"net/http"
"strings"

@@ -49,7 +50,14 @@ func (m *ProviderError) Error() string {

// IsRetryable checks if the error is retryable based on the status code.
func (m *ProviderError) IsRetryable() bool {
return m.StatusCode == http.StatusRequestTimeout || m.StatusCode == http.StatusConflict || m.StatusCode == http.StatusTooManyRequests
// According to OpenAI's Go SDK [1], we should retry on connection errors and server internal errors.
// [1] https://github.com/openai/openai-go/blob/719cc10a9a2f8b1ad5bc60f2aac279bf6646a842/internal/requestconfig/requestconfig.go#L250

if errors.Is(m.Cause, io.ErrUnexpectedEOF) {
return true
}

return m.StatusCode == http.StatusRequestTimeout || m.StatusCode == http.StatusConflict || m.StatusCode == http.StatusTooManyRequests || m.StatusCode >= http.StatusInternalServerError
Member:
I'm not sure if retrying for 500 is a good idea. Other providers might behave differently than OpenAI.

Thoughts @kujtimiihoxha?

Member:

I think it's fine to retry on 500, actually; it should eliminate some of the random internal errors I see from time to time.

Author:

Yes; third-party LLM providers in particular still occasionally return 5XX errors, and when that happens the impact on the application is really frustrating. I think a relatively "polite" retry strategy (e.g., exponential backoff) is acceptable.

}

// RetryError represents an error that occurred during retry operations.
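The "polite" exponential-backoff strategy the author mentions could look roughly like the sketch below. `withBackoff`, `flaky`, and the attempt/delay values are hypothetical illustrations for this discussion, not part of the PR:

```go
package main

import (
	"fmt"
	"time"
)

// retryable abstracts (*ProviderError).IsRetryable so the loop
// does not depend on any particular error type.
type retryable interface{ IsRetryable() bool }

// flaky is a toy retryable error, purely for the demo below.
type flaky struct{}

func (flaky) Error() string     { return "transient provider error" }
func (flaky) IsRetryable() bool { return true }

// withBackoff retries fn with exponential backoff while it keeps
// returning retryable errors, failing fast on anything else.
func withBackoff(fn func() error, maxAttempts int, base time.Duration) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		r, ok := err.(retryable)
		if !ok || !r.IsRetryable() {
			return err // non-retryable: give up immediately
		}
		time.Sleep(base << attempt) // base, 2*base, 4*base, ...
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
}

func main() {
	calls := 0
	err := withBackoff(func() error {
		calls++
		if calls < 3 {
			return flaky{} // fail twice, then succeed
		}
		return nil
	}, 5, time.Millisecond)
	fmt.Println(calls, err) // 3 <nil>
}
```

A production version would also cap the maximum delay and add jitter so that many clients retrying a struggling provider do not synchronize their requests.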