```sh
OPENAI_TOKEN=XXX go test -v -tags=integration ./api_integration_test.go
```

If the `OPENAI_TOKEN` environment variable is not available, integration tests will be skipped.

## Frequently Asked Questions

### Why don't we get the same answer when specifying a temperature field of 0 and asking the same question?

Even when you specify a temperature field of 0, you are not guaranteed to always get the same response. Several factors come into play.

1. Go OpenAI Behavior: When you specify a temperature field of 0 in Go OpenAI, the `omitempty` tag causes that field to be removed from the request. Consequently, the OpenAI API applies the default value of 1.
2. Token Count for Input/Output: If there is a large number of tokens in the input and output, setting the temperature to 0 can still result in non-deterministic behavior. In particular, requests approaching around 32k tokens are the most likely to behave non-deterministically, even with a temperature of 0.

Due to the factors mentioned above, different answers may be returned even for the same question.
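
The first factor is standard `encoding/json` behavior: a zero-valued field tagged `omitempty` is dropped when the request is marshaled. A minimal sketch of the pitfall (the struct below is a simplified stand-in, not the actual Go OpenAI request type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// request is a simplified stand-in for a chat completion request;
// the real Go OpenAI request type has many more fields.
type request struct {
	Temperature float32 `json:"temperature,omitempty"`
}

func main() {
	// Temperature 0 is float32's zero value, so omitempty
	// removes the field from the serialized request entirely.
	b, _ := json.Marshal(request{Temperature: 0})
	fmt.Println(string(b)) // {}

	// Any non-zero value is kept.
	b, _ = json.Marshal(request{Temperature: 0.7})
	fmt.Println(string(b)) // {"temperature":0.7}
}
```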

**Workarounds:**
1. Using `math.SmallestNonzeroFloat32`: By specifying `math.SmallestNonzeroFloat32` in the temperature field instead of 0, you can mimic the behavior of setting it to 0 (see the sketch below).
2. Limiting Token Count: By limiting the number of tokens in the input and output, and especially by avoiding large requests close to 32k tokens, you can reduce the risk of non-deterministic behavior.

By adopting these strategies, you can expect more consistent results.
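
For example, the first workaround might look like this (a minimal sketch; the model and prompt are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"math"
	"os"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	client := openai.NewClient(os.Getenv("OPENAI_TOKEN"))

	resp, err := client.CreateChatCompletion(
		context.Background(),
		openai.ChatCompletionRequest{
			Model: openai.GPT3Dot5Turbo,
			// A non-zero value survives the omitempty tag while
			// still behaving almost exactly like a temperature of 0.
			Temperature: math.SmallestNonzeroFloat32,
			Messages: []openai.ChatCompletionMessage{
				{Role: openai.ChatMessageRoleUser, Content: "Hello!"},
			},
		},
	)
	if err != nil {
		fmt.Printf("ChatCompletion error: %v\n", err)
		return
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```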

**Related Issues:**
[omitempty option of request struct will generate incorrect request when parameter is 0.](https://github.com/sashabaranov/go-openai/issues/9)

### Does Go OpenAI provide a method to count tokens?

No, Go OpenAI does not offer a feature to count tokens, and there are no plans to provide such a feature in the future. However, if there's a way to implement a token counting feature with zero dependencies, it might be possible to merge that feature into Go OpenAI. Otherwise, it would be more appropriate to implement it in a dedicated library or repository.

For counting tokens, you might find the following links helpful:
- [Counting Tokens For Chat API Calls](https://github.com/pkoukk/tiktoken-go#counting-tokens-for-chat-api-calls)
- [How to count tokens with tiktoken](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
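
As an illustration, counting tokens with the third-party `tiktoken-go` library linked above might look like this (a minimal sketch; check that library's README for the current API):

```go
package main

import (
	"fmt"

	"github.com/pkoukk/tiktoken-go"
)

func main() {
	// Pick the token encoding that matches the target model.
	tkm, err := tiktoken.EncodingForModel("gpt-3.5-turbo")
	if err != nil {
		fmt.Printf("EncodingForModel error: %v\n", err)
		return
	}

	// Encode returns the token IDs; their count is the token count.
	tokens := tkm.Encode("Hello, world!", nil, nil)
	fmt.Println(len(tokens))
}
```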

**Related Issues:**
[Is it possible to join the implementation of GPT3 Tokenizer](https://github.com/sashabaranov/go-openai/issues/62)

## Thank you

We want to take a moment to express our deepest gratitude to the [contributors](https://github.com/sashabaranov/go-openai/graphs/contributors) and sponsors of this project: