Question about model parameter configuration

Hi! I've been going through the documentation (wiki and README) but couldn't find examples of how to configure common LLM parameters in code. Specifically, I'm looking for guidance on how to set:

- temperature
- maximum output tokens
- top-p

Could you provide a code example showing how to configure these parameters when initializing or calling the model? This would be really helpful for tuning the model's behavior for different use cases. Thanks for the great library!

Expected outcome

A code snippet demonstrating parameter configuration, similar to:
# Example of what I'm looking for
model = SomeModel(
    temperature=0.7,
    max_tokens=1000,
    top_p=0.9
)
Replies: 2 comments
-
You would need to initialize a request and pass the generation config into it. You can also pass a generation config into the GenerativeModel constructor.

// Build a request and attach a generation config to it.
var request = new GenerateContentRequest();
request.GenerationConfig = new GenerationConfig()
{
    Temperature = 0.3,
    MaxOutputTokens = 1000
};

// The per-request config applies to this call only.
var response = await model.GenerateContentAsync(request);
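For the constructor route, here is a minimal sketch. The constructor arguments shown (API key, model name, a generationConfig parameter) and the TopP property are assumptions for illustration, not confirmed signatures; check the GenerativeModel constructor overloads in your version of the library.

// Hedged sketch: set defaults once at construction so every call inherits them.
// The exact constructor signature may differ in your library version.
var config = new GenerationConfig()
{
    Temperature = 0.7,      // sampling temperature
    MaxOutputTokens = 1000, // cap on generated tokens
    TopP = 0.9              // nucleus sampling cutoff (assumed property name)
};
var model = new GenerativeModel("YOUR_API_KEY", "gemini-1.5-flash", generationConfig: config);

// Calls without an explicit per-request GenerationConfig now use these defaults.
var response = await model.GenerateContentAsync("Hello!");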
-
Thank you for your answer!