
Conversation

@quantstruct-bot
Collaborator

Enhancements in Assistant Responses, New Gemini Model, and Call Handling

  1. Introduction of 'gemini-2.0-flash-lite' Model Option: You can now use gemini-2.0-flash-lite in Assistant.model[provider="google"].model[model="gemini-2.0-flash-lite"] for a reduced-latency, lower-cost Gemini model with a 1 million token context window (a configuration sketch follows this list).


  2. New Assistant Paginated Response: All Assistant endpoints now return paginated responses. Each response specifies itemsPerPage, totalItems, and currentPage, which you can use to navigate through a list of assistants (a pagination sketch follows below).
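
As a quick illustration of the first item, here is a minimal sketch of creating an assistant that selects the new model. The base URL, endpoint path, auth header, and the `name` field are assumptions for illustration; only the `provider: "google"` / `model: "gemini-2.0-flash-lite"` pairing comes from the changelog entry above.

```typescript
// Minimal sketch: create an assistant that uses the new Gemini model.
// The base URL, endpoint path, auth header, and surrounding fields are
// assumptions for illustration; only the provider/model pairing comes
// from the changelog entry above.
async function createFlashLiteAssistant(apiKey: string): Promise<unknown> {
  const response = await fetch("https://api.example.com/assistant", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: "Low-latency assistant", // hypothetical assistant name
      model: {
        provider: "google",
        model: "gemini-2.0-flash-lite",
      },
    }),
  });
  if (!response.ok) {
    throw new Error(`Create assistant failed: ${response.status}`);
  }
  return response.json();
}
```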
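
For the paginated responses, the sketch below walks every page of a hypothetical assistant-list endpoint using the three fields named above. The endpoint path, the `page` query parameter, and the `results` array name are assumptions; `itemsPerPage`, `totalItems`, and `currentPage` come from the changelog entry.

```typescript
// Minimal sketch: collect every assistant across all pages using the
// pagination fields described above. The endpoint path, `page` query
// parameter, and `results` field name are assumptions.
interface PaginatedAssistants {
  results: unknown[];
  itemsPerPage: number;
  totalItems: number;
  currentPage: number;
}

async function listAllAssistants(apiKey: string): Promise<unknown[]> {
  const all: unknown[] = [];
  let page = 1;
  while (true) {
    const response = await fetch(
      `https://api.example.com/assistant?page=${page}`,
      { headers: { Authorization: `Bearer ${apiKey}` } },
    );
    const body = (await response.json()) as PaginatedAssistants;
    all.push(...body.results);
    // Stop once the current page is the last one implied by the totals.
    const totalPages = Math.ceil(body.totalItems / body.itemsPerPage);
    if (body.currentPage >= totalPages) break;
    page = body.currentPage + 1;
  }
  return all;
}
```

Computing the total page count from totalItems and itemsPerPage avoids an extra request just to discover the end of the list.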


@sahilsuman933 sahilsuman933 merged commit 78c466d into main Mar 16, 2025
3 of 4 checks passed
@quantstruct-bot quantstruct-bot deleted the changelog branch March 17, 2025 19:27