feat: Add Gemini API integration #650
Merged
44 commits:
1939b6d feat: Add Gemini API integration (devin-ai-integration[bot])
9e4f471 fix: Pass session correctly to track LLM events in Gemini provider (devin-ai-integration[bot])
b95fe6e feat: Add Gemini integration with example notebook (devin-ai-integration[bot])
72e985a fix: Add null checks and improve test coverage for Gemini provider (devin-ai-integration[bot])
6df9b7e style: Add blank lines between test functions (devin-ai-integration[bot])
200dcf1 test: Improve test coverage for Gemini provider (devin-ai-integration[bot])
cd31098 style: Fix formatting in test_gemini.py (devin-ai-integration[bot])
fef63a9 test: Add comprehensive test coverage for edge cases and error handling (devin-ai-integration[bot])
10900f5 test: Add graceful API key handling and skip tests when key is missing (devin-ai-integration[bot])
4b96b0f style: Fix formatting issues in test files (devin-ai-integration[bot])
062f82d style: Remove trailing whitespace in test_gemini.py (devin-ai-integration[bot])
d418202 test: Add coverage for error handling, edge cases, and argument handl… (devin-ai-integration[bot])
a9cea74 test: Add streaming exception handling test coverage (devin-ai-integration[bot])
11c7343 style: Apply ruff auto-formatting to test_gemini.py (devin-ai-integration[bot])
4f0b0fe test: Fix type errors and improve test coverage for Gemini provider (devin-ai-integration[bot])
1a6e1ca test: Add comprehensive error handling test coverage for Gemini provider (devin-ai-integration[bot])
9efc0f1 style: Apply ruff-format fixes to test_gemini.py (devin-ai-integration[bot])
071a610 fix: Configure Gemini API key before model initialization (devin-ai-integration[bot])
970c318 fix: Update GeminiProvider to properly handle instance methods (devin-ai-integration[bot])
18143b5 fix: Use provider instance in closure for proper method binding (devin-ai-integration[bot])
a27b2e4 fix: Use class-level storage for original method (devin-ai-integration[bot])
aed3a1b fix: Use module-level storage for original method (devin-ai-integration[bot])
8297371 style: Apply ruff-format fixes to Gemini integration (devin-ai-integration[bot])
9c9af3a fix: Move Gemini tests to unit test directory for proper coverage rep… (devin-ai-integration[bot])
bff477c fix: Update Gemini provider to properly handle prompt extraction and … (devin-ai-integration[bot])
f8fd56d test: Add comprehensive test coverage for Gemini provider session han… (devin-ai-integration[bot])
59db821 style: Apply ruff-format fixes to test files (devin-ai-integration[bot])
f163e23 fix: Pass LlmTracker client to GeminiProvider constructor (devin-ai-integration[bot])
6d7ee0f remove extra files (areibman)
6e4d965 fix: Improve code efficiency and error handling in Gemini provider (devin-ai-integration[bot])
54a9d36 chore: Clean up test files and merge remote changes (devin-ai-integration[bot])
c845a34 test: Add comprehensive test coverage for Gemini provider (devin-ai-integration[bot])
973e59f fix: Set None as default values and improve test coverage (devin-ai-integration[bot])
481a8d7 build: Add google-generativeai as test dependency (devin-ai-integration[bot])
0871398 docs: Update examples and README for Gemini integration (devin-ai-integration[bot])
cddab5b add gemini logo image (dot-agi)
681cd18 add gemini to examples (dot-agi)
9e8e85e add gemini to docs (dot-agi)
e75fa84 refactor handle_response method (dot-agi)
86dec80 cleanup gemini tracking code (dot-agi)
3384b2d delete unit test for gemini (dot-agi)
392677a rename and clean gemini example notebook (dot-agi)
38e2621 ruff (dot-agi)
9e3393d update docs (dot-agi)
New file (+194 lines): the GeminiProvider implementation.

```python
from typing import Optional, Any, Dict, Union

from agentops.llms.providers.base import BaseProvider
from agentops.event import LLMEvent, ErrorEvent
from agentops.session import Session
from agentops.helpers import get_ISO_time, check_call_stack_for_agent_id
from agentops.log_config import logger
from agentops.singleton import singleton


@singleton
class GeminiProvider(BaseProvider):
    """Provider for Google's Gemini API.

    This provider is automatically detected and initialized when agentops.init()
    is called and the google.generativeai package is imported. No manual
    initialization is required."""

    original_generate_content = None
    original_generate_content_async = None

    def __init__(self, client=None):
        """Initialize the Gemini provider.

        Args:
            client: Optional client instance. If not provided, will be set during override.
        """
        super().__init__(client)
        self._provider_name = "Gemini"

    def handle_response(self, response, kwargs, init_timestamp, session: Optional[Session] = None):
        """Handle responses from the Gemini API for both sync and streaming modes.

        Args:
            response: The response from the Gemini API
            kwargs: The keyword arguments passed to generate_content
            init_timestamp: The timestamp when the request was initiated
            session: Optional AgentOps session for recording events

        Returns:
            For sync responses: the original response object
            For streaming responses: a generator yielding response chunks
        """
        llm_event = LLMEvent(init_timestamp=init_timestamp, params=kwargs)
        if session is not None:
            llm_event.session_id = session.session_id

        accumulated_content = ""

        def handle_stream_chunk(chunk):
            nonlocal llm_event, accumulated_content
            try:
                # Initialize event fields from the first chunk
                if llm_event.returns is None:
                    llm_event.returns = chunk
                    llm_event.agent_id = check_call_stack_for_agent_id()
                    llm_event.model = getattr(chunk, "model", None) or "gemini-1.5-flash"
                    llm_event.prompt = kwargs.get("prompt", kwargs.get("contents", None)) or []

                # Accumulate text from chunk
                if hasattr(chunk, "text") and chunk.text:
                    accumulated_content += chunk.text

                # Extract token counts if available
                if hasattr(chunk, "usage_metadata"):
                    llm_event.prompt_tokens = getattr(chunk.usage_metadata, "prompt_token_count", None)
                    llm_event.completion_tokens = getattr(chunk.usage_metadata, "candidates_token_count", None)

                # If this is the last chunk
                if hasattr(chunk, "finish_reason") and chunk.finish_reason:
                    llm_event.completion = accumulated_content
                    llm_event.end_timestamp = get_ISO_time()
                    self._safe_record(session, llm_event)

            except Exception as e:
                self._safe_record(session, ErrorEvent(trigger_event=llm_event, exception=e))
                logger.warning(
                    f"Unable to parse chunk for Gemini LLM call. Error: {str(e)}\n"
                    f"Response: {chunk}\n"
                    f"Arguments: {kwargs}\n"
                )

        # For streaming responses
        if kwargs.get("stream", False):

            def generator():
                for chunk in response:
                    handle_stream_chunk(chunk)
                    yield chunk

            return generator()

        # For synchronous responses
        try:
            llm_event.returns = response
            llm_event.agent_id = check_call_stack_for_agent_id()
            llm_event.prompt = kwargs.get("prompt", kwargs.get("contents", None)) or []
            llm_event.completion = response.text
            llm_event.model = getattr(response, "model", None) or "gemini-1.5-flash"

            # Extract token counts from usage metadata if available
            if hasattr(response, "usage_metadata"):
                llm_event.prompt_tokens = getattr(response.usage_metadata, "prompt_token_count", None)
                llm_event.completion_tokens = getattr(response.usage_metadata, "candidates_token_count", None)

            llm_event.end_timestamp = get_ISO_time()
            self._safe_record(session, llm_event)
        except Exception as e:
            self._safe_record(session, ErrorEvent(trigger_event=llm_event, exception=e))
            logger.warning(
                f"Unable to parse response for Gemini LLM call. Error: {str(e)}\n"
                f"Response: {response}\n"
                f"Arguments: {kwargs}\n"
            )

        return response

    def override(self):
        """Override Gemini's generate_content methods to track LLM events."""
        self._override_gemini_generate_content()
        self._override_gemini_generate_content_async()

    def _override_gemini_generate_content(self):
        """Override the synchronous generate_content method."""
        import google.generativeai as genai

        # Store original method if not already stored
        if self.original_generate_content is None:
            self.original_generate_content = genai.GenerativeModel.generate_content

        provider = self  # Capture the provider instance for the closure

        def patched_function(model_self, *args, **kwargs):
            init_timestamp = get_ISO_time()
            session = kwargs.pop("session", None)

            # Handle positional prompt argument
            event_kwargs = kwargs.copy()
            if args:
                prompt = args[0]
                if "contents" not in kwargs:
                    kwargs["contents"] = prompt
                event_kwargs["prompt"] = prompt
                args = args[1:]

            result = provider.original_generate_content(model_self, *args, **kwargs)
            return provider.handle_response(result, event_kwargs, init_timestamp, session=session)

        # Override the method at class level
        genai.GenerativeModel.generate_content = patched_function

    def _override_gemini_generate_content_async(self):
        """Override the asynchronous generate_content method."""
        import google.generativeai as genai

        # Store original async method if not already stored
        if self.original_generate_content_async is None:
            self.original_generate_content_async = genai.GenerativeModel.generate_content_async

        provider = self  # Capture the provider instance for the closure

        async def patched_function(model_self, *args, **kwargs):
            init_timestamp = get_ISO_time()
            session = kwargs.pop("session", None)

            # Handle positional prompt argument
            event_kwargs = kwargs.copy()
            if args:
                prompt = args[0]
                if "contents" not in kwargs:
                    kwargs["contents"] = prompt
                event_kwargs["prompt"] = prompt
                args = args[1:]

            result = await provider.original_generate_content_async(model_self, *args, **kwargs)
            return provider.handle_response(result, event_kwargs, init_timestamp, session=session)

        # Override the async method at class level
        genai.GenerativeModel.generate_content_async = patched_function

    def undo_override(self):
        """Restore original Gemini methods.

        Note:
            This method is called automatically by AgentOps during cleanup.
            Users should not call this method directly."""
        import google.generativeai as genai

        if self.original_generate_content is not None:
            genai.GenerativeModel.generate_content = self.original_generate_content
            self.original_generate_content = None

        if self.original_generate_content_async is not None:
            genai.GenerativeModel.generate_content_async = self.original_generate_content_async
            self.original_generate_content_async = None
```
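For reference, a minimal sketch of how the patched sync path is exercised end to end. This assumes the public agentops.init() / agentops.start_session() API and a GEMINI_API_KEY in the environment; the model name and prompt are illustrative, not part of this diff:

```python
import os

import agentops
import google.generativeai as genai

# init() detects google.generativeai and installs GeminiProvider.override(),
# which swaps patched_function in at the class level.
agentops.init()  # AGENTOPS_API_KEY read from the environment

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

# The extra `session` kwarg is popped by patched_function before the real
# generate_content is called, so the Gemini SDK never sees it; the resulting
# LLMEvent is recorded against this session.
session = agentops.start_session()
response = model.generate_content("Write one sentence about observability.", session=session)
print(response.text)

session.end_session("Success")
```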
(Several other changed files in this PR, including the Gemini logo image and the example notebook, could not be rendered in the diff view.)
New docs page (+118 lines): the Gemini integration guide.

````mdx
---
title: Gemini
description: "Explore Google DeepMind's Gemini with observation via AgentOps"
---

import CodeTooltip from '/snippets/add-code-tooltip.mdx'
import EnvTooltip from '/snippets/add-env-tooltip.mdx'

[Gemini (Google Generative AI)](https://ai.google.dev/gemini-api/docs/quickstart) is Google DeepMind's family of multimodal models.
Explore the [Gemini API documentation](https://ai.google.dev/docs) for more information.

<Note>
  `google-generativeai>=0.1.0` is currently supported.
</Note>

<Steps>
  <Step title="Install the AgentOps SDK">
    <CodeGroup>
      ```bash pip
      pip install agentops
      ```
      ```bash poetry
      poetry add agentops
      ```
    </CodeGroup>
  </Step>
  <Step title="Install the Gemini SDK">
    <Note>
      `google-generativeai>=0.1.0` is required for the Gemini integration.
    </Note>
    <CodeGroup>
      ```bash pip
      pip install google-generativeai
      ```
      ```bash poetry
      poetry add google-generativeai
      ```
    </CodeGroup>
  </Step>
  <Step title="Add 3 lines of code">
    <CodeTooltip/>
    <CodeGroup>
      ```python python
      import google.generativeai as genai
      import agentops

      agentops.init(<INSERT YOUR API KEY HERE>)
      model = genai.GenerativeModel("gemini-1.5-flash")
      ...
      # End of program (e.g. main.py)
      agentops.end_session("Success")  # Success|Fail|Indeterminate
      ```
    </CodeGroup>
    <EnvTooltip />
    <CodeGroup>
      ```python .env
      AGENTOPS_API_KEY=<YOUR API KEY>
      GEMINI_API_KEY=<YOUR GEMINI API KEY>
      ```
    </CodeGroup>
    Read more about environment variables in [Advanced Configuration](/v1/usage/advanced-configuration)
  </Step>
  <Step title="Run your Agent">
    Execute your program and visit [app.agentops.ai/drilldown](https://app.agentops.ai/drilldown) to observe your Agent! 🕵️
    <Tip>
      After your run, AgentOps prints a clickable URL to the console that links directly to your session in the Dashboard.
    </Tip>
    <div/>
    <Frame type="glass" caption="Clickable link to session">
      <img height="200" src="https://github.com/AgentOps-AI/agentops/blob/main/docs/images/link-to-session.gif?raw=true" />
    </Frame>
  </Step>
</Steps>

## Full Examples

<CodeGroup>
```python sync
import google.generativeai as genai
import agentops

agentops.init(<INSERT YOUR API KEY HERE>)
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "Write a haiku about AI and humans working together"
)

print(response.text)
agentops.end_session("Success")
```

```python stream
import google.generativeai as genai
import agentops

agentops.init(<INSERT YOUR API KEY HERE>)
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "Write a haiku about AI and humans working together",
    stream=True
)

for chunk in response:
    print(chunk.text, end="")

agentops.end_session("Success")
```
</CodeGroup>

You can find more examples in the [Gemini Examples](/v1/examples/gemini_examples) section.

<script type="module" src="/scripts/github_stars.js"></script>
<script type="module" src="/scripts/scroll-img-fadein-animation.js"></script>
<script type="module" src="/scripts/button_heartbeat_animation.js"></script>
<link rel="stylesheet" href="/styles/styles.css" />
<script type="module" src="/scripts/adjust_api_dynamically.js"></script>
````