feat: Add Gemini API integration #650
Merged
Commits (44)
1939b6d  feat: Add Gemini API integration (devin-ai-integration[bot])
9e4f471  fix: Pass session correctly to track LLM events in Gemini provider (devin-ai-integration[bot])
b95fe6e  feat: Add Gemini integration with example notebook (devin-ai-integration[bot])
72e985a  fix: Add null checks and improve test coverage for Gemini provider (devin-ai-integration[bot])
6df9b7e  style: Add blank lines between test functions (devin-ai-integration[bot])
200dcf1  test: Improve test coverage for Gemini provider (devin-ai-integration[bot])
cd31098  style: Fix formatting in test_gemini.py (devin-ai-integration[bot])
fef63a9  test: Add comprehensive test coverage for edge cases and error handling (devin-ai-integration[bot])
10900f5  test: Add graceful API key handling and skip tests when key is missing (devin-ai-integration[bot])
4b96b0f  style: Fix formatting issues in test files (devin-ai-integration[bot])
062f82d  style: Remove trailing whitespace in test_gemini.py (devin-ai-integration[bot])
d418202  test: Add coverage for error handling, edge cases, and argument handl… (devin-ai-integration[bot])
a9cea74  test: Add streaming exception handling test coverage (devin-ai-integration[bot])
11c7343  style: Apply ruff auto-formatting to test_gemini.py (devin-ai-integration[bot])
4f0b0fe  test: Fix type errors and improve test coverage for Gemini provider (devin-ai-integration[bot])
1a6e1ca  test: Add comprehensive error handling test coverage for Gemini provider (devin-ai-integration[bot])
9efc0f1  style: Apply ruff-format fixes to test_gemini.py (devin-ai-integration[bot])
071a610  fix: Configure Gemini API key before model initialization (devin-ai-integration[bot])
970c318  fix: Update GeminiProvider to properly handle instance methods (devin-ai-integration[bot])
18143b5  fix: Use provider instance in closure for proper method binding (devin-ai-integration[bot])
a27b2e4  fix: Use class-level storage for original method (devin-ai-integration[bot])
aed3a1b  fix: Use module-level storage for original method (devin-ai-integration[bot])
8297371  style: Apply ruff-format fixes to Gemini integration (devin-ai-integration[bot])
9c9af3a  fix: Move Gemini tests to unit test directory for proper coverage rep… (devin-ai-integration[bot])
bff477c  fix: Update Gemini provider to properly handle prompt extraction and … (devin-ai-integration[bot])
f8fd56d  test: Add comprehensive test coverage for Gemini provider session han… (devin-ai-integration[bot])
59db821  style: Apply ruff-format fixes to test files (devin-ai-integration[bot])
f163e23  fix: Pass LlmTracker client to GeminiProvider constructor (devin-ai-integration[bot])
6d7ee0f  remove extra files (areibman)
6e4d965  fix: Improve code efficiency and error handling in Gemini provider (devin-ai-integration[bot])
54a9d36  chore: Clean up test files and merge remote changes (devin-ai-integration[bot])
c845a34  test: Add comprehensive test coverage for Gemini provider (devin-ai-integration[bot])
973e59f  fix: Set None as default values and improve test coverage (devin-ai-integration[bot])
481a8d7  build: Add google-generativeai as test dependency (devin-ai-integration[bot])
0871398  docs: Update examples and README for Gemini integration (devin-ai-integration[bot])
cddab5b  add gemini logo image (dot-agi)
681cd18  add gemini to examples (dot-agi)
9e8e85e  add gemini to docs (dot-agi)
e75fa84  refactor handle_response method (dot-agi)
86dec80  cleanup gemini tracking code (dot-agi)
3384b2d  delete unit test for gemini (dot-agi)
392677a  rename and clean gemini example notebook (dot-agi)
38e2621  ruff (dot-agi)
9e3393d  update docs (dot-agi)
Files changed

New file (+186 lines): the Gemini provider.
```python
from typing import Optional, Generator, Any, Dict, Union

from agentops.llms.providers.base import BaseProvider
from agentops.event import LLMEvent, ErrorEvent
from agentops.session import Session
from agentops.helpers import get_ISO_time, check_call_stack_for_agent_id
from agentops.log_config import logger
from agentops.singleton import singleton

# Store original methods at module level
_ORIGINAL_METHODS = {}


@singleton
class GeminiProvider(BaseProvider):
    """Provider for Google's Gemini API.

    This provider is automatically detected and initialized when agentops.init()
    is called and the google.generativeai package is imported. No manual
    initialization is required."""

    def __init__(self, client=None):
        """Initialize the Gemini provider.

        Args:
            client: Optional client instance. If not provided, will be set during override.
        """
        super().__init__(client)
        self._provider_name = "Gemini"

    def handle_response(
        self, response, kwargs, init_timestamp, session: Optional[Session] = None
    ) -> Union[Any, Generator[Any, None, None]]:
        """Handle responses from the Gemini API for both sync and streaming modes.

        Args:
            response: The response from the Gemini API
            kwargs: The keyword arguments passed to generate_content
            init_timestamp: The timestamp when the request was initiated
            session: Optional AgentOps session for recording events

        Returns:
            For sync responses: The original response object
            For streaming responses: A generator yielding response chunks

        Note:
            Token counts are extracted from usage_metadata if available.
        """
        llm_event = LLMEvent(init_timestamp=init_timestamp, params=kwargs)
        if session is not None:
            llm_event.session_id = session.session_id

        # For streaming responses
        if kwargs.get("stream", False):
            accumulated_text = []  # Use list to accumulate text chunks

            def handle_stream_chunk(chunk):
                nonlocal llm_event
                try:
                    if llm_event.returns is None:
                        llm_event.returns = chunk
                        llm_event.agent_id = check_call_stack_for_agent_id()
                        llm_event.model = getattr(chunk, "model", "gemini-1.5-flash")
                        llm_event.prompt = kwargs.get("prompt", kwargs.get("contents", []))

                    if hasattr(chunk, "text") and chunk.text:
                        accumulated_text.append(chunk.text)

                    # Extract token counts if available
                    if hasattr(chunk, "usage_metadata"):
                        usage = chunk.usage_metadata
                        llm_event.prompt_tokens = getattr(usage, "prompt_token_count", None)
                        llm_event.completion_tokens = getattr(usage, "candidates_token_count", None)

                    # If this is the last chunk
                    if hasattr(chunk, "finish_reason") and chunk.finish_reason:
                        llm_event.completion = "".join(accumulated_text)
                        llm_event.end_timestamp = get_ISO_time()
                        self._safe_record(session, llm_event)

                except Exception as e:
                    if session is not None:
                        self._safe_record(session, ErrorEvent(trigger_event=llm_event, exception=e))
                    logger.warning(
                        f"Unable to parse chunk for Gemini LLM call. Error: {str(e)}\n"
                        f"Chunk: {chunk}\n"
                        f"kwargs: {kwargs}\n"
                    )

            def stream_handler(stream):
                try:
                    for chunk in stream:
                        handle_stream_chunk(chunk)
                        yield chunk
                except Exception as e:
                    if session is not None:
                        self._safe_record(session, ErrorEvent(trigger_event=llm_event, exception=e))
                    raise  # Re-raise after recording error

            return stream_handler(response)

        # For synchronous responses
        try:
            llm_event.returns = response
            llm_event.agent_id = check_call_stack_for_agent_id()
            llm_event.prompt = kwargs.get("prompt", kwargs.get("contents", []))
            llm_event.completion = response.text
            llm_event.model = getattr(response, "model", "gemini-1.5-flash")

            # Extract token counts from usage metadata if available
            if hasattr(response, "usage_metadata"):
                usage = response.usage_metadata
                llm_event.prompt_tokens = getattr(usage, "prompt_token_count", None)
                llm_event.completion_tokens = getattr(usage, "candidates_token_count", None)

            llm_event.end_timestamp = get_ISO_time()
            self._safe_record(session, llm_event)
        except Exception as e:
            if session is not None:
                self._safe_record(session, ErrorEvent(trigger_event=llm_event, exception=e))
            logger.warning(
                f"Unable to parse response for Gemini LLM call. Error: {str(e)}\n"
                f"Response: {response}\n"
                f"kwargs: {kwargs}\n"
            )

        return response

    def override(self):
        """Override Gemini's generate_content method to track LLM events.

        Note:
            This method is called automatically by AgentOps during initialization.
            Users should not call this method directly."""
        import google.generativeai as genai

        # Store original method if not already stored
        if "generate_content" not in _ORIGINAL_METHODS:
            _ORIGINAL_METHODS["generate_content"] = genai.GenerativeModel.generate_content

        # Store provider instance for the closure
        provider = self

        def patched_function(self, *args, **kwargs):
            init_timestamp = get_ISO_time()

            # Extract and remove session from kwargs if present
            session = kwargs.pop("session", None)

            # Handle positional prompt argument
            event_kwargs = kwargs.copy()  # Create a copy for event tracking
            if args and len(args) > 0:
                # First argument is the prompt
                prompt = args[0]
                if "contents" not in kwargs:
                    kwargs["contents"] = prompt
                event_kwargs["prompt"] = prompt  # Store original prompt for event tracking
                args = args[1:]  # Remove prompt from args since we moved it to kwargs

            # Call original method and track event
            try:
                if "generate_content" in _ORIGINAL_METHODS:
                    result = _ORIGINAL_METHODS["generate_content"](self, *args, **kwargs)
                    return provider.handle_response(result, event_kwargs, init_timestamp, session=session)
                else:
                    logger.error("Original generate_content method not found. Cannot proceed with override.")
                    return None
            except Exception as e:
                logger.error(f"Error in Gemini generate_content: {str(e)}")
                if session is not None:
                    provider._safe_record(session, ErrorEvent(exception=e))
                raise  # Re-raise the exception after recording

        # Override the method at class level
        genai.GenerativeModel.generate_content = patched_function

    def undo_override(self):
        """Restore original Gemini methods.

        Note:
            This method is called automatically by AgentOps during cleanup.
            Users should not call this method directly."""
        if "generate_content" in _ORIGINAL_METHODS:
            import google.generativeai as genai

            genai.GenerativeModel.generate_content = _ORIGINAL_METHODS["generate_content"]
```
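Worth noting for reviewers: override/undo_override is a plain monkey-patching scheme. The original unbound method is stashed once in module-level storage, a wrapper that closes over the provider instance is installed at class level (the wrapper's own `self` is the `GenerativeModel`), and the original is restored on teardown. Below is a self-contained sketch of that pattern; `Model`, `install_patch`, `remove_patch`, and `tracker` are toy names for illustration only, not AgentOps or google-generativeai APIs.

```python
# Minimal illustration of the patch/restore pattern used by override()/undo_override().
_ORIGINALS = {}


class Model:
    def generate(self, prompt):
        return f"echo: {prompt}"


def install_patch(tracker):
    # Stash the original once; the guard keeps a repeated install from
    # overwriting the stash with an already-patched function.
    if "generate" not in _ORIGINALS:
        _ORIGINALS["generate"] = Model.generate

    def patched(self, *args, **kwargs):
        # `self` here is the Model instance; `tracker` is captured by the
        # closure, mirroring how patched_function closes over `provider`.
        result = _ORIGINALS["generate"](self, *args, **kwargs)
        tracker.append(result)  # record the call, as handle_response records events
        return result

    Model.generate = patched


def remove_patch():
    if "generate" in _ORIGINALS:
        Model.generate = _ORIGINALS["generate"]


events = []
install_patch(events)
print(Model().generate("hi"))  # -> echo: hi (and the result is recorded)
remove_patch()
print(events)                  # -> ['echo: hi']
```

Judging by the commit history (class-level storage, then module-level storage), the module dict also keeps the stashed original stable across provider re-creation under @singleton, and the `not in` guard makes repeated override() calls idempotent.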
New file (+135 lines): example notebook for the Gemini integration. The notebook cells are reproduced below.
# Gemini API Example with AgentOps

This notebook demonstrates how to use AgentOps with Google's Gemini API for both synchronous and streaming text generation.

```python
import google.generativeai as genai
import agentops
from agentops.llms.providers.gemini import GeminiProvider
```

```python
# Configure the Gemini API
# Replace with your API key
# You can get one at: https://ai.google.dev/tutorials/setup
GEMINI_API_KEY = "YOUR_API_KEY_HERE"  # Replace with your API key
genai.configure(api_key=GEMINI_API_KEY)

# Note: In production, use environment variables:
# import os
# GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
# genai.configure(api_key=GEMINI_API_KEY)
```

```python
# Initialize AgentOps and the Gemini model
ao_client = agentops.init()
model = genai.GenerativeModel("gemini-1.5-flash")

# Initialize and override the Gemini provider
provider = GeminiProvider(model)
provider.override()
```

```python
# Test synchronous generation
print("Testing synchronous generation:")
response = model.generate_content(
    "What are the three laws of robotics?",
    session=ao_client
)
print(response.text)
```

```python
# Test streaming generation
print("\nTesting streaming generation:")
response = model.generate_content(
    "Explain the concept of machine learning in simple terms.",
    stream=True,
    session=ao_client
)

for chunk in response:
    print(chunk.text, end="")
print()  # Add newline after streaming output

# Test another synchronous generation
print("\nTesting another synchronous generation:")
response = model.generate_content(
    "What is the difference between supervised and unsupervised learning?",
    session=ao_client
)
print(response.text)
```

```python
# End session and check stats
agentops.end_session(
    end_state="Success",
    end_state_reason="Gemini integration example completed successfully"
)
```

```python
# Clean up
provider.undo_override()
```
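Outside a notebook, the same flow fits naturally in a try/finally so the session is always closed and the patch removed. The sketch below is a minimal variant of the cells above, under two stated assumptions: the key is supplied via a GEMINI_API_KEY environment variable (as the notebook's production note recommends), and agentops.init() returns the session object (as the notebook's session=ao_client usage implies).

```python
import os

import google.generativeai as genai

import agentops
from agentops.llms.providers.gemini import GeminiProvider  # import path as used above

# Assumption: the key lives in the GEMINI_API_KEY environment variable.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

session = agentops.init()  # the notebook passes this return value as `session=`
model = genai.GenerativeModel("gemini-1.5-flash")
provider = GeminiProvider(model)
provider.override()

try:
    response = model.generate_content("Say hello in one sentence.", session=session)
    print(response.text)
    agentops.end_session(end_state="Success")
except Exception:
    agentops.end_session(end_state="Fail")
    raise
finally:
    provider.undo_override()  # always restore genai.GenerativeModel.generate_content
```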