
Commit 543b180

Authored by devin-ai-integration[bot], areibman, and dot-agi
feat: Add Gemini API integration (#650)
* feat: Add Gemini API integration
  - Add GeminiProvider class for tracking Gemini API calls
  - Support both sync and streaming modes
  - Track prompts, completions, and token usage
  - Add test script demonstrating usage
* fix: Pass session correctly to track LLM events in Gemini provider
* feat: Add Gemini integration with example notebook
* fix: Add null checks and improve test coverage for Gemini provider
* style: Add blank lines between test functions
* test: Improve test coverage for Gemini provider
* style: Fix formatting in test_gemini.py
* test: Add comprehensive test coverage for edge cases and error handling
* test: Add graceful API key handling and skip tests when key is missing
* style: Fix formatting issues in test files
* style: Remove trailing whitespace in test_gemini.py
* test: Add coverage for error handling, edge cases, and argument handling in Gemini provider
* test: Add streaming exception handling test coverage
* style: Apply ruff auto-formatting to test_gemini.py
* test: Fix type errors and improve test coverage for Gemini provider
* test: Add comprehensive error handling test coverage for Gemini provider
* style: Apply ruff-format fixes to test_gemini.py
* fix: Configure Gemini API key before model initialization
* fix: Update GeminiProvider to properly handle instance methods
* fix: Use provider instance in closure for proper method binding
* fix: Use class-level storage for original method
* fix: Use module-level storage for original method
* style: Apply ruff-format fixes to Gemini integration
* fix: Move Gemini tests to unit test directory for proper coverage reporting
* fix: Update Gemini provider to properly handle prompt extraction and improve test coverage
* test: Add comprehensive test coverage for Gemini provider session handling and event recording
* style: Apply ruff-format fixes to test files
* fix: Pass LlmTracker client to GeminiProvider constructor
* remove extra files
* fix: Improve code efficiency and error handling in Gemini provider
  - Add _extract_token_counts helper method
  - Make error handling consistent with OpenAI provider
  - Remove redundant session checks
  - Improve error message formatting
  - Add comprehensive documentation
* test: Add comprehensive test coverage for Gemini provider
* fix: Set None as default values and improve test coverage
* build: Add google-generativeai as test dependency
* docs: Update examples and README for Gemini integration
* add gemini logo image
* add gemini to examples
* add gemini to docs
* refactor handle_response method
* cleanup gemini tracking code
* delete unit test for gemini
* rename and clean gemini example notebook
* ruff
* update docs

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Alex Reibman <meta.alex.r@gmail.com>
Co-authored-by: reibs <areibman@gmail.com>
Co-authored-by: Pratyush Shukla <ps4534@nyu.edu>
1 parent 6d0459a commit 543b180

File tree

11 files changed: +830 additions, -0 deletions

agentops/llms/providers/gemini.py

Lines changed: 194 additions & 0 deletions
@@ -0,0 +1,194 @@

```python
from typing import Optional, Any, Dict, Union

from agentops.llms.providers.base import BaseProvider
from agentops.event import LLMEvent, ErrorEvent
from agentops.session import Session
from agentops.helpers import get_ISO_time, check_call_stack_for_agent_id
from agentops.log_config import logger
from agentops.singleton import singleton


@singleton
class GeminiProvider(BaseProvider):
    original_generate_content = None
    original_generate_content_async = None

    """Provider for Google's Gemini API.

    This provider is automatically detected and initialized when agentops.init()
    is called and the google.generativeai package is imported. No manual
    initialization is required."""

    def __init__(self, client=None):
        """Initialize the Gemini provider.

        Args:
            client: Optional client instance. If not provided, will be set during override.
        """
        super().__init__(client)
        self._provider_name = "Gemini"

    def handle_response(self, response, kwargs, init_timestamp, session: Optional[Session] = None) -> dict:
        """Handle responses from Gemini API for both sync and streaming modes.

        Args:
            response: The response from the Gemini API
            kwargs: The keyword arguments passed to generate_content
            init_timestamp: The timestamp when the request was initiated
            session: Optional AgentOps session for recording events

        Returns:
            For sync responses: The original response object
            For streaming responses: A generator yielding response chunks
        """
        llm_event = LLMEvent(init_timestamp=init_timestamp, params=kwargs)
        if session is not None:
            llm_event.session_id = session.session_id

        accumulated_content = ""

        def handle_stream_chunk(chunk):
            nonlocal llm_event, accumulated_content
            try:
                if llm_event.returns is None:
                    llm_event.returns = chunk
                    llm_event.agent_id = check_call_stack_for_agent_id()
                    llm_event.model = getattr(chunk, "model", None) or "gemini-1.5-flash"
                    llm_event.prompt = kwargs.get("prompt", kwargs.get("contents", None)) or []

                # Accumulate text from chunk
                if hasattr(chunk, "text") and chunk.text:
                    accumulated_content += chunk.text

                # Extract token counts if available
                if hasattr(chunk, "usage_metadata"):
                    llm_event.prompt_tokens = getattr(chunk.usage_metadata, "prompt_token_count", None)
                    llm_event.completion_tokens = getattr(chunk.usage_metadata, "candidates_token_count", None)

                # If this is the last chunk
                if hasattr(chunk, "finish_reason") and chunk.finish_reason:
                    llm_event.completion = accumulated_content
                    llm_event.end_timestamp = get_ISO_time()
                    self._safe_record(session, llm_event)

            except Exception as e:
                self._safe_record(session, ErrorEvent(trigger_event=llm_event, exception=e))
                logger.warning(
                    f"Unable to parse chunk for Gemini LLM call. Error: {str(e)}\n"
                    f"Response: {chunk}\n"
                    f"Arguments: {kwargs}\n"
                )

        # For streaming responses
        if kwargs.get("stream", False):

            def generator():
                for chunk in response:
                    handle_stream_chunk(chunk)
                    yield chunk

            return generator()

        # For synchronous responses
        try:
            llm_event.returns = response
            llm_event.agent_id = check_call_stack_for_agent_id()
            llm_event.prompt = kwargs.get("prompt", kwargs.get("contents", None)) or []
            llm_event.completion = response.text
            llm_event.model = getattr(response, "model", None) or "gemini-1.5-flash"

            # Extract token counts from usage metadata if available
            if hasattr(response, "usage_metadata"):
                llm_event.prompt_tokens = getattr(response.usage_metadata, "prompt_token_count", None)
                llm_event.completion_tokens = getattr(response.usage_metadata, "candidates_token_count", None)

            llm_event.end_timestamp = get_ISO_time()
            self._safe_record(session, llm_event)
        except Exception as e:
            self._safe_record(session, ErrorEvent(trigger_event=llm_event, exception=e))
            logger.warning(
                f"Unable to parse response for Gemini LLM call. Error: {str(e)}\n"
                f"Response: {response}\n"
                f"Arguments: {kwargs}\n"
            )

        return response

    def override(self):
        """Override Gemini's generate_content method to track LLM events."""
        self._override_gemini_generate_content()
        self._override_gemini_generate_content_async()

    def _override_gemini_generate_content(self):
        """Override synchronous generate_content method"""
        import google.generativeai as genai

        # Store original method if not already stored
        if self.original_generate_content is None:
            self.original_generate_content = genai.GenerativeModel.generate_content

        provider = self  # Store provider instance for closure

        def patched_function(model_self, *args, **kwargs):
            init_timestamp = get_ISO_time()
            session = kwargs.pop("session", None)

            # Handle positional prompt argument
            event_kwargs = kwargs.copy()
            if args and len(args) > 0:
                prompt = args[0]
                if "contents" not in kwargs:
                    kwargs["contents"] = prompt
                    event_kwargs["prompt"] = prompt
                args = args[1:]

            result = provider.original_generate_content(model_self, *args, **kwargs)
            return provider.handle_response(result, event_kwargs, init_timestamp, session=session)

        # Override the method at class level
        genai.GenerativeModel.generate_content = patched_function

    def _override_gemini_generate_content_async(self):
        """Override asynchronous generate_content method"""
        import google.generativeai as genai

        # Store original async method if not already stored
        if self.original_generate_content_async is None:
            self.original_generate_content_async = genai.GenerativeModel.generate_content_async

        provider = self  # Store provider instance for closure

        async def patched_function(model_self, *args, **kwargs):
            init_timestamp = get_ISO_time()
            session = kwargs.pop("session", None)

            # Handle positional prompt argument
            event_kwargs = kwargs.copy()
            if args and len(args) > 0:
                prompt = args[0]
                if "contents" not in kwargs:
                    kwargs["contents"] = prompt
                    event_kwargs["prompt"] = prompt
                args = args[1:]

            result = await provider.original_generate_content_async(model_self, *args, **kwargs)
            return provider.handle_response(result, event_kwargs, init_timestamp, session=session)

        # Override the async method at class level
        genai.GenerativeModel.generate_content_async = patched_function

    def undo_override(self):
        """Restore original Gemini methods.

        Note:
            This method is called automatically by AgentOps during cleanup.
            Users should not call this method directly."""
        import google.generativeai as genai

        if self.original_generate_content is not None:
            genai.GenerativeModel.generate_content = self.original_generate_content
            self.original_generate_content = None

        if self.original_generate_content_async is not None:
            genai.GenerativeModel.generate_content_async = self.original_generate_content_async
            self.original_generate_content_async = None
```
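Because `patched_function` pops `session` from `kwargs` before delegating to the original method, a caller can route events to a specific AgentOps session by passing it as an extra keyword argument that `google.generativeai` itself never sees. A minimal usage sketch; the model name and prompt are illustrative, and `auto_start_session`/`start_session` are assumed from the AgentOps SDK rather than shown in this diff:

```python
import google.generativeai as genai
import agentops

agentops.init(auto_start_session=False)  # assumed AgentOps option; see the SDK docs
session = agentops.start_session()

model = genai.GenerativeModel("gemini-1.5-flash")
# The patched generate_content pops `session` before calling the real API,
# then handle_response() records the LLMEvent against that session.
response = model.generate_content("Say hello", session=session)
print(response.text)

session.end_session("Success")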

agentops/llms/tracker.py

Lines changed: 14 additions & 0 deletions
```diff
@@ -16,6 +16,7 @@
 from .providers.ai21 import AI21Provider
 from .providers.llama_stack_client import LlamaStackClientProvider
 from .providers.taskweaver import TaskWeaverProvider
+from .providers.gemini import GeminiProvider

 original_func = {}
 original_create = None
@@ -24,6 +25,9 @@

 class LlmTracker:
     SUPPORTED_APIS = {
+        "google.generativeai": {
+            "0.1.0": ("GenerativeModel.generate_content", "GenerativeModel.generate_content_stream"),
+        },
         "litellm": {"1.3.1": ("openai_chat_completions.completion",)},
         "openai": {
             "1.0.0": (
@@ -210,6 +214,15 @@ def override_api(self):
                 else:
                     logger.warning(f"Only TaskWeaver>=0.0.1 supported. v{module_version} found.")

+                if api == "google.generativeai":
+                    module_version = version(api)
+
+                    if Version(module_version) >= parse("0.1.0"):
+                        provider = GeminiProvider(self.client)
+                        provider.override()
+                    else:
+                        logger.warning(f"Only google.generativeai>=0.1.0 supported. v{module_version} found.")
+
     def stop_instrumenting(self):
         OpenAiProvider(self.client).undo_override()
         GroqProvider(self.client).undo_override()
@@ -221,3 +234,4 @@ def stop_instrumenting(self):
         AI21Provider(self.client).undo_override()
         LlamaStackClientProvider(self.client).undo_override()
         TaskWeaverProvider(self.client).undo_override()
+        GeminiProvider(self.client).undo_override()
```
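Note that `stop_instrumenting` constructs what looks like a fresh `GeminiProvider`, yet `undo_override` can still restore the saved methods: the `@singleton` decorator hands back the cached instance, and the originals are stored as class attributes besides. A sketch of that assumption:

```python
# Sketch: why stop_instrumenting() can restore state from a "new" provider.
# Assumes @singleton caches one instance per class, as its use on
# GeminiProvider implies, and that the originals live at class level.
from agentops.llms.providers.gemini import GeminiProvider

p1 = GeminiProvider(client=None)  # instance created during override_api()
p2 = GeminiProvider(client=None)  # instance created in stop_instrumenting()
assert p1 is p2                   # same cached object, so the saved
                                  # generate_content originals are intact
```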
docs/images/external/deepmind/gemini-logo.png

36.5 KB (binary image added)

docs/mint.json

Lines changed: 1 addition & 0 deletions
```diff
@@ -93,6 +93,7 @@
       "v1/integrations/camel",
       "v1/integrations/cohere",
       "v1/integrations/crewai",
+      "v1/integrations/gemini",
       "v1/integrations/groq",
       "v1/integrations/langchain",
       "v1/integrations/llama_stack",
```

docs/v1/examples/examples.mdx

Lines changed: 4 additions & 0 deletions
```diff
@@ -57,6 +57,10 @@ mode: "wide"
     Ultra-fast LLM inference with Groq Cloud
   </Card>

+  <Card title="Gemini" icon={<img src="https://www.github.com/agentops-ai/agentops/blob/main/docs/images/external/deepmind/gemini-logo.png?raw=true" alt="Gemini" />} iconType="image" href="/v1/integrations/gemini">
+    Explore Google DeepMind's Gemini with observation via AgentOps
+  </Card>
+
   <Card title="LangChain" icon={<img src="https://www.github.com/agentops-ai/agentops/blob/main/docs/images/external/langchain/langchain-logo.png?raw=true" alt="LangChain" />} iconType="image" href="/v1/examples/langchain">
     Jupyter Notebook with a sample LangChain integration
   </Card>
```

docs/v1/integrations/gemini.mdx

Lines changed: 118 additions & 0 deletions
@@ -0,0 +1,118 @@

````mdx
---
title: Gemini
description: "Explore Google DeepMind's Gemini with observation via AgentOps"
---

import CodeTooltip from '/snippets/add-code-tooltip.mdx'
import EnvTooltip from '/snippets/add-env-tooltip.mdx'

[Gemini (Google Generative AI)](https://ai.google.dev/gemini-api/docs/quickstart) is Google DeepMind's family of multimodal models, served through the Gemini API.
Explore the [Gemini API](https://ai.google.dev/docs) for more information.

<Note>
  `google-generativeai>=0.1.0` is currently supported.
</Note>

<Steps>
  <Step title="Install the AgentOps SDK">
    <CodeGroup>
      ```bash pip
      pip install agentops
      ```
      ```bash poetry
      poetry add agentops
      ```
    </CodeGroup>
  </Step>
  <Step title="Install the Gemini SDK">
    <Note>
      `google-generativeai>=0.1.0` is required for the Gemini integration.
    </Note>
    <CodeGroup>
      ```bash pip
      pip install google-generativeai
      ```
      ```bash poetry
      poetry add google-generativeai
      ```
    </CodeGroup>
  </Step>
  <Step title="Add 3 lines of code">
    <CodeTooltip/>
    <CodeGroup>
      ```python python
      import google.generativeai as genai
      import agentops

      agentops.init(<INSERT YOUR API KEY HERE>)
      model = genai.GenerativeModel("gemini-1.5-flash")
      ...
      # End of program (e.g. main.py)
      agentops.end_session("Success")  # Success|Fail|Indeterminate
      ```
    </CodeGroup>
    <EnvTooltip />
    <CodeGroup>
      ```python .env
      AGENTOPS_API_KEY=<YOUR API KEY>
      GEMINI_API_KEY=<YOUR GEMINI API KEY>
      ```
    </CodeGroup>
    Read more about environment variables in [Advanced Configuration](/v1/usage/advanced-configuration)
  </Step>
  <Step title="Run your Agent">
    Execute your program and visit [app.agentops.ai/drilldown](https://app.agentops.ai/drilldown) to observe your Agent! 🕵️
    <Tip>
      After your run, AgentOps prints a clickable URL in the console that links directly to your session in the Dashboard.
    </Tip>
    <div/>
    <Frame type="glass" caption="Clickable link to session">
      <img height="200" src="https://github.com/AgentOps-AI/agentops/blob/main/docs/images/link-to-session.gif?raw=true" />
    </Frame>
  </Step>
</Steps>

## Full Examples

<CodeGroup>
  ```python sync
  import google.generativeai as genai
  import agentops

  agentops.init(<INSERT YOUR API KEY HERE>)
  model = genai.GenerativeModel("gemini-1.5-flash")

  response = model.generate_content(
      "Write a haiku about AI and humans working together"
  )

  print(response.text)
  agentops.end_session('Success')
  ```

  ```python stream
  import google.generativeai as genai
  import agentops

  agentops.init(<INSERT YOUR API KEY HERE>)
  model = genai.GenerativeModel("gemini-1.5-flash")

  response = model.generate_content(
      "Write a haiku about AI and humans working together",
      stream=True
  )

  for chunk in response:
      print(chunk.text, end="")

  agentops.end_session('Success')
  ```
</CodeGroup>

You can find more examples in the [Gemini Examples](/v1/examples/gemini_examples) section.

<script type="module" src="/scripts/github_stars.js"></script>
<script type="module" src="/scripts/scroll-img-fadein-animation.js"></script>
<script type="module" src="/scripts/button_heartbeat_animation.js"></script>
<script type="css" src="/styles/styles.css"></script>
<script type="module" src="/scripts/adjust_api_dynamically.js"></script>
````
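The docs page covers only the sync and streaming paths, but the provider also patches `generate_content_async`. An async variant of the examples above, as a sketch rather than part of the committed docs; it assumes the API keys are set in the environment as in the `.env` step:

```python
import asyncio
import os

import google.generativeai as genai
import agentops

agentops.init()  # reads AGENTOPS_API_KEY from the environment
genai.configure(api_key=os.environ["GEMINI_API_KEY"])  # key from the .env shown above

async def main():
    model = genai.GenerativeModel("gemini-1.5-flash")
    # Routed through the patched generate_content_async, so the call is
    # tracked as an LLMEvent just like the sync examples.
    response = await model.generate_content_async(
        "Write a haiku about AI and humans working together"
    )
    print(response.text)

asyncio.run(main())
agentops.end_session("Success")
```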
