
Commit c155a7a

authored
Added basic ADK example (#146)
* Added basic ADK example
* Added basic ADK example notebook
* Update and rename readme.md to README.md
* Update README.md with opensource model
* Update Clarifai_adk_example_notebook.ipynb
1 parent 7d3c271 commit c155a7a

File tree: 15 files changed (+1098, −0 lines changed)

Lines changed: 386 additions & 0 deletions
@@ -0,0 +1,386 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4266a1ca",
"metadata": {},
"source": [
"![image](https://github.com/user-attachments/assets/b22c9807-f5e7-49eb-b00d-598e400781af)"
]
},
{
"cell_type": "markdown",
"id": "11cc1e7c",
"metadata": {},
"source": [
"# Clarifai - Google ADK\n",
"This notebook shows a basic example of how to use Clarifai-hosted models with the Google ADK library."
]
},
{
"cell_type": "markdown",
"id": "e80d827e",
"metadata": {},
"source": [
"### Weather Agent Tutorial 🌦️\n",
"\n",
"This notebook demonstrates how to build and interact with a weather information agent using Google ADK, OpenAI/Clarifai models, and custom tool integration."
]
},
{
"cell_type": "markdown",
"id": "88d96be8",
"metadata": {},
"source": [
"#### Install necessary packages"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "93033718",
"metadata": {},
"outputs": [],
"source": [
"!pip install -q google-adk litellm"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9f209947",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Libraries imported.\n"
]
}
],
"source": [
"# @title Import necessary libraries\n",
"import os\n",
"import asyncio\n",
"from google.adk.agents import Agent\n",
"from google.adk.models.lite_llm import LiteLlm  # For multi-model support\n",
"from google.adk.sessions import InMemorySessionService\n",
"from google.adk.runners import Runner\n",
"from google.genai import types  # For creating message Content/Parts\n",
"\n",
"import warnings\n",
"# Ignore all warnings\n",
"warnings.filterwarnings(\"ignore\")\n",
"\n",
"import logging\n",
"logging.basicConfig(level=logging.ERROR)\n",
"\n",
"print(\"Libraries imported.\")"
]
},
{
"cell_type": "markdown",
"id": "909650fe",
"metadata": {},
"source": [
"### Set up your PAT\n",
"Set your Clarifai PAT as an environment variable.\n",
"Below, the Clarifai PAT stands in for the OpenAI API key, since we call Clarifai models through their OpenAI-compatible endpoint."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1b7ad495",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"CLARIFAI_PAT\"] = \"YOUR_CLARIFAI_PAT\"  # Set your Clarifai PAT here (a shell `!export` runs in a subshell and would not persist into the kernel)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5fa33e0a",
"metadata": {},
"outputs": [],
"source": [
"clarifai_pat = os.getenv('CLARIFAI_PAT')"
]
},
{
"cell_type": "markdown",
"id": "2f4ca491",
"metadata": {},
"source": [
"### Clarifai LLM model\n",
"Google ADK uses LiteLLM under the hood to call LLMs, and LiteLLM can target any OpenAI-compatible endpoint given a base URL and model name.\n",
"\n",
"##### Using Clarifai Models\n",
"\n",
"Clarifai models are accessed through LiteLLM using a model path that starts with the `openai/` prefix:\n",
"\n",
"`openai/{user_id}/{app_id}/models/{model_id}`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fdf04b9f",
"metadata": {},
"outputs": [],
"source": [
"clarifai_model = LiteLlm(model=\"openai/deepseek-ai/deepseek-chat/models/DeepSeek-R1-Distill-Qwen-7B\",\n",
"                         base_url=\"https://api.clarifai.com/v2/ext/openai/v1\",\n",
"                         api_key=clarifai_pat)"
]
},
{
"cell_type": "markdown",
"id": "b2c3e548",
"metadata": {},
"source": [
"### Available Models\n",
"\n",
"You can explore available models on the [Clarifai Community](https://clarifai.com/explore) platform. Some popular models include:\n",
"\n",
"- GPT-4o: `openai/chat-completion/models/gpt-4o`\n",
"- Gemini 2.5 Flash: `gcp/generate/models/gemini-2_5-flash`\n",
"- Llama 2: `meta/Llama-2/models/llama2-70b-chat`\n",
"- Mixtral: `mistralai/Mixtral-8x7B/models/mixtral-8x7b-instruct`"
]
},
{
"cell_type": "markdown",
"id": "53306054",
"metadata": {},
"source": [
"#### Tool definition\n",
"The snippet below defines the `get_weather` tool."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "43e0fcb0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--- Tool: get_weather called for city: New York ---\n",
"{'status': 'success', 'report': 'The weather in New York is sunny with a temperature of 25°C.'}\n",
"--- Tool: get_weather called for city: Paris ---\n",
"{'status': 'error', 'error_message': \"Sorry, I don't have weather information for 'Paris'.\"}\n"
]
}
],
"source": [
"# @title Define the get_weather Tool\n",
"def get_weather(city: str) -> dict:\n",
"    \"\"\"Retrieves the current weather report for a specified city.\n",
"\n",
"    Args:\n",
"        city (str): The name of the city (e.g., \"New York\", \"London\", \"Tokyo\").\n",
"\n",
"    Returns:\n",
"        dict: A dictionary containing the weather information.\n",
"              Includes a 'status' key ('success' or 'error').\n",
"              If 'success', includes a 'report' key with weather details.\n",
"              If 'error', includes an 'error_message' key.\n",
"    \"\"\"\n",
"    print(f\"--- Tool: get_weather called for city: {city} ---\")  # Log tool execution\n",
"    city_normalized = city.lower().replace(\" \", \"\")  # Basic normalization\n",
"\n",
"    # Mock weather data\n",
"    mock_weather_db = {\n",
"        \"newyork\": {\"status\": \"success\", \"report\": \"The weather in New York is sunny with a temperature of 25°C.\"},\n",
"        \"london\": {\"status\": \"success\", \"report\": \"It's cloudy in London with a temperature of 15°C.\"},\n",
"        \"tokyo\": {\"status\": \"success\", \"report\": \"Tokyo is experiencing light rain and a temperature of 18°C.\"},\n",
"    }\n",
"\n",
"    if city_normalized in mock_weather_db:\n",
"        return mock_weather_db[city_normalized]\n",
"    else:\n",
"        return {\"status\": \"error\", \"error_message\": f\"Sorry, I don't have weather information for '{city}'.\"}\n",
"\n",
"# Example tool usage (optional test)\n",
"print(get_weather(\"New York\"))\n",
"print(get_weather(\"Paris\"))"
]
},
{
"cell_type": "markdown",
"id": "096a2826",
"metadata": {},
"source": [
"#### Agent Interaction"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "0144e4f4",
"metadata": {},
"outputs": [],
"source": [
"# @title Define Agent Interaction Function\n",
"\n",
"from google.genai import types  # For creating message Content/Parts\n",
"\n",
"async def call_agent_async(query: str, runner, user_id, session_id):\n",
"    \"\"\"Sends a query to the agent and prints the final response.\"\"\"\n",
"    print(f\"\\n>>> User Query: {query}\")\n",
"\n",
"    # Prepare the user's message in ADK format\n",
"    content = types.Content(role='user', parts=[types.Part(text=query)])\n",
"\n",
"    final_response_text = \"Agent did not produce a final response.\"  # Default\n",
"\n",
"    # Key Concept: run_async executes the agent logic and yields Events.\n",
"    # We iterate through events to find the final answer.\n",
"    async for event in runner.run_async(user_id=user_id, session_id=session_id, new_message=content):\n",
"        # You can uncomment the line below to see *all* events during execution\n",
"        # print(f\"  [Event] Author: {event.author}, Type: {type(event).__name__}, Final: {event.is_final_response()}, Content: {event.content}\")\n",
"\n",
"        # Key Concept: is_final_response() marks the concluding message for the turn.\n",
"        if event.is_final_response():\n",
"            if event.content and event.content.parts:\n",
"                # Assuming text response in the first part\n",
"                final_response_text = event.content.parts[0].text\n",
"            elif event.actions and event.actions.escalate:  # Handle potential errors/escalations\n",
"                final_response_text = f\"Agent escalated: {event.error_message or 'No specific message.'}\"\n",
"            # Add more checks here if needed (e.g., specific error codes)\n",
"            break  # Stop processing events once the final response is found\n",
"\n",
"    print(f\"<<< Agent Response: {final_response_text}\")"
]
},
{
"cell_type": "markdown",
"id": "2c2b3aef",
"metadata": {},
"source": [
"#### Calling the Agent\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "353d9d21",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Session created: App='weather_tutorial_app_gpt', User='user_1_gpt', Session='session_001_gpt'\n",
"Runner created for agent 'weather_agent_gpt'.\n",
"\n",
"--- Testing GPT Agent ---\n",
"\n",
">>> User Query: What's the weather in Tokyo?\n",
"--- Tool: get_weather called for city: Tokyo ---\n",
"<<< Agent Response: In Tokyo, the weather is currently experiencing light rain with a temperature of 18°C.\n"
]
}
],
"source": [
"\n",
"# @title Import LiteLlm\n",
"from google.adk.models.lite_llm import LiteLlm\n",
"# @title Define and Test the Agent\n",
"\n",
"# Make sure the 'get_weather' function defined earlier is available in your environment.\n",
"# Make sure 'call_agent_async' is defined from earlier.\n",
"\n",
"# --- Agent using the Clarifai-hosted model ---\n",
"weather_agent_gpt = None  # Initialize to None\n",
"runner_gpt = None  # Initialize runner to None\n",
"\n",
"try:\n",
"    weather_agent_gpt = Agent(\n",
"        name=\"weather_agent_gpt\",\n",
"        # Key change: use the LiteLlm-wrapped Clarifai model defined earlier\n",
"        model=clarifai_model,\n",
"        description=\"Provides weather information (using a Clarifai-hosted model).\",\n",
"        instruction=\"You are a helpful weather assistant. \"\n",
"                    \"Use the 'get_weather' tool for city weather requests. \"\n",
"                    \"Clearly present successful reports or polite error messages based on the tool's output status.\",\n",
"        tools=[get_weather],  # Re-use the same tool\n",
"    )\n",
"\n",
"    # InMemorySessionService is simple, non-persistent storage for this tutorial.\n",
"    session_service_gpt = InMemorySessionService()  # Create a dedicated service\n",
"\n",
"    # Define constants for identifying the interaction context\n",
"    APP_NAME_GPT = \"weather_tutorial_app_gpt\"  # Unique app name for this test\n",
"    USER_ID_GPT = \"user_1_gpt\"\n",
"    SESSION_ID_GPT = \"session_001_gpt\"  # Using a fixed ID for simplicity\n",
"\n",
"    # Create the specific session where the conversation will happen\n",
"    session_gpt = await session_service_gpt.create_session(\n",
"        app_name=APP_NAME_GPT,\n",
"        user_id=USER_ID_GPT,\n",
"        session_id=SESSION_ID_GPT\n",
"    )\n",
"    print(f\"Session created: App='{APP_NAME_GPT}', User='{USER_ID_GPT}', Session='{SESSION_ID_GPT}'\")\n",
"\n",
"    # Create a runner specific to this agent and its session service\n",
"    runner_gpt = Runner(\n",
"        agent=weather_agent_gpt,\n",
"        app_name=APP_NAME_GPT,  # Use the specific app name\n",
"        session_service=session_service_gpt  # Use the specific session service\n",
"    )\n",
"    print(f\"Runner created for agent '{runner_gpt.agent.name}'.\")\n",
"\n",
"    # --- Test the Agent ---\n",
"    print(\"\\n--- Testing GPT Agent ---\")\n",
"    # Ensure call_agent_async uses the correct runner, user_id, session_id\n",
"    await call_agent_async(query=\"What's the weather in Tokyo?\",\n",
"                           runner=runner_gpt,\n",
"                           user_id=USER_ID_GPT,\n",
"                           session_id=SESSION_ID_GPT)\n",
"    # --- OR ---\n",
"\n",
"    # Uncomment the following lines if running as a standard Python script (.py file):\n",
"    # import asyncio\n",
"    # if __name__ == \"__main__\":\n",
"    #     try:\n",
"    #         asyncio.run(call_agent_async(query=\"What's the weather in Tokyo?\",\n",
"    #                                      runner=runner_gpt,\n",
"    #                                      user_id=USER_ID_GPT,\n",
"    #                                      session_id=SESSION_ID_GPT))\n",
"    #     except Exception as e:\n",
"    #         print(f\"An error occurred: {e}\")\n",
"\n",
"except Exception as e:\n",
"    print(f\"❌ Could not create or run the agent. Check your API key and model name. Error: {e}\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "adkclarifai",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
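The notebook's final cell notes that running the flow as a standard `.py` script requires `asyncio.run`, since top-level `await` only works where an event loop is already running (as in a notebook kernel). A minimal runnable sketch of that pattern, with the agent call replaced by a hypothetical stub (`fake_agent_call` is illustrative, not part of the ADK API):

```python
import asyncio

async def fake_agent_call(query: str) -> str:
    # Stand-in for call_agent_async: a real script would build the Agent
    # and Runner first, then iterate over runner.run_async(...) events.
    await asyncio.sleep(0)  # yield control, as a real network call would
    return f"Agent response for: {query}"

def main() -> str:
    # asyncio.run creates and closes the event loop; inside a notebook,
    # top-level `await` is used instead because a loop is already running.
    return asyncio.run(fake_agent_call("What's the weather in Tokyo?"))

if __name__ == "__main__":
    print(main())
```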
