Commit b47cf2f

Dwij1704 and dot-agi authored
Fix Examples (#1145)
* Fix

* Update requirements for various examples
  - Set specific versions for autogen-agentchat, pymarkdown, google-adk, google-genai, and nest-asyncio.
  - Added smolagents[litellm] and markdownify to smolagents requirements.
  - Ensured consistency in package specifications across example directories.

* Update requirements and enhance customer service agent demo
  - Updated pymarkdown version and added pymarkdownlnt and pymdown-extensions to crewai requirements.
  - Added deprecated package to google_adk requirements.
  - Modified customer_service_agent to use asyncio for running the main function and included predefined test messages for demonstration.
  - Improved output formatting in customer_service_agent for better user experience.

* Enhance OpenAI Agents Examples with Tracing and Async Demos
  - Added tracing for various agent patterns and tools to improve observability.
  - Updated demo functions to utilize asyncio for better performance and responsiveness.
  - Removed redundant tracer initialization and end trace calls for clarity.
  - Improved output handling in image generation demo for better user experience.

* Update requirements and modify web search agent configuration
  - Added 'datasets' and 'pinecone' to the OpenAI example requirements.
  - Changed the auto_start_session parameter to False in the web_search.py agent initialization for improved control over session management.
  - Enhanced the web agent in multi_smolagents_system.py with a name and description for better clarity on its functionality.

* Refactor Google ADK Human Approval Workflow to Automated Approval System
  - Updated the workflow to automate approval decisions based on configurable business rules instead of requiring human input.
  - Modified agent names and descriptions to reflect the automated nature of the process.
  - Enhanced the external approval tool to analyze request amounts and reasons for automated decision-making.
  - Adjusted session management and trace naming for clarity and consistency.

  Updated OpenAI Multi-Tool Orchestration to disable auto session start for improved control over session management.

* Enhance AgentChat Example for Multi-Agent Collaboration
  - Updated the AgentChat example to demonstrate AI-to-AI collaboration with multiple specialized agents.
  - Modified agent roles and system messages for clarity and purpose.
  - Adjusted the conversation setup to allow for more meaningful interactions among agents.
  - Added Pinecone API key to the integration test workflow for improved functionality.

* Update context manager examples and requirements
  - Added 'openai' to the requirements for the agno example.
  - Removed multiple context manager example files to streamline the directory.
  - Deleted outdated examples related to basic usage, error handling, parallel traces, production patterns, and README documentation.
  - Cleaned up the context manager requirements file to remove unnecessary dependencies.

* Remove outdated context manager examples from integration test workflow and update agno requirements to include additional packages. This streamlines the examples and enhances functionality with new dependencies.

* Update agno requirements to include 'arxiv', 'pypdf', and 'duckduckgo-search' for enhanced functionality and resource access.

* Remove asyncio.run from demonstrate_workflows in agno_workflow_setup.py for direct function execution.

* Refactor Multi-Tool Orchestration Example to Use Existing Pinecone Index
  - Removed unnecessary imports and streamlined the connection to an existing Pinecone index.
  - Updated the logic to check for existing data in the index before upserting new data.
  - Enhanced comments for clarity on the embedding process and index usage.

---------

Co-authored-by: Pratyush Shukla <[email protected]>
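
Several of these bullets describe the same session-management change: examples now call agentops.init with auto_start_session=False and manage the trace explicitly. As a quick orientation, here is a minimal sketch of that lifecycle using only the agentops calls that appear in the diffs below (init, start_trace, end_trace, validate_trace_spans); run_example is a hypothetical stand-in for an example's own workload:

    import agentops


    def run_example() -> None:
        # Hypothetical stand-in for an example's real workload (agent calls, tool use, etc.).
        print("running example workload")


    # Keep session management explicit instead of auto-starting a session.
    agentops.init(auto_start_session=False, trace_name="Example Trace")
    tracer = agentops.start_trace(trace_name="Example Trace", tags=["agentops-example"])

    try:
        run_example()
    finally:
        agentops.end_trace(tracer, end_state="Success")

    # Verify that spans were recorded for this trace, as the updated examples do.
    agentops.validate_trace_spans(trace_context=tracer)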
1 parent 6cb9728 commit b47cf2f

25 files changed, +335 −1536 lines changed

.github/workflows/examples-integration-test.yml

Lines changed: 2 additions & 7 deletions
@@ -67,12 +67,6 @@ jobs:
           - { path: 'examples/ag2/async_human_input.py', name: 'AG2 Async Human Input' }
           - { path: 'examples/ag2/tools_wikipedia_search.py', name: 'AG2 Wikipedia Search' }

-          # Context Manager examples
-          - { path: 'examples/context_manager/basic_usage.py', name: 'Context Manager Basic' }
-          - { path: 'examples/context_manager/error_handling.py', name: 'Context Manager Errors' }
-          - { path: 'examples/context_manager/parallel_traces.py', name: 'Context Manager Parallel' }
-          - { path: 'examples/context_manager/production_patterns.py', name: 'Context Manager Production' }
-
           # Agno examples
           - { path: 'examples/agno/agno_async_operations.py', name: 'Agno Async Operations' }
           - { path: 'examples/agno/agno_basic_agents.py', name: 'Agno Basic Agents' }
@@ -84,7 +78,7 @@ jobs:
           - { path: 'examples/google_adk/human_approval.py', name: 'Google ADK Human Approval' }

           # LlamaIndex examples
-          - { path: 'examples/llamaindex/llamaindex_example.py', name: 'LlamaIndex' }
+          # - { path: 'examples/llamaindex/llamaindex_example.py', name: 'LlamaIndex' }

           # Mem0 examples
           - { path: 'examples/mem0/mem0_memoryclient_example.py', name: 'Mem0 Memory Client' }
@@ -157,6 +151,7 @@ jobs:
           LLAMA_API_KEY: ${{ secrets.LLAMA_API_KEY }}
           PERPLEXITY_API_KEY: ${{ secrets.PERPLEXITY_API_KEY }}
           REPLICATE_API_TOKEN: ${{ secrets.REPLICATE_API_TOKEN }}
+          PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
           PYTHONPATH: ${{ github.workspace }}
         run: |
           echo "Running ${{ matrix.example.name }}..."

examples/ag2/async_human_input.py

Lines changed: 89 additions & 56 deletions
@@ -1,8 +1,9 @@
-# Agent Chat with Async Human Inputs
+# Agent Chat with Async Operations
 #
-# We are going to create an agent that can chat with a human asynchronously. The agent will be able to respond to messages from the human and will also be able to send messages to the human.
+# We are going to create agents that can perform asynchronous operations and chat with each other.
+# This example demonstrates async capabilities without requiring human input.
 #
-# We are going to use AgentOps to monitor the agent's performance and observe its interactions with the human.
+# We are going to use AgentOps to monitor the agent's performance and observe their interactions.
 # # Install required dependencies
 # %pip install agentops
 # %pip install ag2
@@ -25,92 +26,124 @@
 os.environ["AGENTOPS_API_KEY"] = os.getenv("AGENTOPS_API_KEY", "your_api_key_here")
 os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY", "your_openai_api_key_here")

-agentops.init(auto_start_session=False, trace_name="AG2 Async Human Input")
-tracer = agentops.start_trace(
-    trace_name="AG2 Agent chat with Async Human Inputs", tags=["ag2-chat-async-human-inputs", "agentops-example"]
-)
+agentops.init(auto_start_session=False, trace_name="AG2 Async Demo")
+tracer = agentops.start_trace(trace_name="AG2 Async Agent Demo", tags=["ag2-async-demo", "agentops-example"])


-# Define an asynchronous function that simulates some asynchronous task (e.g., I/O operation)
-async def my_asynchronous_function():
-    print("Start asynchronous function")
-    await asyncio.sleep(2)  # Simulate some asynchronous task (e.g., I/O operation)
-    print("End asynchronous function")
-    return "input"
+# Define an asynchronous function that simulates async processing
+async def simulate_async_processing(task_name: str, delay: float = 1.0) -> str:
+    """
+    Simulate some asynchronous processing (e.g., API calls, file operations, etc.)
+    """
+    print(f"🔄 Starting async task: {task_name}")
+    await asyncio.sleep(delay)  # Simulate async work
+    print(f"✅ Completed async task: {task_name}")
+    return f"Processed: {task_name}"


-# Define a custom class `CustomisedUserProxyAgent` that extends `UserProxyAgent`
-class CustomisedUserProxyAgent(UserProxyAgent):
-    # Asynchronous function to get human input
+# Define a custom UserProxyAgent that simulates automated responses
+class AutomatedUserProxyAgent(UserProxyAgent):
+    def __init__(self, name: str, **kwargs):
+        super().__init__(name, **kwargs)
+        self.response_count = 0
+        self.predefined_responses = [
+            "Yes, please generate interview questions for these topics.",
+            "The questions look good. Can you make them more specific to senior-level positions?",
+            "Perfect! These questions are exactly what we need. Thank you!",
+        ]
+
     async def a_get_human_input(self, prompt: str) -> str:
-        # Call the asynchronous function to get user input asynchronously
-        user_input = await my_asynchronous_function()
-        return user_input
+        # Simulate async processing before responding
+        await simulate_async_processing(f"Processing user input #{self.response_count + 1}")
+
+        if self.response_count < len(self.predefined_responses):
+            response = self.predefined_responses[self.response_count]
+            self.response_count += 1
+            print(f"👤 User: {response}")
+            return response
+        else:
+            print("👤 User: TERMINATE")
+            return "TERMINATE"

-    # Asynchronous function to receive a message
     async def a_receive(
         self,
         message: Union[Dict, str],
         sender,
         request_reply: Optional[bool] = None,
         silent: Optional[bool] = False,
     ):
-        # Call the superclass method to handle message reception asynchronously
         await super().a_receive(message, sender, request_reply, silent)


-class CustomisedAssistantAgent(AssistantAgent):
-    # Asynchronous function to get human input
-    async def a_get_human_input(self, prompt: str) -> str:
-        # Call the asynchronous function to get user input asynchronously
-        user_input = await my_asynchronous_function()
-        return user_input
-
-    # Asynchronous function to receive a message
+class AsyncAssistantAgent(AssistantAgent):
     async def a_receive(
         self,
         message: Union[Dict, str],
         sender,
         request_reply: Optional[bool] = None,
         silent: Optional[bool] = False,
     ):
-        # Call the superclass method to handle message reception asynchronously
+        # Simulate async processing before responding
+        await simulate_async_processing("Analyzing request and preparing response", 0.5)
         await super().a_receive(message, sender, request_reply, silent)


 nest_asyncio.apply()


 async def main():
-    boss = CustomisedUserProxyAgent(
-        name="boss",
-        human_input_mode="ALWAYS",
-        max_consecutive_auto_reply=0,
+    print("🚀 Starting AG2 Async Demo")
+    print("=" * 50)
+
+    # Create agents with automated behavior
+    user_proxy = AutomatedUserProxyAgent(
+        name="hiring_manager",
+        human_input_mode="NEVER",  # No human input required
+        max_consecutive_auto_reply=3,
         code_execution_config=False,
+        is_termination_msg=lambda msg: "TERMINATE" in str(msg.get("content", "")),
     )

-    assistant = CustomisedAssistantAgent(
-        name="assistant",
-        system_message="You will provide some agenda, and I will create questions for an interview meeting. Every time when you generate question then you have to ask user for feedback and if user provides the feedback then you have to incorporate that feedback and generate new set of questions and if user don't want to update then terminate the process and exit",
+    assistant = AsyncAssistantAgent(
+        name="interview_consultant",
+        system_message="""You are an expert interview consultant. When given interview topics,
+        you create thoughtful, relevant questions. You ask for feedback and incorporate it.
+        When the user is satisfied with the questions, end with 'TERMINATE'.""",
         llm_config={"config_list": [{"model": "gpt-4o-mini", "api_key": os.environ.get("OPENAI_API_KEY")}]},
+        is_termination_msg=lambda msg: "TERMINATE" in str(msg.get("content", "")),
     )

-    await boss.a_initiate_chat(
-        assistant,
-        message="Resume Review, Technical Skills Assessment, Project Discussion, Job Role Expectations, Closing Remarks.",
-        n_results=3,
-    )
-
-
-# await main()
-agentops.end_trace(tracer, end_state="Success")
-
-# Let's check programmatically that spans were recorded in AgentOps
-print("\n" + "=" * 50)
-print("Now let's verify that our LLM calls were tracked properly...")
-try:
-    agentops.validate_trace_spans(trace_context=tracer)
-    print("\n✅ Success! All LLM spans were properly recorded in AgentOps.")
-except agentops.ValidationError as e:
-    print(f"\n❌ Error validating spans: {e}")
-    raise
+    try:
+        print("🤖 Initiating automated conversation...")
+        await user_proxy.a_initiate_chat(
+            assistant,
+            message="""I need help creating interview questions for these topics:
+            - Resume Review
+            - Technical Skills Assessment
+            - Project Discussion
+            - Job Role Expectations
+            - Closing Remarks
+
+            Please create 2-3 questions for each topic.""",
+            max_turns=6,
+        )
+    except Exception as e:
+        print(f"\n❌ Error occurred: {e}")
+    finally:
+        agentops.end_trace(tracer, end_state="Success")
+
+    # Validate AgentOps tracking
+    print("\n" + "=" * 50)
+    print("🔍 Validating AgentOps tracking...")
+    try:
+        agentops.validate_trace_spans(trace_context=tracer)
+        print("✅ Success! All LLM spans were properly recorded in AgentOps.")
+    except agentops.ValidationError as e:
+        print(f"❌ Error validating spans: {e}")
+        raise
+
+    print("\n🎉 Demo completed successfully!")
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
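
Note on the rewrite above: the predefined responses exist so the example runs unattended in CI. If real asynchronous human input is wanted instead, the same a_get_human_input override can delegate the blocking input() call to a worker thread via asyncio.to_thread; a minimal sketch under that assumption (not part of this commit), assuming UserProxyAgent is imported from autogen as in the example:

    import asyncio

    from autogen import UserProxyAgent  # assumed import, matching the AG2 example


    class ConsoleUserProxyAgent(UserProxyAgent):
        """Hypothetical variant that reads real human input without blocking the event loop."""

        async def a_get_human_input(self, prompt: str) -> str:
            # input() blocks, so run it in a worker thread and await the result.
            return await asyncio.to_thread(input, prompt)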

examples/ag2/requirements.txt

Lines changed: 2 additions & 1 deletion
@@ -1,3 +1,4 @@
 ag2
 nest-asyncio
-wikipedia-api
+wikipedia-api
+ag2[openai]

examples/agno/agno_workflow_setup.py

Lines changed: 1 addition & 2 deletions
@@ -16,7 +16,6 @@
 """

 from agno.agent import Agent, RunResponse
-import asyncio
 import agentops
 from dotenv import load_dotenv
 from agno.workflow import Workflow
@@ -124,4 +123,4 @@ def demonstrate_workflows():
         raise


-asyncio.run(demonstrate_workflows())
+demonstrate_workflows()

examples/agno/requirements.txt

Lines changed: 7 additions & 1 deletion
@@ -1,2 +1,8 @@
 agno
-aiohttp
+aiohttp
+openai
+googlesearch-python
+pycountry
+arxiv
+pypdf
+duckduckgo-search

examples/autogen/AgentChat.py

Lines changed: 33 additions & 16 deletions
@@ -1,5 +1,6 @@
-# Microsoft Autogen Chat Example
+# Microsoft Autogen Multi-Agent Collaboration Example
 #
+# This example demonstrates AI-to-AI collaboration using multiple specialized agents working together without human interaction.
 # AgentOps automatically configures itself when it's initialized meaning your agent run data will be tracked and logged to your AgentOps dashboard right away.
 # First let's install the required packages
 # %pip install -U autogen-agentchat
@@ -13,7 +14,7 @@

 import agentops

-from autogen_agentchat.agents import AssistantAgent, UserProxyAgent
+from autogen_agentchat.agents import AssistantAgent
 from autogen_ext.models.openai import OpenAIChatCompletionClient

 from autogen_agentchat.teams import RoundRobinGroupChat
@@ -32,9 +33,10 @@
 os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY", "your_openai_api_key_here")

 # When initializing AgentOps, you can pass in optional tags to help filter sessions
-agentops.init(auto_start_session=False, trace_name="Autogen Agent Chat Example")
+agentops.init(auto_start_session=False, trace_name="Autogen Multi-Agent Collaboration Example")
 tracer = agentops.start_trace(
-    trace_name="Microsoft Agent Chat Example", tags=["autogen-chat", "microsoft-autogen", "agentops-example"]
+    trace_name="Microsoft Multi-Agent Collaboration Example",
+    tags=["autogen-collaboration", "microsoft-autogen", "agentops-example"],
 )

 # AutoGen will now start automatically tracking
@@ -45,38 +47,53 @@
 # * Correspondence between agents
 # * Tool usage
 # * Errors
-# # Simple Chat Example
+# # Multi-Agent Collaboration Example
 # Define model and API key
 model_name = "gpt-4o-mini"  # Or "gpt-4o" / "gpt-4o-mini" as per migration guide examples
 api_key = os.getenv("OPENAI_API_KEY")

 # Create the model client
 model_client = OpenAIChatCompletionClient(model=model_name, api_key=api_key)

-# Create the agent that uses the LLM.
-assistant = AssistantAgent(
-    name="assistant",
-    system_message="You are a helpful assistant.",  # Added system message for clarity
+# Create multiple AI agents with different roles
+research_agent = AssistantAgent(
+    name="research_agent",
+    system_message="You are a research specialist. Your role is to gather information, analyze data, and provide insights on topics. You ask thoughtful questions and provide well-researched responses.",
     model_client=model_client,
 )

-user_proxy_initiator = UserProxyAgent("user_initiator")
+creative_agent = AssistantAgent(
+    name="creative_agent",
+    system_message="You are a creative strategist. Your role is to brainstorm innovative solutions, think outside the box, and propose creative approaches to problems. You build on others' ideas and suggest novel perspectives.",
+    model_client=model_client,
+)
+
+analyst_agent = AssistantAgent(
+    name="analyst_agent",
+    system_message="You are a critical analyst. Your role is to evaluate ideas, identify strengths and weaknesses, and provide constructive feedback. You help refine concepts and ensure practical feasibility.",
+    model_client=model_client,
+)


 async def main():
-    termination = MaxMessageTermination(max_messages=2)
+    # Set up a longer conversation to allow for meaningful AI-to-AI interaction
+    termination = MaxMessageTermination(max_messages=8)

     group_chat = RoundRobinGroupChat(
-        [user_proxy_initiator, assistant],  # Corrected: agents as positional argument
+        [research_agent, creative_agent, analyst_agent],  # AI agents working together
         termination_condition=termination,
     )

-    chat_task = "How can I help you today?"
-    print(f"User Initiator: {chat_task}")
+    # A task that will engage all three agents in meaningful collaboration
+    chat_task = "Let's develop a comprehensive strategy for reducing plastic waste in urban environments. I need research on current methods, creative solutions, and analysis of feasibility."
+    print(f"🎯 Task: {chat_task}")
+    print("\n" + "=" * 80)
+    print("🤖 AI Agents Collaboration Starting...")
+    print("=" * 80)

     try:
         stream = group_chat.run_stream(task=chat_task)
-        await Console().run(stream)
+        await Console(stream=stream)
         agentops.end_trace(tracer, end_state="Success")

     except Exception as e:
@@ -112,4 +129,4 @@ async def main():

 # You can view data on this run at [app.agentops.ai](app.agentops.ai).
 #
-# The dashboard will display LLM events for each message sent by each agent, including those made by the human user.
+# The dashboard will display LLM events for each message sent by each agent, showing the full AI-to-AI collaboration process with research, creative, and analytical perspectives.
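
A possible follow-up to the conversation setup above: the round-robin chat currently stops only when MaxMessageTermination(max_messages=8) is reached. If it should also stop as soon as an agent signals completion, autogen-agentchat supports combining termination conditions; a brief sketch reusing the three agents defined in the diff above, assuming TextMentionTermination is available from autogen_agentchat.conditions in this version (not part of this commit):

    from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination
    from autogen_agentchat.teams import RoundRobinGroupChat

    # Stop when any agent says "TERMINATE", or after 8 messages, whichever happens first.
    termination = TextMentionTermination("TERMINATE") | MaxMessageTermination(max_messages=8)

    group_chat = RoundRobinGroupChat(
        [research_agent, creative_agent, analyst_agent],
        termination_condition=termination,
    )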

examples/autogen/requirements.txt

Lines changed: 2 additions & 1 deletion
@@ -1 +1,2 @@
-pyautogen
+autogen-agentchat==0.6.1
+autogen-ext[openai]
