Commit 32f50a9

feat: llama 4 agent added (#1530)
1 parent 9f68f53 commit 32f50a9

7 files changed, +991 -0 lines changed
README.md (47 additions & 0 deletions)

# Tweet Simulator Agent Guide

This guide provides detailed steps to create a Tweet Simulator Agent that leverages Llama 4 and Composio, featuring a web interface built with FastAPI. Ensure you have Python 3.10 or higher installed.

## Steps to Run

**Navigate to the Project Directory:**
Change to the directory where the `setup.sh`, `backend_main.py`, `requirements.txt`, and `README.md` files are located. For example:
```shell
# Make sure you are in the root of the composio repository first
cd python/examples/advanced_agents/tweet-simulator/llama-4
```

### 1. Run the Setup File
Make the `setup.sh` script executable (if necessary). On Linux or macOS:
```shell
chmod +x setup.sh
```
Execute the `setup.sh` script to set up the environment and install dependencies. It also activates the virtual environment (`~/.venvs/tweet_simulator`).
```shell
./setup.sh
```
Now fill in the `.env` file with your secrets (such as `GROQ_API_KEY`).

### 2. Activate the Virtual Environment (if not already active)
If you open a new terminal, activate the virtual environment created by the setup script:
```shell
source ~/.venvs/tweet_simulator/bin/activate
```

### 3. Run the FastAPI Application
Use `uvicorn` to run the app defined in `backend_main.py`:
```shell
uvicorn backend_main:app --reload --port 8000
```
* `--reload`: enables auto-reloading when code changes are detected.
* `--port 8000`: specifies the port to run the application on.

### 4. Access the Application
Open your web browser and navigate to:
```
http://localhost:8000
```
You should see the Tweet Simulator interface. Enter a topic and watch the agents generate tweets!
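Under the hood, the page consumes a server-sent event stream from `/simulation_stream`; each event's `data:` payload is a JSON object with at least `role` and `content` keys (tweets also carry `likes` and `profile_pic_url`). As a rough sketch of what a client does with that stream, here is a minimal parser run against a hard-coded sample payload (the values are illustrative, not captured from a real run):

```python
import json

# Sample SSE text in the shape the backend emits; payload values are
# illustrative placeholders, not real output.
sample_stream = (
    'data: {"role": "System Status", "content": "Researcher is gathering information..."}\n'
    '\n'
    'data: {"role": "Elon Musk", "content": "Bold take here.", "likes": 42, '
    '"profile_pic_url": "https://x.com/elonmusk/photo"}\n'
    '\n'
)

def parse_sse_events(text: str) -> list[dict]:
    """Extract the JSON payload of every `data:` line in an SSE stream."""
    events = []
    for line in text.splitlines():
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events

events = parse_sse_events(sample_stream)
print(events[1]["role"])   # → Elon Musk
print(events[1]["likes"])  # → 42
```

A real client would read the response incrementally (e.g. with the browser's `EventSource` API, as the bundled frontend presumably does) rather than buffering the whole stream.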
backend_main.py (210 additions & 0 deletions)

```python
import os
import dotenv
import asyncio
import json
import random
from fastapi import FastAPI, Request, Query
from fastapi.responses import HTMLResponse
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from sse_starlette.sse import EventSourceResponse
from typing import List, Dict, Any, AsyncGenerator

from composio_llamaindex import App, ComposioToolSet
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.llms import ChatMessage
from llama_index.llms.groq import Groq

dotenv.load_dotenv()

app = FastAPI()
# Paths are relative to this project directory; run uvicorn from
# python/examples/advanced_agents/tweet-simulator/llama-4 (see README).
app.mount("/static", StaticFiles(directory="static"), name="static")
templates = Jinja2Templates(directory="templates")

composio_toolset = ComposioToolSet()
fetched_tools = composio_toolset.get_tools(apps=[App.COMPOSIO_SEARCH])
agent_tools: List[Any] = list(fetched_tools)

llm = Groq(model='meta-llama/llama-4-scout-17b-16e-instruct', api_key=os.environ.get('GROQ_API_KEY'))

RESEARCHER_NAME = "Researcher"
FAMOUS_AGENT_NAMES = ["Elon Musk", "Marc Andreessen", "Sam Altman", "Naval Ravikant", "Paul Graham", "Balaji Srinivasan", "Pieter Levels"]
REACTION_AGENT_NAMES = FAMOUS_AGENT_NAMES

PROFILE_PICS = {
    "Elon Musk": "https://x.com/elonmusk/photo",
    "Marc Andreessen": "https://x.com/pmarca/photo",
    "Sam Altman": "https://x.com/sama/photo",
    "Naval Ravikant": "https://x.com/naval/photo",
    "Paul Graham": "https://x.com/paulg/photo",
    "Balaji Srinivasan": "https://x.com/balajis/photo",
    "Pieter Levels": "https://x.com/levelsio/photo"
}

AGENT_PERSONAS = {
    RESEARCHER_NAME: ChatMessage(
        role="system",
        content=(
            f"You are the {RESEARCHER_NAME}. Your sole task is to research a given topic. "
            "Your process: "
            "1. **Research:** Use the COMPOSIO_SEARCH tool to find relevant information about the topic. Focus on key features, recent news, and general sentiment. "
            "2. **Summarize Concisely:** Synthesize your findings into a **very brief** (2-3 key bullet points or sentences) summary. This summary must be concise to avoid exceeding limits. It will be used by other agents. "
            "**Constraint:** Your final output MUST ONLY be the concise research summary text. No extra greetings or commentary."
        )
    ),
    "Elon Musk": ChatMessage(
        role="system",
        content=(
            "You are Elon Musk. React to the topic/summary with a short, direct tweet (max 280 chars). "
            "Make a bold statement, announce something matter-of-factly, or offer a blunt take. **Take a contrarian stance, disagreeing with the mainstream sentiment or likely views of others (like Sam Altman).** Simple language. "
            "Output ONLY the tweet text. No hashtags. Example style: 'There was (still is) a massive cyberattack against X.' or 'I expected to lose, but there is value to losing a piece for a positional gain.'"
        )
    ),
    "Marc Andreessen": ChatMessage(
        role="system",
        content=(
            "You are Marc Andreessen. React to the topic/summary with an enthusiastic, insightful tweet (max 280 chars). "
            "Highlight tech breakthroughs, express techno-optimism, or share an interesting story/example. **Maintain a neutral perspective on the core debate, focusing on the technological or market implications objectively.** Think 'build'. "
            "Output ONLY the tweet text. No hashtags. Example style: 'Deepseek R1 is one of the most amazing and impressive breakthroughs I\'ve ever seen...' or 'Retweet or quote tweet this if you\'ve ever been de-banked...'"
        )
    ),
    "Sam Altman": ChatMessage(
        role="system",
        content=(
            "You are Sam Altman. React to the topic/summary with a concise, thoughtful tweet (max 280 chars). "
            "Share an observation about AI progress, a resource constraint ('GPUs melting'), or a strategic thought. Often use 'we'. Measured tone. "
            "Output ONLY the tweet text. No hashtags. Example style: 'we trained a new model that is good at creative writing...' or 'it\'s super fun seeing people love images in chatgpt. but our GPUs are melting.'"
        )
    ),
    "Naval Ravikant": ChatMessage(
        role="system",
        content=(
            "You are Naval Ravikant. React to the topic/summary with a short, philosophical, aphoristic tweet (max 280 chars). "
            "Distill the essence into a principle about wealth, time, or long-term thinking. Very concise. "
            "Output ONLY the tweet text. No hashtags. Example style: 'Play long-term games with long-term people.' or 'Earn with your mind, not your time.'"
        )
    ),
    "Paul Graham": ChatMessage(
        role="system",
        content=(
            "You are Paul Graham. React to the topic/summary with a concise, insightful tweet (max 280 chars). Focus on subtle observations, identifying patterns, or offering pointed critique/advice related to thinking or building. "
            "Distill a specific observation. **Lean towards supporting the likely perspective of Sam Altman, using your observational style to bolster that view.** "
            "Output ONLY the tweet text. No hashtags. Example style: 'My point here is not that I dislike \'delve,\' though I do, but that it\'s a sign that text was written by ChatGPT.'"
        )
    ),
    "Balaji Srinivasan": ChatMessage(
        role="system",
        content=(
            "You are Balaji Srinivasan. React to the topic/summary with a short, analytical, future-focused tweet (max 280 chars). "
            "Focus on macro trends (reindustrialization, AI overproduction), potential disruptions, or network effects. Can be dense or use strong keywords. "
            "Output ONLY the tweet text. No hashtags. Example style: 'Everyone wants to reindustrialize. No one wants to remember why the US deindustrialized...' or 'AI OVERPRODUCTION China seeks to commoditize...'"
        )
    ),
    "Pieter Levels": ChatMessage(
        role="system",
        content=(
            "You are Pieter Levels (levelsio). React to the topic/summary with a direct, pragmatic tweet based on personal experience or indie hacker reality (max 280 chars). "
            "Challenge conventional wisdom, talk about bootstrapping, or share a blunt observation. Often uses 'I'. "
            "Output ONLY the tweet text. No hashtags. Example style: 'I\'m on 6 grams of Creatine per day...' or 'So many VC funded exits you hear about are actually massive failures...'"
        )
    ),
}

async def stream_simulation(topic: str) -> AsyncGenerator[str, None]:
    simulation_history: List[Dict[str, str]] = []
    agents = {}
    research_summary = "No research summary generated."

    # Build one agent per persona, all sharing the same tools and LLM.
    try:
        for name, persona_msg in AGENT_PERSONAS.items():
            agents[name] = FunctionCallingAgentWorker(
                tools=agent_tools,
                llm=llm,
                prefix_messages=[persona_msg],
                max_function_calls=5,
                allow_parallel_tool_calls=False,
                verbose=False,
            ).as_agent()
    except Exception as e:
        yield json.dumps({"role": "System Error", "content": f"Server error during agent setup: {e}"})
        return

    try:
        # Phase 1: the Researcher produces a brief summary of the topic.
        researcher_agent = agents[RESEARCHER_NAME]
        research_prompt = f"Research the topic: '{topic}' and provide a very brief (2-3 key bullet points or sentences) summary."
        try:
            status_update = {"role": "System Status", "content": f"Researcher is gathering information on '{topic}'..."}
            yield json.dumps(status_update)
            response = await researcher_agent.achat(research_prompt)
            research_summary = response.response.strip()
            if not research_summary:
                research_summary = "Researcher did not produce a summary."
        except Exception as e:
            research_summary = f"Error during research phase: {e}"
            error_update = {"role": "System Error", "content": f"Error during research: {e}"}
            yield json.dumps(error_update)

        # Phase 2: each persona reacts to the summary with a tweet.
        num_turns = 1
        for turn in range(num_turns):
            for name in REACTION_AGENT_NAMES:
                agent = agents[name]
                prompt = (
                    f"Topic: '{topic}'\n\n"
                    f"Research Summary Provided:\n{research_summary}\n\n"
                    f"Remember your persona and instructions. Focus SOLELY on your unique perspective reacting ONLY to the research summary and topic. Output ONLY the tweet text."
                )
                tweet_data = {}
                tweet_content = ""
                try:
                    response = await agent.achat(prompt)
                    tweet_content = response.response.strip()
                except AttributeError:
                    # Fall back to the synchronous API if achat is unavailable.
                    response = agent.chat(prompt)
                    tweet_content = response.response.strip()
                except Exception:
                    tweet_content = f"[Error generating tweet for {name}]"

                profile_pic_url = PROFILE_PICS.get(name)

                if not tweet_content or tweet_content.lower() == 'none':
                    tweet_content = f"[{name} did not generate a tweet.]"
                    tweet_data = {"role": "System Info", "content": tweet_content}
                else:
                    likes = random.randint(0, 1000)
                    tweet_data = {
                        "role": name,
                        "content": tweet_content,
                        "likes": likes,
                        "profile_pic_url": profile_pic_url
                    }

                history_entry = {"role": name, "content": tweet_content}
                simulation_history.append(history_entry)
                yield json.dumps(tweet_data)

        completion_data = {"role": "System", "content": "Simulation Complete"}
        yield json.dumps(completion_data)

    except asyncio.CancelledError:
        print("[STREAM] Client disconnected.")
    except Exception as e:
        try:
            error_data = {"role": "System Error", "content": f"An unexpected server error occurred: {e}"}
            yield json.dumps(error_data)
        except Exception as final_e:
            print(f"[STREAM] Error yielding final error message: {final_e}")
    finally:
        print("[STREAM] Simulation stream finished.")

@app.get("/", response_class=HTMLResponse)
async def read_root(request: Request):
    return templates.TemplateResponse("index.html", {"request": request})

@app.get("/simulation_stream")
async def simulation_endpoint(topic: str = Query(...)):
    return EventSourceResponse(stream_simulation(topic))

if __name__ == "__main__":
    import uvicorn
    uvicorn.run("backend_main:app", host="0.0.0.0", port=8000, reload=True)
```
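The tweet loop prefers the agent's async `achat` and falls back to the blocking `chat` when `achat` is missing (caught via `AttributeError`). That pattern can be isolated into a small helper; the classes below are illustrative stand-ins, not part of the example above:

```python
import asyncio

class SyncOnlyAgent:
    """Stand-in for an agent that only exposes a blocking chat method."""
    def chat(self, prompt: str) -> str:
        return f"sync reply to: {prompt}"

class AsyncAgent:
    """Stand-in for an agent with a native async chat method."""
    async def achat(self, prompt: str) -> str:
        return f"async reply to: {prompt}"

async def chat_with_fallback(agent, prompt: str) -> str:
    """Prefer the async API; fall back to the sync one if it is missing.

    Note: `agent.achat` is resolved before the await, so a missing method
    raises AttributeError synchronously and is caught here.
    """
    try:
        return await agent.achat(prompt)
    except AttributeError:
        return agent.chat(prompt)

async def main():
    print(await chat_with_fallback(AsyncAgent(), "hi"))     # → async reply to: hi
    print(await chat_with_fallback(SyncOnlyAgent(), "hi"))  # → sync reply to: hi

asyncio.run(main())
```

One caveat of this approach: an `AttributeError` raised *inside* a real `achat` implementation would also trigger the fallback, silently masking the bug, so checking `hasattr(agent, "achat")` up front is a stricter alternative.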
requirements.txt (7 additions & 0 deletions)

```
composio-llamaindex==0.7.11
llama-index-llms-groq==0.3.1
python-dotenv==1.0.1
aioconsole==0.8.1
fastapi[all]==0.115.12
sse-starlette==2.1.3
llama-index-core==0.12.25
```
setup.sh (31 additions & 0 deletions)

```shell
#!/bin/bash

# Create a virtual environment
echo "Creating virtual environment..."
python3.10 -m venv ~/.venvs/tweet_simulator

# Activate the virtual environment
echo "Activating virtual environment..."
source ~/.venvs/tweet_simulator/bin/activate

# Install libraries from requirements.txt (pip from the activated venv)
echo "Installing libraries from requirements.txt..."
pip install -r requirements.txt

# Log in to your Composio account
echo "Log in to your Composio account"
composio login

# Copy env backup to .env file
if [ -f ".env.example" ]; then
    echo "Copying .env.example to .env..."
    cp .env.example .env
else
    echo "No .env.example file found. Creating a new .env file..."
    touch .env
fi

# Prompt user to fill the .env file
echo "Please fill in the .env file with the necessary environment variables."

echo "Setup completed successfully!"
```
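After setup, the `.env` file needs at least the Groq key that `backend_main.py` reads via `os.environ.get('GROQ_API_KEY')`. A minimal sketch (the value is a placeholder, not a real key):

```
# .env -- replace the placeholder with your actual Groq API key
GROQ_API_KEY=your_groq_api_key_here
```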
