11 changes: 11 additions & 0 deletions agents/deepagents_content_creator/Dockerfile
@@ -0,0 +1,11 @@
FROM python:3.13-slim
ARG RELEASE_VERSION="main"
COPY ./agents/deepagents_content_creator/ /app/agents/deepagents_content_creator
COPY ./apps/agentstack-sdk-py/ /app/apps/agentstack-sdk-py/
WORKDIR /app/agents/deepagents_content_creator
RUN --mount=type=cache,target=/tmp/.cache/uv \
    --mount=type=bind,from=ghcr.io/astral-sh/uv:0.9.5,source=/uv,target=/bin/uv \
    UV_COMPILE_BYTECODE=1 HOME=/tmp uv sync
ENV PRODUCTION_MODE=True \
    RELEASE_VERSION=${RELEASE_VERSION}
CMD ["/app/agents/deepagents_content_creator/.venv/bin/server"]
146 changes: 146 additions & 0 deletions agents/deepagents_content_creator/README.md
@@ -0,0 +1,146 @@
# Content Builder Agent

**Contributor:**

> Can we update this README to be specific to this implementation (what was changed from the original codebase) and link out to the original README here?

**Collaborator (Author):**

> Updated

<img width="1255" height="756" alt="content-cover-image" src="https://github.com/user-attachments/assets/4ebe0aba-2780-4644-8a00-ed4b96680dc9" />

A content-writing agent that produces blog posts, LinkedIn posts, and tweets, complete with generated cover images.

**This example demonstrates how to define an agent through three filesystem primitives:**
- **Memory** (`AGENTS.md`) – persistent context like brand voice and style guidelines
- **Skills** (`skills/*/SKILL.md`) – workflows for specific tasks, loaded on demand
- **Subagents** (`subagents.yaml`) – specialized agents for delegated tasks like research

The `content_writer.py` script shows how to combine these into a working agent.

## Quick Start

```bash
# Set API keys
export ANTHROPIC_API_KEY="..."
export GOOGLE_API_KEY="..." # For image generation
export TAVILY_API_KEY="..." # For web search (optional)

# Run (uv automatically installs dependencies on first run)
cd examples/content-builder-agent
uv run python content_writer.py "Write a blog post about prompt engineering"
```

**More examples:**
```bash
uv run python content_writer.py "Create a LinkedIn post about AI agents"
uv run python content_writer.py "Write a Twitter thread about the future of coding"
```

## How It Works

The agent is configured by files on disk, not code:

```
content-builder-agent/
├── AGENTS.md              # Brand voice & style guide
├── subagents.yaml         # Subagent definitions
├── skills/
│   ├── blog-post/
│   │   └── SKILL.md       # Blog writing workflow
│   └── social-media/
│       └── SKILL.md       # Social media workflow
└── content_writer.py      # Wires it together (includes tools)
```

| File | Purpose | When Loaded |
|------|---------|-------------|
| `AGENTS.md` | Brand voice, tone, writing standards | Always (system prompt) |
| `subagents.yaml` | Research and other delegated tasks | Always (defines `task` tool) |
| `skills/*/SKILL.md` | Content-specific workflows | On demand |

**What's in the skills?** Each skill teaches the agent a specific workflow:
- **Blog posts:** Structure (hook → context → main content → CTA), SEO best practices, research-first approach
- **Social media:** Platform-specific formats (LinkedIn character limits, Twitter thread structure), hashtag usage
- **Image generation:** Detailed prompt engineering guides with examples for different content types (technical posts, announcements, thought leadership)

## Architecture

```python
agent = create_deep_agent(
    memory=["./AGENTS.md"],                         # ← Middleware loads into system prompt
    skills=["./skills/"],                           # ← Middleware loads on demand
    tools=[generate_cover, generate_social_image],  # ← Image generation tools
    subagents=load_subagents("./subagents.yaml"),   # ← See note below
    backend=FilesystemBackend(root_dir="/"),
)
```

The `memory` and `skills` parameters are handled natively by deepagents middleware. Tools are defined in the script and passed directly.

**Note on subagents:** Unlike `memory` and `skills`, subagents must be defined in code. We use a small `load_subagents()` helper to externalize config to YAML. You can also define them inline:

```python
subagents=[
    {
        "name": "researcher",
        "description": "Research topics before writing...",
        "model": "anthropic:claude-haiku-4-5-20251001",
        "system_prompt": "You are a research assistant...",
        "tools": [web_search],
    }
],
```
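
Such a `load_subagents()` helper can be a thin YAML-to-dict mapper. Below is a minimal sketch, assuming `subagents.yaml` maps subagent names to their config and that tool names in the YAML resolve to tool objects supplied by the caller; the project's actual helper may differ:

```python
# Hedged sketch of a load_subagents() helper: reads a subagents YAML file
# and returns deepagents-style subagent dicts. The YAML shape and the
# name-to-tool mapping are assumptions, not the project's implementation.
import yaml  # pyyaml is already a project dependency


def load_subagents(config_path, tools=None):
    """Turn a subagents.yaml mapping into a list of subagent dicts."""
    tools = tools or {}
    with open(config_path) as f:
        config = yaml.safe_load(f) or {}
    subagents = []
    for name, spec in config.items():
        subagents.append(
            {
                "name": name,
                "description": spec.get("description", ""),
                "model": spec.get("model"),
                "system_prompt": spec.get("system_prompt", ""),
                # Resolve tool names from the YAML to actual tool objects
                "tools": [tools[t] for t in spec.get("tools", []) if t in tools],
            }
        )
    return subagents
```

Externalizing this config keeps the Python entry point free of per-agent prose while still letting you wire real tool objects in at load time.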

**Flow:**
1. Agent receives task → loads relevant skill (blog-post or social-media)
2. Delegates research to `researcher` subagent → saves to `research/`
3. Writes content following skill workflow → saves to `blogs/` or `linkedin/`
4. Generates cover image with Gemini → saves alongside content

## Output

```
blogs/
└── prompt-engineering/
    ├── post.md                  # Blog content
    └── hero.png                 # Generated cover image

linkedin/
└── ai-agents/
    ├── post.md                  # Post content
    └── image.png                # Generated image

research/
└── prompt-engineering.md        # Research notes
```

## Customizing

**Change the voice:** Edit `AGENTS.md` to modify brand tone and style.

**Add a content type:** Create `skills/<name>/SKILL.md` with YAML frontmatter:
```yaml
---
name: newsletter
description: Use this skill when writing email newsletters
---
# Newsletter Skill
...
```
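
Loading a skill amounts to splitting that YAML frontmatter from the Markdown body. A rough sketch of such a parser (hypothetical — the real deepagents loader may work differently):

```python
# Hypothetical sketch of SKILL.md parsing: split the "---"-delimited YAML
# frontmatter from the Markdown body. Flat "key: value" lines only; the
# actual deepagents loader is not shown in this repo excerpt.
def parse_skill(text):
    """Return (frontmatter_dict, body) for a SKILL.md file."""
    if not text.startswith("---"):
        return {}, text
    # maxsplit=2 yields: text before first "---", frontmatter, body
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.lstrip()
```

The `name` and `description` fields are what lets the agent decide, from the frontmatter alone, whether a skill is relevant before loading its full body.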

**Add a subagent:** Add to `subagents.yaml`:
```yaml
editor:
  description: Review and improve drafted content
  model: anthropic:claude-haiku-4-5-20251001
  system_prompt: |
    You are an editor. Review the content and suggest improvements...
  tools: []
```

**Add a tool:** Define it in `content_writer.py` with the `@tool` decorator and add to `tools=[]`.

## Security Note

This agent has filesystem access and can read, write, and delete files on your machine. Review generated content before publishing and avoid running in directories with sensitive data.

## Requirements

- Python 3.11+
- `ANTHROPIC_API_KEY` - For the main agent
- `GOOGLE_API_KEY` - For image generation (uses Gemini's [Imagen / "nano banana"](https://ai.google.dev/gemini-api/docs/image-generation) via `gemini-2.5-flash-image`)
- `TAVILY_API_KEY` - For web search (optional, research still works without it)
**Contributor:**

> **high**
>
> This README contains multiple inconsistencies and errors that make it difficult to understand and run the agent. It appears to be a copy from another project and was not fully adapted.
>
> - It refers to a non-existent file `content_writer.py` throughout the document. The main logic is in `src/content_creator/agent.py`.
> - The "Quick Start" section provides an incorrect path (`cd examples/content-builder-agent`) and an incorrect command (`uv run python content_writer.py ...`). This agent is set up as a server.
> - The "Requirements" section mentions `ANTHROPIC_API_KEY`, but the agent expects `TAVILY_API_KEY` and `GOOGLE_API_KEY` to be set as environment variables, and the LLM is provided by the platform.
>
> Please update the README to accurately reflect the implementation, including correct file paths, commands, and requirements.

40 changes: 40 additions & 0 deletions agents/deepagents_content_creator/pyproject.toml
@@ -0,0 +1,40 @@
[project]
name = "content-creator"
version = "0.1.0"
description = "A content writer agent configured entirely through files on disk"
authors = [
    { name = "IBM Corp." },
]
requires-python = ">=3.11"
dependencies = [
    "agentstack-sdk",
    "cachetools>=6.2.5",
    "deepagents>=0.3.5",
    "google-genai>=1.0.0",
    "langchain-openai>=1.1.7",
    "pillow>=10.0.0",
    "pyyaml>=6.0.0",
    "rich>=13.0.0",
    "tavily-python>=0.5.0",
    "wcmatch>=10.1",
]

[dependency-groups]
dev = []

[tool.ruff]
line-length = 120

[tool.uv.sources]
agentstack-sdk = { path = "../../apps/agentstack-sdk-py", editable = true }

[project.scripts]
server = "content_creator.agent:serve"

[build-system]
requires = ["uv_build>=0.9.0,<0.10.0"]
build-backend = "uv_build"

[tool.pyright]
venvPath = "."
venv = ".venv"
@@ -0,0 +1,2 @@
./blogs
.venv/
@@ -0,0 +1,3 @@
# Copyright 2026 © BeeAI a Series of LF Projects, LLC
# SPDX-License-Identifier: Apache-2.0

173 changes: 173 additions & 0 deletions agents/deepagents_content_creator/src/content_creator/agent.py
@@ -0,0 +1,173 @@
# Copyright 2025 © BeeAI a Series of LF Projects, LLC
# SPDX-License-Identifier: Apache-2.0


import json
import os
from collections import defaultdict
from pathlib import Path
from typing import Annotated
from datetime import datetime, timezone
from a2a.utils import get_message_text
from deepagents.backends import CompositeBackend, FilesystemBackend
from a2a.types import Message
from langchain_core.runnables import RunnableConfig

from agentstack_sdk.a2a.extensions import (
    AgentDetail,
    AgentDetailContributor,
    LLMServiceExtensionServer,
    LLMServiceExtensionSpec,
    PlatformApiExtensionSpec,
    PlatformApiExtensionServer,
    LLMServiceExtensionParams,
    LLMDemand,
    TrajectoryExtensionServer,
    TrajectoryExtensionSpec,
    EnvVar,
)
from agentstack_sdk.a2a.types import AgentMessage
from agentstack_sdk.server import Server
from agentstack_sdk.server.context import RunContext
from langchain_core.messages import HumanMessage, AIMessageChunk, ToolMessage
from deepagents import create_deep_agent, SubAgent

from content_creator.backend import AgentStackBackend
from content_creator.tools import generate_cover, generate_social_image
from content_creator.utils import load_subagents, create_chat_model
from content_creator.messages import to_langchain_messages
from content_creator.tools import web_search

DEFAULT_MODEL = "anthropic:claude-sonnet-4-5-20250929"
AVAILABLE_SUBAGENTS = load_subagents(config_path=Path("./subagents.yaml"), tools={"web_search": web_search})
LLM_BY_AGENT = {
    "default": LLMDemand(suggested=(DEFAULT_MODEL,), description="Default LLM for the root agent"),
    **{
        agent.name: LLMDemand(suggested=(agent.model,), description=f"LLM for subagent '{agent.name}'")
        for agent in AVAILABLE_SUBAGENTS
        if agent.model
    },
}

server = Server()

CURRENT_DIRECTORY = Path(__file__).parent


@server.agent(
    name="Content Creator Agent (Deepagents)",
    documentation_url=f"https://github.com/i-am-bee/agentstack/blob/{os.getenv('RELEASE_VERSION', 'main')}/agents/deepagents_content_creator",
    default_input_modes=["text/plain"],
    default_output_modes=["text/plain", "image/jpeg", "image/png", "text/markdown"],
    description="A content writer for a technology company that creates engaging, informative content that educates readers about AI, software development, and emerging technologies.",
    detail=AgentDetail(
        interaction_mode="multi-turn",
        author=AgentDetailContributor(name="IBM"),
        variables=[
            EnvVar(name="TAVILY_API_KEY", description="API Key for Tavily to do web search", required=True),
            EnvVar(name="GOOGLE_API_KEY", description="API Key for Google Image models", required=True),
        ],
    ),
)
async def deepagents_content_creator(
    message: Message,
    context: RunContext,
    llm: Annotated[
        LLMServiceExtensionServer,
        LLMServiceExtensionSpec(params=LLMServiceExtensionParams(llm_demands=LLM_BY_AGENT)),
    ],
    trajectory: Annotated[TrajectoryExtensionServer, TrajectoryExtensionSpec()],
    _: Annotated[PlatformApiExtensionServer, PlatformApiExtensionSpec()],
):
    default_llm_config = llm.data.llm_fulfillments.get("default")
    if not default_llm_config:
        yield "No LLM configured!"
        return

    user_message = get_message_text(message).strip()
    if not user_message:
        yield "Please provide a topic or instruction."
        return

    started_at = datetime.now(timezone.utc)
    await context.store(data=message)

    subagents: list[SubAgent] = []
    for sub_agent in AVAILABLE_SUBAGENTS:
        llm_config = llm.data.llm_fulfillments.get(sub_agent.name) or default_llm_config
        sub_agent = sub_agent.to_deepagent_subagent(model=create_chat_model(llm_config))
        subagents.append(sub_agent)

    agent_stack_backend = AgentStackBackend()
    print([f.filename for f in await agent_stack_backend.alist()])
    fs_backend = FilesystemBackend(virtual_mode=True, root_dir=CURRENT_DIRECTORY)

    agent = create_deep_agent(
        model=create_chat_model(default_llm_config),
        memory=[f"{CURRENT_DIRECTORY}/memory/AGENTS.md"],
        skills=[f"{CURRENT_DIRECTORY}/skills/"],
        tools=[generate_cover, generate_social_image],
        subagents=subagents,
        backend=CompositeBackend(
            default=agent_stack_backend,
            routes={f"{CURRENT_DIRECTORY}/memory/": fs_backend, f"{CURRENT_DIRECTORY}/skills/": fs_backend},
        ),
    )

    thread_id = f"session-{context.task_id}"
    history = [message async for message in context.load_history() if isinstance(message, Message) and message.parts]
    lc_messages = [*to_langchain_messages(history), HumanMessage(content=user_message)]
    tool_calls = defaultdict(lambda: {"name": "", "args": ""})

    async for chunk in agent.astream(
        input={"messages": lc_messages},
        config=RunnableConfig(configurable={"thread_id": thread_id}),
        stream_mode=["messages"],
    ):
        node_name, messages = chunk
        if node_name != "messages" or not messages:
            continue

        for last_msg in messages:
            if isinstance(last_msg, AIMessageChunk):
                if (
                    "finish_reason" in last_msg.response_metadata
                    and last_msg.response_metadata["finish_reason"] == "tool_calls"
                ):
                    for _, data in tool_calls.items():
                        tool_call_metadata = trajectory.trajectory_metadata(
                            title=data["name"], content=json.dumps(obj=data["args"])
                        )
                        yield tool_call_metadata
                        await context.store(data=AgentMessage(metadata=tool_call_metadata))
                    tool_calls.clear()

                elif last_msg.tool_call_chunks:
                    for tc in last_msg.tool_call_chunks:
                        tc_id: str | None = tc.get("id")
                        if tc_id:
                            tool_calls[tc_id]["name"] += tc.get("name") or ""
                            tool_calls[tc_id]["args"] += tc.get("args") or ""
                elif last_msg.text:
                    yield AgentMessage(text=last_msg.text)
                    await context.store(AgentMessage(text=last_msg.text))

            elif isinstance(last_msg, ToolMessage) and last_msg.name and last_msg.text:
                tool_message_metadata = trajectory.trajectory_metadata(title=last_msg.name, content=last_msg.text)
                yield tool_message_metadata
                await context.store(data=AgentMessage(metadata=tool_message_metadata))

    updated_files = await agent_stack_backend.alist(order_by="created_at", order="asc", created_after=started_at)
    for updated_file in updated_files:
        yield updated_file.to_file_part()


def serve():
    try:
        server.run(host=os.getenv("HOST", "127.0.0.1"), port=int(os.getenv("PORT", 10003)), configure_telemetry=True)
    except KeyboardInterrupt:
        pass


if __name__ == "__main__":
    serve()