Merged
1 change: 1 addition & 0 deletions a2a/.gitignore
@@ -1,3 +1,4 @@
/secret.*
/.vscode
/.venv
/.mypy_cache
17 changes: 14 additions & 3 deletions a2a/Dockerfile
@@ -14,11 +14,22 @@ RUN python -m compileall -q .
ENV AGENT_CONFIG=/app/agent.yaml

COPY <<EOF ./entrypoint.sh
#!/bin/bash
#!/bin/sh
set -e

export LLM_AGENT_API_URL=\${MODEL_RUNNER_URL}
export LLM_AGENT_MODEL_NAME=\${MODEL_RUNNER_MODEL}
if test -f /run/secrets/openai-api-key; then
export OPENAI_API_KEY=$(cat /run/secrets/openai-api-key)
fi

if test -n "\${OPENAI_API_KEY}"; then
echo "Using OpenAI with \${OPENAI_MODEL_NAME}"
export LLM_AGENT_MODEL_PROVIDER=openai
export LLM_AGENT_MODEL_NAME=\${OPENAI_MODEL_NAME}
else
echo "Using Docker Model Runner with \${MODEL_RUNNER_MODEL}"
export LLM_AGENT_MODEL_PROVIDER=docker
export LLM_AGENT_MODEL_NAME=\${MODEL_RUNNER_MODEL}
fi
exec \$@
EOF
RUN chmod +x ./entrypoint.sh
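The entrypoint above gives a mounted secret file precedence over any inherited `OPENAI_API_KEY`, then falls back to Docker Model Runner. The same precedence can be sketched in Python (a hypothetical `resolve_model_env` helper for illustration; the image itself runs `entrypoint.sh`):

```python
from pathlib import Path

def resolve_model_env(env, secret_path="/run/secrets/openai-api-key"):
    """Sketch of the entrypoint's precedence: a mounted secret wins,
    then an inherited OPENAI_API_KEY, else Docker Model Runner."""
    env = dict(env)
    secret = Path(secret_path)
    if secret.is_file():
        # Secret mounted by Compose overrides anything inherited
        env["OPENAI_API_KEY"] = secret.read_text().strip()
    if env.get("OPENAI_API_KEY"):
        env["LLM_AGENT_MODEL_PROVIDER"] = "openai"
        env["LLM_AGENT_MODEL_NAME"] = env.get("OPENAI_MODEL_NAME", "")
    else:
        env["LLM_AGENT_MODEL_PROVIDER"] = "docker"
        env["LLM_AGENT_MODEL_NAME"] = env.get("MODEL_RUNNER_MODEL", "")
    return env
```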
20 changes: 20 additions & 0 deletions a2a/README.md
@@ -39,6 +39,25 @@ Using Docker Offload with GPU support, you can run the same demo with a larger m
docker compose -f compose.yaml -f compose.offload.yaml up --build
```

# 🧠 Inference Options

By default, this project uses [Docker Model Runner] to handle LLM inference locally — no internet connection or external API key is required.

If you’d prefer to use OpenAI instead:

1. Create a `secret.openai-api-key` file with your OpenAI API key:

```
sk-...
```

2. Restart the project with the OpenAI configuration:

```
docker compose down -v
docker compose -f compose.yaml -f compose.openai.yaml up
```

# ❓ What Can It Do?

This system performs multi-agent fact verification, coordinated by an **Auditor**:
@@ -125,3 +144,4 @@ docker compose down -v
[DuckDuckGo]: https://duckduckgo.com
[Docker Compose]: https://github.com/docker/compose
[Docker Desktop]: https://www.docker.com/products/docker-desktop/
[Docker Model Runner]: https://docs.docker.com/ai/model-runner/
4 changes: 3 additions & 1 deletion a2a/agents/critic.yaml
@@ -50,7 +50,9 @@ instructions: |

Here is the question and answer you are going to double check:

model: ${LLM_AGENT_MODEL_NAME}
model:
name: ${LLM_AGENT_MODEL_NAME}
provider: ${LLM_AGENT_MODEL_PROVIDER}
tools:
- mcp/duckduckgo:search
skills:
4 changes: 3 additions & 1 deletion a2a/agents/reviser.yaml
@@ -29,7 +29,9 @@ instructions: |
* If the answer is inaccurate, disputed, or unsupported, then you should output your revised answer text.
In any case YOU MUST output only your answer.

model: ${LLM_AGENT_MODEL_NAME}
model:
name: ${LLM_AGENT_MODEL_NAME}
provider: ${LLM_AGENT_MODEL_PROVIDER}
skills:
- id: revise_answer
name: Revise and Correct Answer Text
21 changes: 21 additions & 0 deletions a2a/compose.openai.yaml
@@ -0,0 +1,21 @@

services:
auditor-agent-a2a:
environment:
- OPENAI_MODEL_NAME=o3
secrets:
- openai-api-key
critic-agent-a2a:
environment:
- OPENAI_MODEL_NAME=o3
secrets:
- openai-api-key
reviser-agent-a2a:
environment:
- OPENAI_MODEL_NAME=o3
secrets:
- openai-api-key

secrets:
openai-api-key:
file: secret.openai-api-key
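Running with `-f compose.yaml -f compose.openai.yaml` makes Compose merge this overlay onto the base file, so each agent service gains the `OPENAI_MODEL_NAME` variable and the secret mount. A simplified sketch of that merge (not Compose's exact algorithm, which has extra rules such as keyed merging of `environment` entries):

```python
def merge_compose(base, overlay):
    """Recursively merge an override file onto a base Compose file:
    nested mappings merge key by key, overlay values win otherwise.
    (Simplified sketch, for illustration only.)"""
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_compose(merged[key], value)
        else:
            merged[key] = value
    return merged
```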
19 changes: 11 additions & 8 deletions a2a/src/AgentKit/agent/llm_agent.py
@@ -34,23 +34,26 @@ def _build_model(self) -> BaseLlm:
provider = self._config.model.provider
name = self._config.model.name

if not name:
raise ValueError(
f"LLM agent {self._config.name} does not specify a model name"
)

if not provider:
provider = "docker"

base_url = None
api_key: str | None = None
if provider == "docker":
api_key = "does_not_matter_but_cannot_be_empty"
base_url = os.getenv("LLM_AGENT_API_URL")
base_url = os.getenv("MODEL_RUNNER_URL")
if not base_url:
raise ValueError("AGENT_LLM_URL environment variable is not set")
raise ValueError("MODEL_RUNNER_URL environment variable is not set")
name = "openai/" + name
elif provider == "openai":
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
raise ValueError("OPENAI_API_KEY environment variable is not set")
else:
raise ValueError(f"unknown model provider {provider}")
if not name:
raise ValueError(
f"LLM agent {self._config.name} does not specify a model name"
)
print("LLMPARAMS", name, base_url)
return LiteLlm(model="openai/" + name, api_key=api_key, base_url=base_url)
return LiteLlm(model=name, api_key=api_key, base_url=base_url)
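The reordered checks in `_build_model` validate the model name first, default the provider to `docker`, and only then branch. The same flow in isolation (a hypothetical `build_llm_params` helper returning the kwargs that would be passed to `LiteLlm`, for illustration only):

```python
import os

def build_llm_params(name, provider=None):
    """Mirror _build_model's branching: name is required, provider
    defaults to "docker" (Model Runner), OpenAI needs a real key."""
    if not name:
        raise ValueError("LLM agent does not specify a model name")
    provider = provider or "docker"
    base_url = None
    if provider == "docker":
        # Model Runner ignores the key, but LiteLLM rejects an empty one
        api_key = "does_not_matter_but_cannot_be_empty"
        base_url = os.getenv("MODEL_RUNNER_URL")
        if not base_url:
            raise ValueError("MODEL_RUNNER_URL environment variable is not set")
        # LiteLLM routes Model Runner as an OpenAI-compatible endpoint
        name = "openai/" + name
    elif provider == "openai":
        api_key = os.getenv("OPENAI_API_KEY")
        if not api_key:
            raise ValueError("OPENAI_API_KEY environment variable is not set")
    else:
        raise ValueError(f"unknown model provider {provider}")
    return {"model": name, "api_key": api_key, "base_url": base_url}
```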
1 change: 1 addition & 0 deletions adk/.gitignore
@@ -1,3 +1,4 @@
/secret.*
/.vscode
/.venv
/.mypy_cache
22 changes: 21 additions & 1 deletion adk/Dockerfile
@@ -13,9 +13,29 @@ RUN --mount=type=cache,target=/root/.cache/uv \
COPY agents/ ./agents/
RUN python -m compileall -q .

COPY <<EOF /entrypoint.sh
#!/bin/sh
set -e

if test -f /run/secrets/openai-api-key; then
export OPENAI_API_KEY=$(cat /run/secrets/openai-api-key)
fi

if test -n "\${OPENAI_API_KEY}"; then
echo "Using OpenAI with \${OPENAI_MODEL_NAME}"
else
echo "Using Docker Model Runner with \${MODEL_RUNNER_MODEL}"
export OPENAI_BASE_URL=\${MODEL_RUNNER_URL}
export OPENAI_MODEL_NAME=openai/\${MODEL_RUNNER_MODEL}
export OPENAI_API_KEY=cannot_be_empty
fi
exec adk web --host 0.0.0.0 --port 8080 --log_level DEBUG
EOF
RUN chmod +x /entrypoint.sh

# Create non-root user
RUN useradd --create-home --shell /bin/bash app \
&& chown -R app:app /app
USER app

CMD [ "adk", "web", "--host", "0.0.0.0", "--port", "8080", "--log_level", "DEBUG" ]
ENTRYPOINT [ "/entrypoint.sh" ]
19 changes: 19 additions & 0 deletions adk/README.md
@@ -39,6 +39,24 @@ docker compose -f compose.yaml -f compose.offload.yaml up --build
No configuration needed — everything runs from the container. Open `http://localhost:8080` in your browser to
chat with the agents.

# 🧠 Inference Options

By default, this project uses [Docker Model Runner] to handle LLM inference locally — no internet connection or external API key is required.

If you’d prefer to use OpenAI instead:

1. Create a `secret.openai-api-key` file with your OpenAI API key:

```
sk-...
```

2. Restart the project with the OpenAI configuration:

```
docker compose down -v
docker compose -f compose.yaml -f compose.openai.yaml up
```

# ❓ What Can It Do?

@@ -127,3 +145,4 @@ docker compose down -v
[DuckDuckGo]: https://duckduckgo.com
[Docker Compose]: https://github.com/docker/compose
[Docker Desktop]: https://www.docker.com/products/docker-desktop/
[Docker Model Runner]: https://docs.docker.com/ai/model-runner/
6 changes: 0 additions & 6 deletions adk/agents/__init__.py
@@ -15,17 +15,11 @@
"""LLM Auditor for verifying & refining LLM-generated answers using the web."""

import logging
import os

import litellm

from . import agent

# Set the base URL for the OpenAI API to the Docker Model Runner URL
os.environ.setdefault("OPENAI_BASE_URL", os.getenv("MODEL_RUNNER_URL", ""))
# Set the API key to a dummy value since it's not used
os.environ.setdefault("OPENAI_API_KEY", "not-used")

# Enable logging with reduced verbosity
logging.basicConfig(
level=logging.INFO, # Less verbose than DEBUG
4 changes: 2 additions & 2 deletions adk/agents/sub_agents/critic/agent.py
@@ -25,8 +25,8 @@
tools = create_mcp_toolsets(tools_cfg=["mcp/duckduckgo:search"])

critic_agent = Agent(
# MODEL_RUNNER_MODEL is set by model_runner provider with the model name
model=LiteLlm(model=f"openai/{os.environ.get('MODEL_RUNNER_MODEL')}"),
# OPENAI_MODEL_NAME is set by entrypoint.sh with the model name
model=LiteLlm(model=os.environ.get("OPENAI_MODEL_NAME", "")),
name="critic_agent",
instruction=prompt.CRITIC_PROMPT,
tools=tools, # type: ignore
4 changes: 2 additions & 2 deletions adk/agents/sub_agents/reviser/agent.py
@@ -95,8 +95,8 @@ def force_string_content(


reviser_agent = Agent(
# MODEL_RUNNER_MODEL is set by model_runner provider with the model name
model=LiteLlm(model=f"openai/{os.environ.get('MODEL_RUNNER_MODEL')}"),
# OPENAI_MODEL_NAME is set by entrypoint.sh with the model name
model=LiteLlm(model=os.environ.get("OPENAI_MODEL_NAME", "")),
name="reviser_agent",
instruction=prompt.REVISER_PROMPT,
before_model_callback=force_string_content,
10 changes: 10 additions & 0 deletions adk/compose.openai.yaml
@@ -0,0 +1,10 @@
services:
adk:
environment:
- OPENAI_MODEL_NAME=o3
secrets:
- openai-api-key

secrets:
openai-api-key:
file: secret.openai-api-key
1 change: 1 addition & 0 deletions crew-ai/.gitignore
@@ -1,3 +1,4 @@
/secret.*
/.vscode
/.venv
/.mypy_cache
16 changes: 13 additions & 3 deletions crew-ai/Dockerfile
@@ -10,9 +10,19 @@ COPY . .
RUN poetry install
COPY <<EOF /entrypoint.sh
#!/bin/sh
export OPENAI_BASE_URL=\${MODEL_RUNNER_URL}
export OPENAI_MODEL_NAME=openai/\${MODEL_RUNNER_MODEL}
export OPENAI_API_KEY=does_not_matter_but_cannot_be_empty

if test -f /run/secrets/openai-api-key; then
export OPENAI_API_KEY=$(cat /run/secrets/openai-api-key)
fi

if test -n "\${OPENAI_API_KEY}"; then
echo "Using OpenAI with \${OPENAI_MODEL_NAME}"
else
echo "Using Docker Model Runner with \${MODEL_RUNNER_MODEL}"
export OPENAI_BASE_URL=\${MODEL_RUNNER_URL}
export OPENAI_MODEL_NAME=openai/\${MODEL_RUNNER_MODEL}
export OPENAI_API_KEY=cannot_be_empty
fi
exec poetry run marketing_posts
EOF
RUN chmod +x /entrypoint.sh
19 changes: 19 additions & 0 deletions crew-ai/README.md
@@ -29,6 +29,24 @@ docker compose up --build
That’s all. The agents will spin up and collaborate through a series of predefined roles and tasks to
deliver a complete marketing strategy for the input project.

# 🧠 Inference Options

By default, this project uses [Docker Model Runner] to handle LLM inference locally — no internet connection or external API key is required.

If you’d prefer to use OpenAI instead:

1. Create a `secret.openai-api-key` file with your OpenAI API key:

```
sk-...
```

2. Restart the project with the OpenAI configuration:

```
docker compose down -v
docker compose -f compose.yaml -f compose.openai.yaml up
```

## ❓ What Can It Do?

@@ -142,3 +160,4 @@ docker compose down -v
[DuckDuckGo]: https://duckduckgo.com
[Docker Compose]: https://github.com/docker/compose
[Docker Desktop]: https://www.docker.com/products/docker-desktop/
[Docker Model Runner]: https://docs.docker.com/ai/model-runner/
10 changes: 10 additions & 0 deletions crew-ai/compose.openai.yaml
@@ -0,0 +1,10 @@
services:
agents:
environment:
- OPENAI_MODEL_NAME=gpt-4.1-mini
secrets:
- openai-api-key

secrets:
openai-api-key:
file: secret.openai-api-key
1 change: 1 addition & 0 deletions langgraph/.gitignore
@@ -1,3 +1,4 @@
/secret.*
/.vscode
/.venv
/.mypy_cache
22 changes: 21 additions & 1 deletion langgraph/Dockerfile
@@ -18,4 +18,24 @@ RUN --mount=type=cache,target=/root/.cache/uv \
UV_COMPILE_BYTECODE=1 uv pip install --system .
COPY agent.py .
RUN python -m compileall -q .
ENTRYPOINT [ "python", "agent.py" ]
COPY <<EOF /entrypoint.sh
#!/bin/sh
set -e

if test -f /run/secrets/openai-api-key; then
export OPENAI_API_KEY=$(cat /run/secrets/openai-api-key)
fi

if test -n "\${OPENAI_API_KEY}"; then
echo "Using OpenAI with \${OPENAI_MODEL_NAME}"
export MODEL_NAME=\${OPENAI_MODEL_NAME}
else
echo "Using Docker Model Runner with \${MODEL_RUNNER_MODEL}"
export OPENAI_BASE_URL=\${MODEL_RUNNER_URL}
export MODEL_NAME=\${MODEL_RUNNER_MODEL}
export OPENAI_API_KEY=cannot_be_empty
fi
exec python agent.py
EOF
RUN chmod +x /entrypoint.sh
ENTRYPOINT [ "/entrypoint.sh" ]
19 changes: 19 additions & 0 deletions langgraph/README.md
@@ -28,6 +28,24 @@ docker compose up

That’s all. The agent spins up automatically, sets up PostgreSQL, loads a pre-seeded database (`Chinook.db`), and starts answering your questions.

# 🧠 Inference Options

By default, this project uses [Docker Model Runner] to handle LLM inference locally — no internet connection or external API key is required.

If you’d prefer to use OpenAI instead:

1. Create a `secret.openai-api-key` file with your OpenAI API key:

```
sk-...
```

2. Restart the project with the OpenAI configuration:

```
docker compose down -v
docker compose -f compose.yaml -f compose.openai.yaml up
```

# ❓ What Can It Do?

@@ -99,3 +117,4 @@ docker compose down -v
[PostgreSQL]: https://postgresql.org
[Docker Compose]: https://github.com/docker/compose
[Docker Desktop]: https://www.docker.com/products/docker-desktop/
[Docker Model Runner]: https://docs.docker.com/ai/model-runner/
4 changes: 2 additions & 2 deletions langgraph/agent.py
@@ -7,8 +7,8 @@
from mcp import ClientSession
from mcp.client.sse import sse_client

base_url = os.getenv("MODEL_RUNNER_URL")
model = os.getenv("MODEL_RUNNER_MODEL", "gpt-4.1")
base_url = os.getenv("OPENAI_BASE_URL")
model = os.getenv("MODEL_NAME")
mcp_server_url = os.getenv("MCP_SERVER_URL")
api_key = os.getenv("OPENAI_API_KEY", "does_not_matter")
