Sage is a virtual Mythic agent that uses an AI agentic system to operate Mythic and Mythic agents running on compromised hosts. Sage does not run on a compromised host; it runs entirely in the Sage container. Sage leverages external AI model providers (e.g., Anthropic, Ollama, OpenAI) for inference and requires API keys for the selected provider.
WARNING: DO NOT USE THIS IN A PRODUCTION ENVIRONMENT BECAUSE THERE ARE CURRENTLY NO CONTROLS OR HUMAN-IN-LOOP FOR COMMANDS ISSUED TO MYTHIC AGENTS
NOTE: REQUIRES MYTHIC v3.3.1-rc57 OR LATER
To get started:
- Clone the Mythic repository
- Pull down the Sage agent from the MythicAgents organization
- Start Mythic
- Navigate to https://127.0.0.1:7443 and log in with a username of `mythic_admin` and the password retrieved from the `.env` file
This code snippet will execute most of the getting started steps:
```
cd ~/
git clone https://github.com/its-a-feature/Mythic
cd Mythic/
sudo make
sudo ./mythic-cli install github https://github.com/MythicAgents/sage
sudo ./mythic-cli start
sudo cat .env | grep MYTHIC_ADMIN_PASSWORD
```
Sage uses the following CASE SENSITIVE settings/keys to determine how to interact with models:
- `provider` - Who is providing the model (e.g., Anthropic, Amazon Bedrock, LiteLLM, OpenAI, etc.)
  - Many model providers (e.g., LiteLLM, Ollama, LM Studio) use the OpenAI API spec; select OpenAI in this case
- `model` - The model string that the provider uses to determine which model to use for inference (e.g., `gpt-4o-mini` or `us.anthropic.claude-3-5-sonnet-20241022-v2:0`)
- `API_ENDPOINT` - Where to send HTTP requests for the model provider (e.g., `https://api.openai.com/v1` or `http://127.0.0.1:11434/v1`)
  - This key is not used for Amazon Bedrock calls and can be left blank
  - Can be left blank if using the standard API for OpenAI or Anthropic
- `API_KEY` - The API key needed to authenticate to the model provider (e.g., `sk-az1RLw7XUWGXGUBcSgsNT5BlbkFJdbGbUgbbk7BUG9y6ezzb`)
- Amazon Bedrock uses AWS credentials instead:
  - `AWS_ACCESS_KEY_ID`
  - `AWS_SECRET_ACCESS_KEY`
  - `AWS_SESSION_TOKEN`
  - `AWS_DEFAULT_REGION`
NOTE: WHERE SETTINGS AND CREDENTIALS ARE CONFIGURED OR SET MATTERS
These settings/keys can be provided in 4 different places to provide maximum flexibility. When a command is issued, Sage will look for credentials in this order and stop when the first instance is found:
- Task command parameters
- USER Secrets
- Payload build parameters
- Payload container system environment variables
This allows the Sage agent payload to be created with build parameters so that all operators have access. However, operators can override the provider, model, and credentials at any time by providing them alongside the command that is being issued.
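The lookup order above amounts to a first-match-wins search across the four sources. A minimal sketch of that resolution logic (the function and parameter names are illustrative, not Sage's actual internals):

```python
def resolve_setting(key, task_params, user_secrets, build_params, env):
    """Return the value for a setting/key using Sage's documented search
    order: task command parameters -> USER secrets -> payload build
    parameters -> container environment variables. Stops at the first
    non-empty value found."""
    for source in (task_params, user_secrets, build_params, env):
        value = source.get(key)
        if value:  # first non-empty instance wins
            return value
    return None
```

For example, a `provider` passed on the command line shadows the same key stored in USER secrets or build parameters.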
Sage is a different kind of Mythic agent because it is not an agent that runs on a compromised host. The "agent" is all local and lives on the Mythic server itself. Think of it like a "virtual" agent. Follow these steps to create an agent callback to interact with:
- Go to the `Payloads` tab in Mythic
- Click `Actions` -> `Generate New Payload`
- Select `sage` for the target operating system
- Click next on the Payload Type screen
- Fill out the build parameters, if any; see Model Access & Authentication
- Click next on the Select Commands screen; there are no commands to add
- Click next on the Select C2 Profiles screen; Sage does not use a C2 profile
- Click the `CREATE PAYLOAD` button to build the agent
- A new callback will be created during the build process
- Go to the `Active Callbacks` tab in Mythic to interact with Sage
In order to interact with Anthropic, you must set the following values:
- `provider`: `Anthropic`
- `model`: `claude-sonnet-4-5-20250929` or `claude-sonnet-4-5`
  - Example model strings to use with Anthropic
- `API_ENDPOINT`: Leave blank
- `API_KEY`: `sk-ant-api03-abc123XYZ456_DEF789ghi0JKLmno1PQRsTu2vWXyz34AB56CDef78GHIjk9LMN_OPQRSTUVWXYZabcdef0123456789-ABCDEFG`
You must have an AWS account that has Bedrock permissions AND have access to the desired model in your bedrock configuration
NOTE: From the AWS CLI, run the following command to get your AWS secrets:
aws sts get-session-token
In order to interact with Amazon Bedrock, you must set the following values:
- `provider`: `Bedrock`
- `model`: `us.anthropic.claude-3-5-sonnet-20241022-v2:0`
  - Example model string
  - The first part is the inference region (e.g., `us` or `global`)
- `API_ENDPOINT`: Leave blank
- `API_KEY`: Leave blank
- `AWS_ACCESS_KEY_ID`: `AKIAI44QH8DHBEXAMPLE`
- `AWS_SECRET_ACCESS_KEY`: `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`
- `AWS_SESSION_TOKEN`: `IQoJb3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZVERYLONGSTRINGEXAMPLE`
- `AWS_DEFAULT_REGION`: `us-east-1`
Sage can interact with any OpenAI API-compatible application (e.g., ollama, OpenWeb UI, LM Studio, or LiteLLM).
Provide a fake API key for providers like LM Studio because the OpenAI library requires one.
In order to interact with OpenAI's API, you must set the following:
- `provider`: `OpenAI`
- `model`: `gpt-4o-mini`
- `API_KEY`: `sk-az1RLw7XUWGXGUBcSgsNT5BlbkFJdbGbUgbbk7BUG9y6ezzb`
- `API_ENDPOINT`: Optional, or `https://api.openai.com/v1`
Download and run the ollama Docker image with `sudo docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama`. You can then work with the container directly using `sudo docker exec -it ollama ollama run llama3`.
Alternatively, run ollama from a Docker Compose file.
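A Compose file equivalent to the `docker run` command above might look like the following sketch (the service layout is an assumption; the image, volume, and port mappings mirror the `docker run` example):

```yaml
# Hypothetical docker-compose.yml for a local ollama instance
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
volumes:
  ollama:
```

Start it with `sudo docker compose up -d`.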
In order to interact with ollama, you must set the following:
- `provider`: `OpenAI`
- `model`: `qwen3:1.7b`
  - The selected model must support tools
- `API_ENDPOINT`: `http://127.0.0.1:11434/v1`
- `API_KEY`: `dummy-ollama-key`
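Because ollama exposes the OpenAI chat-completions spec under `/v1`, any OpenAI-style client works against it. This sketch builds (but does not send) the HTTP request that the settings above would produce; it is illustrative of the API shape, not Sage's actual provider code:

```python
import json

# Settings as they would be configured for a local ollama instance.
settings = {
    "API_ENDPOINT": "http://127.0.0.1:11434/v1",
    "API_KEY": "dummy-ollama-key",  # ollama ignores it, but the OpenAI spec expects a bearer token
    "model": "qwen3:1.7b",
}

# OpenAI-compatible chat completions endpoint and request body.
url = settings["API_ENDPOINT"] + "/chat/completions"
headers = {
    "Authorization": f"Bearer {settings['API_KEY']}",
    "Content-Type": "application/json",
}
body = json.dumps({
    "model": settings["model"],
    "messages": [{"role": "user", "content": "List your tools."}],
})
```

The same shape applies to LM Studio, LiteLLM, or any other OpenAI-spec provider; only `API_ENDPOINT`, `API_KEY`, and `model` change.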
If your LLM provider requires custom SSL/TLS certificates (e.g., corporate proxy with custom CA, self-signed certificates, or internal certificate authorities), Sage supports loading a custom certificate bundle by setting the `SSL_CERT_FILE` environment variable. This can be useful for an internally hosted LiteLLM instance.
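At startup, Sage checks for a bundle and points `SSL_CERT_FILE` at it. A minimal sketch of that kind of check (the function name, default path, and return values are assumptions, not Sage's actual code):

```python
import os
from pathlib import Path

def configure_cert_bundle(bundle_path="certs/bundle.pem"):
    """If a custom CA bundle exists, export SSL_CERT_FILE so that HTTP
    stacks that honor it (requests, httpx, urllib) trust those CAs.
    Returns True if a bundle was found, False otherwise."""
    bundle = Path(bundle_path)
    if bundle.is_file():
        os.environ["SSL_CERT_FILE"] = str(bundle.resolve())
        return True
    return False
```

When no bundle is found, nothing is exported and the system default certificate store is used, matching the fallback behavior described below.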
- Create your certificate bundle in PEM format containing all required CA certificates:

  ```
  cat root-ca.pem intermediate-ca.pem > bundle.pem
  ```

- Place the certificate bundle in the Sage certs directory:
  - Local development: `Payload_Type/sage/certs/bundle.pem`
  - Mythic deployment: `mythic/InstalledServices/sage/Payload_Type/sage/certs/bundle.pem`
- Restart the Sage container:

  ```
  sudo ./mythic-cli start sage
  ```
When Sage starts with a custom certificate bundle, you'll see this message in the container logs:

```
[SAGE] Using custom SSL certificate bundle: /Mythic/certs/bundle.pem
```

If no certificate bundle is found, Sage will use system default certificates:

```
[SAGE] No custom SSL certificate bundle found, using system defaults
```
The certificate bundle must be in PEM format with one or more certificates:

```
-----BEGIN CERTIFICATE-----
MIIDXTCCAkWgAwIBAgIJAKJ... (Base64-encoded certificate)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDXTCCAkWgAwIBAgIJAKJ... (Another certificate if needed)
-----END CERTIFICATE-----
```
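A quick way to sanity-check a bundle before deploying it is to count the certificate blocks and confirm the markers pair up; a small helper sketch (not part of Sage):

```python
def count_pem_certs(pem_text: str) -> int:
    """Count certificate blocks in a PEM bundle. A well-formed bundle has
    at least one BEGIN/END pair and an equal number of each marker."""
    begins = pem_text.count("-----BEGIN CERTIFICATE-----")
    ends = pem_text.count("-----END CERTIFICATE-----")
    if begins == 0 or begins != ends:
        raise ValueError(f"malformed bundle: {begins} BEGIN vs {ends} END markers")
    return begins
```

For a root-plus-intermediate bundle like the `cat` example above, this should report two certificates.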
Issue: `SSL: CERTIFICATE_VERIFY_FAILED` errors when connecting to the LLM provider
Solution:
- Verify your certificate bundle is in PEM format
- Ensure it contains all required CA certificates (root and intermediate)
- Check file permissions (must be readable by the Sage container)
- Verify the certificate path in Sage startup logs
Issue: Certificate bundle not detected
Solution:
- Verify the file is named exactly `bundle.pem` (case-sensitive)
- Verify the file is in the `Payload_Type/sage/certs/` directory
- Restart the Sage container after adding the certificate
Sage uses a multi-agent system built with LangGraph to intelligently route and execute tasks. The system consists of a Supervisor agent that coordinates multiple specialist agents, each with their own expertise and tools.
┌─────────────────────┐
│ │
│ User / Mythic │
│ Operator │
│ │
└──────────┬──────────┘
│
▼
┌─────────────────────────────────────┐
│ │
│ Supervisor Agent (Router) │
│ │
│ - Analyzes user intent │
│ - Routes to appropriate agent │
│ - Monitors progress & results │
│ - Integrates responses │
│ │
└──────┬──────────┬──────────┬──────────┬────────┘
│ │ │ │
┌───────────────┘ │ │ └─────────────┐
│ │ │ │
▼ ▼ ▼ ▼
┌────────────────────────┐ ┌────────────────────────┐ ┌────────────────────────┐
│ │ │ │ │ │
│ Generalist Agent │ │ Mythic_Operator Agent │ │ Mythic_Payload Agent │
│ │ │ │ │ │
│ - General questions │ │ - Callback management │ │ - Payload creation │
│ - Explanations │ │ - Task execution │ │ - Build configuration │
│ - Advice & planning │ │ - Reconnaissance │ │ - C2 profile selection │
│ - No Mythic tools │ │ - File operations │ │ - Compatibility checks │
│ │ │ - Mythic API calls │ │ │
│ │ │ │ │ │
└────────────────────────┘ └───────────┬────────────┘ └────────────────────────┘
│
│ Can delegate to
▼
┌────────────────────────┐
│ │
│ Mythic_Payload Agent │
│ (for lateral movement,│
│ privilege escalation)│
│ │
└────────────────────────┘
┌────────────────────────┐
│ │
│ MCP_Manager Agent │
│ │
│ - External MCP tools │
│ - Web fetching │
│ - Third-party APIs │
│ - Custom integrations │
│ │
└────────────────────────┘
Role: Task Router & Coordinator
The Supervisor is the entry point for all user requests. It analyzes the user's intent and delegates work to the appropriate specialist agent.
Responsibilities:
- Parse and understand user requests
- Route tasks to specialist agents based on expertise
- Monitor agent progress and recursion limits
- Integrate results from multiple agents
- Decide when tasks are complete vs. need continuation
- Handle agent handbacks when approaching recursion limits
Key Behaviors:
- Uses `transfer_to_*` tools to delegate to specialist agents
- Uses the `respond_to_user` tool when work is complete
- Uses the `request_continuation` tool when hitting recursion limits
- Recognizes task completion markers like `[AgentName completed task]`
Role: General Knowledge & Explanations
Handles general questions and queries that don't require Mythic-specific operations.
Responsibilities:
- Answer general questions (technology, concepts, best practices)
- Provide explanations and summaries
- Handle open-ended or creative queries
- Offer guidance and recommendations
Tools: None (pure language model reasoning)
Example Tasks:
- "Explain how lateral movement works in red teaming"
- "What are the differences between Apollo and Poseidon agents?"
- "Suggest a reconnaissance strategy for a Windows domain"
Role: Mythic Operations & Execution
The primary operational agent for all Mythic C2 activities. Has direct access to Mythic API tools.
Responsibilities:
- Manage callbacks and agents
- Execute commands on compromised hosts via `issue_task_and_waitfor_task_output`
- Query task history and retrieve results
- Perform reconnaissance and enumeration
- Upload files and manage artifacts
- Check existing task history BEFORE issuing new commands (avoids duplicate work)
- Monitor recursion limits and use `summarize_and_handback` when needed
- Delegate to the Mythic_Payload agent for payload creation needs
Tools:
- `get_all_active_callbacks` - List available agents/callbacks
- `get_all_commands_for_payloadtype` - Get command documentation
- `issue_task_and_waitfor_task_output` - Execute commands on callbacks
- `get_task_history_for_callback` - Review previous tasks
- `get_all_task_output_by_task_id` - Retrieve task results
- `upload_file_by_file_uuid` - Upload files to Mythic
- `get_all_uploaded_files` - List uploaded files
- `get_operations` - Get operation details
- `transfer_to_Mythic_Payload` - Delegate payload creation
- `summarize_and_handback` - Return control to Supervisor
Example Tasks:
- "List all active callbacks"
- "Run whoami on callback 5"
- "Do host-based reconnaissance on the domain controller"
- "Upload this script and execute it"
Critical Workflow: Before issuing new commands, the agent:
- Gets active callbacks
- Checks task history to see what's already been run
- Reviews existing output from past tasks
- Only issues NEW commands if the data doesn't already exist
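This workflow is a cache-before-execute pattern: consult prior task output before spending a new task. A hypothetical sketch (the helper names echo the tool names above, but the logic is illustrative):

```python
def run_or_reuse(command, callback_id, task_history, issue_task):
    """Return existing output for a command already run on this callback;
    otherwise issue it. `task_history` maps command strings to prior
    output; `issue_task` stands in for issue_task_and_waitfor_task_output."""
    prior = task_history.get(command)
    if prior is not None:
        return prior  # reuse existing data instead of duplicating work
    return issue_task(command, callback_id)
```

The payoff is fewer redundant tasks on the callback and fewer tokens spent re-processing output the operation already has.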
Role: Payload Creation & Configuration
Specializes in creating Mythic payloads (C2 agents/implants) for deployment.
Responsibilities:
- Create payloads for specific target systems (OS, architecture)
- Configure C2 profiles (HTTP, WebSocket, DNS, etc.)
- Validate compatibility between payload types, OS, and C2 profiles
- Provide payload UUIDs and build details
- Suggest payload types based on target environment
Tools:
- `get_payload_names` - List installed payload types
- `create_payload` - Build a new payload
- `get_all_payload_info` - Query payload details
- `get_c2_profiles_for_payload` - Get supported C2 profiles
- `summarize_and_handback` - Return control to Supervisor
Example Tasks:
- "Create an Apollo payload for Windows with HTTP"
- "Build a Poseidon agent for Linux lateral movement"
- "What payload types support DNS C2?"
When to Use: The Mythic_Operator agent delegates to this agent when:
- Privilege escalation requires a new payload
- Lateral movement needs a different payload
- User explicitly requests payload creation
- Specialized payloads needed (service binaries, DLLs, etc.)
Role: External Tool Integration via MCP
Handles tasks that require external tools provided by connected MCP (Model Context Protocol) servers. MCP servers extend Sage's capabilities beyond built-in Mythic functionality.
Responsibilities:
- Execute tools from connected MCP servers
- Interpret and summarize tool results
- Handle tool errors gracefully
- Provide guidance on connecting MCP servers when none are available
- Monitor recursion limits and use `summarize_and_handback` when needed
Tools:
- Dynamic tools provided by connected MCP servers
- `summarize_and_handback` - Return control to Supervisor
MCP Server Capabilities (when connected):
- Web fetching and HTTP requests
- File system operations (on Mythic server)
- Database queries
- Third-party API integrations
- Custom tools specific to connected servers
Example Tasks:
- "Fetch the contents of https://example.com/config.json"
- "Query the external database for user records"
- "Call the webhook API with this data"
Important Notes:
- MCP_Manager is only used for EXTERNAL tools not provided by other agents
- Mythic operations (callbacks, tasks, payloads) should use Mythic_Operator or Mythic_Payload agents
- Requires MCP servers to be connected via the `mcp-connect` command
- Use the `mcp-list` command to see available MCP tools
Routing Priority: The Supervisor prefers built-in agents (Mythic_Operator, Mythic_Payload, Generalist) over MCP_Manager when they have relevant capabilities. MCP_Manager is used only when the task requires capabilities that other agents cannot provide.
Each agent operates in its own message channel to prevent context pollution and optimize token usage:
- `supervisor_messages` - Supervisor's view of the conversation
- `generalist_messages` - Generalist's isolated context
- `mythic_operator_messages` - Mythic_Operator's isolated context
- `mythic_payload_messages` - Mythic_Payload's isolated context
- `mcp_manager_messages` - MCP_Manager's isolated context
Key Concepts:
- **Explicit Handoffs**: When the Supervisor delegates, it creates:
  - A `ToolMessage` acknowledging the transfer
  - A `HumanMessage` with the task (tagged with `_delegated_to` metadata)
- **Response Copying**: Worker agent responses are copied to `supervisor_messages` so the Supervisor can see results
- **Sequence Numbering**: All messages are tagged with `_seq` for chronological ordering across channels
- **Deduplication**: Messages have unique IDs (`_msg_id`) to prevent duplicate display
- **Completion Headers**: Worker agents add completion markers (filtered in non-verbose mode)
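The `_seq` and `_msg_id` mechanics can be illustrated with plain dicts; a sketch of how isolated channels might be merged back into one chronological stream (not Sage's actual data model):

```python
def merge_channels(*channels):
    """Merge per-agent message channels into one de-duplicated,
    chronological stream: drop repeated _msg_id values (copied responses),
    then sort by the _seq tag."""
    seen, merged = set(), []
    for channel in channels:
        for msg in channel:
            if msg["_msg_id"] not in seen:
                seen.add(msg["_msg_id"])
                merged.append(msg)
    return sorted(merged, key=lambda m: m["_seq"])
```

Response copying means the same message can appear in two channels; deduplication by `_msg_id` keeps it from being displayed twice.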
Example Flow:
User: "List callbacks and run whoami on callback 5"
↓
Supervisor analyzes → identifies Mythic operation → calls transfer_to_Mythic_Operator
↓
Mythic_Operator receives task in mythic_operator_messages channel
↓
Mythic_Operator calls get_all_active_callbacks tool
↓
Mythic_Operator calls issue_task_and_waitfor_task_output("whoami", callback_id=5)
↓
Mythic_Operator response copied to supervisor_messages
↓
Supervisor sees completion → calls respond_to_user with results
↓
User sees formatted output
Complex multi-step operations may hit LangGraph's recursion limit. As a safety mechanism to prevent runaway agents, the recursion limit is set to 25 calls. Sage handles this gracefully:
- **Worker Agent Monitoring**: Agents check `remaining_steps` before major operations
- **Handback Tool**: When `remaining_steps <= 4`, agents use `summarize_and_handback` to return control
- **Progress Summaries**: Include completed work, key findings, and remaining tasks
- **User Continuation**: The Supervisor asks the user if they want to continue
- **Context Preservation**: All state is stored in the `sage.db` checkpoint system
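The handback trigger reduces to a simple guard on the remaining recursion budget; a sketch (the function name is illustrative; the threshold of 4 and the overall limit of 25 come from the list above):

```python
def should_handback(remaining_steps: int, threshold: int = 4) -> bool:
    """Worker agents summarize and return control to the Supervisor
    before exhausting LangGraph's recursion budget (25 calls by default),
    rather than failing mid-task."""
    return remaining_steps <= threshold
```

Checking before each major operation, instead of after a failure, is what lets the agent attach a useful progress summary to the handback.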
Example Handback:
🤖[Mythic_Operator]> Progress Handback: Completed reconnaissance on 3 hosts (gathered system info,
running processes, network config). Remaining: privilege escalation and lateral movement to
identified targets.
🤖[Supervisor]> We've hit the operation complexity limit. Would you like me to continue with
privilege escalation?
By default, Sage shows concise output. Enable verbose mode to see:
- Individual tool calls with arguments
- Tool responses with full data
- Agent reasoning and intermediate steps
- Completion headers and internal messages
Set `verbose=true` in the `chat` or `query` command parameters.
Sage provides the following Mythic commands for interacting with AI models and MCP servers.
Multi-turn interactive chat session with an AI model. Supports back-and-forth conversation with context preserved across messages. A new chat means a brand-new context.
chat -prompt <prompt>
Parameters:
| Parameter | Required | Description |
|---|---|---|
| `prompt` | Yes | The prompt to send to the model |
| `tools` | No | Enable tool use (default: true) |
| `verbose` | No | Show verbose output of all messages (default: false) |
| `provider` | No | Override the model provider |
| `model` | No | Override the model |
| `API_ENDPOINT` | No | Override the API endpoint |
| `API_KEY` | No | Override the API key |
| `AWS_ACCESS_KEY_ID` | No | AWS credentials for Bedrock |
| `AWS_SECRET_ACCESS_KEY` | No | AWS credentials for Bedrock |
| `AWS_SESSION_TOKEN` | No | AWS credentials for Bedrock |
| `AWS_DEFAULT_REGION` | No | AWS region for Bedrock |
Example:
chat -prompt "Tell me about active callbacks in Mythic"
Send a single query to a model and receive a single response. Unlike chat, this does not maintain conversation history.
query -prompt <prompt>
Parameters:
| Parameter | Required | Description |
|---|---|---|
| `prompt` | Yes | The prompt to send to the model |
| `tools` | No | Enable tool use (default: true) |
| `verbose` | No | Show verbose output (default: false) |
| `provider` | No | Override the model provider |
| `model` | No | Override the model |
| `API_ENDPOINT` | No | Override the API endpoint |
| `API_KEY` | No | Override the API key |
| `AWS_*` | No | AWS credentials for Bedrock |
Example:
query -prompt "What is the capital of France?"
List all available models for the configured provider.
list
Parameters:
| Parameter | Required | Description |
|---|---|---|
| `provider` | No | Override the model provider |
| `API_ENDPOINT` | No | Override the API endpoint |
| `API_KEY` | No | Override the API key |
Example:
list
Note: Listing models for Bedrock is not currently supported. Use the AWS CLI instead.
Connect to an MCP (Model Context Protocol) server. Supports STDIO, SSE, and Streamable HTTP transports.
mcp-connect -name <server_name> -connection_type <stdio|sse|streamable_http> [options]
Parameters:
| Parameter | Required | Description |
|---|---|---|
| `name` | Yes | Unique name for the MCP server connection |
| `connection_type` | Yes | Type of connection: stdio, sse, or streamable_http |
| `command` | STDIO | Command to execute for STDIO MCP server |
| `arguments` | STDIO | Array of command arguments |
| `cwd` | No | Working directory for STDIO command |
| `url` | SSE/HTTP | URL for SSE or HTTP streaming connection |
| `headers` | No | HTTP headers (format: Key: Value) |
| `timeout` | No | Connection timeout in seconds (default: 30) |
| `sse_read_timeout` | No | SSE read timeout in seconds (default: 300) |
| `terminate_on_close` | No | Terminate HTTP connection on close (default: true) |
| `ssl_verify` | No | Verify SSL certificates (default: true) |
Examples:
STDIO connection to Mythic MCP server:
mcp-connect -name mythic -connection_type stdio -command uv -arguments --directory -arguments /Mythic/mcp/mythic -arguments run -arguments main.py -arguments mythic_admin -arguments SuperSecretPassword -arguments 192.168.1.100 -arguments 7443
SSE connection:
mcp-connect -name myserver -connection_type sse -url https://example.com/mcp/sse
SSE connection without SSL verification (development only):
mcp-connect -name myserver -connection_type sse -url https://example.com/mcp/sse -ssl_verify false
Disconnect from a connected MCP server.
mcp-disconnect -name <server_name>
Parameters:
| Parameter | Required | Description |
|---|---|---|
| `name` | Yes | Name of the MCP server to disconnect |
Example:
mcp-disconnect -name mythic
List all connected MCP servers and their available tools with descriptions and parameters.
mcp-list
Example Output:
Connected MCP Servers: 1
Total Tools Available: 5
==================================================
Server: mythic
Connection Type: stdio
Tools: 5
Available Tools:
- list_callbacks
Description: List all callbacks in Mythic
Parameters:
- include_archived (boolean): Include archived callbacks
- create_task
Description: Create a new task for a callback
Parameters:
- callback_id* (string): The callback ID
- command* (string): The command to execute
Directly invoke an MCP tool with specified arguments.
NOTE: MCP tool names and arguments are shown after using the `mcp-connect` or `mcp-list` commands
mcp-call -tool <tool_name> [-server <server_name>] -args <key> -args <value> ...
Parameters:
| Parameter | Required | Description |
|---|---|---|
| `tool` | Yes | Name of the MCP tool to invoke |
| `server` | No | Server name (required if multiple servers have the same tool names) |
| `args` | No | Tool arguments as alternating key-value pairs |
Examples:
Call a tool with no arguments:
mcp-call -tool list_callbacks
Call a tool with arguments:
mcp-call -tool get_callback -args callback_id -args 123
Call a tool on a specific server (when tool name conflicts exist):
mcp-call -tool search -server mythic -args query -args "admin"
Note: Use `mcp-list` to see available tools and their parameter names. Required parameters are marked with `*`.
Internal Mythic command for callback table functionality. Does nothing when executed directly.
exit
Use the following commands to run the Sage container from the command line without using Docker (typically for testing and troubleshooting):
NOTE: Replace the RabbitMQ password with the one from the `.env` file in the root Mythic folder
```
cd sage/Payload_Type/sage
export DEBUG_LEVEL=debug
export MYTHIC_SERVER_HOST="127.0.0.1"
export RABBITMQ_HOST="127.0.0.1"
export RABBITMQ_PASSWORD="K5SHkn1fk2pcT0YkQxTTMgO5gFwjiQ"
python3 main.py
```

Sage maintains conversation state and history using a SQLite database that enables multi-turn conversations and recovery from interruptions.
sage.db is a SQLite database used by LangGraph's checkpoint system to persist conversation state. It stores complete conversation histories, agent states, and message flows across all Sage interactions.
The database contains:
- Conversation Messages: All user inputs (HumanMessage), AI responses (AIMessage), and tool execution results (ToolMessage)
- Multi-Agent State: Isolated message channels for each specialist agent (Supervisor, Generalist, Mythic_Operator, Mythic_Payload)
- Message Sequences: Ordering information to maintain conversation flow across agent handoffs
- Graph State: Agent counters, recursion limits, and workflow state
- Thread Identifiers: Unique conversation threads based on Mythic task IDs
Each conversation is identified by a unique thread ID composed of:
```
thread_id = f"{agent_task_id}-{task_id}"
```
Where:
- `agent_task_id`: Mythic's AgentTaskID for the specific callback interaction
- `task_id`: Mythic's Task.ID for the specific command issued
This ensures each Sage task maintains its own isolated conversation context.
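The composition shown above is a straight f-string join; for example (the IDs here are made up):

```python
def make_thread_id(agent_task_id: str, task_id: int) -> str:
    # Mirrors the documented composition: Mythic's AgentTaskID plus Task.ID.
    return f"{agent_task_id}-{task_id}"

# Two commands issued from the same callback get distinct thread IDs,
# so their conversation contexts never mix.
tid_1 = make_thread_id("cb7f3a", 101)
tid_2 = make_thread_id("cb7f3a", 102)
```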
The checkpoint system enables:
- Conversation Continuity: Multi-turn conversations where context is preserved across multiple commands
- Recursion Recovery: When complex tasks hit recursion limits, state is preserved and can be resumed with "continue"
- Agent Handoffs: Supervisor can delegate to specialist agents while maintaining conversation history
- Progress Tracking: Complete audit trail of what agents have done and decided during task execution
- Local development: `Payload_Type/sage/sage.db`
- Mythic deployment: `mythic/InstalledServices/sage/Payload_Type/sage/sage.db`
- Note: This file is excluded from version control but preserved in the repository structure
The sage.db file contains:
- Complete conversation histories with your LLM interactions
- Task details and responses from Mythic API calls
- Tool execution results and agent reasoning
This data persists across Sage container restarts and may contain sensitive operational information. Consider the database contents when sharing the Sage directory or backing up data.
The database grows over time as conversations accumulate. If you need to clear conversation history:
```
# Stop the Sage container first
sudo ./mythic-cli stop sage

# Remove the database file
rm mythic/InstalledServices/sage/Payload_Type/sage/sage.db

# Restart Sage (database will be recreated)
sudo ./mythic-cli start sage
```

Note: Deleting sage.db removes all conversation history but does not affect Sage's ability to function. A new database will be created automatically on startup.
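If you want to peek at what the checkpoint database holds before deleting it, the standard `sqlite3` module is enough. This sketch lists every table and its row count (the actual table names depend on LangGraph's checkpoint schema version, so treat the output as informational):

```python
import sqlite3

def summarize_db(path):
    """Return {table_name: row_count} for every table in a SQLite file."""
    conn = sqlite3.connect(path)
    try:
        tables = [row[0] for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        return {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
                for t in tables}
    finally:
        conn.close()
```

Run it against the `sage.db` path for your deployment; a steadily growing row count in the checkpoint tables is the accumulation of conversation history described above.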
Sage includes integrated observability and tracing powered by Phoenix from Arize AI. Phoenix provides real-time monitoring, visualization, and debugging of LLM interactions.
Phoenix is an open-source observability platform designed specifically for LLM applications. It captures detailed traces of:
- LLM requests and responses
- Token usage and costs
- Latency and performance metrics
- Tool/function calls
- Multi-agent workflows
- Real-time Tracing: Monitor LLM calls as they happen across all Sage sessions
- Performance Analytics: Track token usage, response times, and model behavior
- Multi-Agent Visibility: Visualize the Sage supervisor and specialist agent interactions
- No Configuration Required: Phoenix launches automatically when Sage starts
Phoenix automatically starts when Sage launches and is accessible at:
- URL: `http://127.0.0.1:6006`
- No authentication required for local development
Phoenix stores trace data locally at:
- Location: `Payload_Type/sage/.phoenix/`
- Database: `phoenix.db` (SQLite)
- Note: This directory is excluded from version control but preserved in the repository structure
All LLM traces, spans, and observability data are stored in this local database and can be viewed through the Phoenix web interface.
- GitHub: https://github.com/Arize-ai/phoenix
- Documentation: https://docs.arize.com/phoenix
LangSmith Observability gives you complete visibility into agent behavior with tracing, real-time monitoring, alerting, and high-level insights into usage.
Sage uses LangChain and therefore you can also use LangSmith to view AI agent system traces.
To use LangSmith, export the following environment variables before Sage is started:
```
export LANGSMITH_TRACING=true
export LANGSMITH_ENDPOINT=https://api.smith.langchain.com
export LANGSMITH_PROJECT=Sage
export LANGSMITH_API_KEY=lsv2_pt_example_langsmith_api_key
```
This project is still early in development and new features and capabilities will be added. Currently, some known limitations are:
- There's no file upload functionality
- No Human-in-the-loop to prevent issuing commands that could negatively impact production networks
- There is no context management and the context window can get full causing errors
- Chat sessions are not stream based
- Bedrock provider is limited to Anthropic Claude
