Cloud/deployable temporal example #395
> Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

`RouterWorkflow.run` now takes no input, and `LLMRouter` uses an `llm_factory` to obtain its LLM.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor Client
  participant Temporal as Temporal Server
  participant Worker as Worker (run_worker.py)
  participant Router as RouterWorkflow.run()
  participant LLMFactory as LLMRouter (llm_factory)
  participant Agent as Target Agent
  Client->>Temporal: Start RouterWorkflow
  Worker->>Router: execute run()
  Router->>LLMFactory: call factory to obtain LLM
  LLMFactory->>Agent: route request
  Agent-->>LLMFactory: result
  LLMFactory-->>Router: routed result
  Router-->>Worker: WorkflowResult
  Worker-->>Temporal: complete
  Temporal-->>Client: result
```
```mermaid
sequenceDiagram
  autonumber
  participant Main as run_worker.main()
  participant Mod as examples/temporal/workflows.py
  participant Factory as create_temporal_worker_for_app
  participant Worker as Temporal Worker
  Main->>Mod: import workflows (registers workflow entry points)
  Main->>Factory: create worker for app
  activate Worker
  Main->>Worker: async run (start)
  Worker-->>Main: worker run loop (await)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: 2 passed, 1 warning.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
examples/temporal/router.py (2)
51-52: Guard the filesystem server mutation and avoid duplicates

Unconditionally extending args with the CWD is non-idempotent and may raise KeyError if the server isn't configured. Also prefer `append` for a single path.

```diff
-    context.config.mcp.servers["filesystem"].args.extend([os.getcwd()])
+    fs = context.config.mcp.servers.get("filesystem")
+    if fs:
+        cwd = os.getcwd()
+        if cwd not in fs.args:
+            fs.args.append(cwd)
+    else:
+        logger.warning("Filesystem MCP server not configured; skipping path injection")
```
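The guard above can be expressed independently of mcp-agent's config types. A minimal sketch of the same idempotent-mutation pattern, using a plain dict in place of the real server config objects (`add_cwd_once` is a hypothetical helper, not library API):

```python
def add_cwd_once(servers: dict, name: str, path: str) -> bool:
    """Append `path` to the named server's args exactly once.

    Returns False if the server is not configured, True otherwise.
    """
    server = servers.get(name)
    if server is None:
        return False  # caller can log a warning instead of raising KeyError
    if path not in server["args"]:
        server["args"].append(path)
    return True

demo = {"filesystem": {"args": ["--root"]}}
add_cwd_once(demo, "filesystem", "/tmp/project")
add_cwd_once(demo, "filesystem", "/tmp/project")  # second call is a no-op
```

Repeated calls leave a single copy of the path, which is what makes redeploys and repeated workflow runs safe.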
17-17: Make the `main` import robust for package vs script execution

In `examples/temporal/router.py`, update the import to handle both package and script contexts, and add an `__init__.py` so relative imports work:

```diff
-from main import app
+try:
+    from .main import app  # when used as a package
+except ImportError:
+    from main import app  # fallback for script execution
```

Also create `examples/temporal/__init__.py` (can be empty) to mark it as a package.
🧹 Nitpick comments (8)
examples/temporal/router.py (6)
35-41: Align the Workflow generic with the no-input run()

`RouterWorkflow` is declared as `Workflow[str]` but `run()` takes no input. If the generic represents the input type (as in other workflows), switch to `Workflow[None]` to avoid confusion and type-checker noise.

```diff
-class RouterWorkflow(Workflow[str]):
+class RouterWorkflow(Workflow[None]):
```

Run mypy/pyright to confirm the generic semantics of `Workflow` align with this change.
43-47: Docstring return text is misleading

It says "processed data" but the workflow returns the literal "Success". Update for accuracy.

```diff
-        Returns:
-            A WorkflowResult containing the processed data
+        Returns:
+            A WorkflowResult indicating success
```
80-85: Prefer a per-agent LLM factory or a class-based factory

Capturing a single pre-instantiated LLM can couple state across agents. Return a new LLM per agent (or pass the class directly if supported).

```diff
-    router = LLMRouter(
-        llm_factory=lambda _agent: llm,
+    router = LLMRouter(
+        llm_factory=lambda agent: OpenAIAugmentedLLM(
+            name=f"openai_router_{agent.name}",
+            instruction="You are a router",
+        ),
```

If `LLMRouter` accepts a class factory, even simpler:

```diff
-        llm_factory=lambda _agent: llm,
+        llm_factory=OpenAIAugmentedLLM,
```
92-107: Make logging calls robust/structured

`logger.info("...", data=results)` relies on a custom logger API. For portability, log serializable payloads or use `extra`.

```diff
-    logger.info("Router Results:", data=results)
+    logger.info("Router Results: %s", [r.model_dump() for r in results])
-    logger.info("Tools available:", data=result.model_dump())
+    logger.info("Tools available: %s", result.model_dump())
-    logger.info("read_file result:", data=result.model_dump())
+    logger.info("read_file result: %s", result.model_dump())
-    logger.info("Router Results:", data=results)
+    logger.info("Router Results: %s", [r.model_dump() for r in results])
-    logger.info("Router Results:", data=results)
+    logger.info("Router Results: %s", [r.model_dump() for r in results])
```

If the structured logger expects key-value pairs, prefer:

```python
logger.info("Router Results", extra={"data": [r.model_dump() for r in results]})
```

Also applies to: 121-139
100-106: Handle a missing file in the example path

On cloud deploys the CWD may not contain `mcp_agent.config.yaml`. Add an existence check to prevent tool errors.

```diff
-        result = await agent.call_tool(
-            name="read_file",
-            arguments={
-                "path": str(os.path.join(os.getcwd(), "mcp_agent.config.yaml"))
-            },
-        )
+        cfg_path = os.path.join(os.getcwd(), "mcp_agent.config.yaml")
+        if os.path.exists(cfg_path):
+            result = await agent.call_tool(
+                name="read_file",
+                arguments={"path": str(cfg_path)},
+            )
+            logger.info("read_file result: %s", result.model_dump())
+        else:
+            logger.warning("Config file not found at %s; skipping read_file", cfg_path)
-        logger.info("read_file result:", data=result.model_dump())
```
148-151: Consider setting a deterministic workflow_id

To avoid duplicate runs on redeploys, pass an explicit `workflow_id`.

```diff
-    handle = await executor.start_workflow(
-        "RouterWorkflow",
-    )
+    handle = await executor.start_workflow(
+        "RouterWorkflow",
+        workflow_id="router-example",
+    )
```

examples/temporal/run_worker.py (1)
18-19: Avoid configuring root logging at import time

Guard `basicConfig` so it doesn't override host logging in deployed environments.

```diff
-logging.basicConfig(level=logging.INFO)
+root = logging.getLogger()
+if not root.handlers:
+    logging.basicConfig(level=logging.INFO)
```

examples/temporal/workflows.py (1)
1-6: Declare explicit exports

Make intent clear and help linters by exporting the public workflows.

```diff
 from .interactive import WorkflowWithInteraction  # noqa: F401
+
+__all__ = [
+    "SimpleWorkflow",
+    "EvaluatorOptimizerWorkflow",
+    "OrchestratorWorkflow",
+    "ParallelWorkflow",
+    "RouterWorkflow",
+    "WorkflowWithInteraction",
+]
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)

- examples/temporal/router.py (2 hunks)
- examples/temporal/run_worker.py (1 hunks)
- examples/temporal/workflows.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)

examples/temporal/workflows.py (4)
- examples/temporal/evaluator_optimizer.py (1): EvaluatorOptimizerWorkflow (23-87)
- examples/temporal/orchestrator.py (1): OrchestratorWorkflow (21-101)
- examples/temporal/router.py (1): RouterWorkflow (35-141)
- examples/temporal/interactive.py (1): WorkflowWithInteraction (30-58)

examples/temporal/router.py (1)
- src/mcp_agent/executor/workflow.py (1): WorkflowResult (55-59)
```python
from basic import SimpleWorkflow  # noqa: F401
from evaluator_optimizer import EvaluatorOptimizerWorkflow  # noqa: F401
from orchestrator import OrchestratorWorkflow  # noqa: F401
from parallel import ParallelWorkflow  # noqa: F401
from router import RouterWorkflow  # noqa: F401
from interactive import WorkflowWithInteraction  # noqa: F401
```
🛠️ Refactor suggestion
Switch to relative imports for package safety
Bare imports break when not running from the module directory. Use explicit relative imports.
```diff
-from basic import SimpleWorkflow  # noqa: F401
-from evaluator_optimizer import EvaluatorOptimizerWorkflow  # noqa: F401
-from orchestrator import OrchestratorWorkflow  # noqa: F401
-from parallel import ParallelWorkflow  # noqa: F401
-from router import RouterWorkflow  # noqa: F401
-from interactive import WorkflowWithInteraction  # noqa: F401
+from .basic import SimpleWorkflow  # noqa: F401
+from .evaluator_optimizer import EvaluatorOptimizerWorkflow  # noqa: F401
+from .orchestrator import OrchestratorWorkflow  # noqa: F401
+from .parallel import ParallelWorkflow  # noqa: F401
+from .router import RouterWorkflow  # noqa: F401
+from .interactive import WorkflowWithInteraction  # noqa: F401
```
🤖 Prompt for AI Agents
In examples/temporal/workflows.py lines 1-6, the module uses bare imports which
fail when the package isn't executed from its directory; change them to explicit
relative imports (e.g. prefix each import with a single dot or appropriate
number of dots for sibling modules) so they resolve correctly when the package
is imported, and keep or adjust any linter ignores as needed.
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)

- examples/temporal/requirements.txt (1 hunks)
🔇 Additional comments (1)
examples/temporal/requirements.txt (1)
2-2: Pin mcp-agent to the latest PyPI release

The local file URL won't resolve in cloud builders. Replace it with the published version for reproducible installs:

```diff
-mcp-agent @ file://../../ # Link to the local mcp-agent project root. Remove @ file://../../ for cloud deployment
+mcp-agent==0.1.13 # Pin to the latest published version
```
Can you try the @app.tool and @app.async_tool decorators as part of this example testing @rholinshead? We can have some workflows that are exposed as tools
Just to clarify, do you mean to update some of the existing |
@saqadri updated & it works: |
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
examples/temporal/orchestrator.py (1)
1-5: Update the module docstring to reflect @app.async_tool usage.

The top docstring still references app.workflow/app.workflow_run, which is misleading now.

```diff
-"""
-Example of using Temporal as the execution engine for MCP Agent workflows.
-This example demonstrates how to create a workflow using the app.workflow and app.workflow_run
-decorators, and how to run it using the Temporal executor.
-"""
+"""
+Example using Temporal as the execution engine for MCP Agent workflows.
+This example registers an async tool via @app.async_tool which is auto-wrapped
+as a Temporal workflow (run/get_status) and executed by the Temporal executor.
+"""
```
🧹 Nitpick comments (2)
examples/temporal/orchestrator.py (2)
20-24: Remove the stray string literal.

This free-standing triple-quoted string is a no-op; fold it into the module docstring (see prior comment) or remove it.

```diff
-"""
-A more complex example that demonstrates how to orchestrate multiple agents.
-This example uses the @app.async_tool decorator instead of traditional workflow/run definitions
-and will have a workflow created behind the scenes.
-"""
```
97-100: Parameterize the model and cap iterations (optional).

Hard-coding gpt-4o and 100 iterations can be slow/expensive. Consider reading from context/model selector or env, and default to saner iterations.

```diff
-    return await orchestrator.generate_str(
-        message=input,
-        request_params=RequestParams(model="gpt-4o", max_iterations=100),
-    )
+    return await orchestrator.generate_str(
+        message=input,
+        request_params=RequestParams(
+            model=(context.model_selector.default_model() if getattr(context, "model_selector", None) else os.getenv("OPENAI_MODEL", "gpt-4o")),
+            max_iterations=int(os.getenv("ORCHESTRATOR_MAX_ITERS", "20")),
+        ),
+    )
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)

- examples/temporal/README.md (1 hunks)
- examples/temporal/orchestrator.py (1 hunks)
- examples/temporal/workflows.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- examples/temporal/README.md
🚧 Files skipped from review as they are similar to previous changes (1)
- examples/temporal/workflows.py
🧰 Additional context used
🧬 Code graph analysis (1)
examples/temporal/orchestrator.py (3)
- src/mcp_agent/core/context.py (2): Context (57-103), mcp (102-103)
- src/mcp_agent/workflows/llm/augmented_llm.py (1): RequestParams (124-174)
- src/mcp_agent/app.py (2): async_tool (760-801), config (154-155)
🔇 Additional comments (2)
examples/temporal/orchestrator.py (2)
103-118: Demo flow looks good, pending the signature fix.

Once app_ctx is optional (see above), this start_workflow call is consistent with the decorated function.
27-29: Revert the `= None` default on `app_ctx`; the async_tool machinery inspects for an `app_ctx` parameter and injects `workflow_self.context` automatically.

Likely an incorrect or invalid review comment.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/mcp_agent/server/app_server.py (1)
562-566: Signal name mismatch will break human-input resume.

The default stored here is "human_input", but the workflow's signal is `provide_human_input` (InteractiveWorkflow). Align the default to avoid signaling the wrong name.

```diff
-            "signal_name": metadata.get("signal_name", "human_input"),
+            "signal_name": metadata.get("signal_name", "provide_human_input"),
```
🧹 Nitpick comments (3)
src/mcp_agent/executor/temporal/interactive_workflow.py (2)
43-45: Fix None-typed fields for static/type safety.

Annotate these as optional to match initialization with None.

```diff
-        self._request: HumanInputRequest = None
-        self._response: str = None
+        self._request: HumanInputRequest | None = None
+        self._response: str | None = None
```
65-66: Tighten the callback's return type.

Expose an accurate callable signature for better schemas and editor help.

```diff
-    def create_input_callback(self) -> callable:
+    def create_input_callback(self) -> "Callable[[HumanInputRequest], Awaitable[HumanInputResponse]]":
```

Add imports:

```diff
-from typing import Generic, TypeVar
+from typing import Generic, TypeVar, Callable, Awaitable
```

src/mcp_agent/server/app_server.py (1)
927-931: Docs/code drift: the async tool only registers a run endpoint.

The comment says "registers alias tools -run and -get_status," but only the run endpoint is created. Either update the comment or add a `<name>-get_status` tool that proxies to `workflows-get_status`.

Example addition (sketch):

```diff
@@ def create_declared_function_tools(mcp: FastMCP, server_context: ServerContext):
-    elif mode == "async":
+    elif mode == "async":
@@
         mcp.add_tool(
             _async_adapter,
             name=run_tool_name,
             description=full_desc,
             structured_output=False,
         )
         registered.add(run_tool_name)
+
+        # Optional: status alias
+        async def _status_wrapper(run_id: str, workflow_id: str | None = None, **kwargs):
+            ctx: MCPContext = kwargs.pop("__context__")
+            return await _workflow_status(ctx, run_id, workflow_id)
+        _status_wrapper.__name__ = f"{name_local}-get_status"
+        mcp.add_tool(
+            _make_async_adapter("ctx", _status_wrapper),
+            name=f"{name_local}-get_status",
+            description=f"Get status for '{wname_local}' runs.",
+            structured_output=False,
+        )
+        registered.add(f"{name_local}-get_status")
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)

- src/mcp_agent/executor/temporal/interactive_workflow.py (1 hunks)
- src/mcp_agent/server/app_server.py (1 hunks)
🔇 Additional comments (2)
src/mcp_agent/executor/temporal/interactive_workflow.py (2)
41-42: Constructor flexibility looks good.

Passing through args/kwargs to the base Workflow improves compatibility with dynamic factories.
74-81: Add timeout handling to workflow.wait_condition

Replace the unbounded wait with a timed wait and catch `asyncio.TimeoutError`:

```diff
-        await workflow.wait_condition(lambda: self._response is not None)
+        from datetime import timedelta
+        import asyncio
+
+        try:
+            await workflow.wait_condition(
+                lambda: self._response is not None,
+                timeout=timedelta(minutes=5),
+            )
+        except asyncio.TimeoutError:
+            logger.warning("Input request timed out")
+            return HumanInputResponse(request_id=request.request_id, response="")
```

`wait_condition` supports a `timeout` parameter (as seconds or `timedelta`) and raises `asyncio.TimeoutError` on timeout.
```python
    payload: Dict[str, Any] | None = None,
) -> bool:
```
💡 Verification agent
🧩 Analysis chain
Broadened payload type is fine; validate input and redact logs.
Two guardrails:
- Enforce payload to be a JSON object for consistency with signal handlers.
- Avoid logging full payloads (may contain secrets); log keys or a truncated JSON string.
```diff
@@
 async def resume_workflow(
     ctx: MCPContext,
     run_id: str | None = None,
     workflow_id: str | None = None,
     signal_name: str | None = "resume",
-    payload: Dict[str, Any] | None = None,
+    payload: Dict[str, Any] | None = None,
 ) -> bool:
@@
-    logger.info(
-        f"Resuming workflow ID {workflow_id or 'unknown'}, run ID {run_id or 'unknown'} with signal '{signal_name}' and payload '{payload}'"
-    )
+    # Validate input payload
+    if payload is not None and not isinstance(payload, dict):
+        raise ToolError("payload must be a JSON object (dict).")
+
+    # Redact payload in logs (structure only, truncate to avoid giant logs)
+    try:
+        payload_summary = (
+            {"keys": sorted(list(payload.keys()))} if isinstance(payload, dict) else str(type(payload))
+        )
+    except Exception:
+        payload_summary = "uninspectable-payload"
+    logger.info(
+        f"Resuming workflow ID {workflow_id or 'unknown'}, run ID {run_id or 'unknown'} with signal '{signal_name}' and payload_summary '{payload_summary}'"
+    )
```
Run to find other callers assuming string payloads:
🏁 Script executed:

```bash
#!/bin/bash
rg -nP -C2 '\bresume_workflow\s*\('
rg -nP -C2 '\bworkflows-resume\b|\"workflows-resume\"'
```
Length of output: 12268
🏁 Script executed:

```bash
#!/bin/bash
# Inspect CLI command signature and payload type declaration
rg -nP 'def resume_workflow' -C5 --type=py src/mcp_agent/cli/cloud/commands/workflows/resume/main.py
rg -nP 'payload:' -C3 --type=py src/mcp_agent/cli/cloud/commands/workflows/resume/main.py
```
Length of output: 2354
Normalize and validate payload, redact logs

In src/mcp_agent/server/app_server.py → `resume_workflow`, accept both `str` and `dict` for `payload`: if it's a `str`, do `json.loads`; if the result isn't a `dict`, raise `ToolError("payload must be a JSON object")`. Then replace the existing `logger.info` call to log only a redacted `payload_summary` (e.g. a list of keys or truncated JSON), never the full payload.
🤖 Prompt for AI Agents
In src/mcp_agent/server/app_server.py around lines 775 to 776, modify
resume_workflow so payload may be either a str or dict: if payload is a str,
parse it with json.loads (catching JSON errors to raise ToolError("payload must
be a JSON object")); after parsing ensure the result is a dict and if not raise
ToolError("payload must be a JSON object"). Replace the existing logger.info
that prints the full payload with a log of a redacted payload_summary only (for
example, list(payload.keys()) or a truncated/serialized snippet) so the full
payload is never logged.
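The normalize-then-redact behavior requested above is plain Python and easy to isolate. A minimal sketch (helper names are illustrative, and `ValueError` stands in for the server's `ToolError`):

```python
import json

def normalize_payload(payload):
    """Accept a JSON string or a dict; return a dict (or None)."""
    if payload is None:
        return None
    if isinstance(payload, str):
        try:
            payload = json.loads(payload)
        except json.JSONDecodeError as exc:
            raise ValueError("payload must be a JSON object") from exc
    if not isinstance(payload, dict):
        raise ValueError("payload must be a JSON object")
    return payload

def payload_summary(payload):
    """Log-safe view: key names only, never the values."""
    if isinstance(payload, dict):
        return {"keys": sorted(payload)}
    return str(type(payload))
```

With this split, the tool handler logs `payload_summary(payload)` while passing the full normalized dict to the signal, so secrets never reach the logs.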
* Temporarily exclude CLI from test coverage (#429)

  ### TL;DR
  Exclude CLI code from test coverage metrics for now. Will add tests when we're done sprinting 10000 mph 

  ## Summary by CodeRabbit
  * **Tests**
    * Adjusted test coverage collection to exclude non-critical CLI components, resulting in more accurate coverage metrics for core functionality.
  * **Chores**
    * Updated coverage reporting configuration to align with the new exclusion rules, ensuring consistent results across local and CI runs.

* Add workflow commands to CLI (#424)

  ### TL;DR
  Added workflow management commands to the MCP Agent CLI, including describe, suspend, resume, and cancel operations.

  ### What changed?
  - Added four new workflow management commands:
    - `describe_workflow`: Shows detailed information about a workflow execution
    - `suspend_workflow`: Pauses a running workflow execution
    - `resume_workflow`: Resumes a previously suspended workflow
    - `cancel_workflow`: Permanently stops a workflow execution
  - Implemented corresponding API client methods in `WorkflowAPIClient`: `suspend_workflow`, `resume_workflow`, `cancel_workflow`
  - Updated the CLI structure to expose these commands under `mcp-agent cloud workflows`
  - Added an alias for `describe_workflow` as `status` for backward compatibility

  ### How to test?
  
  Test the new workflow commands with a running workflow:

  ```
  # Get workflow details
  mcp-agent cloud workflows describe run_abc123
  mcp-agent cloud workflows status run_abc123  # alias

  # Suspend a workflow
  mcp-agent cloud workflows suspend run_abc123

  # Resume a workflow (with optional payload)
  mcp-agent cloud workflows resume run_abc123
  mcp-agent cloud workflows resume run_abc123 --payload '{"data": "value"}'

  # Cancel a workflow (with optional reason)
  mcp-agent cloud workflows cancel run_abc123
  mcp-agent cloud workflows cancel run_abc123 --reason "User requested cancellation"
  ```

  ### Why make this change?
  These commands provide essential workflow lifecycle management capabilities to users, allowing them to monitor and control workflow executions through the CLI. The ability to suspend, resume, and cancel workflows gives users more control over long-running operations and helps manage resources more efficiently.

  ## Summary by CodeRabbit
  - New Features
    - Introduced "workflows" CLI group with commands: describe (alias: status), resume, suspend, and cancel.
    - Describe supports text, JSON, and YAML output; all commands work with server ID or URL and include improved error messages.
  - Refactor
    - Renamed CLI group from "workflow" to "workflows" and reorganized command registrations.
    - Consolidated internal utility imports (no behavior change).
  - Chores
    - Updated module descriptions.
    - Removed legacy workflow status package/exports in favor of the new workflows commands.

* add servers workflow subcommand (#428)

  This PR adds a new `workflows` subcommand to the `mcp-agent cloud servers` command that allows users to list workflows associated with a specific server. 
  The command supports:
  - Filtering by workflow status
  - Limiting the number of results
  - Multiple output formats (text, JSON, YAML)
  - Accepting server IDs, app config IDs, or server URLs as input

  Examples:

  ```
  mcp-agent cloud servers workflows app_abc123
  mcp-agent cloud servers workflows https://server.example.com --status running
  mcp-agent cloud servers workflows apcnf_xyz789 --limit 10 --format json
  ```

  The PR also cleans up the examples in the existing workflow commands and adds the necessary API client support for listing workflows.

* add workflow list and runs (#430)

  ### TL;DR
  Reorganized workflow commands:
  - `mcp-agent cloud workflows runs`
  - `mcp-agent cloud workflows list`
  - `mcp-agent cloud server workflows` (alias of workflows list)

  ### What changed?
  - Moved `list_workflows_for_server` from the servers module to the workflows module as `list_workflow_runs`
  - Added new workflow commands: `list_workflows` and `list_workflow_runs`
  - Updated CLI command structure to make workflows commands more intuitive
  - Applied consistent code formatting with black across all server and workflow related files

  ### How to test?
  Test the new and reorganized workflow commands:

  ```bash
  # List available workflow definitions
  mcp-agent cloud workflows list app_abc123

  # List workflow runs (previously under servers workflows)
  mcp-agent cloud workflows runs app_abc123

  # Test with different output formats
  mcp-agent cloud workflows list app_abc123 --format json
  mcp-agent cloud workflows runs app_abc123 --format yaml

  # Verify existing commands still work
  mcp-agent cloud servers list
  mcp-agent cloud workflows describe app_abc123 run_xyz789
  ```

* [ez] Move deploy command to cloud namespace (#431)

  ### TL;DR
  Added `cloud deploy` command as an alias for the existing `deploy` command. 
* First pass at implementing the mcp-agent CLI (#409)
  * Initial scaffolding
  * initial CLI
  * checkpoint
  * checkpoint 2
  * various updates to cli
  * fix lint and format
  * fix: should load secrets.yaml template instead when running init cli command
  * fix: prevent None values in either mcp-agent secrets and config yaml files from overwriting one another when merging both
  * fix: when running config check, use get_settings() instead of Settings() to ensure settings are loaded.
  * fix: handle None values for servers in MCPSettings so it defaults to empty dict and update secrets.yaml template so it does not overwrite mcp servers in config
  * Inform users to save and close editor to continue when running config edit command
  * fix: Update openai, anthropic and azure regex for keys cli command
  * Sort model list by provider and model name
  * Add filtering support for models list cli command
  * disable untested commands
  * lint, format, gen_schema
  * get rid of accidental otlp exporter changes from another branch
  * get rid of accidental commit from other branch

  Co-authored-by: StreetLamb <[email protected]>

* Docs MVP (#436)
  * Initial scaffolding
  * initial CLI
  * checkpoint
  * checkpoint 2
  * various updates to cli
  * fix lint and format
  * fix: should load secrets.yaml template instead when running init cli command
  * fix: prevent None values in either mcp-agent secrets and config yaml files from overwriting one another when merging both
  * fix: when running config check, use get_settings() instead of Settings() to ensure settings are loaded. 
  * fix: handle None values for servers in MCPSettings so it defaults to empty dict and update secrets.yaml template so it does not overwrite mcp servers in config
  * Inform users to save and close editor to continue when running config edit command
  * fix: Update openai, anthropic and azure regex for keys cli command
  * Sort model list by provider and model name
  * Add filtering support for models list cli command
  * disable untested commands
  * Fixes to docs
  * Updating the main.py and !developer_secrets for secrets
  * updating python entry files to main.py
  * Fix tracer.py

  Co-authored-by: StreetLamb <[email protected]>
  Co-authored-by: Andrew Hoh <[email protected]>

* fix: max complete token for openai gen structured (#438)
* Fix regression in CLI ("cloud cloud")
* docs fixes
* Fix top-level cli cloud commands (deploy, login, etc)
* Add eager tool validation to ensure json serializability of input params/result types
* More docs updates
* Refactor workflow runs list to use MCP tool calls (#439)

  ### TL;DR
  Refactored the workflow runs listing command to use MCP tool calls instead of direct API client calls.

  ### What changed?
  - Replaced the direct API client approach with MCP tool calls to retrieve workflow runs
  - Added a new `_list_workflow_runs_async` function that uses the MCP App and gen_client to communicate with the server
  - Improved status filtering and display logic to work with both object and dictionary response formats
  - Enhanced error handling and formatting of workflow run information
  - Updated the workflow data processing to handle different response formats more robustly

  ### How to test?
  
  ```bash
  # List workflow runs from a server
  mcp-agent cloud workflows runs <server_id_or_url>

  # Filter by status
  mcp-agent cloud workflows runs <server_id_or_url> --status running

  # Limit results
  mcp-agent cloud workflows runs <server_id_or_url> --limit 10

  # Change output format
  mcp-agent cloud workflows runs <server_id_or_url> --format json
  ```

  ## Summary by CodeRabbit
  - New Features
    - Add status filtering for workflow runs, with common aliases (e.g., timeout → timed_out).
    - Add an optional limit to constrain the number of results.
    - Allow server selection via direct URL or config-based server ID.
  - Refactor
    - Update text output: columns now show Workflow ID, Name, Status, Run ID, Created; Principal removed.
    - Improve date formatting and consistent JSON/YAML/Text rendering.
  - Bug Fixes
    - Clearer error messages and safer handling when server info is missing or no data is returned.

* Update workflows commands UI to be more consistent with the rest of the CLI (#432)

  ### TL;DR
  Improved CLI workflow command output formatting with better visual indicators and consistent styling.

  ### How to test?

  ```
  mcp-agent cloud workflows cancel <run-id>
  mcp-agent cloud workflows describe <run-id>
  mcp-agent cloud workflows resume <run-id>
  ```

  ## Summary by CodeRabbit
  * **Style**
    * Cancel workflow: added a blank line before the status and changed the success icon to 🚫 (yellow).
    * Describe workflow: replaced panel UI with a clean, header-based text layout ("🔍 Workflow Details"), showing name with colorized status and fields for Workflow ID, Run ID, and Created. Updated status indicators with emojis and colors; timestamp is now plain text on its own line.
* Feature: Update Workflow Tool Calls to Work with workflow_id (#435)
  * Support for workflow_id and run_id
  * Update temporal workflow registry
  * tests
  * Update LLMS.txt
  * Fix config
  * Return bool for cancel result
  * Validate ids provided
  * Fix cancel workflow id
  * Fix workflows-resume response
  * Add workflow-name specific resume and cancel tools
  * Fix return type
  * Fix examples
  * Remove redundant workflows-{name}-tool tool calls
  * Add _workflow_status back
  * Use registry helper
  * Changes from review
  * Add back evaluator_optimizer enum fix

* Fix a hang that can happen at shutdown (#440)
  * Fix a shutdown hang
  * Fix tests
  * fix taskgroup closed in a different context than when it was started in error
  * some PR feedback fixes
  * PR feedback

* Fix random failures of server aggregator not found for agent in temporal (#441)
  * Fix a shutdown hang
  * Fix tests
  * fix taskgroup closed in a different context than when it was started in error
  * some PR feedback fixes
  * Fix random failures of server aggregator not found for agent in temporal environment

* Bump pyproject version

* Fix gateway URL resolution (#443)

  Removed incorrect dependence on ServerRegistry for gateway URLs; the gateway is not an MCP server.

  App server (src/mcp_agent/server/app_server.py) builds workflow memo with:
  - gateway_url precedence: X-MCP-Gateway-URL or X-Forwarded-Url → reconstruct X-Forwarded-Proto/Host/Prefix → request.base_url → MCP_GATEWAY_URL env.
  - gateway_token precedence: X-MCP-Gateway-Token → MCP_GATEWAY_TOKEN env.

  Worker-side (SystemActivities/SessionProxy) uses memo.gateway_url and gateway_token; falls back to worker env.

  Client proxy helpers (src/mcp_agent/mcp/client_proxy.py):
  - _resolve_gateway_url: explicit param → context → env → local default.
  - Updated public signatures to drop server_registry parameter. 
* Cloud/deployable temporal example (#395)
  * Move workflows to workflows.py file
  * Fix router example
  * Add remaining dependencies
  * Update orchestrator to @app.async_tool example
  * Changes from review
  * Fix interactive_workflow to be runnable via tool
  * Fix resume tool params

* Fix: Use helpful typer and invoke for root cli commands (#444)
  * Use helpful typer and invoke for root cli commands
  * Fix lint

* Fix enum check (#445)

* Fix/swap relative mcp agent dependency on deploy (#446)
  * Update wrangler wrapper to handle requirements.txt processing
  * Fix backup handling

* pass api key to workflow (#447)
  * pass api key to workflow
  * guard against settings not existing

Co-authored-by: John Corbett <[email protected]>
Co-authored-by: Sarmad Qadri <[email protected]>
Co-authored-by: StreetLamb <[email protected]>
Co-authored-by: Yi <[email protected]>
Co-authored-by: Ryan Holinshead <[email protected]>
Co-authored-by: roman-van-der-krogt <[email protected]>
Description
Minor updates to the temporal examples to bring them closer to a cloud-deployable state. Mainly, move the workflow imports into the `workflows.py` that is expected, since the local `run_worker.py` script won't be used, and fix `requirements.txt` so requirements install properly.

Fixed the router example, which wasn't working after some recent library changes. Fixed the interactive workflow to work as a tool (it needs to support `Workflow` init params, and the resume tool call was updated to take a `Dict` input instead of a string).
Note the following is still needed to deploy the examples:

- Remove `@ file://../../` (the link to the local mcp-agent project root) from `requirements.txt` for cloud deployment.
Testing
Summary by CodeRabbit