moonbox3 commented on Nov 6, 2025

Add Microsoft Agent Framework Python Integration for Dojo

This PR adds a complete Python integration for the Microsoft Agent Framework with the AG-UI protocol, demonstrating all 7 Dojo features.

Changes

New Integration (integrations/microsoft-agent-framework/python/examples/)

  • A FastAPI server exposing the 7 AG-UI example agents from the agent-framework-ag-ui package (see the server sketch after this list)
  • Agents demonstrate: agentic chat, backend tool rendering, human-in-the-loop, agentic generative UI, tool-based generative UI, shared state, and predictive state updates
  • Complete setup with pyproject.toml, .env.example, and documentation
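
For orientation, here is a minimal sketch of the server wiring, assuming one plain route per Dojo feature. The route paths and stub handlers are placeholders of my own; the actual integration registers the example agents shipped with agent-framework-ag-ui rather than returning static JSON.

```python
from fastapi import FastAPI
import uvicorn

app = FastAPI(title="Microsoft Agent Framework AG-UI examples")

# The 7 Dojo features listed above; the real route names may differ.
FEATURES = [
    "agentic_chat",
    "backend_tool_rendering",
    "human_in_the_loop",
    "agentic_generative_ui",
    "tool_based_generative_ui",
    "shared_state",
    "predictive_state_updates",
]


def make_handler(feature: str):
    async def handler() -> dict:
        # Placeholder response; in the real integration this would invoke the
        # corresponding AG-UI example agent and stream its events.
        return {"feature": feature}
    return handler


# One endpoint per Dojo feature.
for feature in FEATURES:
    app.add_api_route(f"/{feature}", make_handler(feature), methods=["GET"])

if __name__ == "__main__":
    # The PR runs the server on port 8888.
    uvicorn.run(app, host="0.0.0.0", port=8888)
```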

Key Features

  • Uses the published agent-framework-ag-ui==1.0.0b251106.post1 package from PyPI (which includes the examples package)
  • Imports agents from agent_framework_ag_ui_examples.agents module
  • The server runs on port 8888 with an endpoint for each Dojo feature (a smoke-test example follows this list)
  • Full integration with the existing Dojo frontend
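
Once the server is running, the per-feature endpoints can be checked with a short script like the one below. The paths mirror the hypothetical ones in the sketch above and are not taken from the package, so adjust them to whatever routes the server actually exposes.

```python
import httpx

# Hypothetical route names, matching the sketch above; the real endpoint
# paths are defined by the agent-framework-ag-ui integration.
FEATURES = [
    "agentic_chat",
    "backend_tool_rendering",
    "human_in_the_loop",
    "agentic_generative_ui",
    "tool_based_generative_ui",
    "shared_state",
    "predictive_state_updates",
]


def main() -> None:
    # The server from this PR listens on port 8888.
    with httpx.Client(base_url="http://localhost:8888", timeout=10.0) as client:
        for feature in FEATURES:
            response = client.get(f"/{feature}")
            print(f"{feature}: HTTP {response.status_code}")


if __name__ == "__main__":
    main()
```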

Dependencies

tool_based_generative_ui.py:

```diff
 """
 An example demonstrating tool-based generative UI.
 """
 
 from crewai.flow.flow import Flow, start
 from litellm import completion
+from ag_ui.core import MessagesSnapshotEvent, EventType
 from ..sdk import copilotkit_stream, CopilotKitState
 
 
 # This tool generates a haiku on the server.
 # The tool call will be streamed to the frontend as it is being generated.
 GENERATE_HAIKU_TOOL = {
     "type": "function",
     "function": {
         "name": "generate_haiku",
         "description": "Generate a haiku in Japanese and its English translation",
         "parameters": {
             "type": "object",
             "properties": {
                 "japanese": {
                     "type": "array",
                     "items": {
                         "type": "string"
                     },
                     "description": "An array of three lines of the haiku in Japanese"
                 },
                 "english": {
                     "type": "array",
                     "items": {
                         "type": "string"
                     },
                     "description": "An array of three lines of the haiku in English"
                 },
                 "image_names": {
                     "type": "array",
                     "items": {
                         "type": "string"
                     },
                     "description": "Names of 3 relevant images from the provided list"
                 }
             },
             "required": ["japanese", "english", "image_names"]
         }
     }
 }
 
 
 class ToolBasedGenerativeUIFlow(Flow[CopilotKitState]):
     """
     A flow that demonstrates tool-based generative UI.
     """
 
     @start()
     async def chat(self):
         """
         The main function handling chat and tool calls.
         """
         system_prompt = "You assist the user in generating a haiku. When generating a haiku using the 'generate_haiku' tool, you MUST also select exactly 3 image filenames from the following list that are most relevant to the haiku's content or theme. Return the filenames in the 'image_names' parameter. Dont provide the relavent image names in your final response to the user. "
 
         # 1. Run the model and stream the response
         # Note: In order to stream the response, wrap the completion call in
         # copilotkit_stream and set stream=True.
         response = await copilotkit_stream(
             completion(
 
                 # 1.1 Specify the model to use
                 model="openai/gpt-4o",
                 messages=[
                     {
                         "role": "system",
                         "content": system_prompt
                     },
                     *self.state.messages
                 ],
 
                 # 1.2 Bind the available tools to the model
                 tools=[GENERATE_HAIKU_TOOL],
 
                 # 1.3 Disable parallel tool calls to avoid race conditions,
                 # enable this for faster performance if you want to manage
                 # the complexity of running tool calls in parallel.
                 parallel_tool_calls=False,
                 stream=True
             )
         )
         message = response.choices[0].message
 
         # 2. Append the message to the messages in state
         self.state.messages.append(message)
 
         # 3. If there are tool calls, append a tool message to the messages in state
         if message.tool_calls:
             self.state.messages.append(
                 {
                     "tool_call_id": message.tool_calls[0].id,
                     "role": "tool",
                     "content": "Haiku generated."
                 }
             )
+
+        # 4. Emit MessagesSnapshotEvent to notify frontend about tool result
+        yield MessagesSnapshotEvent(
+            type=EventType.MESSAGES_SNAPSHOT,
+            messages=self.state.messages
+        )
```
moonbox3 (Author):


This was auto-generated when I started the Dojo frontend. Should I keep or revert?
