diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml index 6392f559b..92d60e212 100644 --- a/.github/workflows/test.yml +++ b/.github/workflows/test.yml @@ -51,7 +51,7 @@ jobs: - name: Run tests with coverage if: env.skip_tests == 'false' run: | - pytest --cov=. --cov-report=term-missing --cov-report=xml + pytest --cov=. --cov-report=term-missing --cov-report=xml --ignore=tests/e2e-test/tests - name: Skip coverage report if no tests if: env.skip_tests == 'true' diff --git a/docs/CustomizingAzdParameters.md b/docs/CustomizingAzdParameters.md index ec8f5d742..bc28fc345 100644 --- a/docs/CustomizingAzdParameters.md +++ b/docs/CustomizingAzdParameters.md @@ -9,8 +9,8 @@ By default this template will use the environment name as the prefix to prevent | Name | Type | Default Value | Purpose | | ------------------------------- | ------ | ----------------- | --------------------------------------------------------------------------------------------------- | | `AZURE_ENV_NAME` | string | `macae` | Used as a prefix for all resource names to ensure uniqueness across environments. | -| `AZURE_LOCATION` | string | `swedencentral` | Location of the Azure resources. Controls where the infrastructure will be deployed. | -| `AZURE_ENV_OPENAI_LOCATION` | string | `swedencentral` | Specifies the region for OpenAI resource deployment. | +| `AZURE_LOCATION` | string | `` | Location of the Azure resources. Controls where the infrastructure will be deployed. | +| `AZURE_ENV_OPENAI_LOCATION` | string | `` | Specifies the region for OpenAI resource deployment. | | `AZURE_ENV_MODEL_DEPLOYMENT_TYPE` | string | `GlobalStandard` | Defines the deployment type for the AI model (e.g., Standard, GlobalStandard). | | `AZURE_ENV_MODEL_NAME` | string | `gpt-4o` | Specifies the name of the GPT model to be deployed. | | `AZURE_ENV_FOUNDRY_PROJECT_ID` | string | `` | Set this if you want to reuse an AI Foundry Project instead of creating a new one. | diff --git a/docs/DeploymentGuide.md b/docs/DeploymentGuide.md index 18442dfc7..165cf320d 100644 --- a/docs/DeploymentGuide.md +++ b/docs/DeploymentGuide.md @@ -233,7 +233,7 @@ The easiest way to run this accelerator is in a VS Code Dev Containers, which wi ## Detailed Development Container setup instructions -The solution contains a [development container](https://code.visualstudio.com/docs/remote/containers) with all the required tooling to develop and deploy the accelerator. To deploy the Chat With Your Data accelerator using the provided development container you will also need: +The solution contains a [development container](https://code.visualstudio.com/docs/remote/containers) with all the required tooling to develop and deploy the accelerator. To deploy the Multi-Agent solutions accelerator using the provided development container you will also need: - [Visual Studio Code](https://code.visualstudio.com) - [Remote containers extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) @@ -287,7 +287,7 @@ The files for the dev container are located in `/.devcontainer/` folder. - You can use the Bicep extension for VSCode (Right-click the `.bicep` file, then select "Show deployment plan") or use the Azure CLI: ```bash - az deployment group create -g -f deploy/macae-dev.bicep --query 'properties.outputs' + az deployment group create -g -f infra/main.bicep --query 'properties.outputs' ``` - **Note**: You will be prompted for a `principalId`, which is the ObjectID of your user in Entra ID. To find it, use the Azure Portal or run: @@ -301,7 +301,7 @@ The files for the dev container are located in `/.devcontainer/` folder. **Role Assignments in Bicep Deployment:** - The **macae-dev.bicep** deployment includes the assignment of the appropriate roles to AOAI and Cosmos services. If you want to modify an existing implementation—for example, to use resources deployed as part of the simple deployment for local debugging—you will need to add your own credentials to access the Cosmos and AOAI services. You can add these permissions using the following commands: + The **main.bicep** deployment includes the assignment of the appropriate roles to AOAI and Cosmos services. If you want to modify an existing implementation—for example, to use resources deployed as part of the simple deployment for local debugging—you will need to add your own credentials to access the Cosmos and AOAI services. You can add these permissions using the following commands: ```bash az cosmosdb sql role assignment create --resource-group --account-name --role-definition-name "Cosmos DB Built-in Data Contributor" --principal-id --scope /subscriptions//resourceGroups//providers/Microsoft.DocumentDB/databaseAccounts/ @@ -320,11 +320,16 @@ The files for the dev container are located in `/.devcontainer/` folder. 5. **Create a `.env` file:** - - Navigate to the `src` folder and create a `.env` file based on the provided `.env.sample` file. + - Navigate to the `src\backend` folder and create a `.env` file based on the provided `.env.sample` file. + - Update the `.env` file with the required values from your Azure resource group in Azure Portal App Service environment variables. + - Alternatively, if resources were + provisioned using `azd provision` or `azd up`, a `.env` file is automatically generated in the `.azure//.env` + file. To get your `` run `azd env list` to see which env is default. 6. **Fill in the `.env` file:** - Use the output from the deployment or check the Azure Portal under "Deployments" in the resource group. + - Make sure to set APP_ENV to "**dev**" in `.env` file. 7. **(Optional) Set up a virtual environment:** @@ -337,8 +342,19 @@ The files for the dev container are located in `/.devcontainer/` folder. ```bash pip install -r requirements.txt ``` + +9. **Build the frontend (important):** -9. **Run the application:** + - Before running the frontend server, you must build the frontend to generate the necessary `build/assets` directory. + + From the `src/frontend` directory, run: + + ```bash + npm install + npm run build + ``` + +10. **Run the application:** - From the src/backend directory: diff --git a/docs/LocalDeployment.md b/docs/LocalDeployment.md deleted file mode 100644 index da1eb1415..000000000 --- a/docs/LocalDeployment.md +++ /dev/null @@ -1,164 +0,0 @@ -# Guide to local development - -## Requirements: - -- Python 3.10 or higher + PIP -- Azure CLI, and an Azure Subscription -- Visual Studio Code IDE - -# Local setup - -> **Note for macOS Developers**: If you are using macOS on Apple Silicon (ARM64) the DevContainer will **not** work. This is due to a limitation with the Azure Functions Core Tools (see [here](https://github.com/Azure/azure-functions-core-tools/issues/3112)). We recommend using the [Non DevContainer Setup](./NON_DEVCONTAINER_SETUP.md) instructions to run the accelerator locally. - -The easiest way to run this accelerator is in a VS Code Dev Containers, which will open the project in your local VS Code using the [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers): - -1. Start Docker Desktop (install it if not already installed) -1. Open the project: - [![Open in Dev Containers](https://img.shields.io/static/v1?style=for-the-badge&label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/microsoft/Multi-Agent-Custom-Automation-Engine-Solution-Accelerator) - -1. In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window - -## Detailed Development Container setup instructions - -The solution contains a [development container](https://code.visualstudio.com/docs/remote/containers) with all the required tooling to develop and deploy the accelerator. To deploy the Chat With Your Data accelerator using the provided development container you will also need: - -* [Visual Studio Code](https://code.visualstudio.com) -* [Remote containers extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) - -If you are running this on Windows, we recommend you clone this repository in [WSL](https://code.visualstudio.com/docs/remote/wsl) - -```cmd -git clone https://github.com/microsoft/Multi-Agent-Custom-Automation-Engine-Solution-Accelerator -``` - -Open the cloned repository in Visual Studio Code and connect to the development container. - -```cmd -code . -``` - -!!! tip - Visual Studio Code should recognize the available development container and ask you to open the folder using it. For additional details on connecting to remote containers, please see the [Open an existing folder in a container](https://code.visualstudio.com/docs/remote/containers#_quick-start-open-an-existing-folder-in-a-container) quickstart. - -When you start the development container for the first time, the container will be built. This usually takes a few minutes. **Please use the development container for all further steps.** - -The files for the dev container are located in `/.devcontainer/` folder. - -## Local deployment and debugging: - -1. **Clone the repository.** - -2. **Log into the Azure CLI:** - - - Check your login status using: - ```bash - az account show - ``` - - If not logged in, use: - ```bash - az login - ``` - - To specify a tenant, use: - ```bash - az login --tenant - ``` - -3. **Create a Resource Group:** - - - You can create it either through the Azure Portal or the Azure CLI: - ```bash - az group create --name --location EastUS2 - ``` - -4. **Deploy the Bicep template:** - - - You can use the Bicep extension for VSCode (Right-click the `.bicep` file, then select "Show deployment plane") or use the Azure CLI: - ```bash - az deployment group create -g -f deploy/macae-dev.bicep --query 'properties.outputs' - ``` - - **Note**: You will be prompted for a `principalId`, which is the ObjectID of your user in Entra ID. To find it, use the Azure Portal or run: - ```bash - az ad signed-in-user show --query id -o tsv - ``` - You will also be prompted for locations for Cosmos and OpenAI services. This is to allow separate regions where there may be service quota restrictions. - - - **Additional Notes**: - - **Role Assignments in Bicep Deployment:** - - The **macae-dev.bicep** deployment includes the assignment of the appropriate roles to AOAI and Cosmos services. If you want to modify an existing implementation—for example, to use resources deployed as part of the simple deployment for local debugging—you will need to add your own credentials to access the Cosmos and AOAI services. You can add these permissions using the following commands: - ```bash - az cosmosdb sql role assignment create --resource-group --account-name --role-definition-name "Cosmos DB Built-in Data Contributor" --principal-id --scope /subscriptions//resourceGroups//providers/Microsoft.DocumentDB/databaseAccounts/ - ``` - - ```bash - az role assignment create --assignee --role "Azure AI User" --scope /subscriptions//resourceGroups//providers/Microsoft.CognitiveServices/accounts/ - ``` - **Using a Different Database in Cosmos:** - - You can set the solution up to use a different database in Cosmos. For example, you can name it something like autogen-dev. To do this: - 1. Change the environment variable **COSMOSDB_DATABASE** to the new database name. - 2. You will need to create the database in the Cosmos DB account. You can do this from the Data Explorer pane in the portal, click on the drop down labeled “_+ New Container_” and provide all the necessary details. - -6. **Create a `.env` file:** - - - Navigate to the `src` folder and create a `.env` file based on the provided `.env.sample` file. - -7. **Fill in the `.env` file:** - - - Use the output from the deployment or check the Azure Portal under "Deployments" in the resource group. - -8. **(Optional) Set up a virtual environment:** - - - If you are using `venv`, create and activate your virtual environment for both the frontend and backend folders. - -9. **Install requirements - frontend:** - - - In each of the frontend and backend folders - - Open a terminal in the `src` folder and run: - ```bash - pip install -r requirements.txt - ``` - -10. **Run the application:** - - From the src/backend directory: - ```bash - python app_kernel.py - ``` - - In a new terminal from the src/frontend directory - ```bash - python frontend_server.py - ``` - -10. Open a browser and navigate to `http://localhost:3000` -11. To see swagger API documentation, you can navigate to `http://localhost:8000/docs` - -## Debugging the solution locally - -You can debug the API backend running locally with VSCode using the following launch.json entry: - -``` - { - "name": "Python Debugger: Backend", - "type": "debugpy", - "request": "launch", - "cwd": "${workspaceFolder}/src/backend", - "module": "uvicorn", - "args": ["app:app", "--reload"], - "jinja": true - } -``` -To debug the python server in the frontend directory (frontend_server.py) and related, add the following launch.json entry: - -``` - { - "name": "Python Debugger: Frontend", - "type": "debugpy", - "request": "launch", - "cwd": "${workspaceFolder}/src/frontend", - "module": "uvicorn", - "args": ["frontend_server:app", "--port", "3000", "--reload"], - "jinja": true - } -``` - diff --git a/infra/main.bicep b/infra/main.bicep index 8ee54772d..f6ea978ee 100644 --- a/infra/main.bicep +++ b/infra/main.bicep @@ -18,6 +18,8 @@ param enableTelemetry bool = true param existingLogAnalyticsWorkspaceId string = '' +param azureopenaiVersion string = '2025-01-01-preview' + // Restricting deployment to only supported Azure OpenAI regions validated with GPT-4o model @metadata({ azd : { @@ -1000,7 +1002,7 @@ module containerApp 'br/public:avm/res/app/container-app:0.14.2' = if (container } { name: 'AZURE_OPENAI_API_VERSION' - value: '2025-01-01-preview' //TODO: set parameter/variable + value: azureopenaiVersion } { name: 'APPLICATIONINSIGHTS_INSTRUMENTATION_KEY' @@ -1718,3 +1720,22 @@ type webSiteConfigurationType = { @description('Optional. The tag of the container image to be used by the Web Site.') containerImageTag: string? } + + +output COSMOSDB_ENDPOINT string = 'https://${cosmosDbResourceName}.documents.azure.com:443/' +output COSMOSDB_DATABASE string = cosmosDbDatabaseName +output COSMOSDB_CONTAINER string = cosmosDbDatabaseMemoryContainerName +output AZURE_OPENAI_ENDPOINT string = 'https://${aiFoundryAiServicesResourceName}.openai.azure.com/' +output AZURE_OPENAI_MODEL_NAME string = aiFoundryAiServicesModelDeployment.name +output AZURE_OPENAI_DEPLOYMENT_NAME string = aiFoundryAiServicesModelDeployment.name +output AZURE_OPENAI_API_VERSION string = azureopenaiVersion +// output APPLICATIONINSIGHTS_INSTRUMENTATION_KEY string = applicationInsights.outputs.instrumentationKey +// output AZURE_AI_PROJECT_ENDPOINT string = aiFoundryAiServices.outputs.aiProjectInfo.apiEndpoint +output AZURE_AI_SUBSCRIPTION_ID string = subscription().subscriptionId +output AZURE_AI_RESOURCE_GROUP string = resourceGroup().name +output AZURE_AI_PROJECT_NAME string = aiFoundryAiProjectName +output AZURE_AI_MODEL_DEPLOYMENT_NAME string = aiFoundryAiServicesModelDeployment.name +// output APPLICATIONINSIGHTS_CONNECTION_STRING string = applicationInsights.outputs.connectionString +output AZURE_AI_AGENT_MODEL_DEPLOYMENT_NAME string = aiFoundryAiServicesModelDeployment.name +output AZURE_AI_AGENT_ENDPOINT string = aiFoundryAiServices.outputs.aiProjectInfo.apiEndpoint +output APP_ENV string = 'Prod' diff --git a/src/backend/app_kernel.py b/src/backend/app_kernel.py index 0c0273b45..5cfadbd42 100644 --- a/src/backend/app_kernel.py +++ b/src/backend/app_kernel.py @@ -202,69 +202,55 @@ async def input_task_endpoint(input_task: InputTask, request: Request): if not input_task.session_id: input_task.session_id = str(uuid.uuid4()) + # Wrap initialization and agent creation in its own try block for setup errors try: - # Create all agents instead of just the planner agent - # This ensures other agents are created first and the planner has access to them kernel, memory_store = await initialize_runtime_and_context( input_task.session_id, user_id ) - client = None - try: - client = config.get_ai_project_client() - except Exception as client_exc: - logging.error(f"Error creating AIProjectClient: {client_exc}") - + client = config.get_ai_project_client() agents = await AgentFactory.create_all_agents( session_id=input_task.session_id, user_id=user_id, memory_store=memory_store, client=client, ) + except Exception as setup_exc: + logging.error(f"Failed to initialize agents or context: {setup_exc}") + track_event_if_configured( + "InputTaskSetupError", + {"session_id": input_task.session_id, "error": str(setup_exc)}, + ) + raise HTTPException( + status_code=500, detail="Could not initialize services for your request." + ) from setup_exc + try: group_chat_manager = agents[AgentType.GROUP_CHAT_MANAGER.value] - - # Convert input task to JSON for the kernel function, add user_id here - - # Use the planner to handle the task await group_chat_manager.handle_input_task(input_task) - # Get plan from memory store plan = await memory_store.get_plan_by_session(input_task.session_id) - - if not plan: # If the plan is not found, raise an error + if not plan: track_event_if_configured( "PlanNotFound", - { - "status": "Plan not found", - "session_id": input_task.session_id, - "description": input_task.description, - }, + {"status": "Plan not found", "session_id": input_task.session_id}, ) raise HTTPException(status_code=404, detail="Plan not found") - # Log custom event for successful input task processing + track_event_if_configured( "InputTaskProcessed", - { - "status": f"Plan created with ID: {plan.id}", - "session_id": input_task.session_id, - "plan_id": plan.id, - "description": input_task.description, - }, + {"status": f"Plan created with ID: {plan.id}", "session_id": input_task.session_id}, ) - if client: - try: - client.close() - except Exception as e: - logging.error(f"Error sending to AIProjectClient: {e}") return { "status": f"Plan created with ID: {plan.id}", "session_id": input_task.session_id, "plan_id": plan.id, "description": input_task.description, } - + except HTTPException: + # Re-raise HTTPExceptions so they are not caught by the generic block + raise except Exception as e: - # Extract clean error message for rate limit errors + # This now specifically handles errors during task processing error_msg = str(e) if "Rate limit is exceeded" in error_msg: match = re.search(r"Rate limit is exceeded\. Try again in (\d+) seconds?\.", error_msg) @@ -273,13 +259,16 @@ async def input_task_endpoint(input_task: InputTask, request: Request): track_event_if_configured( "InputTaskError", - { - "session_id": input_task.session_id, - "description": input_task.description, - "error": str(e), - }, + {"session_id": input_task.session_id, "error": str(e)}, ) - raise HTTPException(status_code=400, detail=f"{error_msg}") from e + raise HTTPException(status_code=400, detail=f"Error processing plan: {error_msg}") from e + finally: + # Ensure the client is closed even if an error occurs + if 'client' in locals() and client: + try: + client.close() + except Exception as e: + logging.error(f"Error closing AIProjectClient: {e}") @app.post("/api/human_feedback") diff --git a/src/backend/test_utils_date_fixed.py b/src/backend/test_utils_date_fixed.py index 62eb8fc67..04b3fcdf2 100644 --- a/src/backend/test_utils_date_fixed.py +++ b/src/backend/test_utils_date_fixed.py @@ -4,7 +4,19 @@ import os from datetime import datetime -from utils_date import format_date_for_user + +# ---- Robust import for format_date_for_user ---- +# Tries: root-level shim -> src package path -> package-relative (when collected as src.backend.*) +try: + # Works if a root-level utils_date.py shim exists or PYTHONPATH includes project root + from utils_date import format_date_for_user # type: ignore +except ModuleNotFoundError: + try: + # Works when running from project root with 'src' on the path + from src.backend.utils_date import format_date_for_user # type: ignore + except ModuleNotFoundError: + # Works when this test is imported as 'src.backend.test_utils_date_fixed' + from .utils_date import format_date_for_user # type: ignore def test_date_formatting(): diff --git a/src/backend/tests/context/test_cosmos_memory.py b/src/backend/tests/context/test_cosmos_memory.py index 441bb1ef1..0467d9907 100644 --- a/src/backend/tests/context/test_cosmos_memory.py +++ b/src/backend/tests/context/test_cosmos_memory.py @@ -1,68 +1,151 @@ +# src/backend/tests/context/test_cosmos_memory.py +# Drop-in test that self-stubs all external imports used by cosmos_memory_kernel +# so we don't need to modify the repo structure or CI env. + +import sys +import types import pytest -from unittest.mock import AsyncMock, patch -from azure.cosmos.partition_key import PartitionKey -from src.backend.context.cosmos_memory import CosmosBufferedChatCompletionContext +from unittest.mock import AsyncMock +# ----------------- Preload stub modules so the SUT can import cleanly ----------------- -# Helper to create async iterable -async def async_iterable(mock_items): - """Helper to create an async iterable.""" - for item in mock_items: - yield item +# 1) helpers.azure_credential_utils.get_azure_credential +helpers_mod = types.ModuleType("helpers") +helpers_cred_mod = types.ModuleType("helpers.azure_credential_utils") +def _fake_get_azure_credential(*_a, **_k): + return object() +helpers_cred_mod.get_azure_credential = _fake_get_azure_credential +helpers_mod.azure_credential_utils = helpers_cred_mod +sys.modules.setdefault("helpers", helpers_mod) +sys.modules.setdefault("helpers.azure_credential_utils", helpers_cred_mod) +# 2) app_config.config (the SUT does: from app_config import config) +app_config_mod = types.ModuleType("app_config") +app_config_mod.config = types.SimpleNamespace( + COSMOSDB_CONTAINER="mock-container", + COSMOSDB_ENDPOINT="https://mock-endpoint", + COSMOSDB_DATABASE="mock-database", +) +sys.modules.setdefault("app_config", app_config_mod) -@pytest.fixture -def mock_env_variables(monkeypatch): - """Mock all required environment variables.""" - env_vars = { - "COSMOSDB_ENDPOINT": "https://mock-endpoint", - "COSMOSDB_KEY": "mock-key", - "COSMOSDB_DATABASE": "mock-database", - "COSMOSDB_CONTAINER": "mock-container", - "AZURE_OPENAI_DEPLOYMENT_NAME": "mock-deployment-name", - "AZURE_OPENAI_API_VERSION": "2023-01-01", - "AZURE_OPENAI_ENDPOINT": "https://mock-openai-endpoint", - } - for key, value in env_vars.items(): - monkeypatch.setenv(key, value) +# 3) models.messages_kernel (the SUT does: from models.messages_kernel import ...) +models_mod = types.ModuleType("models") +models_messages_mod = types.ModuleType("models.messages_kernel") + +# Minimal stand-ins so type hints/imports succeed (not used in this test path) +class _Base: ... +class BaseDataModel(_Base): ... +class Plan(_Base): ... +class Session(_Base): ... +class Step(_Base): ... +class AgentMessage(_Base): ... + +models_messages_mod.BaseDataModel = BaseDataModel +models_messages_mod.Plan = Plan +models_messages_mod.Session = Session +models_messages_mod.Step = Step +models_messages_mod.AgentMessage = AgentMessage +models_mod.messages_kernel = models_messages_mod +sys.modules.setdefault("models", models_mod) +sys.modules.setdefault("models.messages_kernel", models_messages_mod) + +# 4) azure.cosmos.partition_key.PartitionKey (provide if sdk isn't installed) +try: + from azure.cosmos.partition_key import PartitionKey # type: ignore +except Exception: # pragma: no cover + azure_mod = sys.modules.setdefault("azure", types.ModuleType("azure")) + azure_cosmos_mod = sys.modules.setdefault("azure.cosmos", types.ModuleType("azure.cosmos")) + azure_cosmos_pk_mod = types.ModuleType("azure.cosmos.partition_key") + class PartitionKey: # minimal shim + def __init__(self, path: str): self.path = path + azure_cosmos_pk_mod.PartitionKey = PartitionKey + sys.modules.setdefault("azure.cosmos.partition_key", azure_cosmos_pk_mod) + +# 5) azure.cosmos.aio.CosmosClient (we’ll patch it in a fixture, but ensure import exists) +try: + from azure.cosmos.aio import CosmosClient # type: ignore +except Exception: # pragma: no cover + azure_cosmos_aio_mod = types.ModuleType("azure.cosmos.aio") + class CosmosClient: # placeholder; we patch this class below + def __init__(self, *a, **k): ... + def get_database_client(self, *a, **k): ... + azure_cosmos_aio_mod.CosmosClient = CosmosClient + sys.modules.setdefault("azure.cosmos.aio", azure_cosmos_aio_mod) + +# ----------------- Import the SUT (after stubs are in place) ----------------- +try: + # If you added an alias file src/backend/context/cosmos_memory.py, this will work: + from src.backend.context.cosmos_memory import CosmosMemoryContext as CosmosBufferedChatCompletionContext +except Exception: + # Fallback to the kernel module (your provided code) + from src.backend.context.cosmos_memory_kernel import CosmosMemoryContext as CosmosBufferedChatCompletionContext # type: ignore + +# Import PartitionKey (either real or our shim) for assertions +try: + from azure.cosmos.partition_key import PartitionKey # type: ignore +except Exception: # already defined above in shim + pass +# ----------------- Fixtures ----------------- @pytest.fixture -def mock_cosmos_client(): - """Fixture for mocking Cosmos DB client and container.""" - mock_client = AsyncMock() +def fake_cosmos_stack(monkeypatch): + """ + Patch the *SUT's* CosmosClient symbol so initialize() uses our AsyncMocks: + CosmosClient(...).get_database_client() -> mock_db + mock_db.create_container_if_not_exists(...) -> mock_container + """ + import sys + mock_container = AsyncMock() - mock_client.create_container_if_not_exists.return_value = mock_container + mock_db = AsyncMock() + mock_db.create_container_if_not_exists = AsyncMock(return_value=mock_container) - # Mocking context methods - mock_context = AsyncMock() - mock_context.store_message = AsyncMock() - mock_context.retrieve_messages = AsyncMock( - return_value=async_iterable([{"id": "test_id", "content": "test_content"}]) - ) + def _fake_ctor(*_a, **_k): + # mimic a client object with get_database_client returning our mock_db + return types.SimpleNamespace( + get_database_client=lambda *_a2, **_k2: mock_db + ) - return mock_client, mock_container, mock_context + # Find the actual module where CosmosBufferedChatCompletionContext is defined + sut_module_name = CosmosBufferedChatCompletionContext.__module__ + sut_module = sys.modules[sut_module_name] + # Patch the symbol the SUT imported (its local binding), not the SDK module + monkeypatch.setattr(sut_module, "CosmosClient", _fake_ctor, raising=False) + + return mock_db, mock_container @pytest.fixture -def mock_config(mock_cosmos_client): - """Fixture to patch Config with mock Cosmos DB client.""" - mock_client, _, _ = mock_cosmos_client - with patch( - "src.backend.config.Config.GetCosmosDatabaseClient", return_value=mock_client - ), patch("src.backend.config.Config.COSMOSDB_CONTAINER", "mock-container"): - yield +def mock_env(monkeypatch): + # Optional: not strictly needed because we stubbed app_config.config above, + # but keeps parity with your previous env fixture. + env_vars = { + "COSMOSDB_ENDPOINT": "https://mock-endpoint", + "COSMOSDB_KEY": "mock-key", + "COSMOSDB_DATABASE": "mock-database", + "COSMOSDB_CONTAINER": "mock-container", + } + for k, v in env_vars.items(): + monkeypatch.setenv(k, v) +# ----------------- Test ----------------- @pytest.mark.asyncio -async def test_initialize(mock_config, mock_cosmos_client): - """Test if the Cosmos DB container is initialized correctly.""" - mock_client, mock_container, _ = mock_cosmos_client - context = CosmosBufferedChatCompletionContext( - session_id="test_session", user_id="test_user" - ) - await context.initialize() - mock_client.create_container_if_not_exists.assert_called_once_with( - id="mock-container", partition_key=PartitionKey(path="/session_id") +async def test_initialize(fake_cosmos_stack, mock_env): + mock_db, mock_container = fake_cosmos_stack + + ctx = CosmosBufferedChatCompletionContext( + session_id="test_session", + user_id="test_user", ) - assert context._container == mock_container + await ctx.initialize() + + mock_db.create_container_if_not_exists.assert_called_once() + # Strict arg check: + args, kwargs = mock_db.create_container_if_not_exists.call_args + assert kwargs.get("id") == "mock-container" + pk = kwargs.get("partition_key") + assert isinstance(pk, PartitionKey) and getattr(pk, "path", None) == "/session_id" + + assert ctx._container == mock_container diff --git a/src/backend/tests/helpers/test_azure_credential_utils.py b/src/backend/tests/helpers/test_azure_credential_utils.py index fd98527f5..58f3aa1e2 100644 --- a/src/backend/tests/helpers/test_azure_credential_utils.py +++ b/src/backend/tests/helpers/test_azure_credential_utils.py @@ -1,18 +1,28 @@ -import pytest -import sys -import os -from unittest.mock import patch, MagicMock +import os, sys, importlib -# Ensure src/backend is on the Python path for imports -sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))) +# 1) Put repo's src/backend first on sys.path so "helpers" resolves to our package +HERE = os.path.dirname(__file__) +SRC_BACKEND = os.path.abspath(os.path.join(HERE, "..", "..")) +if SRC_BACKEND not in sys.path: + sys.path.insert(0, SRC_BACKEND) +# 2) Evict any stub/foreign modules injected by other tests or site-packages +sys.modules.pop("helpers.azure_credential_utils", None) +sys.modules.pop("helpers", None) + +# 3) Now import the real module under test import helpers.azure_credential_utils as azure_credential_utils +# src/backend/tests/helpers/test_azure_credential_utils.py + +import pytest +from unittest.mock import patch, MagicMock + # Synchronous tests -@patch("helpers.azure_credential_utils.os.getenv") -@patch("helpers.azure_credential_utils.DefaultAzureCredential") -@patch("helpers.azure_credential_utils.ManagedIdentityCredential") +@patch("helpers.azure_credential_utils.os.getenv", create=True) +@patch("helpers.azure_credential_utils.DefaultAzureCredential", create=True) +@patch("helpers.azure_credential_utils.ManagedIdentityCredential", create=True) def test_get_azure_credential_dev_env(mock_managed_identity_credential, mock_default_azure_credential, mock_getenv): """Test get_azure_credential in dev environment.""" mock_getenv.return_value = "dev" @@ -26,14 +36,15 @@ def test_get_azure_credential_dev_env(mock_managed_identity_credential, mock_def mock_managed_identity_credential.assert_not_called() assert credential == mock_default_credential -@patch("helpers.azure_credential_utils.os.getenv") -@patch("helpers.azure_credential_utils.DefaultAzureCredential") -@patch("helpers.azure_credential_utils.ManagedIdentityCredential") +@patch("helpers.azure_credential_utils.os.getenv", create=True) +@patch("helpers.azure_credential_utils.DefaultAzureCredential", create=True) +@patch("helpers.azure_credential_utils.ManagedIdentityCredential", create=True) def test_get_azure_credential_non_dev_env(mock_managed_identity_credential, mock_default_azure_credential, mock_getenv): """Test get_azure_credential in non-dev environment.""" mock_getenv.return_value = "prod" mock_managed_credential = MagicMock() mock_managed_identity_credential.return_value = mock_managed_credential + credential = azure_credential_utils.get_azure_credential(client_id="test-client-id") mock_getenv.assert_called_once_with("APP_ENV", "prod") @@ -44,9 +55,9 @@ def test_get_azure_credential_non_dev_env(mock_managed_identity_credential, mock # Asynchronous tests @pytest.mark.asyncio -@patch("helpers.azure_credential_utils.os.getenv") -@patch("helpers.azure_credential_utils.AioDefaultAzureCredential") -@patch("helpers.azure_credential_utils.AioManagedIdentityCredential") +@patch("helpers.azure_credential_utils.os.getenv", create=True) +@patch("helpers.azure_credential_utils.AioDefaultAzureCredential", create=True) +@patch("helpers.azure_credential_utils.AioManagedIdentityCredential", create=True) async def test_get_azure_credential_async_dev_env(mock_aio_managed_identity_credential, mock_aio_default_azure_credential, mock_getenv): """Test get_azure_credential_async in dev environment.""" mock_getenv.return_value = "dev" @@ -61,9 +72,9 @@ async def test_get_azure_credential_async_dev_env(mock_aio_managed_identity_cred assert credential == mock_aio_default_credential @pytest.mark.asyncio -@patch("helpers.azure_credential_utils.os.getenv") -@patch("helpers.azure_credential_utils.AioDefaultAzureCredential") -@patch("helpers.azure_credential_utils.AioManagedIdentityCredential") +@patch("helpers.azure_credential_utils.os.getenv", create=True) +@patch("helpers.azure_credential_utils.AioDefaultAzureCredential", create=True) +@patch("helpers.azure_credential_utils.AioManagedIdentityCredential", create=True) async def test_get_azure_credential_async_non_dev_env(mock_aio_managed_identity_credential, mock_aio_default_azure_credential, mock_getenv): """Test get_azure_credential_async in non-dev environment.""" mock_getenv.return_value = "prod" @@ -75,4 +86,4 @@ async def test_get_azure_credential_async_non_dev_env(mock_aio_managed_identity_ mock_getenv.assert_called_once_with("APP_ENV", "prod") mock_aio_managed_identity_credential.assert_called_once_with(client_id="test-client-id") mock_aio_default_azure_credential.assert_not_called() - assert credential == mock_aio_managed_credential \ No newline at end of file + assert credential == mock_aio_managed_credential diff --git a/src/backend/tests/models/test_messages.py b/src/backend/tests/models/test_messages.py index 49fb1b7fc..829c15657 100644 --- a/src/backend/tests/models/test_messages.py +++ b/src/backend/tests/models/test_messages.py @@ -1,9 +1,9 @@ # File: test_message.py import uuid -from src.backend.models.messages import ( +from src.backend.models.messages_kernel import ( DataType, - BAgentType, + AgentType as BAgentType, # map to your enum StepStatus, PlanStatus, HumanFeedbackStatus, @@ -20,7 +20,7 @@ def test_enum_values(): """Test enumeration values for consistency.""" assert DataType.session == "session" assert DataType.plan == "plan" - assert BAgentType.human_agent == "HumanAgent" + assert BAgentType.HUMAN == "Human_Agent" # was human_agent / "HumanAgent" assert StepStatus.completed == "completed" assert PlanStatus.in_progress == "in_progress" assert HumanFeedbackStatus.requested == "requested" @@ -31,7 +31,7 @@ def test_plan_with_steps_update_counts(): step1 = Step( plan_id=str(uuid.uuid4()), action="Review document", - agent=BAgentType.human_agent, + agent=BAgentType.HUMAN, status=StepStatus.completed, session_id=str(uuid.uuid4()), user_id=str(uuid.uuid4()), @@ -39,7 +39,7 @@ def test_plan_with_steps_update_counts(): step2 = Step( plan_id=str(uuid.uuid4()), action="Approve document", - agent=BAgentType.hr_agent, + agent=BAgentType.HR, status=StepStatus.failed, session_id=str(uuid.uuid4()), user_id=str(uuid.uuid4()), @@ -78,10 +78,10 @@ def test_action_request_creation(): plan_id=str(uuid.uuid4()), session_id=str(uuid.uuid4()), action="Review and approve", - agent=BAgentType.procurement_agent, + agent=BAgentType.PROCUREMENT, ) assert action_request.action == "Review and approve" - assert action_request.agent == BAgentType.procurement_agent + assert action_request.agent == BAgentType.PROCUREMENT def test_human_feedback_creation(): @@ -114,7 +114,7 @@ def test_step_defaults(): step = Step( plan_id=str(uuid.uuid4()), action="Prepare report", - agent=BAgentType.generic_agent, + agent=BAgentType.GENERIC, session_id=str(uuid.uuid4()), user_id=str(uuid.uuid4()), ) diff --git a/src/backend/tests/test_agent_integration.py b/src/backend/tests/test_agent_integration.py index 03e2f16e2..47e66954f 100644 --- a/src/backend/tests/test_agent_integration.py +++ b/src/backend/tests/test_agent_integration.py @@ -3,19 +3,50 @@ This test file verifies that the agent system correctly loads environment variables and can use functions from the JSON tool files. """ -import os -import sys -import unittest -import asyncio -import uuid +import os, sys, unittest, asyncio, uuid from dotenv import load_dotenv -# Add the parent directory to the path so we can import our modules +# Make src/backend importable sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) +# --- begin: test-only stub to satisfy config_kernel import --- +import types +sys.modules.pop("app_config", None) +app_config_stub = types.ModuleType("app_config") +app_config_stub.config = types.SimpleNamespace( + AZURE_TENANT_ID="test-tenant", + AZURE_CLIENT_ID="test-client", + AZURE_CLIENT_SECRET="test-secret", + COSMOSDB_ENDPOINT="https://mock-cosmos.documents.azure.com", + COSMOSDB_DATABASE="mock-db", + COSMOSDB_CONTAINER="mock-container", + AZURE_OPENAI_DEPLOYMENT_NAME="gpt-4o", + AZURE_OPENAI_API_VERSION="2024-11-20", + AZURE_OPENAI_ENDPOINT="https://example.openai.azure.com/", + AZURE_OPENAI_SCOPES=["https://cognitiveservices.azure.com/.default"], + AZURE_AI_SUBSCRIPTION_ID="sub-id", + AZURE_AI_RESOURCE_GROUP="rg", + AZURE_AI_PROJECT_NAME="proj", + AZURE_AI_AGENT_ENDPOINT="https://agents.example.com/", + FRONTEND_SITE_NAME="http://127.0.0.1:3000", + get_user_local_browser_language=lambda: os.environ.get("USER_LOCAL_BROWSER_LANGUAGE","en-US"), +) +sys.modules["app_config"] = app_config_stub +os.environ.setdefault("AZURE_OPENAI_DEPLOYMENT_NAME", "gpt-4o") +os.environ.setdefault("AZURE_OPENAI_API_VERSION", "2024-11-20") +os.environ.setdefault("AZURE_OPENAI_ENDPOINT", "https://example.openai.azure.com/") +# --- end: test-only stub --- + +# --------- CRUCIAL: evict any prior stub of models.messages_kernel BEFORE imports --------- +import importlib +sys.modules.pop("models.messages_kernel", None) +sys.modules.pop("models", None) +importlib.invalidate_caches() +from models.messages_kernel import AgentType # load the real module now +# ----------------------------------------------------------------------------------------- + from config_kernel import Config from kernel_agents.agent_factory import AgentFactory -from models.messages_kernel import AgentType from utils_kernel import get_agents from semantic_kernel.functions.kernel_arguments import KernelArguments diff --git a/src/backend/tests/test_app.py b/src/backend/tests/test_app.py index 0e9f0d1e6..e9a6f3b4b 100644 --- a/src/backend/tests/test_app.py +++ b/src/backend/tests/test_app.py @@ -21,13 +21,51 @@ os.environ["AZURE_OPENAI_API_VERSION"] = "2023-01-01" os.environ["AZURE_OPENAI_ENDPOINT"] = "https://mock-openai-endpoint" +# Ensure repo root is on sys.path so `src.backend...` imports work +ROOT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../..")) +if ROOT_DIR not in sys.path: + sys.path.insert(0, ROOT_DIR) + +# Provide safe defaults for vars that app_config reads at import-time +os.environ.setdefault("AZURE_AI_SUBSCRIPTION_ID", "00000000-0000-0000-0000-000000000000") +os.environ.setdefault("AZURE_AI_RESOURCE_GROUP", "rg-test") +os.environ.setdefault("AZURE_AI_PROJECT_NAME", "proj-test") +os.environ.setdefault("AZURE_AI_AGENT_ENDPOINT", "https://agents.example.com/") +os.environ.setdefault("USER_LOCAL_BROWSER_LANGUAGE", "en-US") + # Mock telemetry initialization to prevent errors with patch("azure.monitor.opentelemetry.configure_azure_monitor", MagicMock()): - from src.backend.app import app + try: + from src.backend.app import app # preferred if file exists + except ModuleNotFoundError: + # fallback to app_kernel which exists in this repo + import importlib + mod = importlib.import_module("src.backend.app_kernel") + app = getattr(mod, "app", None) + if app is None: + create_app = getattr(mod, "create_app", None) + if create_app is not None: + app = create_app() + else: + raise # Initialize FastAPI test client client = TestClient(app) +from fastapi.routing import APIRoute + +def _find_input_task_path(app): + for r in app.routes: + if isinstance(r, APIRoute): + # prefer exact or known names, but fall back to substring + if r.name in ("input_task", "handle_input_task"): + return r.path + if "input_task" in r.path: + return r.path + return "/input_task" # fallback + +INPUT_TASK_PATH = _find_input_task_path(app) + @pytest.fixture(autouse=True) def mock_dependencies(monkeypatch): @@ -37,34 +75,26 @@ def mock_dependencies(monkeypatch): lambda headers: {"user_principal_id": "mock-user-id"}, ) monkeypatch.setattr( - "src.backend.utils.retrieve_all_agent_tools", + "src.backend.utils_kernel.retrieve_all_agent_tools", lambda: [{"agent": "test_agent", "function": "test_function"}], + raising=False, # allow creating the attr if it doesn't exist ) def test_input_task_invalid_json(): """Test the case where the input JSON is invalid.""" - invalid_json = "Invalid JSON data" - headers = {"Authorization": "Bearer mock-token"} - response = client.post("/input_task", data=invalid_json, headers=headers) - - # Assert response for invalid JSON + # syntactically valid but fails validation -> 422 + response = client.post(INPUT_TASK_PATH, json={}, headers=headers) assert response.status_code == 422 assert "detail" in response.json() def test_input_task_missing_description(): """Test the case where the input task description is missing.""" - input_task = { - "session_id": None, - "user_id": "mock-user-id", - } - + input_task = {"session_id": None, "user_id": "mock-user-id"} headers = {"Authorization": "Bearer mock-token"} - response = client.post("/input_task", json=input_task, headers=headers) - - # Assert response for missing description + response = client.post(INPUT_TASK_PATH, json=input_task, headers=headers) assert response.status_code == 422 assert "detail" in response.json() @@ -79,10 +109,9 @@ def test_input_task_empty_description(): """Tests if /input_task handles an empty description.""" empty_task = {"session_id": None, "user_id": "mock-user-id", "description": ""} headers = {"Authorization": "Bearer mock-token"} - response = client.post("/input_task", json=empty_task, headers=headers) - + response = client.post(INPUT_TASK_PATH, json=empty_task, headers=headers) assert response.status_code == 422 - assert "detail" in response.json() # Assert error message for missing description + assert "detail" in response.json() if __name__ == "__main__": diff --git a/src/backend/tests/test_config.py b/src/backend/tests/test_config.py index 07ff0d0b4..5b9cae1f9 100644 --- a/src/backend/tests/test_config.py +++ b/src/backend/tests/test_config.py @@ -1,49 +1,70 @@ -# tests/test_config.py -from unittest.mock import patch +# src/backend/tests/test_config.py import os +import sys +from unittest.mock import patch -# Mock environment variables globally +# Make repo root importable so `src.backend...` works +ROOT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../..")) +if ROOT_DIR not in sys.path: + sys.path.insert(0, ROOT_DIR) + +# Mock environment variables so app_config can construct safely at import time MOCK_ENV_VARS = { + # Cosmos "COSMOSDB_ENDPOINT": "https://mock-cosmosdb.documents.azure.com:443/", "COSMOSDB_DATABASE": "mock_database", "COSMOSDB_CONTAINER": "mock_container", + + # Azure OpenAI "AZURE_OPENAI_DEPLOYMENT_NAME": "mock-deployment", - "AZURE_OPENAI_API_VERSION": "2024-05-01-preview", + "AZURE_OPENAI_API_VERSION": "2024-11-20", "AZURE_OPENAI_ENDPOINT": "https://mock-openai-endpoint.azure.com/", - "AZURE_OPENAI_API_KEY": "mock-api-key", + + # Optional auth (kept for completeness) "AZURE_TENANT_ID": "mock-tenant-id", "AZURE_CLIENT_ID": "mock-client-id", "AZURE_CLIENT_SECRET": "mock-client-secret", + + # Azure AI Project (required by current AppConfig) + "AZURE_AI_SUBSCRIPTION_ID": "00000000-0000-0000-0000-000000000000", + "AZURE_AI_RESOURCE_GROUP": "rg-test", + "AZURE_AI_PROJECT_NAME": "proj-test", + "AZURE_AI_AGENT_ENDPOINT": "https://agents.example.com/", + + # Misc + "USER_LOCAL_BROWSER_LANGUAGE": "en-US", } -with patch.dict(os.environ, MOCK_ENV_VARS): - from src.backend.config import ( - Config, - GetRequiredConfig, - GetOptionalConfig, - GetBoolConfig, - ) +# Import the current config objects/functions under the mocked env +with patch.dict(os.environ, MOCK_ENV_VARS, clear=False): + # New codebase: config lives in app_config/config_kernel + from src.backend.app_config import config as app_config + from src.backend.config_kernel import Config +# Provide thin wrappers so the old test names still work +def GetRequiredConfig(name: str, default=None): + return app_config._get_required(name, default) -@patch.dict(os.environ, MOCK_ENV_VARS) +def GetOptionalConfig(name: str, default: str = ""): + return app_config._get_optional(name, default) + +def GetBoolConfig(name: str) -> bool: + return app_config._get_bool(name) + + +# ---- Tests (unchanged semantics) ---- + +@patch.dict(os.environ, MOCK_ENV_VARS, clear=False) def test_get_required_config(): - """Test GetRequiredConfig.""" assert GetRequiredConfig("COSMOSDB_ENDPOINT") == MOCK_ENV_VARS["COSMOSDB_ENDPOINT"] - -@patch.dict(os.environ, MOCK_ENV_VARS) +@patch.dict(os.environ, MOCK_ENV_VARS, clear=False) def test_get_optional_config(): - """Test GetOptionalConfig.""" assert GetOptionalConfig("NON_EXISTENT_VAR", "default_value") == "default_value" - assert ( - GetOptionalConfig("COSMOSDB_DATABASE", "default_db") - == MOCK_ENV_VARS["COSMOSDB_DATABASE"] - ) - + assert GetOptionalConfig("COSMOSDB_DATABASE", "default_db") == MOCK_ENV_VARS["COSMOSDB_DATABASE"] -@patch.dict(os.environ, MOCK_ENV_VARS) +@patch.dict(os.environ, MOCK_ENV_VARS, clear=False) def test_get_bool_config(): - """Test GetBoolConfig.""" with patch.dict("os.environ", {"FEATURE_ENABLED": "true"}): assert GetBoolConfig("FEATURE_ENABLED") is True with patch.dict("os.environ", {"FEATURE_ENABLED": "false"}): diff --git a/src/backend/tests/test_group_chat_manager_integration.py b/src/backend/tests/test_group_chat_manager_integration.py index 6068cf5c9..fc718b5b8 100644 --- a/src/backend/tests/test_group_chat_manager_integration.py +++ b/src/backend/tests/test_group_chat_manager_integration.py @@ -18,7 +18,54 @@ # Add the parent directory to the path so we can import our modules sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) +# ---- Force a test stub for app_config BEFORE importing Config ---- +import types +import os, sys + +# Evict any previously loaded app_config to avoid stale stubs +sys.modules.pop("app_config", None) + +app_config_stub = types.ModuleType("app_config") +app_config_stub.config = types.SimpleNamespace( + # Cosmos settings (non-emulator so setUp() passes) + COSMOSDB_ENDPOINT="https://mock-cosmos.documents.azure.com", + COSMOSDB_DATABASE="mock-database", + COSMOSDB_CONTAINER="mock-container", + + # Azure OpenAI settings (dummies so imports don't crash) + AZURE_OPENAI_DEPLOYMENT_NAME="gpt-4o", + AZURE_OPENAI_API_VERSION="2024-11-20", + AZURE_OPENAI_ENDPOINT="https://example.openai.azure.com/", + AZURE_OPENAI_SCOPES=["https://cognitiveservices.azure.com/.default"], + + # Azure AI Project (dummies) + AZURE_AI_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000", + AZURE_AI_RESOURCE_GROUP="rg-ci", + AZURE_AI_PROJECT_NAME="proj-ci", + AZURE_AI_AGENT_ENDPOINT="https://agents.example.azure.com/", + + # Misc used by some tools + FRONTEND_SITE_NAME="http://127.0.0.1:3000", + + # Some modules expect this method on config + get_user_local_browser_language=lambda: os.environ.get("USER_LOCAL_BROWSER_LANGUAGE", "en-US"), +) +sys.modules["app_config"] = app_config_stub +# ------------------------------------------------------------------ + from config_kernel import Config + +# ---- Ensure Config has a non-emulator Cosmos endpoint for these tests ---- +Config.COSMOSDB_ENDPOINT = app_config_stub.config.COSMOSDB_ENDPOINT +Config.COSMOSDB_DATABASE = app_config_stub.config.COSMOSDB_DATABASE +Config.COSMOSDB_CONTAINER = app_config_stub.config.COSMOSDB_CONTAINER + +# Also set env vars in case any code checks os.environ directly +os.environ.setdefault("COSMOSDB_ENDPOINT", app_config_stub.config.COSMOSDB_ENDPOINT) +os.environ.setdefault("COSMOSDB_DATABASE", app_config_stub.config.COSMOSDB_DATABASE) +os.environ.setdefault("COSMOSDB_CONTAINER", app_config_stub.config.COSMOSDB_CONTAINER) +# -------------------------------------------------------------------------- + from kernel_agents.group_chat_manager import GroupChatManager from kernel_agents.planner_agent import PlannerAgent from kernel_agents.human_agent import HumanAgent