|
1 | | -# Azure Functions API |
| 1 | +<div align="center"> |
2 | 2 |
|
3 | | -This project uses [Azure Functions](https://learn.microsoft.com/azure/azure-functions/functions-overview?pivots=programming-language-javascript) as a serverless API, and [LangChain.js](https://js.langchain.com/) to implement the AI capabilities. |
| 3 | +# Agent API (Azure Functions + LangChain.js) |
4 | 4 |
|
5 | | -## Available Scripts |
| 5 | +[](https://codespaces.new/Azure-Samples/mcp-agent-langchainjs?hide_repo_select=true&ref=main&quickstart=true) |
| 6 | + |
| 7 | +[](https://www.typescriptlang.org) |
| 8 | +[](https://js.langchain.com) |
6 | 9 |
|
7 | | -In the project directory, you can run: |
| 10 | +[Overview](#overview) • [API Endpoints](#api-endpoints) • [Development](#development) |
8 | 11 |
|
9 | | -### `npm start` |
| 12 | +</div> |
10 | 13 |
|
11 | | -This command will start the API in dev mode, and you will be able to access it through the URL `http://localhost:7071/api/`. |
| 14 | +## Overview |
12 | 15 |
|
13 | | -You can use the `api.http` file to test the API using the [REST Client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) extension for Visual Studio Code. |
| 16 | +The Agent API hosts the LangChain.js-powered burger ordering agent. It:
14 | 17 |
|
15 | | -### `npm run build` |
| 18 | +- Streams chat completions with intermediate tool + LLM steps |
| 19 | +- Connects to the Burger MCP server (Streamable HTTP) to invoke tools |
| 20 | +- Persists chat history + session titles in Azure Cosmos DB for NoSQL |
| 21 | +- Derives user identity from Azure Static Web Apps authentication |
| 22 | +- Emits OpenTelemetry traces (to Azure Monitor if configured, else local OTLP exporter) |
16 | 23 |
|
17 | | -To build the API for production to the `dist` folder. |
| 24 | +<div align="center"> |
| 25 | + <img src="../../docs/images/agent-architecture.drawio.png" alt="Architecture" /> |
| 26 | +</div> |
| 27 | + |
| 28 | +## API Endpoints |
| 29 | + |
| 30 | +| Method | Path | Description | |
| 31 | +|--------|------|-------------| |
| 32 | +| GET | /api/me | Returns (and lazily creates) the internal hashed user id for the authenticated user | |
| 33 | +| GET | /api/chats | Lists all chat sessions for the current user | |
| 34 | +| GET | /api/chats/{sessionId} | Returns messages for a specific session | |
| 35 | +| DELETE | /api/chats/{sessionId} | Deletes a chat session and its messages | |
| 36 | +| POST | /api/chats/stream | Streams an agent response for provided messages; creates/updates session | |
| 37 | + |
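| | +For quick manual checks, the non-streaming endpoints can be called with plain `fetch`. A minimal sketch, assuming the local Functions dev server on port 7071 (the response shapes in the comments are illustrative, not part of the API contract):
| | +
| | +```ts
| | +// Minimal sketch: list the current user's sessions, then fetch one session's messages.
| | +// Assumes the local Functions dev server; response shapes are illustrative.
| | +const base = 'http://localhost:7071/api';
| | +
| | +const sessions = await fetch(`${base}/chats`).then((res) => res.json());
| | +console.log('sessions:', sessions);
| | +
| | +// Use an id returned by the previous call (assumed here to be exposed as `id`).
| | +const sessionId = sessions[0]?.id;
| | +if (sessionId) {
| | +  const messages = await fetch(`${base}/chats/${sessionId}`).then((res) => res.json());
| | +  console.log('messages:', messages);
| | +}
| | +```
| | +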
| 38 | +### Streaming format |
| 39 | + |
| 40 | +`POST /api/chats/stream` returns `application/x-ndjson`. Each line is a JSON object shaped like: |
| 41 | + |
| 42 | +```jsonc |
| 43 | +{ |
| 44 | + "delta": { |
| 45 | + "content": "<partial text>", |
| 46 | + "role": "assistant", |
| 47 | + "context": { "currentStep": { ... } } |
| 48 | + }, |
| 49 | + "context": { "sessionId": "<uuid>" } |
| 50 | +} |
| 51 | +``` |
| 52 | + |
| 53 | +Tool and LLM steps surface in `context.currentStep` and `context.intermediateSteps`, enabling progressive UI rendering.
| 54 | + |
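| | +A minimal sketch of consuming this stream from TypeScript, assuming Node 18+ (the request body shape is illustrative; see `api.http` for the exact contract):
| | +
| | +```ts
| | +// Minimal sketch: stream an agent response and print text deltas as they arrive.
| | +// The request body shape is illustrative; check api.http for the exact contract.
| | +const response = await fetch('http://localhost:7071/api/chats/stream', {
| | +  method: 'POST',
| | +  headers: { 'Content-Type': 'application/json' },
| | +  body: JSON.stringify({ messages: [{ role: 'user', content: 'I want a cheeseburger' }] }),
| | +});
| | +
| | +const reader = response.body!.getReader();
| | +const decoder = new TextDecoder();
| | +let buffer = '';
| | +
| | +while (true) {
| | +  const { done, value } = await reader.read();
| | +  if (done) break;
| | +  buffer += decoder.decode(value, { stream: true });
| | +
| | +  // NDJSON: one JSON object per line; keep any trailing partial line in the buffer.
| | +  const lines = buffer.split('\n');
| | +  buffer = lines.pop() ?? '';
| | +  for (const line of lines.filter(Boolean)) {
| | +    const chunk = JSON.parse(line);
| | +    process.stdout.write(chunk.delta?.content ?? '');
| | +  }
| | +}
| | +```
| | +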
| 55 | +## Development |
| 56 | + |
| 57 | +### Getting started |
| 58 | + |
| 59 | +Follow the instructions [here](../../README.md#getting-started) to set up the development environment for the entire Burger MCP Agents project.
| 60 | + |
| 61 | +### Run the application |
| 62 | + |
| 63 | +Use the following command to run the application locally: |
| 64 | + |
| 65 | +```bash |
| 66 | +npm start |
| 67 | +``` |
| 68 | + |
| 69 | +This command starts the Azure Functions application locally. You can test the endpoints by opening the `api.http` file and clicking **Send Request** above each request, using the [REST Client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) extension for Visual Studio Code.
| 70 | + |
| 71 | +The Agent API also needs the Burger MCP server (and the Burger API, if you're running everything locally) to be running. You can start all the services at once by running `npm start` from the root of the project.
| 72 | + |
| 73 | +> [!NOTE] |
| 74 | +> If you have not deployed the Azure resources, the API falls back to in-memory data, so you can test it without deploying anything to Azure.
| 75 | +
|
| 76 | +### Available Scripts |
| 77 | + |
| 78 | +| Script | Description | |
| 79 | +|--------|-------------| |
| 80 | +| `npm start` | Start the development server with hot reload | |
| 81 | +| `npm run build` | Build the TypeScript source | |
| 82 | +| `npm run clean` | Clean build artifacts | |
| 83 | +| `npm run update:local-settings` | Create or update the `local.settings.json` file needed by the Functions runtime |
| 84 | + |
| 85 | +## Configuration |
| 86 | + |
| 87 | +The application uses environment variables for configuration: |
| 88 | + |
| 89 | +| Variable | Required | Purpose | Default / Fallback | |
| 90 | +|----------|----------|---------|--------------------| |
| 91 | +| `AZURE_OPENAI_API_ENDPOINT` | Yes | Azure OpenAI endpoint used for chat completions | – | |
| 92 | +| `AZURE_OPENAI_MODEL` | No | Model name passed to LangChain.js | `gpt-5-mini` | |
| 93 | +| `BURGER_MCP_URL` | Yes* | Streamable HTTP endpoint of Burger MCP server | `http://localhost:3000/mcp` | |
| 94 | +| `AZURE_COSMOSDB_NOSQL_ENDPOINT` | No | Enables persistent chat + session titles | In‑memory fallback | |
| 95 | +| `APPLICATIONINSIGHTS_CONNECTION_STRING` | No | Sends traces to Azure Monitor | Local OTLP exporter | |
| 96 | + |
| 97 | +> <sup>*</sup>The code defaults `BURGER_MCP_URL` to the local development URL, so production deployments must set it explicitly.
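| | +
| | +`npm run update:local-settings` generates `local.settings.json` for you, but for reference, a minimal hand-written version might look like this (a sketch with placeholder values, not the generated file):
| | +
| | +```jsonc
| | +{
| | +  "IsEncrypted": false,
| | +  "Values": {
| | +    "FUNCTIONS_WORKER_RUNTIME": "node",
| | +    // Placeholder values; replace with your own resources.
| | +    "AZURE_OPENAI_API_ENDPOINT": "https://<your-openai-resource>.openai.azure.com",
| | +    "AZURE_OPENAI_MODEL": "gpt-5-mini",
| | +    "BURGER_MCP_URL": "http://localhost:3000/mcp",
| | +    // Optional: omit these to use the in-memory fallback and the local OTLP exporter.
| | +    "AZURE_COSMOSDB_NOSQL_ENDPOINT": "https://<your-cosmos-account>.documents.azure.com",
| | +    "APPLICATIONINSIGHTS_CONNECTION_STRING": "<connection-string>"
| | +  }
| | +}
| | +```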
| 98 | +
|
| 99 | +### Local OpenTelemetry traces |
| 100 | + |
| 101 | +When running locally, spans are exported to `http://localhost:4318/v1/traces`, so you can capture traces by running a local OpenTelemetry collector.
| 102 | + |
| 103 | +For example, you can use the [AI Toolkit VSCode extension](https://marketplace.visualstudio.com/items?itemName=ms-windows-ai-studio.windows-ai-studio) which includes a local OpenTelemetry collector. |
| 104 | + |
| 105 | +After installing the extension, open the **AI Toolkit** panel in the sidebar, go to the **Tracing** tool under **Agent and workflow tools**, and select **Start Collector**. When calling the Agent API, you should see traces appear in the panel. You can then inspect each trace in detail: the one named `LangGraph` contains the full sequence of LLM calls and tool invocations, showing how the agent arrived at its final response.
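| | +
| | +If you want to verify the collector without going through the agent, a standalone smoke test can send a span to the same endpoint. A minimal sketch, assuming a recent OpenTelemetry JS SDK with the `@opentelemetry/sdk-trace-node`, `@opentelemetry/sdk-trace-base`, and `@opentelemetry/exporter-trace-otlp-http` packages (this is not the app's actual telemetry wiring):
| | +
| | +```ts
| | +// Minimal smoke test: emit one span to the local OTLP collector.
| | +// Not the app's actual wiring; just a quick check that the collector is listening.
| | +import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
| | +import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
| | +import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
| | +
| | +const exporter = new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' });
| | +const provider = new NodeTracerProvider({
| | +  spanProcessors: [new BatchSpanProcessor(exporter)],
| | +});
| | +provider.register();
| | +
| | +const span = provider.getTracer('smoke-test').startSpan('hello-collector');
| | +span.end();
| | +
| | +// Flush pending spans before the process exits.
| | +await provider.shutdown();
| | +```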