feat: Use custom tool nodes for auth handling (#551)
This PR updates the existing workflow to replace the prebuilt tool node
with a new custom tool node. This new node is designed to intelligently
handle tool auth by reading auth headers from the `RunnableConfig`
provided by LangGraph.
The custom node inspects the auth requirements of the underlying core
tool within the `ToolboxTool`. If the tool requires authentication, the
node dynamically creates an authenticated copy of the tool by attaching
the necessary auth token getters using the `add_auth_token_getter` API.
This authenticated tool instance is then used for the call and
subsequently discarded. This same auth handling logic has also been
applied to the node responsible for ticket insertion.
> [!NOTE]
> The functionality introduced in these custom nodes will be abstracted
into the `ToolboxTool` itself in an upcoming release of
`toolbox-langchain`
([#291](googleapis/mcp-toolbox-sdk-python#291)).
This will simplify the workflow in the future by handling authentication
directly within the tool.
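The flow described above can be sketched with stand-in classes. `SimpleTool`, `requires_auth`, and `with_token_getter` below are illustrative stand-ins, not the real `toolbox-langchain` API; only the overall shape (inspect the tool's auth needs, attach token getters to a copy, call the copy, discard it) mirrors the PR description.

```python
from dataclasses import dataclass, field, replace
from typing import Callable, Dict


@dataclass(frozen=True)
class SimpleTool:
    """Stand-in for a ToolboxTool-like object (illustrative, not the real API)."""
    name: str
    requires_auth: bool = False
    token_getters: Dict[str, Callable[[], str]] = field(default_factory=dict)

    def with_token_getter(self, service: str, getter: Callable[[], str]) -> "SimpleTool":
        # Return an authenticated *copy*; the original tool is left untouched,
        # mirroring how the PR attaches getters via the add_auth_token_getter API.
        return replace(self, token_getters={**self.token_getters, service: getter})

    def invoke(self, query: str) -> str:
        if self.requires_auth and not self.token_getters:
            raise PermissionError(f"{self.name} requires authentication")
        return f"{self.name} handled {query!r}"


def call_tool(tool: SimpleTool, query: str, auth_headers: Dict[str, str]) -> str:
    """Custom-node flow: authenticate a throwaway copy only when required."""
    if tool.requires_auth:
        authed = tool
        for service, token in auth_headers.items():
            # Capture each auth header value in a token getter for this call only.
            authed = authed.with_token_getter(service, lambda t=token: t)
        return authed.invoke(query)  # the authenticated copy is then discarded
    return tool.invoke(query)
```

The point of the copy-then-discard design is that the shared tool instance never accumulates per-request credentials, so concurrent runs with different users cannot leak tokens to each other.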
1. Open a local connection to your database by starting the [AlloyDB Auth
   Proxy][alloydb-proxy], the [Cloud SQL Auth Proxy][cloudsql-proxy], or an
   [SSH tunnel][tunnel] to your AlloyDB instance (for non-cloud Postgres such
   as AlloyDB Omni).

### Setup Toolbox

To setup Toolbox (locally or on Cloud Run), follow [these
| vertexai-function-calling | VertexAI Function Calling orchestrator. |

```bash
export ORCHESTRATION_TYPE=<orchestration-type>
```

1. To run the app using uvicorn, execute the following:

   ```bash
   python run_app.py
   ```

1. View the app in your browser at http://localhost:8081.

> [!TIP]
> For hot-reloading during development, use the `--reload` flag:
> ```bash
> python run_app.py --reload
> ```

## Testing

### Run tests locally

The unit tests for this application mock the API calls to the MCP Toolbox, so
you do not need a live database or a running Toolbox instance to run them.

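As an illustration of that mocking approach, here is a self-contained sketch; `search_airports` and `invoke_tool` are hypothetical names standing in for this repo's actual functions and client API.

```python
from unittest import mock


# Hypothetical service function under test; in the real repo the equivalent
# code would call the MCP Toolbox through its client.
def search_airports(client, query: str) -> list[str]:
    return client.invoke_tool("search-airports", {"query": query})


def test_search_airports_without_live_toolbox():
    # A Mock stands in for the Toolbox client, so no server or DB is needed.
    fake_client = mock.Mock()
    fake_client.invoke_tool.return_value = ["SFO", "SJC"]

    assert search_airports(fake_client, "bay area") == ["SFO", "SJC"]
    fake_client.invoke_tool.assert_called_once_with(
        "search-airports", {"query": "bay area"}
    )


test_search_airports_without_live_toolbox()
```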
1. Create secrets `db_user` and `db_pass` with your database user and database password, as defined [here](https://googleapis.github.io/genai-toolbox/resources/sources/).

1. Allow Cloud Build to access the secrets.

1. Add the role Vertex AI User (`roles/aiplatform.user`) to the Cloud Build
   service account. This is needed to run the database init script.
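If helpful, that role can be granted from the CLI. The following is a sketch that assumes the default Cloud Build service account naming; substitute your own project ID.

```shell
# Illustrative only: grant Vertex AI User to the (assumed default)
# Cloud Build service account of the project.
PROJECT_ID=my-project
PROJECT_NUMBER=$(gcloud projects describe "$PROJECT_ID" --format='value(projectNumber)')
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
  --role="roles/aiplatform.user"
```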

##### Setup for Agent App

Add the roles `Cloud Run Admin`, `Service Account User`, `Log Writer`, and
`Artifact Registry Admin` to the demo service's Cloud Build trigger service
account.
Note: Make sure to set up the secrets described in
[Setup for retrieval service](#setup-for-retrieval-service).

#### Trigger

To run Cloud Build tests on GitHub from external contributors (e.g.,
RenovateBot), comment `/gcbrun` on the PR.

#### Code Coverage

Please make sure your code is fully tested. The Cloud Build integration tests
are run with the `pytest-cov` code coverage plugin. They fail for PRs with
code coverage below the threshold specified in
`retrieval_service/coverage/.<test>-coveragerc`. If your file is inside the
main module and should be ignored by the code coverage check, add it to the
`omit` section of `retrieval_service/coverage/.<test>-coveragerc`.
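For reference, a minimal coveragerc fragment with an `omit` section might look like the following; the threshold value and file path are illustrative, not the repository's actual settings.

```ini
# Illustrative .<test>-coveragerc fragment.
[report]
fail_under = 90
omit =
    app/generated_models.py
```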

You can find the code coverage report in any Cloud Build integration test log.
Here is a breakdown of the report:

- `Stmts`: lines of executable code (statements).
- `Miss`: number of statements not covered by tests.
- `Branch`: branches of executable code (e.g., an if-else clause may count as
  1 statement but 2 branches; test both conditions to have both branches
  covered).
- `BrPart`: number of branches only partially covered by tests.
- `Cover`: the overall coverage percentage of each file.
- `Missing`: the specific lines that are not covered by tests.

## LLM Evaluation

[Optional] Export a detailed metric table with row-specific scores by setting
the `EXPORT_CSV` environment variable:
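A minimal example, assuming the evaluation script only needs the variable set to a truthy value (the exact value it checks may differ):

```shell
# Assumed usage: enable the per-row CSV export before running the evaluation.
export EXPORT_CSV=true
```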