Replies: 1 comment
Hi, were you able to solve this issue?
Hi everyone,
I'm working with a large OpenAPI 3.0 spec (~300 paths) and using FastMCP.from_openapi to dynamically generate tools based on groups of paths. This is part of a hierarchical discovery pattern where the LLM can request to "activate" a capability group.
I'm running into a significant performance issue. When a tool is called that triggers the creation of a new group of tools from the spec, the server process maxes out its CPU and becomes unresponsive for 30+ seconds. A simple performance test isolating the FastMCP.from_openapi call confirms that this function is the bottleneck.
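A minimal version of such an isolation test might look like the sketch below. The spec-to-tools builder here is a hypothetical stand-in, since the real call site would be `FastMCP.from_openapi` on a subset of the spec; only the timing harness itself is the point.

```python
import time

# Hypothetical stand-in for the expensive conversion; in the real test this
# would be the FastMCP.from_openapi call on a group of paths from the spec.
def build_tools_from_spec(paths: list[str]) -> list[str]:
    return [f"tool_for_{p}" for p in paths]

def time_call(fn, *args) -> tuple[float, object]:
    # Measure a single call in isolation with a monotonic clock.
    start = time.perf_counter()
    result = fn(*args)
    return time.perf_counter() - start, result

elapsed, tools = time_call(build_tools_from_spec, ["/users", "/invoices"])
print(f"built {len(tools)} tools in {elapsed:.4f}s")
```

Running this once against the real builder (rather than averaging many warm calls) is what surfaces the per-request cost the LLM-triggered activation actually pays.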
My question is: Is this a known performance characteristic when generating a large number of tools (e.g., 50-100) dynamically during a request?
My current plan is to pre-generate all the tool groups at server startup and cache them, which I suspect is the right architectural pattern. I just wanted to confirm if this is the expected behavior or if there might be a more efficient way to handle dynamic tool loading from a large spec.
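A sketch of that startup pre-generation pattern, with a hypothetical stand-in for the spec-to-tools conversion and illustrative group names (neither is from the actual project):

```python
# Hypothetical stand-in for the expensive FastMCP.from_openapi call; in the
# real server this would parse the spec subset and build tool objects.
def build_tool_group(group_name: str, paths: tuple) -> dict:
    return {"group": group_name, "tools": [f"tool_for_{p}" for p in paths]}

# Capability groups extracted once from the full OpenAPI spec
# (group names and paths are illustrative).
GROUPS = {
    "billing": ("/invoices", "/payments"),
    "users": ("/users", "/users/{id}"),
}

# Pre-generate every group at startup so no request ever pays the
# spec-parsing cost.
TOOL_GROUP_CACHE = {
    name: build_tool_group(name, paths) for name, paths in GROUPS.items()
}

def activate_group(name: str) -> dict:
    # At request time, "activating" a group is a dictionary lookup,
    # not a spec parse.
    return TOOL_GROUP_CACHE[name]
```

The trade-off is a slower process start and higher resident memory in exchange for constant-time activation during a request, which seems like the right trade for a spec that does not change while the server is running.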
Thanks!