Intra-vendor Model Routing Support AI APIs #3353
Description
Problem
AI API designers need to be able to route incoming requests across models within a single vendor’s ecosystem.
With the API Manager 4.4.0 release, AI API support was introduced. It allows API consumers to specify, per request, which of the LLM vendor's models to consume. This leaves us with the following concerns:
- API designers have no control over which models are used, how frequently they are accessed, or the costs incurred.
- Risks such as model exhaustion and model misuse could lead to the API being throttled out.
- API designers cannot enforce a model routing strategy at design time that accounts for the use cases of the AI applications consuming the API.
Proposed Solution
Support intra-vendor model routing for AI APIs.
The proposed solution is to enforce the model routing strategy as a policy for the API request flow. Policies can be categorised as follows:
- Static routing techniques (rely only on the incoming request)
- Dynamic routing techniques (rely on metrics from previous invocations)
- Failover
- Custom routing strategy enforcement
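The static/dynamic distinction above can be illustrated with a minimal sketch. All names here (the functions, the model IDs, the latency metric) are illustrative assumptions, not part of the proposal or of APIM's policy interfaces:

```python
import hashlib

MODELS = ["model-a", "model-b"]

def static_route(consumer_key: str) -> str:
    """Static routing: decided from the incoming request alone.
    Hashing the consumer key pins each consumer to one model."""
    digest = int(hashlib.sha256(consumer_key.encode()).hexdigest(), 16)
    return MODELS[digest % len(MODELS)]

# Hypothetical metrics collected from previous invocations.
latency_ms = {"model-a": 120.0, "model-b": 95.0}

def dynamic_route() -> str:
    """Dynamic routing: pick the model with the lowest observed latency."""
    return min(latency_ms, key=latency_ms.get)
```

A static policy is cheap and deterministic; a dynamic policy adapts to runtime behaviour but needs a metrics feedback loop from past invocations.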
Targeting the APIM 4.5.0 release, we are shipping the following policies:
- Model Round-robin Policy
- Model Weighted Round-robin Policy
- Model Failover Policy
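A minimal sketch of how these three policies could select a model, under assumed class and function names (not the actual APIM policy implementations):

```python
import itertools

class RoundRobin:
    """Round-robin: cycle through the models in order, one per request."""
    def __init__(self, models):
        self._cycle = itertools.cycle(models)

    def next_model(self):
        return next(self._cycle)

class WeightedRoundRobin:
    """Weighted round-robin: each model is picked in proportion to its weight."""
    def __init__(self, weighted_models):  # e.g. [("model-a", 2), ("model-b", 1)]
        expanded = [m for m, w in weighted_models for _ in range(w)]
        self._cycle = itertools.cycle(expanded)

    def next_model(self):
        return next(self._cycle)

def failover(models, invoke):
    """Failover: try models in order; return the first successful response."""
    last_error = None
    for model in models:
        try:
            return invoke(model)
        except Exception as exc:  # a real policy would match specific errors
            last_error = exc
    raise last_error
```

For example, `WeightedRoundRobin([("model-a", 2), ("model-b", 1)])` would send two of every three requests to `model-a`, while `failover` only moves to the next model when an invocation raises.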
Task Breakdown
- Feature design and UI wireframes
- Admin REST API changes for AI Vendor model list maintenance
- Publisher REST API changes for multi-endpoint support
- Revamp the existing flow to handle AI APIs via a separate velocity template
- Onboard round-robin policy
- Onboard failover policy
- Admin UI changes to handle model list under the AI vendor
- Handle APICTL flows: import/export APIs with multiple endpoints
- Write unit tests
- Write integration tests
- Write CTL tests
- Write feature documentation
Version
4.5.0