Proposing a concrete integration between Tapes and the Vercel AI SDK, with the goal of getting alignment on the recommended path.
Goal
Create a minimal, working integration where AI SDK requests go through a Tapes proxy via a custom fetch, so we can record requests/responses without changing the SDK itself. From there, we can decide what (if anything) should be generalized or documented.
@gr2m pinging you for awareness and feedback on whether this looks like a realistic plan.
High‑level flow:
AI SDK → custom fetch → Tapes proxy → upstream provider (OpenAI/Anthropic/etc.)
Tapes records all requests/responses for observability/debugging, currently into local SQLite (with options to swap storage).
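
To make the flow concrete, here is a minimal sketch of what the custom `fetch` could look like. The proxy URL/port and the `x-tapes-upstream` header are placeholders for illustration, not an existing Tapes API:

```ts
// Sketch only: reroute provider requests to a locally running Tapes proxy.
// TAPES_PROXY_URL and the x-tapes-upstream header are hypothetical.
const TAPES_PROXY_URL = "http://localhost:4318";

const tapesFetch: typeof fetch = (input, init) => {
  // Assumes the SDK calls fetch(url, init); a Request object passed as input
  // would need its body/headers copied over as well.
  const upstream = new URL(input instanceof Request ? input.url : input);

  // Keep the provider's path and query, swap in the proxy's origin, and tell
  // the proxy where to forward via a header.
  const proxied = new URL(upstream.pathname + upstream.search, TAPES_PROXY_URL);
  const headers = new Headers(init?.headers);
  headers.set("x-tapes-upstream", upstream.origin);

  return fetch(proxied, { ...init, headers });
};
```

Since AI SDK providers accept a `fetch` option, a wrapper like this can be scoped to a single client rather than patched globally.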
Proposed scope
For the first iteration:
- Use the AI SDK with one or more existing providers (e.g. OpenAI / Anthropic).
- Override `fetch` so that:
  - All relevant AI SDK traffic is routed to a locally running Tapes proxy.
  - The proxy forwards to the actual provider and returns the response transparently.
  - Tapes records requests/responses (and possibly additional metadata) in SQLite.
- Keep storage concerns on the Tapes side:
  - SQLite by default, with potential to add a hosted provider for the data.
  - Optionally Turso/Postgres/etc. later, but not required for the initial integration.
- Defer Vercel AI Gateway integration to a follow‑up unless there’s a strong reason to include it now.
Concrete steps
- Minimal example
  - Create a small example (Node/Next.js), roughly sketched after this list, that:
    - Uses the AI SDK with a standard model.
    - Configures a custom `fetch` implementation.
    - Points that `fetch` at a running Tapes proxy.
    - Verifies that:
      - Responses are unchanged from the perspective of AI SDK consumers.
      - Requests/responses are recorded correctly by Tapes.
- Custom `fetch` integration
  - Document the recommended way (in AI SDK) to:
    - Override `fetch` globally or per‑client.
    - Handle streaming responses and retries correctly.
  - Tapes will (see the proxy sketch after this list):
    - Implement the proxy endpoint that conforms to the provider APIs (OpenAI/Anthropic style).
    - Capture and store telemetry/metrics during the proxying step.
- Review + internal alignment
  - Once a minimal integration is working:
    - Share the example and configuration details for review.
    - Get feedback on:
      - Whether this is the idiomatic way to integrate with AI SDK.
      - Any rough edges we should address (e.g. streaming, error handling, multi‑turn context).
- Docs / follow‑ups
  - Turn the working example into a short “Using Tapes with Vercel AI SDK” guide.
  - Optionally explore:
    - Integration patterns with the Vercel AI Gateway (Tapes proxy → Gateway → providers).
    - How this relates to existing or planned observability APIs (Ayush’s work).
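
For the “Minimal example” step above, a rough sketch of the Node side, assuming the provider-level `fetch` option and reusing the `tapesFetch` wrapper sketched earlier (the model id and import path are illustrative):

```ts
import { generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { tapesFetch } from "./tapes-fetch"; // the wrapper sketched above (hypothetical path)

// Per-client override: only this provider instance is routed through Tapes.
const openai = createOpenAI({ fetch: tapesFetch });

const { text } = await generateText({
  model: openai("gpt-4o-mini"), // illustrative model id
  prompt: "Say hello through the Tapes proxy.",
});

// If the proxy is transparent, this output should match a direct provider call,
// and the request/response pair should show up in Tapes' storage.
console.log(text);
```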
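
For the Tapes side of the “Custom `fetch` integration” step, an illustrative (non-streaming) shape of the proxy endpoint: forward upstream, record both sides, and return the response unchanged. This is not the actual Tapes implementation; `record` stands in for the SQLite write:

```ts
import { createServer } from "node:http";

// Placeholder for Tapes' SQLite write; the record shape is illustrative.
function record(entry: { url: string; request: string; response: string }) {
  console.log("recorded", entry.url);
}

const server = createServer(async (req, res) => {
  // The custom fetch tells us where to forward via the x-tapes-upstream header.
  const upstreamOrigin = String(req.headers["x-tapes-upstream"] ?? "");
  const url = new URL(req.url ?? "/", upstreamOrigin);

  // Buffer the incoming body so it can be both forwarded and recorded.
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);
  const requestBody = Buffer.concat(chunks);

  // Forward only the headers the provider needs; a real proxy would be more
  // careful here (content-length, user-agent, etc.).
  const headers: Record<string, string> = {};
  for (const name of ["authorization", "content-type"] as const) {
    const value = req.headers[name];
    if (typeof value === "string") headers[name] = value;
  }

  const upstream = await fetch(url, {
    method: req.method,
    headers,
    body: requestBody.length > 0 ? requestBody : undefined,
  });

  // Buffering the whole response keeps the sketch simple, but it breaks
  // streaming; see the streaming note under "Questions".
  const responseBody = Buffer.from(await upstream.arrayBuffer());

  record({
    url: url.toString(),
    request: requestBody.toString("utf8"),
    response: responseBody.toString("utf8"),
  });

  res.writeHead(upstream.status, {
    "content-type": upstream.headers.get("content-type") ?? "application/json",
  });
  res.end(responseBody);
});

server.listen(4318);
```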
Questions
- Is custom `fetch` the recommended integration point for this use case today?
- Are there any existing patterns or constraints in AI SDK (e.g. around observability/telemetry hooks) that we should align with from the start?
- Any gotchas around:
  - Streaming responses (one possible shape is sketched below)
  - Retries/failover
  - Multi‑turn conversations / request IDs
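
On the streaming gotcha specifically, one option on the proxy side is to tee the upstream body: one branch streams straight back to the caller while the other is drained into the recording. A sketch, assuming fetch-style `Response` objects on both sides and with `recordStream` as a hypothetical helper rather than anything Tapes exposes today:

```ts
// Sketch: pass a streaming upstream response through while recording it.
function forwardStreaming(upstream: Response): Response {
  if (!upstream.body) return upstream;

  const [toClient, toRecorder] = upstream.body.tee();

  // Drain the recording branch in the background; don't block the client.
  void recordStream(toRecorder);

  return new Response(toClient, {
    status: upstream.status,
    headers: upstream.headers,
  });
}

// Hypothetical recorder: collect the streamed chunks, then persist them
// (for Tapes, presumably into SQLite once the stream completes).
async function recordStream(stream: ReadableStream<Uint8Array>): Promise<void> {
  const reader = stream.getReader();
  const chunks: Uint8Array[] = [];
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
  }
  console.log("recorded stream:", chunks.length, "chunks");
}
```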