Conversation

@aninibread aninibread commented Nov 1, 2025

Summary

  • Migrated the docs-vectorize MCP server backend from Vectorize to AI Search
  • Renamed package from docs-vectorize to docs-ai-search to reflect the new backend
  • Maintained full backward compatibility: existing integrations continue to work unchanged
  • DO NOT MERGE UNTIL EVALS COMPLETE: Search quality evaluation pending

What Changed

Backend Migration

  • Before: Used Vectorize database with manual embeddings and chunking
  • After: Uses Cloudflare AI Search (AutoRAG) search() endpoint for contextual retrieval
  • Search quality evaluation pending

Implementation Details

  • Created new packages/mcp-common/src/tools/docs-ai-search.tools.ts with AI Search integration
  • Updated apps/docs-vectorize → apps/docs-ai-search with new tool imports
  • Uses env.AI.autorag("docs-mcp-rag").search({ query }) API instead of vectorize queries
  • Maintains the identical XML response format, with the same set of elements (including <title>) as the Vectorize version
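The handler change described above can be sketched as follows. This is a minimal illustration, not the actual contents of `docs-ai-search.tools.ts`: the AutoRAG response field names (`data`, `filename`, `content[].text`) and the XML element names other than `<title>` are assumptions that should be checked against the AI Search docs and the existing response format.

```typescript
// Shape assumed for the AutoRAG search() response; field names are
// hypothetical and should be verified against the AI Search documentation.
interface AutoRagChunk {
  filename: string;
  score: number;
  content: { type: string; text: string }[];
}

interface AutoRagSearchResponse {
  data: AutoRagChunk[];
}

// In the worker, the response would come from the call shown in the PR:
//   const response = await env.AI.autorag("docs-mcp-rag").search({ query });
// Here we only sketch formatting that response into the pre-existing XML
// shape. <result> and <text> are placeholder element names.
function formatResultsAsXml(response: AutoRagSearchResponse): string {
  return response.data
    .map((chunk) => {
      const text = chunk.content.map((c) => c.text).join("\n");
      return `<result>\n<title>${chunk.filename}</title>\n<text>${text}</text>\n</result>`;
    })
    .join("\n");
}
```

Keeping the XML formatting in one function like this is what lets the backend swap stay invisible to existing MCP clients: only the data source changes, not the serialized output.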

Backward Compatibility

  • ✅ Same tool interface: search_cloudflare_documentation works exactly as before
  • ✅ Same response format: XML structure unchanged for existing integrations
  • ✅ Same functionality: All existing MCP clients continue to work without modification

Documentation Updates

  • Updated README.md to reflect AI Search backend
  • Updated CHANGELOG.md with migration details
  • Changed example prompt from "AutoRAG" to "AI Search" terminology

Test Plan

  • Verify local development server starts without errors
  • Test search_cloudflare_documentation tool returns expected XML format

@aninibread aninibread changed the title Update the docs-vectorize to use AI Search as backend [Pending Evals] Update the docs-vectorize to use AI Search as backend Nov 1, 2025
mhart (Collaborator) commented Nov 2, 2025

I wonder if we should keep the existing vectorize one and just create a new ai search one (but still use it to replace the live docs MCP server). So remove the route from the vectorize one and put it on the new ai search one.

That way it's easy to roll back or switch over if needed. All you need to do is swap the route over (instead of making more complicated code changes).
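The route swap being suggested could look like the following wrangler config fragments. This is a sketch only: the hostname pattern and zone are placeholders, since the real route isn't shown in this thread.

```toml
# apps/docs-vectorize/wrangler.toml — keep the worker deployed, but with
# its route removed (commented out here):
# routes = [{ pattern = "docs.mcp.example.com/*", zone_name = "example.com" }]

# apps/docs-ai-search/wrangler.toml — the same route moves here:
routes = [{ pattern = "docs.mcp.example.com/*", zone_name = "example.com" }]
```

Rolling back is then a matter of moving the `routes` entry back to the Vectorize worker and redeploying both, with no code changes.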

mhart (Collaborator) commented Nov 2, 2025

Also: do the evals take into account latency? That's one aspect that we should be sure to verify. We don't want it to be orders of magnitude slower.
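A latency comparison could be added to the eval harness with a small timing wrapper like this. It is a generic sketch: the harness and backend call sites are stand-ins, not code from this repo.

```typescript
// Time an async call and return both its result and elapsed milliseconds.
// In the evals this would wrap both backends for the same query, e.g.:
//   const aiSearch = await timeCall(() =>
//     env.AI.autorag("docs-mcp-rag").search({ query }));
// and the ms values would be aggregated across the eval query set.
async function timeCall<T>(fn: () => Promise<T>): Promise<{ result: T; ms: number }> {
  const start = Date.now();
  const result = await fn();
  return { result, ms: Date.now() - start };
}
```

Comparing percentiles (not just means) across the two backends would surface the "orders of magnitude slower" case directly.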

