Summary
Add an AI-powered "Auto-configure" tab to the endpoint creation form that lets users paste anything — a curl command, Python code, a route definition, API documentation, or even a plain-text description — and automatically generates the request mapping, response mapping, and headers.
Problem / Opportunity
Configuring endpoint mappings is the highest-friction step in the onboarding flow. Users must manually write Jinja2 request templates and JSONPath response mappings, which requires understanding both their own API's schema and Rhesis's platform-managed variables (input, output, context, messages, conversation_id, etc.).
Most users already have a working example of their endpoint — a curl command, a test script, a route definition, or a Postman collection — but they have to mentally translate that into Rhesis mapping syntax.
Proposal
User Experience
Add a new "Auto-configure" tab to the endpoint creation form (alongside existing tabs: Basic Information, Request Settings, Response Settings, Test Connection). The tab contains:
- URL field (pre-filled if already entered in Basic Information tab)
- Auth token field (pre-filled if already entered)
- "Paste anything about your endpoint" — a large text area accepting any format:
- curl commands
- Python code (requests, Flask/FastAPI routes, httpx, etc.)
- JavaScript/TypeScript code (fetch, Express routes, etc.)
- OpenAPI/Swagger spec (YAML or JSON, full or partial)
- API documentation text
- Plain-text descriptions ("My endpoint accepts a JSON with 'query' and 'history' fields and returns {'answer': '...', 'sources': [...]}")
- "Auto-configure" button that triggers the AI pipeline
- Results panel showing the proposed mapping with explanations and an "Apply" button
AI Pipeline (Backend)
Step 1: Parse and Understand
Send the user's pasted content to the LLM with a structured prompt that extracts:
- Endpoint URL, HTTP method, content type (if not already provided)
- Request body schema: field names, types, which field carries the user input
- Request headers: authentication pattern, content type, custom headers
- Response body schema: field names, types, which field carries the model output
- Whether the endpoint appears to support multi-turn conversations (messages array, session/conversation IDs)
Output: a structured JSON with the inferred schema.
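A minimal sketch of this step, assuming a generic llm.generate(prompt) interface (the real service would obtain the model via get_user_generation_model()) and the InferredEndpointSchema model defined under Implementation Details:

```python
import json
from jinja2 import Environment, FileSystemLoader

def infer_schema(llm, user_input: str, url: str | None, has_auth_token: bool) -> "InferredEndpointSchema":
    """Step 1: ask the LLM to extract a structured endpoint schema from the pasted content."""
    # Template location is illustrative; the proposal puts it under app/templates/.
    env = Environment(loader=FileSystemLoader("templates"))
    prompt = env.get_template("endpoint_auto_configure.jinja2").render(
        user_input=user_input,
        url=url,
        has_auth_token=has_auth_token,
    )
    raw = llm.generate(prompt)  # hypothetical call; depends on the model object actually returned
    return InferredEndpointSchema.model_validate(json.loads(raw))
```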
Step 2: Generate Test Request
From the extracted schema, generate a minimal test request body with a simple test message (e.g., "Hello, this is a test message from Rhesis.").
If the user provided a URL and auth token, actually call the endpoint to:
- Validate the inferred request structure
- Capture the real response structure (which may differ from what was inferred from code alone)
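Roughly, and assuming the schema inferred in Step 1 (the FieldInfo.type attribute and Bearer-style auth are assumptions; the real call would reuse the existing RestEndpointInvoker rather than raw httpx):

```python
import httpx

TEST_MESSAGE = "Hello, this is a test message from Rhesis."

def build_test_body(schema: "InferredEndpointSchema") -> dict:
    """Fill each request field with a type-appropriate placeholder and put the test message in the input field."""
    placeholders = {"string": "test", "number": 0, "integer": 0, "boolean": False, "array": [], "object": {}}
    body = {name: placeholders.get(field.type, None) for name, field in schema.request_body_schema.items()}
    body[schema.input_field] = TEST_MESSAGE
    return body

async def probe_endpoint(schema: "InferredEndpointSchema", url: str, auth_token: str) -> httpx.Response:
    """Make one live call against the endpoint to validate the inferred request structure."""
    headers = {**schema.headers, "Authorization": f"Bearer {auth_token}"}  # auth pattern assumed
    async with httpx.AsyncClient(timeout=30.0) as client:
        return await client.request(schema.method, url, json=build_test_body(schema), headers=headers)
```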
Step 3: Self-Correct on Failure
If the test call fails:
- Parse the error response (many frameworks return structured validation errors with field names and types)
- Feed the error back to the LLM to correct the request structure
- Retry (up to 3 attempts)
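A sketch of the retry loop, reusing probe_endpoint() from Step 2; correct_schema() stands in for a second LLM call that is given the failing request and the error body:

```python
MAX_ATTEMPTS = 3

async def probe_with_self_correction(llm, schema, url: str, auth_token: str):
    """Probe the endpoint; on failure, feed the error back to the LLM and retry (up to 3 attempts)."""
    for _ in range(MAX_ATTEMPTS):
        response = await probe_endpoint(schema, url, auth_token)
        if response.is_success:
            return schema, response.json()  # confirmed schema plus the real response structure
        # Many frameworks return structured validation errors (e.g. FastAPI's 422 detail list);
        # hand them back to the LLM so it can fix field names and types.
        schema = correct_schema(llm, schema, status=response.status_code, error_body=response.text)
    return schema, None  # give up; mappings will be generated from the last inferred schema
```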
Step 4: Generate Mappings
From the confirmed request/response structures, generate:
- request_mapping: Jinja2 template mapping Rhesis variables to endpoint fields
- response_mapping: JSONPath/Jinja2 expressions mapping endpoint response fields to Rhesis variables
- request_headers: JSON headers including a {{ auth_token }} placeholder if authentication was detected
- Detection flags: whether the endpoint is stateless multi-turn ({{ messages }}), stateful (conversation ID tracking), or single-turn
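For illustration only (the exact syntax is governed by the existing templating renderer and response mapper), the generated artifacts for an endpoint that accepts "query" and "history" fields and returns "answer" and "sources" might look like:

```python
# Hypothetical output for a stateless multi-turn endpoint; field names and syntax are illustrative.
generated = {
    "request_mapping": {
        "query": "{{ input }}",       # Rhesis input -> the endpoint's user-message field
        "history": "{{ messages }}",  # full message history for stateless multi-turn
    },
    "response_mapping": {
        "output": "$.answer",         # JSONPath into the endpoint's response body
        "context": "$.sources",
    },
    "request_headers": {
        "Content-Type": "application/json",
        "Authorization": "Bearer {{ auth_token }}",
    },
    "conversation_mode": "stateless_multi_turn",
}
```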
Step 5: Present Results
Return to the frontend:
- Generated mappings (pre-filled into form fields)
- Confidence level and reasoning for each mapping decision
- The actual test response (if a live call was made)
- Suggestions or warnings (e.g., "This endpoint appears to support streaming but Rhesis will use non-streaming mode")
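As a rough sketch, the payload returned to the frontend (mirroring the AutoConfigureResult schema below) could look like:

```python
# Illustrative response body for POST /endpoints/auto-configure.
example_result = {
    "request_mapping": {"query": "{{ input }}", "history": "{{ messages }}"},
    "response_mapping": {"output": "$.answer"},
    "request_headers": {"Authorization": "Bearer {{ auth_token }}"},
    "conversation_mode": "stateless_multi_turn",
    "confidence": 0.85,
    "reasoning": "'query' carries the user message; 'answer' carries the model output.",
    "test_response": {"answer": "Hello! How can I help?", "sources": []},
    "warnings": ["This endpoint appears to support streaming but Rhesis will use non-streaming mode."],
}
```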
Implementation Details
Backend
New service: apps/backend/src/rhesis/backend/app/services/endpoint/auto_configure.py
class EndpointAutoConfigureService:
    """AI-powered endpoint auto-configuration."""

    async def auto_configure(
        self,
        user_input: str,         # The "paste anything" content
        url: str | None,         # Optional pre-filled URL
        auth_token: str | None,  # Optional pre-filled auth token
        db: Session,
        user: User,
    ) -> AutoConfigureResult:
        """Parse user input, optionally probe the endpoint, generate mappings."""
        ...

LLM integration: Use the existing get_user_generation_model() from apps/backend/src/rhesis/backend/app/utils/user_model_utils.py to get the user's configured LLM. Fall back to the platform default model if none is configured.
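For orientation, a rough sketch of how auto_configure() could tie the pipeline steps together (helper names come from the sketches in the Proposal section; the get_user_generation_model() call signature and the fallback and result-building helpers are assumptions):

```python
async def auto_configure(self, user_input, url, auth_token, db, user) -> AutoConfigureResult:
    # Assumed signature and fallback helper; the real lookup lives in user_model_utils.py.
    llm = get_user_generation_model(db=db, user=user) or get_platform_default_model()

    # Step 1: infer the endpoint schema from the pasted content.
    schema = infer_schema(llm, user_input, url, auth_token is not None)

    # Steps 2-3: probe the live endpoint (with self-correction) when a URL and auth token are available.
    test_response = None
    target_url = url or schema.url
    if target_url and auth_token:
        schema, test_response = await probe_with_self_correction(llm, schema, target_url, auth_token)

    # Steps 4-5: turn the confirmed schema into mappings plus confidence, reasoning, and warnings.
    return build_auto_configure_result(schema, test_response)  # hypothetical Step 4/5 helper
```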
Prompt template: New Jinja2 template at apps/backend/src/rhesis/backend/app/templates/endpoint_auto_configure.jinja2 with:
- System prompt explaining Rhesis's mapping system and all platform-managed variables
- The user's pasted content
- Instructions to output structured JSON with the inferred schema
Pydantic schemas for structured LLM output:
class InferredEndpointSchema(BaseModel):
    url: str | None
    method: str
    headers: dict[str, str]
    request_body_schema: dict[str, FieldInfo]
    response_body_schema: dict[str, FieldInfo]
    input_field: str    # Which request field maps to Rhesis "input"
    output_field: str   # Which response field maps to Rhesis "output"
    conversation_mode: Literal["single_turn", "stateless_multi_turn", "stateful_multi_turn"]
    conversation_id_field: str | None
    confidence: float
    reasoning: str

class AutoConfigureResult(BaseModel):
    request_mapping: dict       # Generated Jinja2 template
    response_mapping: dict      # Generated JSONPath mapping
    request_headers: dict       # Generated headers
    conversation_mode: str
    confidence: float
    reasoning: str
    test_response: dict | None  # Actual response if probe was made
    warnings: list[str]

Endpoint probing: Reuse the existing EndpointService.test_endpoint() infrastructure from apps/backend/src/rhesis/backend/app/services/endpoint/service.py and the RestEndpointInvoker from apps/backend/src/rhesis/backend/app/services/invokers/rest_invoker.py to make the actual test call.
New API endpoint: POST /endpoints/auto-configure in apps/backend/src/rhesis/backend/app/routers/endpoint.py
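A minimal router sketch, assuming the service and schemas above; get_db and get_current_user are placeholders for whatever dependencies the existing endpoint router already uses:

```python
from fastapi import APIRouter, Depends
from pydantic import BaseModel
from sqlalchemy.orm import Session

router = APIRouter(prefix="/endpoints", tags=["endpoints"])

class AutoConfigureRequest(BaseModel):
    user_input: str
    url: str | None = None
    auth_token: str | None = None

@router.post("/auto-configure", response_model=AutoConfigureResult)
async def auto_configure_endpoint(
    payload: AutoConfigureRequest,
    db: Session = Depends(get_db),            # placeholder dependencies; reuse the
    user: User = Depends(get_current_user),   # router's existing auth/session helpers
) -> AutoConfigureResult:
    service = EndpointAutoConfigureService()
    return await service.auto_configure(
        user_input=payload.user_input,
        url=payload.url,
        auth_token=payload.auth_token,
        db=db,
        user=user,
    )
```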
Frontend
New component: AutoConfigureTab.tsx in apps/frontend/src/app/(protected)/endpoints/components/
- Large Monaco editor or text area for "paste anything" input
- "Auto-configure" button with loading state
- Results panel showing:
- Generated request mapping (editable Monaco editor)
- Generated response mapping (editable Monaco editor)
- Generated headers (editable Monaco editor)
- Confidence indicator and reasoning text
- Test response preview (if available)
- "Apply to endpoint" button that populates the other tabs
Integration with existing form: Add as a new tab in EndpointForm.tsx. When the user clicks "Apply", populate the fields in the Request Settings, Response Settings, and Basic Information tabs.
New server action: apps/frontend/src/actions/endpoints/auto-configure.ts
New API client method: Add autoConfigure() to EndpointsClient in apps/frontend/src/utils/api-client/endpoints-client.ts
Acceptance Criteria
- New "Auto-configure" tab in the endpoint creation form
- Accepts any text format (curl, Python, JS, OpenAPI, plain text)
- LLM correctly identifies request/response field mappings
- Optionally probes the live endpoint if URL and auth token are provided
- Self-corrects on probe failure (up to 3 retries with error feedback)
- Generates valid request_mapping, response_mapping, and request_headers
- Detects conversation mode (single-turn, stateless multi-turn, stateful)
- Populates existing form fields when user clicks "Apply"
- Shows confidence level and reasoning for transparency
- Works with the user's configured LLM model (via get_user_generation_model())
- Falls back gracefully if LLM generation fails (shows error, user can still configure manually)
References
- Existing LLM integration: apps/backend/src/rhesis/backend/app/utils/user_model_utils.py
- Existing test endpoint: POST /endpoints/test in the endpoint router
- Endpoint invoker infrastructure: apps/backend/src/rhesis/backend/app/services/invokers/
- Template rendering: apps/backend/src/rhesis/backend/app/services/invokers/templating/renderer.py
- Response mapping: apps/backend/src/rhesis/backend/app/services/invokers/templating/response_mapper.py
- Frontend endpoint form: apps/frontend/src/app/(protected)/endpoints/components/EndpointForm.tsx
- Swagger form (placeholder): apps/frontend/src/app/(protected)/endpoints/components/SwaggerEndpointForm.tsx