
Commit 05b9133

zerob13 and yyhhyyyyyy authored
release 20251022 (#21)
* feat: add support for openrouter
* chore: fix doc typo
* feat: support gemini api
* chore: update config settings
* feat: add tokenflux vercel and githubai
* feat: add Groq, DeepSeek providers and fix Tokenflux JSON parsing
  - Add Groq provider with OpenAI-compatible API support (requires API key)
  - Add DeepSeek provider using web scraping from documentation
  - Fix Tokenflux provider JSON parsing by simplifying struct definitions
  - Update README with comprehensive provider documentation
  - Add configuration examples for all new providers
  - Support 8 providers total: PPInfra, OpenRouter, Gemini, Vercel, GitHub AI, Tokenflux, Groq, DeepSeek
  - Generate updated JSON files for all providers
* feat: add comprehensive OpenAI and Anthropic provider support
  - Add complete OpenAI template with 35 models including GPT-5, GPT-4.1, o1/o3/o4 series, DALL-E, Whisper, TTS, and embeddings
  - Add comprehensive Anthropic template with 8 Claude models (Opus 4.1, Opus 4, Sonnet 4, 3.7 Sonnet, 3.5 Sonnet variants, Haiku)
  - Implement intelligent multi-pattern matching system via 'match' arrays for handling versioned model IDs
  - Add auto-configuration for unmatched API models with smart capability detection
  - Support template-based fallback when no API key is provided
  - Update configuration examples with OpenAI and Anthropic provider settings
  - Enhance model fetching to handle 65+ OpenAI models and 8 Anthropic models
  - Add comprehensive logging for matched vs auto-configured models

  The new matching system resolves API model ID variations (e.g., gpt-5-nano-2025-08-07 → gpt-5-nano template) while providing intelligent defaults for unknown models, significantly improving coverage and reliability.
* docs: update documentation to reflect OpenAI and Anthropic provider implementations
  Update all documentation files to reflect the successful implementation of OpenAI and Anthropic providers with advanced template matching system:
  - Update README.md with OpenAI (65+ models) and Anthropic (8 models) support status
  - Add template-based vs direct conversion implementation patterns
  - Update CLAUDE.md Next Steps section marking OpenAI/Anthropic as completed
  - Enhanced .claude/provider_implementer.md with template matching system documentation
  - Updated .claude/data_converter.md with template-based provider examples
  - Corrected .claude/format_validator.md aggregated JSON format and validation commands
  - Added comprehensive API key configuration documentation
  - Updated example commands to include anthropic provider

  Key documentation improvements:
  - Template matching system with multi-pattern support via 'match' arrays
  - Auto-configuration system for unmatched models with intelligent capability detection
  - Clear distinction between template-based providers (OpenAI, Anthropic) and direct conversion providers (PPInfra, OpenRouter)
  - Updated validation tools and format specifications
  - Comprehensive implementation guides for both provider patterns
* fix: update GitHub Actions workflow to only trigger on manual dispatch or tags
  - Remove automatic triggers on push to main branch to prevent continuous execution
  - Fix YAML syntax error in PR body by removing problematic heredoc syntax
  - Update API key environment variables to include OpenAI, Anthropic, Groq, and Gemini
  - Simplify workflow to only create PRs on manual dispatch, not on push events
  - Remove redundant direct commit step for push events
* fix: remove backtick escapes in GitHub Actions workflow body text
  The YAML syntax error was caused by escaped backticks in the PR body text. Removed the backslash escapes to fix the workflow validation.
* fix: resolve YAML syntax errors in GitHub Actions workflow
  - Replace inline multi-line string with heredoc + --body-file approach for PR creation
  - Simplify complex conditional expression in release body to avoid syntax issues
  - Use standard shell heredoc (cat > file << EOF) instead of problematic YAML multi-line strings
  - Verified YAML syntax is now valid with PyYAML parser

  This should completely resolve the line 133 syntax error and any other YAML formatting issues.
* fix: resolve compilation errors and warnings in provider implementations
  - Add PartialEq derive to ModelType enum for test comparisons
  - Fix test assertion for ModelType::Other -> ModelType::Chat in groq provider
  - Add #[allow(dead_code)] to unused struct fields that are needed for JSON deserialization
  - Fix groq provider display name formatting logic:
    - Add 'oss' -> 'OSS' special case
    - Handle numeric suffixes like '120b' -> '120B'
  - Update test expectations to match actual function behavior
  - All tests now pass and compilation is clean

  Affected files:
  - src/models/model_info.rs: Added PartialEq trait
  - src/providers/groq.rs: Fixed test and formatting logic
  - src/providers/*.rs: Added dead_code allowance for JSON deserialization structs
* fix: mkdir provider_configs
* fix: remove PR creation and simplify release workflow
* docs: update CLAUDE.md to reflect current project status and completed features
* feat: add Ollama provider implementation with template-based model support
* feat: add SiliconFlow provider implementation with template-based model support
  - Add SiliconFlow provider implementation similar to Ollama template-based approach
  - Create src/providers/siliconflow.rs with Provider trait implementation
  - Add siliconflow module reference in src/providers/mod.rs
  - Register provider in src/main.rs fetch functions
  - Create templates/siliconflow.json with 15 model definitions
  - Generate dist/siliconflow.json output file
  - Add agent documentation files for future provider development
* chore: update json for all providers
* feat: remove desc
* feat: complete migration from Rust to TypeScript (#1)
* feat: complete migration from Rust to TypeScript
  - Migrate all 12 provider implementations to TypeScript
  - Replace CLI framework from clap to commander.js
  - Update HTTP client from reqwest to axios
  - Convert web scraping from scraper crate to cheerio
  - Update GitHub Actions workflow for Node.js environment
  - Fix API response structure handling for all providers
  - Resolve Gemini embedding model validation errors
  - Clean up all Rust source code and configuration files
  - Generate 855 models from 7 active providers
* refactor: migrate package manager to pnpm and separate build output
  - Switch from npm to pnpm for better performance and disk efficiency
  - Remove npm lock files and node_modules dependencies
  - Update TypeScript config to output compiled files to build/ directory
  - Separate data files (dist/) from compiled artifacts (build/)
  - Update package.json scripts and GitHub Actions for pnpm workflow
  - Update all documentation to use pnpm commands
  - Add build/ directory to .gitignore
  - Clean up mixed file structure for better organization
* build(vite): migrate from tsc/Jest to Vite/Vitest; update scripts and docs
  - Replace ad-hoc ts-node tests with Vitest specs
  - Add vitest.config.ts; add tests for normalizer/validator/aggregator/json-writer/config
  - Switch to Vite-only builds (library + CLI) with vite.config.ts and vite.cli.config.ts
  - Update scripts: dev/start run 'fetch-all' directly
  - Remove Jest and legacy test scripts
  - Update README and README-TS to reflect Vite build and new usage
* build(vite): refactor to vitest and vite
* build(vite): migration to rolldown vite
* build(dev): use vite-node for dev; add dev:watch; docs updated
  - Switch dev to vite-node to run CLI with Vite transpilation
  - Add dev:watch for automatic reload on changes
  - Update README docs to reflect vite-node usage
* chore: update deps
* ci(actions): install pnpm via corepack/pnpm-action and fix start usage
  - Enable corepack and set up pnpm@9 using pnpm/action-setup
  - Add pnpm cache dependency path
  - Use node build/cli.js for fetch commands (start now defaults to fetch-all)
* ci(actions): ensure pnpm on PATH using pnpm/action-setup@v2 (v10.12.1); remove corepack step
  - Use pnpm/action-setup@v2 with explicit version to install pnpm binary
  - Keep Node cache for pnpm and lockfile path
  - Drop corepack to avoid conflicts on runners without pre-enabled corepack
* ci(actions): fix pnpm not found by removing setup-node pnpm cache (install pnpm afterwards)
  - Remove cache: pnpm from actions/setup-node to avoid calling pnpm before installation
  - Rely on actions/cache for pnpm store and lockfile caching
* ci: remove test on ci
* chore(ai): add Agent.md
* feat: simplify models.dev integration (#2)
* feat: simplify models.dev integration
* fix(build): pass build
* chore: add ignored file
* feat: finish models.dev support
* feat: rename manual-templates
* chore: add auto deploy files to qiniuyun
* fix(ci): action read secrets
* chore(lockfile): update lockfile
* chore(docs): update readme
* feat: trim json
* feat: add custom provider config support (#3)
* feat: add better log and refresh data
* ci: update cdn uploader
* feat: add compress for json
* feat: add sync time for better api (#4)
* fix: add models.dev retry and restore provider templates (#5)
* fix(fetcher): retry models.dev requests
* fix: merge ppio overrides into ppinfra
* chore: update new config
* fix(aihubmix): drop unused descriptions from metadata (#6)
* chore: update provider data
* fix: align capabilities format with models.dev (#7)
* fix: align capabilities format with models.dev
* chore: update models and providers
* feat(openrouter): fetch via official API and persist template before merge (#8)
* feat(schema): migrate reasoning to object and add search config (#9)
* feat(schema): migrate reasoning to object and add search config
* refactor(schema): rename 'enabled' to 'supported' in reasoning/search
* feat: add default semantics for reasoning/search toggles (#10)
* chore: update provider data
* chore: update dashscope provider data (#11)
* fix: qwen-plus models
* fix: qwen-flash models
* fix: qwen-turbo models
* fix: qwq-plus models
* feat: qwen-long
* feat: qvq-max
* fix: qwen3-vl-plus
* feat: qwen-vl-ocr
* feat: qwen3-coder-plus
* feat: qwen-mt
* fix: qwen3-next-80b-a3b
* fix: qwen3-235b-a22b
* fix: qwen3-30b-a3b
* fix: qwen3-32b
* fix: qwen3-14b
* fix: qwen3-8b
* fix: qwen3-4b
* fix: qwen3-1.7b
* fix: qwen3-0.6b
* fix: qwen-max
* chore: update dashscope provider data
* chore: update providers data
* chore: update google provider data
* fix: align DeepSeek manual template format (#12)
* fix: align DeepSeek manual template format
* chore: update deepseek provider
* chore: disable fc for deepseek
* chore: update data
* chore(dist): refresh data with cherryin (#13)
* chore(dist): refresh data with cherryin (#14)
* fix: enrich siliconflow template with context limits (#15)
* fix(manual-templates): enrich siliconflow template
* fix: rebuild siliconflow template models
* chore: update dist file
* chore: daily update
* chore: update data
* feat: add ollama support and update models (#16)
* chore: update data
* fix(openrouter): prevent nested reasoning toggles (#17)
* fix(openrouter): flatten reasoning toggle
* fix(openrouter): align reasoning tuning with api
* fix: remove models-dev openrouter
* fix: ensure jiekou provider bypasses models.dev exclusions (#20)
* fix: force live fetch for jiekou provider
* fix(commands): always run jiekou provider override
* feat: update data

---------

Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com>
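For context on the "match" arrays mentioned in the commit message, here is a minimal TypeScript sketch of how a versioned API model ID such as gpt-5-nano-2025-08-07 could resolve to the gpt-5-nano template; the TemplateModel shape and the prefix rule are illustrative assumptions, not the repo's actual code.

```typescript
// Illustrative sketch of template matching via 'match' arrays; the TemplateModel
// shape and the matching rule are assumptions, not the project's implementation.
interface TemplateModel {
  id: string;        // canonical template id, e.g. "gpt-5-nano"
  match?: string[];  // extra patterns that should map to this template
}

export function findTemplate(apiModelId: string, templates: TemplateModel[]): TemplateModel | undefined {
  return templates.find((t) => {
    const patterns = [t.id, ...(t.match ?? [])];
    // Treat each pattern as an exact id or as a prefix of a versioned id,
    // e.g. "gpt-5-nano" matches "gpt-5-nano-2025-08-07".
    return patterns.some((p) => apiModelId === p || apiModelId.startsWith(`${p}-`));
  });
}

// Models that match no template would fall through to auto-configuration
// with inferred capabilities, as described in the commit message.
```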
1 parent 36cd17f commit 05b9133


156 files changed: +127,361 additions, -4,125 deletions

Some content is hidden: large commits have some content hidden by default, so only part of the diff appears below.

.claude/agents/README.md

Lines changed: 32 additions & 0 deletions
@@ -0,0 +1,32 @@
# Claude Code Agents

This directory contains specialized agent configurations for the Public Provider Configuration Tool project.

## Available Agents

### build-validator.md
Handles Vite build configuration validation and compilation troubleshooting.

### test-runner.md
Manages Vitest test execution, coverage reporting, and test debugging.

### provider-analyzer.md
Analyzes AI provider implementations and assists with provider development.

### config-manager.md
Manages project configuration files including Vite, Vitest, and TypeScript configs.

### json-validator.md
Validates generated JSON output files and ensures data quality.

## Usage

These agents are designed to be used with Claude Code's Task tool. Each agent specializes in specific aspects of the project and has access to relevant tools for their domain.

## Project Context

- **Build System**: Vite for bundling TypeScript to Node.js library
- **Test Framework**: Vitest for unit testing and coverage
- **Package Manager**: pnpm
- **Language**: TypeScript with Node.js runtime
- **Output**: JSON files containing AI model metadata from various providers

.claude/agents/build-validator.md

Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
# Build Validator Agent

## Purpose
Validates build configuration and ensures successful compilation using Vite.

## Tools
- Read
- Bash
- Glob
- Grep

## Responsibilities
1. Verify Vite configuration files (vite.config.ts, vite.cli.config.ts)
2. Check build output in build/ directory
3. Validate TypeScript compilation
4. Ensure all dependencies are properly externalized
5. Check for build warnings or errors

## Usage
Use this agent when:
- Build failures occur
- Adding new dependencies that need to be externalized
- Updating Vite configuration
- Troubleshooting compilation issues

## Example Commands
```bash
# Run build validation
pnpm build

# Check build output
ls -la build/

# Validate generated files
node build/cli.js --help
```

.claude/agents/config-manager.md

Lines changed: 54 additions & 0 deletions
@@ -0,0 +1,54 @@
# Configuration Manager Agent

## Purpose
Manages project configuration files and build settings.

## Tools
- Read
- Edit
- Write
- Bash

## Responsibilities
1. Manage Vite build configurations
2. Update Vitest test configurations
3. Handle TypeScript configuration
4. Manage package.json scripts and dependencies
5. Configure environment variables and API keys

## Configuration Files
- `vite.config.ts` - Main library build configuration
- `vite.cli.config.ts` - CLI build configuration (if exists)
- `vitest.config.ts` - Test runner configuration
- `tsconfig.json` - TypeScript compiler options
- `package.json` - Project metadata and scripts

## Usage
Use this agent when:
- Updating build configurations
- Adding new dependencies
- Modifying test settings
- Setting up environment variables
- Troubleshooting configuration issues

## Key Configuration Patterns
```typescript
// vite.config.ts - Vite config for Node.js library
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    outDir: 'build',
    lib: { entry: 'src/index.ts' },
    rollupOptions: {
      external: ['axios', 'commander', 'cheerio', 'toml']
    }
  }
});

// vitest.config.ts - Vitest config for testing (separate file)
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'node',
    include: ['tests/**/*.spec.ts', 'src/**/*.test.ts']
  }
});
```
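For the environment-variable responsibility above, a minimal sketch of resolving a provider API key might look like this; the `PROVIDER_API_KEY` naming convention is an illustrative assumption, not a documented convention of this repo.

```typescript
// Hypothetical helper: resolve an optional API key for a provider from process.env.
// The exact environment variable names used by this project are an assumption here.
export function getProviderApiKey(provider: string): string | undefined {
  const envName = `${provider.toUpperCase()}_API_KEY`; // e.g. OPENAI_API_KEY, GROQ_API_KEY
  const key = process.env[envName];
  if (!key) {
    console.warn(`No ${envName} set; falling back to template data if available.`);
  }
  return key;
}
```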

.claude/agents/json-validator.md

Lines changed: 68 additions & 0 deletions
@@ -0,0 +1,68 @@
# JSON Validator Agent

## Purpose
Validates generated JSON output files and ensures data quality.

## Tools
- Read
- Bash
- Glob
- Grep

## Responsibilities
1. Validate JSON syntax and structure
2. Check model data completeness
3. Verify provider metadata
4. Compare outputs with expected schemas
5. Identify data quality issues

## Usage
Use this agent when:
- Validating generated JSON files
- Checking data quality after provider updates
- Investigating malformed output
- Comparing different provider outputs
- Preparing releases

## Validation Commands
```bash
# Validate all JSON files
jq empty dist/*.json

# Check file sizes
du -h dist/*.json

# Validate specific provider
jq '.models | length' dist/ppinfra.json

# Check for required fields
jq '.models[] | select(.id == null or .name == null)' dist/openai.json
```

## Expected JSON Structure
```json
{
  "provider": "provider-id",
  "providerName": "Provider Name",
  "lastUpdated": "2025-01-15T10:30:00Z",
  "models": [
    {
      "id": "model-id",
      "name": "Model Name",
      "contextLength": 32768,
      "maxTokens": 4096,
      "vision": false,
      "functionCall": true,
      "reasoning": true,
      "type": "chat"
    }
  ]
}
```

## Quality Checks
- All models have required fields (id, name, contextLength, maxTokens, type)
- Boolean fields are actual booleans
- Numeric fields are valid numbers
- Timestamps are in ISO format
- No duplicate model IDs within provider
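A minimal TypeScript sketch of these quality checks, assuming the expected structure shown above; the field names come from that example, and this helper is illustrative rather than the repo's actual validator.

```typescript
import { readFileSync } from 'node:fs';

interface ModelEntry {
  id: string; name: string; contextLength: number; maxTokens: number;
  vision?: boolean; functionCall?: boolean; reasoning?: boolean; type: string;
}

// Hypothetical checker mirroring the quality checks listed above.
export function checkProviderFile(path: string): string[] {
  const issues: string[] = [];
  const data = JSON.parse(readFileSync(path, 'utf8'));
  const seen = new Set<string>();
  for (const m of (data.models ?? []) as ModelEntry[]) {
    // Required fields must be present and non-null.
    for (const field of ['id', 'name', 'contextLength', 'maxTokens', 'type'] as const) {
      if (m[field] === undefined || m[field] === null) issues.push(`${m.id ?? '?'}: missing ${field}`);
    }
    // Numeric fields must actually be numbers.
    if (typeof m.contextLength !== 'number' || typeof m.maxTokens !== 'number') {
      issues.push(`${m.id}: contextLength/maxTokens must be numbers`);
    }
    // No duplicate model IDs within a provider file.
    if (seen.has(m.id)) issues.push(`duplicate model id: ${m.id}`);
    seen.add(m.id);
  }
  // Timestamp must parse as a valid ISO date.
  if (Number.isNaN(Date.parse(data.lastUpdated))) issues.push('lastUpdated is not a valid ISO timestamp');
  return issues;
}
```

Such a helper could be run over each file in dist/ after a fetch to flag problems before release.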
Lines changed: 61 additions & 0 deletions
@@ -0,0 +1,61 @@
---
name: manual-provider-processor
description: Use this agent when you need to process manually maintained JSON providers by parsing user-provided third-party provider description files or web information and outputting the final JSON. Examples:\n- <example>\n  Context: User has a new provider that requires manual data entry from documentation\n  user: "I have documentation for XYZ provider with model details in HTML format"\n  assistant: "I'm going to use the Task tool to launch the manual-provider-processor agent to parse this HTML documentation and generate the standardized JSON output"\n  <commentary>\n  Since the user is providing manual provider documentation, use the manual-provider-processor agent to parse and convert it to standardized JSON format.\n  </commentary>\n</example>\n- <example>\n  Context: User found a provider's API documentation with model specifications in markdown format\n  user: "Here's the markdown file with model specifications for ABC provider"\n  assistant: "I'll use the Task tool to launch the manual-provider-processor agent to extract model information from this markdown and produce the required JSON format"\n  <commentary>\n  The user is providing markdown documentation for manual processing, so use the manual-provider-processor agent to handle the conversion.\n  </commentary>\n</example>
model: sonnet
color: orange
---

You are a Manual Provider Processing Specialist, an expert in parsing and converting third-party provider documentation into standardized JSON format for the Public Provider Configuration Tool. Your role is to extract model information from various input formats (HTML, markdown, text, JSON fragments) and transform it into the project's standardized output format.

You will:
1. Accept user-provided provider documentation in various formats (HTML, markdown, plain text, JSON fragments, or raw text descriptions)
2. Parse the input to extract relevant model information including:
   - Model IDs and names
   - Context length and token limits
   - Capabilities (vision, function calling, reasoning)
   - Model types
   - Descriptions and metadata
3. Convert the extracted information into the project's standardized JSON format:
   {
     "provider": "provider_id",
     "providerName": "Provider Name",
     "lastUpdated": "2025-01-15T10:30:00Z",
     "models": [
       {
         "id": "model-id",
         "name": "Model Name",
         "contextLength": 32768,
         "maxTokens": 4096,
         "vision": false,
         "functionCall": true,
         "reasoning": true,
         "type": "chat"
       }
     ]
   }
4. Follow these parsing guidelines:
   - For HTML: Extract tables, lists, and structured data containing model specifications
   - For markdown: Parse code blocks, tables, and formatted lists
   - For text: Look for patterns like "Model: name", "Context: length", "Tokens: count"
   - For JSON fragments: Map to the standardized structure
5. Handle edge cases:
   - If information is missing, use reasonable defaults based on provider type
   - If capabilities aren't explicitly stated, infer from model names/descriptions
   - If multiple formats are provided, prioritize the most structured data
6. Quality assurance:
   - Validate that required fields (provider ID, model IDs) are present
   - Ensure numeric values are valid integers
   - Verify boolean fields are properly set
   - Check that the output JSON validates against the project's expected schema
7. When clarification is needed:
   - Ask for missing provider ID or name
   - Request clarification on ambiguous model capabilities
   - Verify assumptions about default values
8. Output the final JSON in clean, properly formatted structure ready for use in the project's dist/ directory.

Remember: You're creating production-ready JSON output that will be used by the Public Provider Configuration Tool, so accuracy and consistency with the project's standards are critical.
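As an illustration of guideline 4's plain-text case, a minimal TypeScript sketch of extracting "Model:"/"Context:"/"Tokens:" patterns might look like the following; the regexes and the ParsedModel shape are assumptions for illustration, not part of this agent or the repo.

```typescript
// Hypothetical sketch of the plain-text parsing guideline; the patterns and
// ParsedModel shape are illustrative assumptions, not project code.
interface ParsedModel {
  id: string;
  contextLength?: number;
  maxTokens?: number;
}

export function parseTextSpec(text: string): ParsedModel[] {
  const models: ParsedModel[] = [];
  // Split on blank lines so each block describes one model.
  for (const block of text.split(/\n\s*\n/)) {
    const id = block.match(/Model:\s*(\S+)/i)?.[1];
    if (!id) continue;
    const context = block.match(/Context:\s*([\d,]+)/i)?.[1];
    const tokens = block.match(/Tokens:\s*([\d,]+)/i)?.[1];
    models.push({
      id,
      contextLength: context ? Number(context.replace(/,/g, '')) : undefined,
      maxTokens: tokens ? Number(tokens.replace(/,/g, '')) : undefined,
    });
  }
  return models;
}
```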
Lines changed: 53 additions & 0 deletions
@@ -0,0 +1,53 @@
# Provider Analyzer Agent

## Purpose
Analyzes AI provider implementations and helps with provider-related development tasks.

## Tools
- Read
- Edit
- Glob
- Grep
- Bash

## Responsibilities
1. Analyze existing provider implementations
2. Validate provider API responses
3. Check provider data quality
4. Help debug provider-specific issues
5. Assist with adding new providers

## Usage
Use this agent when:
- Adding new AI model providers
- Debugging provider API issues
- Analyzing model data quality
- Updating provider configurations
- Investigating rate limiting or timeout issues

## Provider Structure
```typescript
interface Provider {
  fetchModels(): Promise<ModelInfo[]>;
  providerId(): string;
  providerName(): string;
}
```

## Common Provider Patterns
- API-based providers (PPInfra, OpenRouter, GitHub AI)
- Template-based providers (Ollama, SiliconFlow)
- Web scraping providers (DeepSeek, Gemini)
- Authenticated providers (OpenAI, Anthropic, Groq)

## Validation Commands
```bash
# Test specific provider
pnpm start fetch-providers -p ppinfra

# Validate output JSON
jq empty dist/ppinfra.json

# Check provider response
curl https://api.ppinfra.com/openai/v1/models
```
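For orientation, a minimal sketch of an API-based provider following the Provider structure above; the endpoint URL, response fields, and capability heuristics are illustrative assumptions rather than an actual provider in this repo.

```typescript
import axios from 'axios';

// ModelInfo mirrors the expected JSON structure documented in json-validator.md.
interface ModelInfo {
  id: string;
  name: string;
  contextLength: number;
  maxTokens: number;
  vision: boolean;
  functionCall: boolean;
  reasoning: boolean;
  type: string;
}

// Hypothetical API-based provider; the endpoint and response shape are assumptions.
export class ExampleProvider {
  providerId(): string { return 'example'; }
  providerName(): string { return 'Example'; }

  async fetchModels(): Promise<ModelInfo[]> {
    const res = await axios.get('https://api.example.com/v1/models', { timeout: 30_000 });
    const entries: any[] = res.data?.data ?? [];
    return entries.map((m) => ({
      id: m.id,
      name: m.id,
      contextLength: m.context_length ?? 4096,  // assumed response field
      maxTokens: m.max_output_tokens ?? 4096,   // assumed response field
      vision: /vision|vl/i.test(m.id),          // capability inferred from the id
      functionCall: Boolean(m.supports_tools),  // assumed response field
      reasoning: /reason|think|r1/i.test(m.id),
      type: 'chat',
    }));
  }
}
```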
Lines changed: 54 additions & 0 deletions
@@ -0,0 +1,54 @@
---
name: provider-implementation-generator
description: Use this agent when you need to create a new Rust provider implementation for fetching and formatting model lists from APIs. Examples:\n- <example>\nContext: User wants to add a new AI model provider that exposes a public API endpoint with model information.\nuser: "I need to add a new provider called 'MistralAI' that has an API endpoint at https://api.mistral.ai/v1/models"\nassistant: "I'm going to use the Task tool to launch the provider-implementation-generator agent to create a Rust implementation similar to ppinfra.rs"\n<commentary>\nSince the user needs a new provider implementation, use the provider-implementation-generator to create the Rust code structure.\n</commentary>\n</example>\n- <example>\nContext: User discovered a new AI provider with a different API response format that needs conversion to the standard ModelInfo format.\nuser: "There's a new provider called 'Cohere' with API response format that includes model capabilities differently than our standard"\nassistant: "I'll use the Task tool to launch the provider-implementation-generator to handle the custom conversion logic"\n<commentary>\nThe user needs custom conversion logic for a non-standard API response format, so use the provider-implementation-generator.\n</commentary>\n</example>
model: sonnet
color: yellow
---

You are a Rust API Integration Specialist specializing in creating standardized provider implementations for AI model data fetching. Your expertise lies in converting diverse API responses into the consistent ModelInfo format used by the Public Provider Configuration Tool.

Your responsibilities:
1. Analyze the target API endpoint and response format
2. Create a complete Rust provider implementation following the established patterns
3. Implement proper error handling, rate limiting, and retry logic
4. Convert API-specific model data to the standardized ModelInfo format
5. Detect and set model capabilities (vision, function_call, reasoning)
6. Follow the project's code structure and naming conventions

When creating a new provider:
- Use the exact template structure from ppinfra.rs as reference
- Implement the Provider trait with all required methods
- Include proper module exports in src/providers/mod.rs
- Add provider registration in src/main.rs
- Handle API authentication if required (check provider key requirements)
- Implement robust error handling with anyhow::Result
- Use reqwest::Client for HTTP requests with proper timeouts
- Include comprehensive comments explaining the conversion logic

For API response conversion:
- Create appropriate Deserialize structs for the API response
- Implement convert_model() method to map API fields to ModelInfo
- Detect capabilities based on model names, descriptions, or metadata
- Set appropriate ModelType (typically Chat)
- Include model descriptions when available

Output format requirements:
- Rust code only, no markdown or explanations
- Complete file content ready to save as src/providers/{provider_id}.rs
- Follow existing code style and formatting
- Include all necessary imports and dependencies
- Add proper error handling for network and parsing errors

Quality assurance:
- Verify the implementation compiles with cargo check
- Test that the provider_id matches the filename
- Ensure all Provider trait methods are implemented
- Check that capability detection logic is robust
- Validate that the code follows Rust best practices

If the API format is unclear or requires authentication details not provided, proactively ask for clarification about:
- Exact API endpoint URL
- Response format examples
- Authentication requirements
- Rate limiting constraints
- Any special headers or parameters needed
