feat(vertex-anthropic): add support for custom Vertex AI Anthropic model #1651
Abmarne wants to merge 4 commits into eyaltoledano:next from
Conversation
…del provider
- Introduce VERTEX_ANTHROPIC custom provider with env var validation
- Add VertexAnthropicProvider to AI providers exports and unified service
- Enable handling of vertex-anthropic in task manager model settings
- Include Vertex Anthropic in provider constants and CLI model configs
- Add unit test mock support for vertex-anthropic provider ID
🦋 Changeset detected
Latest commit: eb5331b
The changes in this PR will be included in the next version bump.
📝 Walkthrough
Adds a new Vertex Anthropic provider ("vertex-anthropic"): provider implementation, export, registration, CLI/type/constants updates, task-manager handling, tests, and a changelog entry to route Anthropic (Claude) models via Google Vertex AI.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Provider as VertexAnthropicProvider
    participant VertexAPI as Vertex AI API
    Client->>Provider: instantiate/getClient(params: creds, projectId, location, baseURL?)
    activate Provider
    Provider->>Provider: validate credentials (API key or service account)
    alt credentials valid
        Provider->>Provider: createVertexAnthropic(config, fetch)
        Provider->>VertexAPI: initialize Anthropic endpoint / client
        VertexAPI-->>Provider: client ready
        Provider-->>Client: return client instance
    else credentials invalid
        Provider->>Provider: handleError()
        Provider-->>Client: throw authentication error
    end
    deactivate Provider
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes

Possibly related issues
Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
📝 Coding Plan
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
scripts/modules/task-manager/models.js (1)
700-703: ⚠️ Potential issue | 🟡 Minor — Add `--vertex-anthropic` to the error message for consistency.

The error message listing available provider flags doesn't include the new `--vertex-anthropic` flag, which could confuse users trying to use the new provider.

🐛 Proposed fix
```diff
- message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
+ message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --vertex-anthropic, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/modules/task-manager/models.js` around lines 700 - 703, Update the error message returned when a model ID is not found in Taskmaster's supported models (the object containing code: 'MODEL_NOT_FOUND_NO_HINT' and its message in scripts/modules/task-manager/models.js) to include the new provider flag --vertex-anthropic in the comma-separated list of flags (alongside --openrouter, --ollama, --bedrock, --azure, --vertex, --lmstudio, --openai-compatible, --gemini-cli, --codex-cli) so users see the new option; modify only the message string for the MODEL_NOT_FOUND_NO_HINT response to append --vertex-anthropic in the same style and punctuation as the other flags.
🧹 Nitpick comments (1)
src/ai-providers/google-vertex-anthropic.js (1)
8-11: Remove unused import `resolveEnvVariable`.

The `resolveEnvVariable` function is imported but never used in this file. Environment variable resolution is handled by `ai-services-unified.js` before calling the provider.

♻️ Proposed fix
```diff
 import { createVertexAnthropic } from '@ai-sdk/google-vertex/anthropic';
-import { resolveEnvVariable } from '../../scripts/modules/utils.js';
 import { log } from '../../scripts/modules/utils.js';
 import { BaseAIProvider } from './base-provider.js';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ai-providers/google-vertex-anthropic.js` around lines 8 - 11, The import resolveEnvVariable from '../../scripts/modules/utils.js' is unused in this module; remove that named import so only used symbols (log) are imported, leaving the other imports (createVertexAnthropic from '@ai-sdk/google-vertex/anthropic' and BaseAIProvider) untouched; ensure there are no remaining references to resolveEnvVariable in the file (e.g., in any functions or class methods) before committing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@scripts/modules/task-manager/models.js`:
- Around line 700-703: Update the error message returned when a model ID is not
found in Taskmaster's supported models (the object containing code:
'MODEL_NOT_FOUND_NO_HINT' and its message in
scripts/modules/task-manager/models.js) to include the new provider flag
--vertex-anthropic in the comma-separated list of flags (alongside --openrouter,
--ollama, --bedrock, --azure, --vertex, --lmstudio, --openai-compatible,
--gemini-cli, --codex-cli) so users see the new option; modify only the message
string for the MODEL_NOT_FOUND_NO_HINT response to append --vertex-anthropic in
the same style and punctuation as the other flags.
---
Nitpick comments:
In `@src/ai-providers/google-vertex-anthropic.js`:
- Around line 8-11: The import resolveEnvVariable from
'../../scripts/modules/utils.js' is unused in this module; remove that named
import so only used symbols (log) are imported, leaving the other imports
(createVertexAnthropic from '@ai-sdk/google-vertex/anthropic' and
BaseAIProvider) untouched; ensure there are no remaining references to
resolveEnvVariable in the file (e.g., in any functions or class methods) before
committing.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: dfe1f1b9-9a1b-466b-80d3-2a627236bd80
📒 Files selected for processing (8)
- apps/cli/src/commands/models/custom-providers.ts
- apps/cli/src/commands/models/types.ts
- packages/tm-core/src/common/constants/providers.ts
- scripts/modules/ai-services-unified.js
- scripts/modules/task-manager/models.js
- src/ai-providers/google-vertex-anthropic.js
- src/ai-providers/index.js
- tests/unit/scripts/modules/task-manager/models-baseurl.test.js
- Remove VertexAnthropicProvider duplication by extending VertexAIProvider instead of BaseAIProvider (reuses auth, validation, and error handling)
- Remove unused resolveEnvVariable import from both vertex providers
- Add --vertex-anthropic to MODEL_NOT_FOUND_NO_HINT error message
Actionable comments posted: 2
🧹 Nitpick comments (2)
src/ai-providers/google-vertex-anthropic.js (2)
13-59: Add direct unit coverage for this adapter.

The only test change in this PR is in the model-selection path, so `createVertexAnthropic` construction and error handling here are still untested. A small provider-level test would catch regressions in auth and fetch wiring quickly.

Based on learnings: Create unit tests in `tests/unit/ai-providers/<provider-name>.test.js` that mock the provider's AI SDK module and test each exported function for correct client instantiation, parameter passing, result handling, and error handling.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ai-providers/google-vertex-anthropic.js` around lines 13 - 59, Add direct unit tests for the VertexAnthropicProvider to cover client construction and error handling: write tests under tests/unit/ai-providers/google-vertex-anthropic.test.js that mock the createVertexAnthropic export and assert that VertexAnthropicProvider.getClient passes the correct params (projectId, location, apiKey or credentials mapped to googleAuthOptions, optional baseURL, and fetch from createProxyFetch) and returns the mocked client; also include a test where the mocked createVertexAnthropic throws and verify getClient triggers handleError (spy or stub on VertexAnthropicProvider.handleError) to exercise the catch path. Use the unique symbols VertexAnthropicProvider, getClient, createVertexAnthropic, createProxyFetch, and handleError to locate targets in the code.
33-45: Normalize credentials type for safety in both Vertex providers.

`isValidCredential()` accepts `string | object`, but `getClient()` spreads credentials without type checking. While `ai-services-unified.js` currently normalizes string paths to objects before calling the provider, this defensive gap could cause issues if code evolves. Add type normalization in both `google-vertex.js` and `google-vertex-anthropic.js` to ensure consistency between validation and usage.

Suggested normalization

```js
const normalizedCredentials =
	typeof credentials === 'string' ? { keyFilename: credentials } : credentials;
```

Apply this before line 139 in `google-vertex.js` and before line 39 in `google-vertex-anthropic.js`, then use `normalizedCredentials` for spreading and fallback assignment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ai-providers/google-vertex-anthropic.js` around lines 33 - 45, Normalize the credentials value before spreading in the provider client setup: in google-vertex-anthropic.js (and likewise in google-vertex.js) create a normalizedCredentials variable that converts a string credentials to { keyFilename: credentials } and leaves objects untouched, then use normalizedCredentials everywhere you currently spread or assign credentials (e.g., where authOptions.googleAuthOptions and any getClient() calls use credentials) so isValidCredential()’s accepted types match actual usage and avoid runtime type errors.
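As a runnable illustration of the normalization this nitpick suggests: the `keyFilename` key for string credentials comes from the review comment above, and the surrounding provider code is not reproduced, so treat this as a sketch rather than the actual `getClient()` implementation.

```javascript
// Sketch: accept either a string path or a credentials object, and always
// return an object that is safe to spread into googleAuthOptions.
function normalizeCredentials(credentials) {
	// A bare string is treated as a path to a service-account key file
	// (hypothetical keyFilename key, per the review comment).
	return typeof credentials === 'string'
		? { keyFilename: credentials }
		: credentials;
}

const fromPath = normalizeCredentials('/path/to/key.json');
const fromObject = normalizeCredentials({ client_email: 'svc@example.com' });
```

Either input shape now yields an object, so validation (`string | object`) and usage (object spread) stay consistent.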
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/modules/task-manager/models.js`:
- Line 701: The recovery hint for unknown model IDs is missing the
`--claude-code` option even though `setModel()` supports
CUSTOM_PROVIDERS.CLAUDE_CODE; update the message string (the template that
references modelId) to include `--claude-code` among the listed flags so users
can disambiguate custom Claude Code models.
- Around line 586-590: Add the missing Vertex Anthropic provider to the
registries: update the keyMap object in config-manager.js to include an entry
for CUSTOM_PROVIDERS.VERTEX_ANTHROPIC mapping to the correct API key config so
isApiKeySet() and getMcpApiKeyStatus() can detect its key, and update
supported-models.json to register "vertex-anthropic" in MODEL_MAP and the list
used by getAvailableModels() with appropriate model metadata (display name,
family, and any required fields) so MODEL_MAP validation and model discovery
recognize the provider.
---
Nitpick comments:
In `@src/ai-providers/google-vertex-anthropic.js`:
- Around line 13-59: Add direct unit tests for the VertexAnthropicProvider to
cover client construction and error handling: write tests under
tests/unit/ai-providers/google-vertex-anthropic.test.js that mock the
createVertexAnthropic export and assert that VertexAnthropicProvider.getClient
passes the correct params (projectId, location, apiKey or credentials mapped to
googleAuthOptions, optional baseURL, and fetch from createProxyFetch) and
returns the mocked client; also include a test where the mocked
createVertexAnthropic throws and verify getClient triggers handleError (spy or
stub on VertexAnthropicProvider.handleError) to exercise the catch path. Use the
unique symbols VertexAnthropicProvider, getClient, createVertexAnthropic,
createProxyFetch, and handleError to locate targets in the code.
- Around line 33-45: Normalize the credentials value before spreading in the
provider client setup: in google-vertex-anthropic.js (and likewise in
google-vertex.js) create a normalizedCredentials variable that converts a string
credentials to { keyFilename: credentials } and leaves objects untouched, then
use normalizedCredentials everywhere you currently spread or assign credentials
(e.g., where authOptions.googleAuthOptions and any getClient() calls use
credentials) so isValidCredential()’s accepted types match actual usage and
avoid runtime type errors.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 4fc3ba26-2bc4-4f03-bf35-39af9fec93ef
📒 Files selected for processing (4)
- .changeset/vertex-anthropic-provider.md
- scripts/modules/task-manager/models.js
- src/ai-providers/google-vertex-anthropic.js
- src/ai-providers/google-vertex.js
💤 Files with no reviewable changes (1)
- src/ai-providers/google-vertex.js
```js
} else if (providerHint === CUSTOM_PROVIDERS.VERTEX_ANTHROPIC) {
	// Set provider without model validation since Vertex Anthropic models are managed by Google Cloud
	determinedProvider = CUSTOM_PROVIDERS.VERTEX_ANTHROPIC;
	warningMessage = `Warning: Custom Vertex AI Anthropic model '${modelId}' set. Please ensure the model is valid and accessible in your Google Cloud project.`;
	report('warn', warningMessage);
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
rg -n -C3 '\bVERTEX_ANTHROPIC\b|vertex-anthropic' \
  scripts/modules/config-manager.js \
  scripts/modules/supported-models.json \
  scripts/modules/ai-services-unified.js \
  packages/tm-core/src/common/constants/providers.ts
```

Repository: eyaltoledano/claude-task-master
Length of output: 1990
🏁 Script executed:

```shell
#!/bin/bash
# Check config-manager.js for vertex-anthropic provider support
echo "=== Checking config-manager.js ==="
rg -n 'VALID_PROVIDERS|MODEL_MAP|keyMap|vertex-anthropic' scripts/modules/config-manager.js | head -50

# Check if vertex-anthropic is in supported-models.json
echo -e "\n=== Checking supported-models.json ==="
grep -n 'vertex-anthropic' scripts/modules/supported-models.json
```

Repository: eyaltoledano/claude-task-master
Length of output: 1228
🏁 Script executed:

```shell
#!/bin/bash
# View keyMap and surrounding context
echo "=== keyMap definition in config-manager.js ==="
sed -n '895,930p' scripts/modules/config-manager.js

# Check for VALID_PROVIDERS
echo -e "\n=== Checking for VALID_PROVIDERS ==="
rg -n 'VALID_PROVIDERS\s*=' scripts/modules/config-manager.js

# View supported-models.json structure
echo -e "\n=== supported-models.json structure (first 100 lines) ==="
head -100 scripts/modules/supported-models.json
```

Repository: eyaltoledano/claude-task-master
Length of output: 3425
🏁 Script executed:

```shell
#!/bin/bash
# Check full supported-models.json for vertex-anthropic
echo "=== Full search for vertex-anthropic in supported-models.json ==="
grep -c 'vertex-anthropic' scripts/modules/supported-models.json || echo "NOT FOUND"

# List all provider keys in supported-models.json
echo -e "\n=== All provider keys in supported-models.json ==="
grep -o '^\s*"[^"]*":' scripts/modules/supported-models.json | head -20
```

Repository: eyaltoledano/claude-task-master
Length of output: 528
🏁 Script executed:

```shell
#!/bin/bash
# Extract top-level provider keys from supported-models.json using jq
echo "=== Providers registered in supported-models.json ==="
jq 'keys' scripts/modules/supported-models.json
```

Repository: eyaltoledano/claude-task-master
Length of output: 353
Add vertex-anthropic to config-manager.js and supported-models.json registries.

Lines 586–590 allow users to set VERTEX_ANTHROPIC as the provider, but the integration is incomplete. The provider is missing from:
- keyMap in config-manager.js (lines 895–912) — `isApiKeySet()` and `getMcpApiKeyStatus()` will fail for this provider.
- supported-models.json — `getAvailableModels()` will not discover `vertex-anthropic` models, and `MODEL_MAP` validation will not recognize the provider.

Users can persist the provider here but will encounter broken key-status checks and missing model metadata downstream. Add the provider to both registries with the appropriate API key mapping and model metadata.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/modules/task-manager/models.js` around lines 586 - 590, Add the
missing Vertex Anthropic provider to the registries: update the keyMap object in
config-manager.js to include an entry for CUSTOM_PROVIDERS.VERTEX_ANTHROPIC
mapping to the correct API key config so isApiKeySet() and getMcpApiKeyStatus()
can detect its key, and update supported-models.json to register
"vertex-anthropic" in MODEL_MAP and the list used by getAvailableModels() with
appropriate model metadata (display name, family, and any required fields) so
MODEL_MAP validation and model discovery recognize the provider.
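The keyMap half of the registry fix described above can be sketched as follows. The keyMap shape, the inline `CUSTOM_PROVIDERS` literal, and the choice of `GOOGLE_API_KEY` as the mapped variable are assumptions for illustration, not the real `config-manager.js` contents.

```javascript
// Hypothetical provider-id constants (the real ones live in tm-core).
const CUSTOM_PROVIDERS = {
	VERTEX: 'vertex',
	VERTEX_ANTHROPIC: 'vertex-anthropic'
};

// Hypothetical keyMap: provider id -> env var that holds its API key.
const keyMap = {
	[CUSTOM_PROVIDERS.VERTEX]: 'GOOGLE_API_KEY',
	// New entry: Vertex Anthropic reuses the same Google credential.
	[CUSTOM_PROVIDERS.VERTEX_ANTHROPIC]: 'GOOGLE_API_KEY'
};

function isApiKeySet(providerName, env) {
	// A provider counts as configured when its mapped env var is non-empty.
	const envVarName = keyMap[providerName];
	return Boolean(envVarName && env[envVarName]);
}
```

With the entry present, key-status checks resolve for `vertex-anthropic` instead of silently reporting the key as missing.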
```diff
 error: {
 	code: 'MODEL_NOT_FOUND_NO_HINT',
-	message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
+	message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --vertex-anthropic, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
```
Include --claude-code in the recovery hint.
setModel() still supports CUSTOM_PROVIDERS.CLAUDE_CODE, so this message is missing one valid way to disambiguate a custom model.
💡 Proposed fix
```diff
- message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --vertex-anthropic, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
+ message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --vertex-anthropic, --claude-code, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/modules/task-manager/models.js` at line 701, The recovery hint for
unknown model IDs is missing the `--claude-code` option even though `setModel()`
supports CUSTOM_PROVIDERS.CLAUDE_CODE; update the message string (the template
that references modelId) to include `--claude-code` among the listed flags so
users can disambiguate custom Claude Code models.
🧹 Nitpick comments (1)
tests/unit/ai-services-unified.test.js (1)
234-240: Couple the `VertexAnthropicProvider` mock contract to `VertexAIProvider` behavior.

Because production `VertexAnthropicProvider` inherits from `VertexAIProvider`, duplicating API-key methods inline here can drift from base-provider behavior and hide provider-resolution regressions in unified service tests.

Refactor suggestion
```diff
+const createMockVertexProvider = () => ({
+	generateText: jest.fn(),
+	streamText: jest.fn(),
+	generateObject: jest.fn(),
+	getRequiredApiKeyName: jest.fn(() => null),
+	isRequiredApiKey: jest.fn(() => false)
+});
+
+const mockVertexAIProvider = createMockVertexProvider();
+const mockVertexAnthropicProvider = createMockVertexProvider();
 ...
-	VertexAIProvider: jest.fn(() => ({
-		generateText: jest.fn(),
-		streamText: jest.fn(),
-		generateObject: jest.fn(),
-		getRequiredApiKeyName: jest.fn(() => null),
-		isRequiredApiKey: jest.fn(() => false)
-	})),
+	VertexAIProvider: jest.fn(() => mockVertexAIProvider),
-	VertexAnthropicProvider: jest.fn(() => ({
-		generateText: jest.fn(),
-		streamText: jest.fn(),
-		generateObject: jest.fn(),
-		getRequiredApiKeyName: jest.fn(() => null),
-		isRequiredApiKey: jest.fn(() => false)
-	})),
+	VertexAnthropicProvider: jest.fn(() => mockVertexAnthropicProvider),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/ai-services-unified.test.js` around lines 234 - 240, The VertexAnthropicProvider test mock duplicates API-key methods and should delegate those to the VertexAIProvider behavior instead of hardcoding them; update the VertexAnthropicProvider jest.fn mock to reuse or extend the VertexAIProvider mock/implementation for getRequiredApiKeyName and isRequiredApiKey (e.g., obtain the VertexAIProvider mock instance or prototype and call/assign its getRequiredApiKeyName and isRequiredApiKey into the VertexAnthropicProvider mock) so the unified tests track base-provider behavior and avoid drift.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/unit/ai-services-unified.test.js`:
- Around line 234-240: The VertexAnthropicProvider test mock duplicates API-key
methods and should delegate those to the VertexAIProvider behavior instead of
hardcoding them; update the VertexAnthropicProvider jest.fn mock to reuse or
extend the VertexAIProvider mock/implementation for getRequiredApiKeyName and
isRequiredApiKey (e.g., obtain the VertexAIProvider mock instance or prototype
and call/assign its getRequiredApiKeyName and isRequiredApiKey into the
VertexAnthropicProvider mock) so the unified tests track base-provider behavior
and avoid drift.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2855da73-3385-4a59-bd99-2ef0391c0339
📒 Files selected for processing (1)
tests/unit/ai-services-unified.test.js
Cursor Bugbot has reviewed your changes and found 1 potential issue.
```diff
 error: {
 	code: 'MODEL_NOT_FOUND_NO_HINT',
-	message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
+	message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, --vertex, --vertex-anthropic, --lmstudio, --openai-compatible, --gemini-cli, or --codex-cli.`
```
CLI flag --vertex-anthropic never defined, breaking advertised workflow
High Severity
The error message now tells users to specify --vertex-anthropic as a CLI flag, but this option is never defined in the Commander.js command setup in commands.js. There is no .option('--vertex-anthropic', ...) call, no entry in the providerFlags validation array, and no mapping in the providerHint ternary chain. Running task-master models --set-main claude-sonnet-4-6 --vertex-anthropic will fail because Commander doesn't recognize the flag. The interactive setup path works via custom-providers.ts, but the direct set workflow advertised in the PR description and error message is non-functional.
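Once the flag is actually registered, it still has to map to a provider hint. The resolver below is an illustrative stand-in for that missing wiring: the real code uses Commander in `commands.js` and is not reproduced here, so the function name and the flag-to-provider table are assumptions based on the flags listed in the error message.

```javascript
// Sketch: map a provider flag on the command line to a provider hint.
function resolveProviderHint(argv) {
	const flagToProvider = {
		'--openrouter': 'openrouter',
		'--ollama': 'ollama',
		'--bedrock': 'bedrock',
		'--azure': 'azure',
		'--vertex': 'vertex',
		// The flag this finding says is advertised but never wired up.
		'--vertex-anthropic': 'vertex-anthropic'
	};
	// First recognized provider flag wins.
	for (const arg of argv) {
		if (flagToProvider[arg]) return flagToProvider[arg];
	}
	return undefined;
}
```

Without an entry like `--vertex-anthropic` in both the option definitions and this kind of mapping, the direct `--set-main … --vertex-anthropic` workflow cannot resolve the provider.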


Fixes: #1648
This document describes the Vertex Anthropic provider implementation for Task Master, which enables using Claude models through Google Vertex AI.
Overview
The Vertex Anthropic provider (`vertex-anthropic`) allows users to access Claude models (e.g., Claude Sonnet, Claude Opus) through Google Vertex AI's `publishers/anthropic` endpoint, rather than the standard `publishers/google` endpoint used for Gemini models.

Background
Google Vertex AI hosts models from multiple providers:
- Gemini models: `publishers/google/models/{model}:generateContent`
- Claude models: `publishers/anthropic/models/{model}:generateContent`

The standard Vertex provider only uses the Google endpoint, which fails when requesting Claude models. This provider uses the `@ai-sdk/google-vertex/anthropic` subpath export to route requests correctly.

Implementation
Files Added/Modified
- `src/ai-providers/google-vertex-anthropic.js` — new provider built on `createVertexAnthropic`
- `src/ai-providers/index.js` — exports `VertexAnthropicProvider`
- `scripts/modules/ai-services-unified.js`
- `packages/tm-core/src/common/constants/providers.ts` — adds `VERTEX_ANTHROPIC` to `CUSTOM_PROVIDERS`
- `apps/cli/src/commands/models/types.ts` — adds `VERTEX_ANTHROPIC` to `CUSTOM_PROVIDER_IDS`
- `apps/cli/src/commands/models/custom-providers.ts`
- `scripts/modules/task-manager/models.js`
- `tests/unit/scripts/modules/task-manager/models-baseurl.test.js`

Provider Class
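A hedged sketch of the provider class shape this PR describes: a `VertexAnthropicProvider` extending the standard Vertex provider. The real implementation in `src/ai-providers/google-vertex-anthropic.js` wraps `createVertexAnthropic` from `@ai-sdk/google-vertex/anthropic`; that call and the base class are stubbed here so the sketch stays self-contained, and the method bodies are assumptions, not the actual code.

```javascript
// Stubbed stand-in for the real VertexAIProvider base class.
class VertexAIProvider {
	handleError(operation, error) {
		// Real base class logs and rethrows with provider context.
		throw new Error(`Vertex ${operation} failed: ${error.message}`);
	}
}

class VertexAnthropicProvider extends VertexAIProvider {
	get name() {
		return 'Google Vertex AI (Anthropic)';
	}

	getClient({ projectId, location, baseURL }) {
		try {
			if (!projectId || !location) {
				throw new Error('projectId and location are required');
			}
			// Real code: return createVertexAnthropic({ project, location, ... });
			// stubbed so this sketch runs without the SDK.
			return { projectId, location, baseURL };
		} catch (error) {
			this.handleError('client initialization', error);
		}
	}
}

const provider = new VertexAnthropicProvider();
const client = provider.getClient({ projectId: 'demo', location: 'us-central1' });
```

Extending `VertexAIProvider` (rather than `BaseAIProvider`) matches the follow-up commit in this PR, which reuses the base class's auth, validation, and error handling.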
Configuration
Environment Variables
The same environment variables are used as the standard Vertex provider:
- `GOOGLE_API_KEY`
- `GOOGLE_APPLICATION_CREDENTIALS`
- `VERTEX_PROJECT_ID`
- `VERTEX_LOCATION` (default: `us-central1`)

At least one of `GOOGLE_API_KEY` or `GOOGLE_APPLICATION_CREDENTIALS` must be provided.

Config.json
```json
{
  "models": {
    "main": { "provider": "vertex-anthropic", "modelId": "claude-sonnet-4-6" },
    "research": { "provider": "vertex-anthropic", "modelId": "claude-opus-4-6" },
    "fallback": { "provider": "anthropic", "modelId": "claude-3-5-sonnet-20241022" }
  }
}
```

CLI Usage
Supported Models
Available Claude models on Vertex AI include:
- `claude-sonnet-4-6`
- `claude-opus-4-6`
- `claude-haiku-4-6`

MCP Configuration
For Cursor, VS Code, or other MCP-enabled editors:
```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "VERTEX_PROJECT_ID": "your-project-id",
        "VERTEX_LOCATION": "us-central1",
        "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/service-account.json",
        "ANTHROPIC_API_KEY": "your-anthropic-key-for-fallback"
      }
    }
  }
}
```

Then in chat:
Troubleshooting
Model Not Found Error
If you see:
Cause: Using the standard `vertex` provider instead of `vertex-anthropic`.

Solution: Change provider to `vertex-anthropic` in your configuration.

Authentication Errors
Solution: Set one of the required authentication environment variables.
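The authentication rule above (at least one of `GOOGLE_API_KEY` or `GOOGLE_APPLICATION_CREDENTIALS`, with `VERTEX_LOCATION` defaulting to `us-central1`) can be sketched as a small check. The helper is illustrative only, not Task Master code; only the env-var names come from this document.

```javascript
// Sketch: verify the Vertex credential requirement before building a client.
function validateVertexEnv(env) {
	const hasCredential = Boolean(
		env.GOOGLE_API_KEY || env.GOOGLE_APPLICATION_CREDENTIALS
	);
	return {
		hasCredential,
		projectId: env.VERTEX_PROJECT_ID,
		// Fall back to the documented default region.
		location: env.VERTEX_LOCATION || 'us-central1'
	};
}

const checked = validateVertexEnv({
	GOOGLE_API_KEY: 'example-key',
	VERTEX_PROJECT_ID: 'my-project'
});
```

Running a check like this before client construction turns a confusing downstream SDK error into an actionable configuration message.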
Project/Location Errors
Solution: Set `VERTEX_PROJECT_ID` and `VERTEX_LOCATION` environment variables.

Comparison: vertex vs vertex-anthropic
| | `vertex` | `vertex-anthropic` |
| --- | --- | --- |
| Endpoint | `publishers/google/...` | `publishers/anthropic/...` |
| SDK import | `@ai-sdk/google-vertex` | `@ai-sdk/google-vertex/anthropic` |

References
Note
Medium Risk
Adds a new AI provider and wires it through CLI/config/model-selection flows; misconfiguration or missing env vars could break model setup or runtime provider resolution.
Overview
Adds a new `vertex-anthropic` custom provider to route Claude model calls through Google Vertex AI's Anthropic publisher endpoint via a new `VertexAnthropicProvider` (built on the existing Vertex provider).

Updates the CLI model setup and config/model-setting logic to accept `--vertex-anthropic` / "* Custom Vertex Anthropic model" with the same Google auth env-var checks, and extends unified AI service execution to apply Vertex project/location configuration for both `vertex` and `vertex-anthropic`.

Includes a changeset for a minor release bump and adjusts unit-test mocks to recognize the new provider.
Written by Cursor Bugbot for commit eb5331b. This will update automatically on new commits.
Summary by CodeRabbit