diff --git a/docs/cody/clients/install-vscode.mdx b/docs/cody/clients/install-vscode.mdx
index 7d66d2796..076f940f7 100644
--- a/docs/cody/clients/install-vscode.mdx
+++ b/docs/cody/clients/install-vscode.mdx
@@ -394,14 +394,15 @@ Example VS Code user settings JSON configuration:
   {
     "provider": "google",
     "model": "gemini-1.5-pro-latest",
-    "tokens": 1000000,
+    "inputTokens": 1000000,
+    "outputTokens": 8192,
     "apiKey": "xyz"
   },
   // Groq (e.g. llama2 70b)
   {
     "provider": "groq",
     "model": "llama2-70b-4096",
-    "tokens": 4096,
+    "inputTokens": 4096,
     "apiKey": "xyz"
   },
   // OpenAI & OpenAI-compatible APIs
@@ -427,8 +428,10 @@ Example VS Code user settings JSON configuration:
   - The LLM provider type.
 - `model`: `string`
   - The ID of the model, e.g. `"gemini-1.5-pro-latest"`
-- `tokens`: `number` - optional
+- `inputTokens`: `number` - optional
   - The context window size of the model. Default: `7000`.
+- `outputTokens`: `number` - optional
+  - The maximum number of tokens the model can generate in a response. Default: `1000`.
 - `apiKey`: `string` - optional
   - The API key for the endpoint. Required if the provider is `"google"` or `"groq"`.
 - `apiEndpoint`: `string` - optional
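
For reference, a complete settings entry using the renamed fields might look like the sketch below. The enclosing `cody.dev.models` key is an assumption taken from the surrounding Cody docs; it is not visible in these hunks, and only the fields shown in the diff are used:

```jsonc
// VS Code settings.json (sketch; assumes these entries live under "cody.dev.models")
{
  "cody.dev.models": [
    // Google Gemini: large context window with an explicit output cap
    {
      "provider": "google",
      "model": "gemini-1.5-pro-latest",
      "inputTokens": 1000000, // context window size (default: 7000)
      "outputTokens": 8192,   // max tokens in the model's response (default: 1000)
      "apiKey": "xyz"
    },
    // Groq: omitting "outputTokens" falls back to the default of 1000
    {
      "provider": "groq",
      "model": "llama2-70b-4096",
      "inputTokens": 4096,
      "apiKey": "xyz"
    }
  ]
}
```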