Commit a38b5fd

remove candidateCount

1 parent 781e377

3 files changed: +0 −20 lines changed

common/api-review/ai.api.md

Lines changed: 0 additions & 1 deletion
@@ -707,7 +707,6 @@ export class IntegerSchema extends Schema {
 
 // @beta
 export interface LiveGenerationConfig {
-  candidateCount?: number;
   frequencyPenalty?: number;
   maxOutputTokens?: number;
   presencePenalty?: number;

docs-devsite/ai.livegenerationconfig.md

Lines changed: 0 additions & 14 deletions
@@ -25,7 +25,6 @@ export interface LiveGenerationConfig
 
 | Property | Type | Description |
 | --- | --- | --- |
-| [candidateCount](./ai.livegenerationconfig.md#livegenerationconfigcandidatecount) | number | <b><i>(Public Preview)</i></b> The maximum number of generated response messages to return. This value must be between 1 and 8. If unset, this will default to 1. |
 | [frequencyPenalty](./ai.livegenerationconfig.md#livegenerationconfigfrequencypenalty) | number | <b><i>(Public Preview)</i></b> Frequency penalties. |
 | [maxOutputTokens](./ai.livegenerationconfig.md#livegenerationconfigmaxoutputtokens) | number | <b><i>(Public Preview)</i></b> Specifies the maximum number of tokens that can be generated in the response. The number of tokens per word varies depending on the language outputted. Is unbounded by default. |
 | [presencePenalty](./ai.livegenerationconfig.md#livegenerationconfigpresencepenalty) | number | <b><i>(Public Preview)</i></b> Positive penalties. |
@@ -35,19 +34,6 @@ export interface LiveGenerationConfig
 | [topK](./ai.livegenerationconfig.md#livegenerationconfigtopk) | number | <b><i>(Public Preview)</i></b> Changes how the model selects tokens for output. A <code>topK</code> value of 1 means the selected token is the most probable among all tokens in the model's vocabulary, while a <code>topK</code> value of 3 means that the next token is selected from among the 3 most probable using sampled probabilities. Tokens are then further filtered with <code>temperature</code> sampling. Defaults to 40 if unspecified. |
 | [topP](./ai.livegenerationconfig.md#livegenerationconfigtopp) | number | <b><i>(Public Preview)</i></b> Changes how the model selects tokens for output. Tokens are selected from the most to least probable until the sum of their probabilities equals the <code>topP</code> value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 respectively and the <code>topP</code> value is 0.5, then the model will select either A or B as the next token by using the <code>temperature</code> and exclude C as a candidate. Defaults to 0.95 if unset. |
 
-## LiveGenerationConfig.candidateCount
-
-> This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
->
-
-The maximum number of generated response messages to return. This value must be between 1 and 8. If unset, this will default to 1.
-
-<b>Signature:</b>
-
-```typescript
-candidateCount?: number;
-```
-
 ## LiveGenerationConfig.frequencyPenalty
 
 > This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.
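The nucleus-sampling behavior that the <code>topP</code> table entry above describes can be sketched in a few lines. This is a hypothetical helper for illustration only (not part of the Firebase AI SDK): it keeps the most probable tokens until their cumulative probability reaches <code>topP</code>.

```typescript
// Minimal sketch of topP (nucleus) filtering as described in the docs table.
// `topPFilter` is a hypothetical name, not an SDK export.
function topPFilter(probs: Record<string, number>, topP: number): string[] {
  // Sort tokens from most to least probable.
  const sorted = Object.entries(probs).sort((a, b) => b[1] - a[1]);
  const kept: string[] = [];
  let cumulative = 0;
  for (const [token, p] of sorted) {
    if (cumulative >= topP) break; // enough probability mass collected
    kept.push(token);
    cumulative += p;
  }
  return kept;
}

// The example from the docs: A=0.3, B=0.2, C=0.1 with topP=0.5
// keeps A and B and excludes C.
const kept = topPFilter({ A: 0.3, B: 0.2, C: 0.1 }, 0.5);
console.log(kept); // → ["A", "B"]
```

The model then samples the next token from the surviving candidates using <code>temperature</code>-scaled probabilities.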

packages/ai/src/types/requests.ts

Lines changed: 0 additions & 5 deletions
@@ -140,11 +140,6 @@ export interface LiveGenerationConfig {
    * Configuration for speech synthesis.
    */
   speechConfig?: SpeechConfig;
-  /**
-   * The maximum number of generated response messages to return. This value must be between
-   * 1 and 8. If unset, this will default to 1.
-   */
-  candidateCount?: number;
   /**
    * Specifies the maximum number of tokens that can be generated in the response. The number of
    * tokens per word varies depending on the language outputted. Is unbounded by default.
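After this change, callers can no longer set <code>candidateCount</code> on a live-session config. A minimal sketch of the remaining shape, using a local interface copy limited to the fields visible in this diff (an illustration, not an import from the SDK):

```typescript
// Local copy of the post-commit LiveGenerationConfig shape, reduced to the
// fields shown in this diff -- for illustration, not an SDK import.
interface LiveGenerationConfig {
  speechConfig?: unknown; // SpeechConfig in the real SDK
  frequencyPenalty?: number;
  maxOutputTokens?: number;
  presencePenalty?: number;
  topK?: number;
  topP?: number;
}

// candidateCount is no longer part of the interface; including it here
// would now be a compile-time error.
const config: LiveGenerationConfig = {
  maxOutputTokens: 256,
  topK: 40, // documented default
  topP: 0.95, // documented default
};
```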
