Commit cb546f5: Fix response modalities type

1 parent: 12b33b6

File tree: 3 files changed, +4 additions, -4 deletions

common/api-review/ai.api.md (1 addition, 1 deletion)

```diff
@@ -715,7 +715,7 @@ export interface LiveGenerationConfig {
     frequencyPenalty?: number;
     maxOutputTokens?: number;
     presencePenalty?: number;
-    responseModalities?: [ResponseModality];
+    responseModalities?: ResponseModality[];
     speechConfig?: SpeechConfig;
     temperature?: number;
     topK?: number;
```

docs-devsite/ai.livegenerationconfig.md (2 additions, 2 deletions)

````diff
@@ -28,7 +28,7 @@ export interface LiveGenerationConfig
 | [frequencyPenalty](./ai.livegenerationconfig.md#livegenerationconfigfrequencypenalty) | number | <b><i>(Public Preview)</i></b> Frequency penalties. |
 | [maxOutputTokens](./ai.livegenerationconfig.md#livegenerationconfigmaxoutputtokens) | number | <b><i>(Public Preview)</i></b> Specifies the maximum number of tokens that can be generated in the response. The number of tokens per word varies depending on the language outputted. Is unbounded by default. |
 | [presencePenalty](./ai.livegenerationconfig.md#livegenerationconfigpresencepenalty) | number | <b><i>(Public Preview)</i></b> Positive penalties. |
-| [responseModalities](./ai.livegenerationconfig.md#livegenerationconfigresponsemodalities) | \[[ResponseModality](./ai.md#responsemodality)<!-- -->\] | <b><i>(Public Preview)</i></b> The modalities of the response. |
+| [responseModalities](./ai.livegenerationconfig.md#livegenerationconfigresponsemodalities) | [ResponseModality](./ai.md#responsemodality)<!-- -->\[\] | <b><i>(Public Preview)</i></b> The modalities of the response. |
 | [speechConfig](./ai.livegenerationconfig.md#livegenerationconfigspeechconfig) | [SpeechConfig](./ai.speechconfig.md#speechconfig_interface) | <b><i>(Public Preview)</i></b> Configuration for speech synthesis. |
 | [temperature](./ai.livegenerationconfig.md#livegenerationconfigtemperature) | number | <b><i>(Public Preview)</i></b> Controls the degree of randomness in token selection. A <code>temperature</code> value of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible. |
 | [topK](./ai.livegenerationconfig.md#livegenerationconfigtopk) | number | <b><i>(Public Preview)</i></b> Changes how the model selects token for output. A <code>topK</code> value of 1 means the select token is the most probable among all tokens in the model's vocabulary, while a <code>topK</code> value 3 means that the next token is selected from among the 3 most probably using probabilities sampled. Tokens are then further filtered with the highest selected <code>temperature</code> sampling. Defaults to 40 if unspecified. |
@@ -83,7 +83,7 @@ The modalities of the response.
 <b>Signature:</b>

 ```typescript
-responseModalities?: [ResponseModality];
+responseModalities?: ResponseModality[];
 ```

 ## LiveGenerationConfig.speechConfig
````

packages/ai/src/types/requests.ts (1 addition, 1 deletion)

```diff
@@ -178,7 +178,7 @@ export interface LiveGenerationConfig {
   /**
    * The modalities of the response.
    */
-  responseModalities?: [ResponseModality];
+  responseModalities?: ResponseModality[];
 }

 /**
```
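The change in all three files is the same: `[ResponseModality]` is a one-element tuple type, so callers could only ever pass exactly one modality, while `ResponseModality[]` is an ordinary array type of any length. A minimal sketch of the distinction, using a hypothetical stand-in for the SDK's `ResponseModality` union:

```typescript
// Hypothetical stand-in for the SDK's ResponseModality type (not the real definition).
type ResponseModality = "TEXT" | "IMAGE" | "AUDIO";

// Before the fix: a tuple type admitting exactly one element.
type Before = [ResponseModality];

// After the fix: an array type admitting any number of elements.
type After = ResponseModality[];

const one: Before = ["TEXT"]; // OK: tuple of exactly one modality
// const two: Before = ["TEXT", "AUDIO"]; // Type error: tuple expects exactly 1 element
const many: After = ["TEXT", "AUDIO"]; // OK: arrays may hold several modalities
```

With the array type, a config such as `{ responseModalities: ["TEXT", "AUDIO"] }` type-checks, which the tuple form rejected.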

0 commit comments