Commit b3d1e01

Add responseLogProbs and logProbs parameters to generateContentReq (#266)
* Add responseLogProbs and logProbs parameters to generateContentReq
* update docs & test
* Update docs and add avglogprobs and logprobsresult as output
* update variable names in responses.ts
* Move parameters to GenerationConfig
* Update test cases for new parameters to test generationConfig
* Updated generatecontentresponse testcase
* Update case of logprobs
* put back parameters in test case
1 parent dda0b5c commit b3d1e01

30 files changed: +420 -68 lines

.changeset/cyan-pants-move.md

Lines changed: 1 addition & 1 deletion
@@ -2,4 +2,4 @@
 "@google/generative-ai": minor
 ---
 
-Add `frequencyPenalty` and `presencePenalty` parameters support for `generateContent()`
+Add `frequencyPenalty`, `presencePenalty`, `responseLogprobs`, and `logProbs` parameters support for `generationConfig`. Added `avgLogprobs` and `logprobsResult` to `GenerateContentResponse`. Updated test cases.
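As a sketch of how the moved parameters fit together after this change (the `GenerationConfig` excerpt mirrors the API report in this commit; the literal values are illustrative):

```typescript
// Excerpt of GenerationConfig as of this commit; only the fields
// relevant to this change are reproduced.
interface GenerationConfig {
  candidateCount?: number;
  frequencyPenalty?: number;
  logprobs?: number;
  maxOutputTokens?: number;
  presencePenalty?: number;
  responseLogprobs?: boolean;
}

// The parameters now live on generationConfig rather than on BaseParams:
// responseLogprobs switches log probabilities on, and logprobs sets how
// many top alternatives to return at each decoding step.
const generationConfig: GenerationConfig = {
  responseLogprobs: true,
  logprobs: 3,
  frequencyPenalty: 0.5,
  presencePenalty: 0.5,
};
```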

common/api-review/generative-ai.api.md

Lines changed: 24 additions & 6 deletions
@@ -6,10 +6,8 @@
 
 // @public
 export interface BaseParams {
-  frequencyPenalty?: number;
   // (undocumented)
   generationConfig?: GenerationConfig;
-  presencePenalty?: number;
   // (undocumented)
   safetySettings?: SafetySetting[];
 }
@@ -371,6 +369,7 @@ export interface FunctionResponsePart {
 
 // @public
 export interface GenerateContentCandidate {
+  avgLogprobs?: number;
   // (undocumented)
   citationMetadata?: CitationMetadata;
   // (undocumented)
@@ -381,6 +380,7 @@ export interface GenerateContentCandidate {
   finishReason?: FinishReason;
   // (undocumented)
   index: number;
+  logprobsResult?: LogprobsResult;
   // (undocumented)
   safetyRatings?: SafetyRating[];
 }
@@ -429,8 +429,12 @@ export interface GenerateContentStreamResult {
 export interface GenerationConfig {
   // (undocumented)
   candidateCount?: number;
+  frequencyPenalty?: number;
+  logprobs?: number;
   // (undocumented)
   maxOutputTokens?: number;
+  presencePenalty?: number;
+  responseLogprobs?: boolean;
   responseMimeType?: string;
   responseSchema?: ResponseSchema;
   // (undocumented)
@@ -460,17 +464,13 @@ export class GenerativeModel {
   cachedContent: CachedContent;
   countTokens(request: CountTokensRequest | string | Array<string | Part>, requestOptions?: SingleRequestOptions): Promise<CountTokensResponse>;
   embedContent(request: EmbedContentRequest | string | Array<string | Part>, requestOptions?: SingleRequestOptions): Promise<EmbedContentResponse>;
-  // (undocumented)
-  frequencyPenalty?: number;
   generateContent(request: GenerateContentRequest | string | Array<string | Part>, requestOptions?: SingleRequestOptions): Promise<GenerateContentResult>;
   generateContentStream(request: GenerateContentRequest | string | Array<string | Part>, requestOptions?: SingleRequestOptions): Promise<GenerateContentStreamResult>;
   // (undocumented)
   generationConfig: GenerationConfig;
   // (undocumented)
   model: string;
   // (undocumented)
-  presencePenalty?: number;
-  // (undocumented)
   safetySettings: SafetySetting[];
   startChat(startChatParams?: StartChatParams): ChatSession;
   // (undocumented)
@@ -577,6 +577,19 @@ export interface InlineDataPart {
   text?: never;
 }
 
+// @public
+export interface LogprobsCandidate {
+  logProbability: number;
+  token: string;
+  tokenID: number;
+}
+
+// @public
+export interface LogprobsResult {
+  chosenCandidates: LogprobsCandidate[];
+  topCandidates: TopCandidates[];
+}
+
 // @public
 export interface ModelParams extends BaseParams {
   // (undocumented)
@@ -730,6 +743,11 @@ export interface ToolConfig {
   functionCallingConfig: FunctionCallingConfig;
 }
 
+// @public
+export interface TopCandidates {
+  candidates: LogprobsCandidate[];
+}
+
 // @public
 export interface UsageMetadata {
   cachedContentTokenCount?: number;
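To make the relationship between the new output types concrete, here is a minimal sketch that consumes a hand-built `LogprobsResult` (the data is illustrative, not a real API response); the candidate's `avgLogprobs` would be expected to match the mean score of the chosen tokens:

```typescript
// Interfaces copied from the API report above.
interface LogprobsCandidate {
  logProbability: number;
  token: string;
  tokenID: number;
}

interface TopCandidates {
  candidates: LogprobsCandidate[];
}

interface LogprobsResult {
  chosenCandidates: LogprobsCandidate[];
  topCandidates: TopCandidates[];
}

// Hand-made data standing in for candidate.logprobsResult: one entry per
// decoded token, plus the top alternatives at each step.
const result: LogprobsResult = {
  chosenCandidates: [
    { token: "Hello", tokenID: 1, logProbability: -0.1 },
    { token: "!", tokenID: 2, logProbability: -0.3 },
  ],
  topCandidates: [
    { candidates: [{ token: "Hello", tokenID: 1, logProbability: -0.1 }] },
    { candidates: [{ token: "!", tokenID: 2, logProbability: -0.3 }] },
  ],
};

// Mean log probability of the chosen tokens.
const avgLogprobs =
  result.chosenCandidates.reduce((sum, c) => sum + c.logProbability, 0) /
  result.chosenCandidates.length;
```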

docs/reference/main/generative-ai.baseparams.md

Lines changed: 0 additions & 2 deletions
@@ -16,8 +16,6 @@ export interface BaseParams
 
 | Property | Modifiers | Type | Description |
 | --- | --- | --- | --- |
-| [frequencyPenalty?](./generative-ai.baseparams.frequencypenalty.md) | | number | _(Optional)_ Frequency penalty applied to the next token's logprobs, multiplied by the number of times each token has been seen in the respponse so far. |
 | [generationConfig?](./generative-ai.baseparams.generationconfig.md) | | [GenerationConfig](./generative-ai.generationconfig.md) | _(Optional)_ |
-| [presencePenalty?](./generative-ai.baseparams.presencepenalty.md) | | number | _(Optional)_ Presence penalty applied to the next token's logprobs if the token has already been seen in the response. |
 | [safetySettings?](./generative-ai.baseparams.safetysettings.md) | | [SafetySetting](./generative-ai.safetysetting.md)<!-- -->\[\] | _(Optional)_ |

docs/reference/main/generative-ai.generatecontentcandidate.avglogprobs.md

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+<!-- Do not edit this file. It is automatically generated by API Documenter. -->
+
+[Home](./index.md) &gt; [@google/generative-ai](./generative-ai.md) &gt; [GenerateContentCandidate](./generative-ai.generatecontentcandidate.md) &gt; [avgLogprobs](./generative-ai.generatecontentcandidate.avglogprobs.md)
+
+## GenerateContentCandidate.avgLogprobs property
+
+Average log probability score of the candidate.
+
+**Signature:**
+
+```typescript
+avgLogprobs?: number;
+```
docs/reference/main/generative-ai.generatecontentcandidate.logprobsresult.md

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+<!-- Do not edit this file. It is automatically generated by API Documenter. -->
+
+[Home](./index.md) &gt; [@google/generative-ai](./generative-ai.md) &gt; [GenerateContentCandidate](./generative-ai.generatecontentcandidate.md) &gt; [logprobsResult](./generative-ai.generatecontentcandidate.logprobsresult.md)
+
+## GenerateContentCandidate.logprobsResult property
+
+Log-likelihood scores for the response tokens and top tokens.
+
+**Signature:**
+
+```typescript
+logprobsResult?: LogprobsResult;
+```

docs/reference/main/generative-ai.generatecontentcandidate.md

Lines changed: 2 additions & 0 deletions
@@ -16,10 +16,12 @@ export interface GenerateContentCandidate
 
 | Property | Modifiers | Type | Description |
 | --- | --- | --- | --- |
+| [avgLogprobs?](./generative-ai.generatecontentcandidate.avglogprobs.md) | | number | _(Optional)_ Average log probability score of the candidate. |
 | [citationMetadata?](./generative-ai.generatecontentcandidate.citationmetadata.md) | | [CitationMetadata](./generative-ai.citationmetadata.md) | _(Optional)_ |
 | [content](./generative-ai.generatecontentcandidate.content.md) | | [Content](./generative-ai.content.md) | |
 | [finishMessage?](./generative-ai.generatecontentcandidate.finishmessage.md) | | string | _(Optional)_ |
 | [finishReason?](./generative-ai.generatecontentcandidate.finishreason.md) | | [FinishReason](./generative-ai.finishreason.md) | _(Optional)_ |
 | [index](./generative-ai.generatecontentcandidate.index.md) | | number | |
+| [logprobsResult?](./generative-ai.generatecontentcandidate.logprobsresult.md) | | [LogprobsResult](./generative-ai.logprobsresult.md) | _(Optional)_ Log-likelihood scores for the response tokens and top tokens. |
 | [safetyRatings?](./generative-ai.generatecontentcandidate.safetyratings.md) | | [SafetyRating](./generative-ai.safetyrating.md)<!-- -->\[\] | _(Optional)_ |

docs/reference/main/generative-ai.baseparams.frequencypenalty.md renamed to docs/reference/main/generative-ai.generationconfig.frequencypenalty.md

Lines changed: 2 additions & 2 deletions
@@ -1,8 +1,8 @@
 <!-- Do not edit this file. It is automatically generated by API Documenter. -->
 
-[Home](./index.md) &gt; [@google/generative-ai](./generative-ai.md) &gt; [BaseParams](./generative-ai.baseparams.md) &gt; [frequencyPenalty](./generative-ai.baseparams.frequencypenalty.md)
+[Home](./index.md) &gt; [@google/generative-ai](./generative-ai.md) &gt; [GenerationConfig](./generative-ai.generationconfig.md) &gt; [frequencyPenalty](./generative-ai.generationconfig.frequencypenalty.md)
 
-## BaseParams.frequencyPenalty property
+## GenerationConfig.frequencyPenalty property
 
 Frequency penalty applied to the next token's logprobs, multiplied by the number of times each token has been seen in the respponse so far.

docs/reference/main/generative-ai.generationconfig.logprobs.md

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+<!-- Do not edit this file. It is automatically generated by API Documenter. -->
+
+[Home](./index.md) &gt; [@google/generative-ai](./generative-ai.md) &gt; [GenerationConfig](./generative-ai.generationconfig.md) &gt; [logprobs](./generative-ai.generationconfig.logprobs.md)
+
+## GenerationConfig.logprobs property
+
+Valid if responseLogProbs is set to True. This will set the number of top logprobs to return at each decoding step in the logprobsResult.
+
+**Signature:**
+
+```typescript
+logprobs?: number;
+```

docs/reference/main/generative-ai.generationconfig.md

Lines changed: 4 additions & 0 deletions
@@ -17,7 +17,11 @@ export interface GenerationConfig
 | Property | Modifiers | Type | Description |
 | --- | --- | --- | --- |
 | [candidateCount?](./generative-ai.generationconfig.candidatecount.md) | | number | _(Optional)_ |
+| [frequencyPenalty?](./generative-ai.generationconfig.frequencypenalty.md) | | number | _(Optional)_ Frequency penalty applied to the next token's logprobs, multiplied by the number of times each token has been seen in the respponse so far. |
+| [logprobs?](./generative-ai.generationconfig.logprobs.md) | | number | _(Optional)_ Valid if responseLogProbs is set to True. This will set the number of top logprobs to return at each decoding step in the logprobsResult. |
 | [maxOutputTokens?](./generative-ai.generationconfig.maxoutputtokens.md) | | number | _(Optional)_ |
+| [presencePenalty?](./generative-ai.generationconfig.presencepenalty.md) | | number | _(Optional)_ Presence penalty applied to the next token's logprobs if the token has already been seen in the response. |
+| [responseLogprobs?](./generative-ai.generationconfig.responselogprobs.md) | | boolean | _(Optional)_ If True, export the logprobs results in response. |
 | [responseMimeType?](./generative-ai.generationconfig.responsemimetype.md) | | string | _(Optional)_ Output response mimetype of the generated candidate text. Supported mimetype: <code>text/plain</code>: (default) Text output. <code>application/json</code>: JSON response in the candidates. |
 | [responseSchema?](./generative-ai.generationconfig.responseschema.md) | | [ResponseSchema](./generative-ai.responseschema.md) | _(Optional)_ Output response schema of the generated candidate text. Note: This only applies when the specified <code>responseMIMEType</code> supports a schema; currently this is limited to <code>application/json</code>. |
 | [stopSequences?](./generative-ai.generationconfig.stopsequences.md) | | string\[\] | _(Optional)_ |
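Per the table above, `logprobs` is only valid when `responseLogprobs` is set to true. A small guard for that invariant could look like the following sketch (`validateLogprobsConfig` is a hypothetical helper, not part of the SDK):

```typescript
// Minimal slice of GenerationConfig needed for this check.
interface GenerationConfig {
  responseLogprobs?: boolean;
  logprobs?: number;
}

// Hypothetical guard: reject configs that request top-logprob counts
// without also enabling logprobs in the response.
function validateLogprobsConfig(config: GenerationConfig): boolean {
  if (config.logprobs !== undefined && !config.responseLogprobs) {
    return false;
  }
  return true;
}
```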

docs/reference/main/generative-ai.baseparams.presencepenalty.md renamed to docs/reference/main/generative-ai.generationconfig.presencepenalty.md

Lines changed: 2 additions & 2 deletions
@@ -1,8 +1,8 @@
 <!-- Do not edit this file. It is automatically generated by API Documenter. -->
 
-[Home](./index.md) &gt; [@google/generative-ai](./generative-ai.md) &gt; [BaseParams](./generative-ai.baseparams.md) &gt; [presencePenalty](./generative-ai.baseparams.presencepenalty.md)
+[Home](./index.md) &gt; [@google/generative-ai](./generative-ai.md) &gt; [GenerationConfig](./generative-ai.generationconfig.md) &gt; [presencePenalty](./generative-ai.generationconfig.presencepenalty.md)
 
-## BaseParams.presencePenalty property
+## GenerationConfig.presencePenalty property
 
 Presence penalty applied to the next token's logprobs if the token has already been seen in the response.

0 commit comments
