Which model to call, as a string or a <a href="../../google/generativeai/types/Model.md"><code>types.Model</code></a>.
</td>
</tr><tr>
<td>
`context`<a id="context"></a>
</td>
<td>
Text that should be provided to the model first, to ground the response.

If not empty, this `context` will be given to the model first before the
`examples` and `messages`.

Anything included in this field will take precedence over history in `messages`
if the total input size exceeds the model's <a href="../../google/generativeai/protos/Model.md#input_token_limit"><code>Model.input_token_limit</code></a>.
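The input ordering described above can be sketched in plain Python. This is illustrative only: the real prompt assembly and token accounting happen server-side, and `assemble_prompt` is a hypothetical helper, not part of the library.

```python
def assemble_prompt(context, examples, messages):
    # Illustrative ordering: grounding context first, then example
    # (input, output) pairs, then the conversation history.
    parts = []
    if context:
        parts.append(context)
    for user_input, model_output in examples:
        parts.extend([user_input, model_output])
    parts.extend(messages)
    return parts

parts = assemble_prompt(
    "Answer as a pirate.",
    [("Hi", "Ahoy, matey!")],
    ["How are you?"],
)
```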
</td>
</tr><tr>
<td>
`examples`<a id="examples"></a>
</td>
<td>
Examples of what the model should generate.

This includes both the user input and the response that the model should
emulate.

These `examples` are treated identically to conversation messages except
that they take precedence over the history in `messages`:
If the total input size exceeds the model's `input_token_limit` the input
will be truncated. Items will be dropped from `messages` before `examples`.
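The drop order can be sketched as a toy model. Note the assumptions: size is measured in characters here rather than tokens, and `truncate_inputs` is a hypothetical helper; the real truncation happens server-side.

```python
def truncate_inputs(examples, messages, limit):
    """Toy truncation in *characters*, not tokens: drop the oldest
    `messages` first, and touch `examples` only once `messages` is empty."""
    examples, messages = list(examples), list(messages)

    def total():
        return (sum(len(i) + len(o) for i, o in examples)
                + sum(len(m) for m in messages))

    while messages and total() > limit:
        messages.pop(0)   # oldest message goes first
    while examples and total() > limit:
        examples.pop(0)   # examples are only dropped after messages
    return examples, messages

kept_examples, kept_messages = truncate_inputs(
    [("hi", "yo")], ["older message", "newest"], limit=12
)
```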
</td>
</tr><tr>
<td>
`messages`<a id="messages"></a>
</td>
<td>
A snapshot of the conversation history sorted chronologically.
Turns alternate between two authors.
If the total input size exceeds the model's `input_token_limit` the input
will be truncated: The oldest items will be dropped from `messages`.
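A minimal sketch of such a history. The dict shape with `author`/`content` keys is an assumption about the message format, not taken from this page:

```python
# Hypothetical message dicts; the two authors simply alternate.
messages = [
    {"author": "0", "content": "Hello!"},
    {"author": "1", "content": "Hi! How can I help?"},
    {"author": "0", "content": "Tell me a joke."},
]

# Chronological order with alternating authors, as described above.
alternates = all(
    a["author"] != b["author"] for a, b in zip(messages, messages[1:])
)
```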
</td>
</tr><tr>
<td>
`temperature`<a id="temperature"></a>
</td>
<td>
Controls the randomness of the output. Must be positive.
Typical values are in the range: `[0.0, 1.0]`. Higher values produce a
more random and varied response. A temperature of zero will be deterministic.
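Conceptually, temperature rescales the model's logits before sampling. A sketch of that idea (illustrative only, not the service's actual implementation):

```python
import math

def temperature_weights(logits, temperature):
    # temperature == 0 degenerates to argmax: fully deterministic.
    if temperature == 0.0:
        best = max(range(len(logits)), key=logits.__getitem__)
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    # Divide logits by temperature, then softmax: lower temperature
    # sharpens the distribution, higher temperature flattens it.
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    norm = sum(exps)
    return [e / norm for e in exps]

cold = temperature_weights([2.0, 1.0, 0.0], 0.0)
warm = temperature_weights([2.0, 1.0, 0.0], 1.0)
```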
</td>
</tr><tr>
<td>
`candidate_count`<a id="candidate_count"></a>
</td>
<td>
The **maximum** number of generated response messages to return.
This value must be between `[1, 8]`, inclusive. If unset, this
will default to `1`.

Note: Only unique candidates are returned. Higher temperatures are more
likely to produce unique candidates. Setting `temperature=0.0` will always
return 1 candidate regardless of the `candidate_count`.
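The uniqueness behavior can be sketched as follows; `unique_candidates` is a hypothetical helper, not library code:

```python
def unique_candidates(raw_candidates, candidate_count):
    # Collapse duplicate generations, keeping first occurrences, and cap
    # the result at candidate_count. With temperature=0 every generation
    # is identical, so exactly one candidate survives.
    seen, unique = set(), []
    for candidate in raw_candidates:
        if candidate not in seen:
            seen.add(candidate)
            unique.append(candidate)
        if len(unique) == candidate_count:
            break
    return unique

deduped = unique_candidates(["Hi!", "Hi!", "Hello!"], candidate_count=8)
```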
</td>
</tr><tr>
<td>
`top_k`<a id="top_k"></a>
</td>
<td>
The API uses combined [nucleus](https://arxiv.org/abs/1904.09751) and
top-k sampling.
`top_k` sets the maximum number of tokens to sample from on each step.
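A toy sketch of the top-k restriction (illustrative only; assumes `probs` is already sorted in descending order):

```python
def top_k_probs(probs, k):
    # Keep the k highest-probability tokens, zero out the rest, and
    # renormalize so the kept mass sums to 1.
    kept = probs[:k]
    norm = sum(kept)
    return [p / norm for p in kept] + [0.0] * (len(probs) - k)

filtered = top_k_probs([0.5, 0.3, 0.1, 0.1], k=2)
```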
</td>
</tr><tr>
<td>
`top_p`<a id="top_p"></a>
</td>
<td>
The API uses combined [nucleus](https://arxiv.org/abs/1904.09751) and
top-k sampling.
as `[0.625, 0.25, 0.125, 0, 0, 0]`.
Typical values are in the `[0.9, 1.0]` range.
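A sketch of nucleus filtering consistent with the `[0.625, 0.25, 0.125, 0, 0, 0]` example above (illustrative only; assumes `probs` is sorted in descending order):

```python
def nucleus_probs(probs, top_p):
    # Keep the smallest prefix of the sorted probabilities whose
    # cumulative mass reaches top_p, then renormalize. The epsilon
    # guards against float rounding at the cutoff.
    kept, cumulative = [], 0.0
    for p in probs:
        kept.append(p)
        cumulative += p
        if cumulative >= top_p - 1e-9:
            break
    norm = sum(kept)
    return [p / norm for p in kept] + [0.0] * (len(probs) - len(kept))

# With top_p=0.8 the first three tokens are kept and renormalized,
# giving [0.625, 0.25, 0.125, 0, 0, 0].
filtered = nucleus_probs([0.5, 0.2, 0.1, 0.1, 0.05, 0.05], top_p=0.8)
```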
</td>
</tr><tr>
<td>
`prompt`<a id="prompt"></a>
</td>
<td>
You may pass a <a href="../../google/generativeai/types/MessagePromptOptions.md"><code>types.MessagePromptOptions</code></a> **instead** of
setting `context`/`examples`/`messages`, but not both.
</td>
</tr><tr>
<td>
`client`<a id="client"></a>
</td>
<td>
If you're not relying on the default client, you pass a