
Commit 6380877

Update chat_async.md
1 parent a1e433b · commit 6380877

1 file changed

docs/api/google/generativeai/chat_async.md

Lines changed: 44 additions & 48 deletions

@@ -1,68 +1,27 @@
-description: Calls the API to initiate a chat with a model using provided parameters
-
-<div itemscope itemtype="http://developers.google.com/ReferenceObject">
-<meta itemprop="name" content="google.generativeai.chat_async" />
-<meta itemprop="path" content="Stable" />
-</div>
-
-# google.generativeai.chat_async
-
-<!-- Insert buttons and diff -->
-
-<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
-<td>
-  <a target="_blank" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/discuss.py#L411-L441">
-    <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
-    View source on GitHub
-  </a>
-</td>
-</table>
-
-
-
-Calls the API to initiate a chat with a model using provided parameters
-
-
-<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
-<code>google.generativeai.chat_async(
-    *,
-    model=&#x27;models/chat-bison-001&#x27;,
-    context=None,
-    examples=None,
-    messages=None,
-    temperature=None,
-    candidate_count=None,
-    top_p=None,
-    top_k=None,
-    prompt=None,
-    client=None,
-    request_options=None
-)
-</code></pre>
-
-
-
-<!-- Placeholder for "Used in" -->
-
-
-<!-- Tabular view -->
 
 <table class="responsive fixed orange">
 <colgroup><col width="214px"><col></colgroup>
 <tr><th colspan="2"><h2 class="add-link">Args</h2></th></tr>
 
 <tr>
 <td>
+
 `model`<a id="model"></a>
+
 </td>
 <td>
+
 Which model to call, as a string or a <a href="../../google/generativeai/types/Model.md"><code>types.Model</code></a>.
+
 </td>
 </tr><tr>
 <td>
+
 `context`<a id="context"></a>
+
 </td>
 <td>
+
 Text that should be provided to the model first, to ground the response.
 
 If not empty, this `context` will be given to the model first before the
@@ -78,12 +37,16 @@ Examples:
 
 Anything included in this field will take precedence over history in `messages`
 if the total input size exceeds the model's <a href="../../google/generativeai/protos/Model.md#input_token_limit"><code>Model.input_token_limit</code></a>.
+
 </td>
 </tr><tr>
 <td>
+
 `examples`<a id="examples"></a>
+
 </td>
 <td>
+
 Examples of what the model should generate.
 
 This includes both the user input and the response that the model should
@@ -93,34 +56,46 @@ These `examples` are treated identically to conversation messages except
 that they take precedence over the history in `messages`:
 If the total input size exceeds the model's `input_token_limit` the input
 will be truncated. Items will be dropped from `messages` before `examples`.
+
 </td>
 </tr><tr>
 <td>
+
 `messages`<a id="messages"></a>
+
 </td>
 <td>
+
 A snapshot of the conversation history sorted chronologically.
 
 Turns alternate between two authors.
 
 If the total input size exceeds the model's `input_token_limit` the input
 will be truncated: The oldest items will be dropped from `messages`.
+
 </td>
 </tr><tr>
 <td>
+
 `temperature`<a id="temperature"></a>
+
 </td>
 <td>
+
 Controls the randomness of the output. Must be positive.
 
 Typical values are in the range `[0.0, 1.0]`. Higher values produce a
 more random and varied response. A temperature of zero will be deterministic.
+
 </td>
 </tr><tr>
 <td>
+
 `candidate_count`<a id="candidate_count"></a>
+
 </td>
 <td>
+
 The **maximum** number of generated response messages to return.
 
 This value must be between `[1, 8]`, inclusive. If unset, this
@@ -129,22 +104,30 @@ will default to `1`.
 Note: Only unique candidates are returned. Higher temperatures are more
 likely to produce unique candidates. Setting `temperature=0.0` will always
 return 1 candidate regardless of the `candidate_count`.
+
 </td>
 </tr><tr>
 <td>
+
 `top_k`<a id="top_k"></a>
+
 </td>
 <td>
+
 The API uses combined [nucleus](https://arxiv.org/abs/1904.09751) and
 top-k sampling.
 
 `top_k` sets the maximum number of tokens to sample from on each step.
+
 </td>
 </tr><tr>
 <td>
+
 `top_p`<a id="top_p"></a>
+
 </td>
 <td>
+
 The API uses combined [nucleus](https://arxiv.org/abs/1904.09751) and
 top-k sampling.
 
@@ -156,29 +139,42 @@ top-k sampling.
 as `[0.625, 0.25, 0.125, 0, 0, 0]`.
 
 Typical values are in the `[0.9, 1.0]` range.
+
 </td>
 </tr><tr>
 <td>
+
 `prompt`<a id="prompt"></a>
+
 </td>
 <td>
+
 You may pass a <a href="../../google/generativeai/types/MessagePromptOptions.md"><code>types.MessagePromptOptions</code></a> **instead** of
 setting `context`/`examples`/`messages`, but not both.
+
 </td>
 </tr><tr>
 <td>
+
 `client`<a id="client"></a>
+
 </td>
 <td>
+
 If you're not relying on the default client, you may pass a
 `glm.DiscussServiceClient` instead.
+
 </td>
 </tr><tr>
 <td>
+
 `request_options`<a id="request_options"></a>
+
 </td>
 <td>
+
 Options for the request.
+
 </td>
 </tr>
 </table>
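
For orientation, the call documented in this file can be exercised roughly as follows. This is a minimal sketch, assuming the deprecated PaLM `discuss` API that backs `chat_async`; the API key, the illustrative strings, the `(input, output)` tuple shorthand for `examples`, and the `response.last` accessor follow the synchronous `genai.chat` conventions and are assumptions, not contents of this page.

```python
# Minimal sketch: calling chat_async with the arguments documented above.
import asyncio

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key


async def main():
    response = await genai.chat_async(
        model="models/chat-bison-001",
        # `context` grounds the reply; it takes precedence over `messages`
        # history if the input exceeds the model's input_token_limit.
        context="You are a terse SQL tutor.",
        # Each example pairs a user input with the response the model should
        # emulate; examples outrank `messages` history under truncation.
        examples=[
            ("What is a JOIN?",
             "A JOIN combines rows from two tables on a matching key."),
        ],
        # Chronological history; the oldest turns are truncated first.
        messages=["How do I count rows per customer?"],
        temperature=0.25,    # low randomness; 0.0 would be deterministic
        candidate_count=2,   # at most 2 unique candidates, range [1, 8]
    )
    print(response.last)     # text of the newest model reply


asyncio.run(main())
```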

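A second sketch covers the `prompt` alternative named in the table. The page guarantees only that `prompt` and `context`/`examples`/`messages` are mutually exclusive; the dict shorthand for `types.MessagePromptOptions` and the specific `top_p`/`top_k` values here are assumptions for illustration.

```python
# Sketch: bundling the prompt pieces into one `prompt` argument instead of
# separate context/examples/messages keywords (passing both forms is
# documented as an error). The dict shorthand is an assumption.
import asyncio

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder


async def main():
    response = await genai.chat_async(
        model="models/chat-bison-001",
        prompt={
            "context": "You are a terse SQL tutor.",
            "messages": ["How do I count rows per customer?"],
        },
        top_p=0.95,  # nucleus sampling probability mass, typically [0.9, 1.0]
        top_k=40,    # sample from at most 40 tokens at each step
    )
    print(response.last)


asyncio.run(main())
```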