description: Calls the API to initiate a chat with a model using provided parameters

<div itemscope itemtype="http://developers.google.com/ReferenceObject">
<meta itemprop="name" content="google.generativeai.chat_async" />
<meta itemprop="path" content="Stable" />
</div>

# google.generativeai.chat_async

<!-- Insert buttons and diff -->

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
<a target="_blank" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/discuss.py#L411-L441">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
</td>
</table>



Calls the API to initiate a chat with a model using provided parameters.


<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>google.generativeai.chat_async(
*,
model=&#x27;models/chat-bison-001&#x27;,
context=None,
examples=None,
messages=None,
temperature=None,
candidate_count=None,
top_p=None,
top_k=None,
prompt=None,
client=None,
request_options=None
)
</code></pre>



<!-- Placeholder for "Used in" -->
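A minimal usage sketch (hedged: it assumes the legacy PaLM
`models/chat-bison-001` model is available to your key, and that
`genai.configure` has already been called with a valid API key; the parameter
values are illustrative only):

```python
import asyncio

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: key configured here


async def main():
    # chat_async mirrors genai.chat, but returns an awaitable response.
    response = await genai.chat_async(
        model="models/chat-bison-001",
        context="Answer as briefly as possible.",
        messages=["What is the tallest mountain on Earth?"],
        temperature=0.0,  # deterministic: always returns one candidate
        candidate_count=1,
    )
    # ChatResponse.last is the text of the model's most recent reply.
    print(response.last)


asyncio.run(main())
```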


<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Args</h2></th></tr>

<tr>
<td>

`model`<a id="model"></a>

</td>
<td>

Which model to call, as a string or a <a href="../../google/generativeai/types/Model.md"><code>types.Model</code></a>.

</td>
</tr><tr>
<td>

`context`<a id="context"></a>

</td>
<td>

Text that should be provided to the model first, to ground the response.

If not empty, this `context` will be given to the model first before the
`examples` and `messages`.

This field can be a description of your prompt to the model to help provide
context and guide the responses.

Examples:

* "Translate the phrase from English to French."
* "Given a statement, classify the statement as either sarcastic or not."

Anything included in this field will take precedence over history in `messages`
if the total input size exceeds the model's <a href="../../google/generativeai/protos/Model.md#input_token_limit"><code>Model.input_token_limit</code></a>.

</td>
</tr><tr>
<td>

`examples`<a id="examples"></a>

</td>
<td>

Examples of what the model should generate.

This includes both the user input and the response that the model should
emulate.

These `examples` are treated identically to conversation messages except
that they take precedence over the history in `messages`:
If the total input size exceeds the model's `input_token_limit` the input
will be truncated. Items will be dropped from `messages` before `examples`
(see the first sketch after this table).

</td>
</tr><tr>
<td>

`messages`<a id="messages"></a>

</td>
<td>

A snapshot of the conversation history sorted chronologically.

Turns alternate between two authors.

If the total input size exceeds the model's `input_token_limit` the input
will be truncated: The oldest items will be dropped from `messages`.

</td>
</tr><tr>
<td>

`temperature`<a id="temperature"></a>

</td>
<td>

Controls the randomness of the output. Must be positive.

Typical values are in the range: `[0.0,1.0]`. Higher values produce a
more random and varied response. A temperature of zero will be deterministic.

</td>
</tr><tr>
<td>

`candidate_count`<a id="candidate_count"></a>

</td>
<td>

The **maximum** number of generated response messages to return.

This value must be between `[1, 8]`, inclusive. If unset, this
will default to `1`.

Note: Only unique candidates are returned. Higher temperatures are more
likely to produce unique candidates. Setting `temperature=0.0` will always
return 1 candidate regardless of the `candidate_count`.

</td>
</tr><tr>
<td>

`top_k`<a id="top_k"></a>

</td>
<td>

The API uses combined [nucleus](https://arxiv.org/abs/1904.09751) and
top-k sampling.

`top_k` sets the maximum number of tokens to sample from on each step.

</td>
</tr><tr>
<td>

`top_p`<a id="top_p"></a>

</td>
<td>

The API uses combined [nucleus](https://arxiv.org/abs/1904.09751) and
top-k sampling.

`top_p` configures the nucleus sampling. It sets the maximum cumulative
probability of tokens to sample from.

For example, if the sorted probabilities are
`[0.5, 0.2, 0.1, 0.1, 0.05, 0.05]` a `top_p` of `0.8` will sample
as `[0.625, 0.25, 0.125, 0, 0, 0]`.

Typical values are in the `[0.9, 1.0]` range.

</td>
</tr><tr>
<td>

`prompt`<a id="prompt"></a>

</td>
<td>

You may pass a <a href="../../google/generativeai/types/MessagePromptOptions.md"><code>types.MessagePromptOptions</code></a> **instead** of setting
`context`/`examples`/`messages`, but not both (see the second sketch after
this table).

</td>
</tr><tr>
<td>

`client`<a id="client"></a>

</td>
<td>

If you're not relying on the default client, you may pass a
`glm.DiscussServiceClient` instead.

</td>
</tr><tr>
<td>

`request_options`<a id="request_options"></a>

</td>
<td>

Options for the request.

</td>
</tr>
</table>
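
How `examples` interact with `messages` and the sampling knobs may be easier
to see in code. A hedged sketch (the `(input, output)` tuple form is one of
the shapes `types.ExamplesOptions` accepts; the values are illustrative, not
recommendations):

```python
import google.generativeai as genai

# `examples` are (input, output) pairs for the model to emulate; under
# truncation they outlive ordinary `messages` history.
response = genai.chat(  # chat_async takes the same arguments, awaited
    model="models/chat-bison-001",
    context="Reply with a single French word.",
    examples=[
        ("hello", "bonjour"),
        ("goodbye", "au revoir"),
    ],
    messages=["cheese"],
    top_p=0.95,  # nucleus sampling: keep the smallest token set whose
                 # cumulative probability reaches 0.95, then renormalize
    top_k=40,    # and never sample from more than 40 tokens per step
)
print(response.last)
```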
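
The `prompt` alternative bundles the same three fields into a single
`types.MessagePromptOptions` value; a plain dict is one accepted shape, and
the three separate arguments must then be left unset. A sketch, with the
custom-client path noted in a comment:

```python
import google.generativeai as genai

# One accepted MessagePromptOptions shape: a dict with the same fields.
prompt = {
    "context": "Reply with a single French word.",
    "examples": [("hello", "bonjour")],
    "messages": ["cheese"],
}

# client=... would take a google.ai.generativelanguage (glm)
# DiscussServiceClient if you need non-default transport or credentials.
response = genai.chat(model="models/chat-bison-001", prompt=prompt)
print(response.last)
```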