description: Calls the API to initiate a chat with a model using provided parameters.

<div itemscope itemtype="http://developers.google.com/ReferenceObject">
<meta itemprop="name" content="google.generativeai.chat" />
<meta itemprop="path" content="Stable" />
</div>

# google.generativeai.chat

<!-- Insert buttons and diff -->

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
  <a target="_blank" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/discuss.py#L312-L408">
    <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
    View source on GitHub
  </a>
</td>
</table>

Calls the API to initiate a chat with a model using provided parameters.

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>google.generativeai.chat(
    *,
    model: (model_types.AnyModelNameOptions | None) = 'models/chat-bison-001',
    context: (str | None) = None,
    examples: (discuss_types.ExamplesOptions | None) = None,
    messages: (discuss_types.MessagesOptions | None) = None,
    temperature: (float | None) = None,
    candidate_count: (int | None) = None,
    top_p: (float | None) = None,
    top_k: (float | None) = None,
    prompt: (discuss_types.MessagePromptOptions | None) = None,
    client: (glm.DiscussServiceClient | None) = None,
    request_options: (helper_types.RequestOptionsType | None) = None
) -> discuss_types.ChatResponse
</code></pre>
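
For example, a minimal call might look like this (a sketch that assumes
`genai.configure` has already been given a valid API key; the prompt text
and key placeholder are illustrative):

```python
import google.generativeai as genai

# Illustrative placeholder; use a real API key.
genai.configure(api_key="YOUR_API_KEY")

# A plain string is accepted for `messages` and treated as a
# single-message conversation.
response = genai.chat(messages="Hello, how are you?")

# `response.last` is the text of the model's most recent reply.
print(response.last)
```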


<!-- Placeholder for "Used in" -->


<!-- Tabular view -->
 <table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Args</h2></th></tr>

<tr>
<td>
`model`<a id="model"></a>
</td>
<td>
Which model to call, as a string or a <a href="../../google/generativeai/types/Model.md"><code>types.Model</code></a>.
</td>
</tr><tr>
<td>
`context`<a id="context"></a>
</td>
<td>
Text that should be provided to the model first, to ground the response.

If not empty, this `context` will be given to the model first, before the
`examples` and `messages`.

This field can be a description of your prompt to the model to help provide
context and guide the responses.

Examples:

* "Translate the phrase from English to French."
* "Given a statement, classify the sentiment as happy, sad or neutral."

Anything included in this field will take precedence over history in `messages`
if the total input size exceeds the model's <a href="../../google/generativeai/protos/Model.md#input_token_limit"><code>Model.input_token_limit</code></a>.
</td>
</tr><tr>
<td>
`examples`<a id="examples"></a>
</td>
<td>
Examples of what the model should generate.

This includes both the user input and the response that the model should
emulate.

These `examples` are treated identically to conversation messages except
that they take precedence over the history in `messages`:
If the total input size exceeds the model's `input_token_limit` the input
will be truncated. Items will be dropped from `messages` before `examples`.
(See the usage sketch after this table.)
</td>
</tr><tr>
<td>
`messages`<a id="messages"></a>
</td>
<td>
A snapshot of the conversation history sorted chronologically.

Turns alternate between two authors.

If the total input size exceeds the model's `input_token_limit` the input
will be truncated: The oldest items will be dropped from `messages`.
</td>
</tr><tr>
<td>
`temperature`<a id="temperature"></a>
</td>
<td>
Controls the randomness of the output. Must be non-negative.

Typical values are in the range `[0.0, 1.0]`. Higher values produce a
more random and varied response. A temperature of zero makes the output
deterministic.
</td>
</tr><tr>
<td>
`candidate_count`<a id="candidate_count"></a>
</td>
<td>
The **maximum** number of generated response messages to return.

This value must be in the range `[1, 8]`, inclusive. If unset, it
defaults to `1`.

Note: Only unique candidates are returned. Higher temperatures are more
likely to produce unique candidates. Setting `temperature=0.0` will always
return a single candidate regardless of `candidate_count`.
</td>
</tr><tr>
<td>
`top_k`<a id="top_k"></a>
</td>
<td>
The API uses combined [nucleus](https://arxiv.org/abs/1904.09751) and
top-k sampling.

`top_k` sets the maximum number of tokens to sample from on each step.
</td>
</tr><tr>
<td>
`top_p`<a id="top_p"></a>
</td>
<td>
The API uses combined [nucleus](https://arxiv.org/abs/1904.09751) and
top-k sampling.

`top_p` configures the nucleus sampling. It sets the maximum cumulative
probability of tokens to sample from.

For example, if the sorted probabilities are
`[0.5, 0.2, 0.1, 0.1, 0.05, 0.05]` a `top_p` of `0.8` will sample
as `[0.625, 0.25, 0.125, 0, 0, 0]`.

Typical values are in the `[0.9, 1.0]` range.
</td>
</tr><tr>
<td>
`prompt`<a id="prompt"></a>
</td>
<td>
You may pass a <a href="../../google/generativeai/types/MessagePromptOptions.md"><code>types.MessagePromptOptions</code></a> **instead** of
setting `context`/`examples`/`messages`, but not both.
</td>
</tr><tr>
<td>
`client`<a id="client"></a>
</td>
<td>
If you're not relying on the default client, you may pass a
`glm.DiscussServiceClient` instead.
</td>
</tr><tr>
<td>
`request_options`<a id="request_options"></a>
</td>
<td>
Options for the request, such as retry and timeout settings.
</td>
</tr>
</table>
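
As a sketch of how these arguments combine (the context string, example
pairs, and parameter values below are illustrative choices, not values
prescribed by the API):

```python
import google.generativeai as genai

response = genai.chat(
    model="models/chat-bison-001",
    # `context` grounds the conversation before `examples` and `messages`.
    context="Translate the phrase from English to French.",
    # Examples may be given as (input, output) pairs the model should emulate.
    examples=[
        ("Hello", "Bonjour"),
        ("Thank you very much", "Merci beaucoup"),
    ],
    messages="How are you?",
    # A low temperature gives a more deterministic reply; ask for up to
    # 4 candidates (only unique candidates are returned).
    temperature=0.25,
    candidate_count=4,
)
print(response.last)
```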



<!-- Tabular view -->
 <table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Returns</h2></th></tr>
<tr class="alt">
<td colspan="2">
A <a href="../../google/generativeai/types/ChatResponse.md"><code>types.ChatResponse</code></a> containing the model's reply.
</td>
</tr>

</table>
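
Because the response carries the conversation state, a follow-up turn can be
sent without rebuilding the history. A short sketch of typical use of the
returned object (assuming a configured client, and assuming each history
entry is a dict with `author` and `content` keys):

```python
# Start a conversation, then continue it: `reply` appends the new
# message to the history and calls the API again, returning a
# fresh ChatResponse.
response = genai.chat(messages="Tell me a story in three sentences.")
print(response.last)

follow_up = response.reply("Now retell it as a haiku.")
print(follow_up.last)

# The full chronological history, including the model's replies.
for message in follow_up.messages:
    print(message["author"], ":", message["content"])
```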