description: Contains an ongoing conversation with the model.

<div itemscope itemtype="http://developers.google.com/ReferenceObject">
<meta itemprop="name" content="google.generativeai.ChatSession" />
<meta itemprop="path" content="Stable" />
<meta itemprop="property" content="__init__"/>
<meta itemprop="property" content="rewind"/>
<meta itemprop="property" content="send_message"/>
<meta itemprop="property" content="send_message_async"/>
</div>

# google.generativeai.ChatSession

<!-- Insert buttons and diff -->

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
  <a target="_blank" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/generative_models.py#L481-L875">
    <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
    View source on GitHub
  </a>
</td>
</table>



Contains an ongoing conversation with the model.

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>google.generativeai.ChatSession(
    model: GenerativeModel,
    history: (Iterable[content_types.StrictContentType] | None) = None,
    enable_automatic_function_calling: bool = False
)
</code></pre>



<!-- Placeholder for "Used in" -->
```
>>> model = genai.GenerativeModel('models/gemini-pro')
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> print(response.text)
>>> response = chat.send_message("Hello again")
>>> print(response.text)
```

This `ChatSession` object collects the messages sent and received in its
<a href="../../google/generativeai/ChatSession.md#history"><code>ChatSession.history</code></a> attribute.
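
For example (a sketch assuming the session above and a configured API key; each completed turn appends a user message and a model reply, so the roles alternate):

```
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> for message in chat.history:
...     print(message.role)
user
model
```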

<!-- Tabular view -->
 <table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Arguments</h2></th></tr>

<tr>
<td>
`model`<a id="model"></a>
</td>
<td>
The model to use in the chat.
</td>
</tr><tr>
<td>
`history`<a id="history"></a>
</td>
<td>
A chat history to initialize the object with.
</td>
</tr>
</table>





<!-- Tabular view -->
 <table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Attributes</h2></th></tr>

<tr>
<td>
`history`<a id="history"></a>
</td>
<td>
The chat history.
</td>
</tr><tr>
<td>
`last`<a id="last"></a>
</td>
<td>
Returns the last received `genai.GenerateContentResponse`.
</td>
</tr>
</table>



## Methods

<h3 id="rewind"><code>rewind</code></h3>

<a target="_blank" class="external" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/generative_models.py#L785-L794">View source</a>

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>rewind() -> tuple[protos.Content, protos.Content]
</code></pre>

Removes the last request/response pair from the chat history.
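
A sketch of undoing the last turn (assumes an existing `chat` with one completed exchange; the removed request and response are returned as a pair, matching the signature above):

```
>>> len(chat.history)
2
>>> last_request, last_response = chat.rewind()
>>> len(chat.history)
0
```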


<h3 id="send_message"><code>send_message</code></h3>

<a target="_blank" class="external" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/generative_models.py#L512-L604">View source</a>

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>send_message(
    content: content_types.ContentType,
    *,
    generation_config: generation_types.GenerationConfigType = None,
    safety_settings: safety_types.SafetySettingOptions = None,
    stream: bool = False,
    tools: (content_types.FunctionLibraryType | None) = None,
    tool_config: (content_types.ToolConfigType | None) = None,
    request_options: (helper_types.RequestOptionsType | None) = None
) -> generation_types.GenerateContentResponse
</code></pre>

Sends the conversation history with the added message and returns the model's response.

Appends the request and response to the conversation history.
```
>>> model = genai.GenerativeModel('models/gemini-pro')
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> print(response.text)
"Hello! How can I assist you today?"
>>> len(chat.history)
2
```

Call it with `stream=True` to receive response chunks as they are generated:

```
>>> chat = model.start_chat()
>>> response = chat.send_message("Explain quantum physics", stream=True)
>>> for chunk in response:
...     print(chunk.text, end='')
```

Once iteration over chunks is complete, the `response` and `ChatSession` are in states identical to the
`stream=False` case. Some properties are not available until iteration is complete.

Like <a href="../../google/generativeai/GenerativeModel.md#generate_content"><code>GenerativeModel.generate_content</code></a>, this method lets you override the model's `generation_config` and
`safety_settings`.
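
For example, a single turn can be sent with a lower temperature and a token cap without changing the model's defaults (a sketch assuming the `chat` session above; `genai.GenerationConfig` is the config class exported by the package):

```
>>> response = chat.send_message(
...     "Summarize our conversation so far",
...     generation_config=genai.GenerationConfig(temperature=0.2, max_output_tokens=128),
... )
```

The override applies only to this request; later calls fall back to the config the model was created with.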

<!-- Tabular view -->
 <table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Arguments</th></tr>

<tr>
<td>
`content`
</td>
<td>
The message contents.
</td>
</tr><tr>
<td>
`generation_config`
</td>
<td>
Overrides for the model's generation config.
</td>
</tr><tr>
<td>
`safety_settings`
</td>
<td>
Overrides for the model's safety settings.
</td>
</tr><tr>
<td>
`stream`
</td>
<td>
If True, yield response chunks as they are generated.
</td>
</tr>
</table>



<h3 id="send_message_async"><code>send_message_async</code></h3>

<a target="_blank" class="external" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/generative_models.py#L671-L733">View source</a>

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>send_message_async(
    content,
    *,
    generation_config=None,
    safety_settings=None,
    stream=False,
    tools=None,
    tool_config=None,
    request_options=None
)
</code></pre>

The async version of <a href="../../google/generativeai/ChatSession.md#send_message"><code>ChatSession.send_message</code></a>.
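
A minimal usage sketch from a coroutine (assumes a configured `model` as in the examples above):

```
>>> import asyncio
>>> async def main():
...     chat = model.start_chat()
...     response = await chat.send_message_async("Hello")
...     print(response.text)
>>> asyncio.run(main())
```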