Commit c5ef6c0

Add markdown docs (#462)
Change-Id: I63ffaa1c0d4af92f4a630ea21c99f927095c1d34
1 parent e8ad653 commit c5ef6c0

256 files changed: +42,898 / -1 lines


.gitignore

Lines changed: 0 additions & 1 deletion
```diff
@@ -3,7 +3,6 @@
 /.idea/
 /.pytype/
 /build/
-/docs/api
 *.egg-info
 .DS_Store
 __pycache__
```

docs/api/google/generativeai.md

Lines changed: 138 additions & 0 deletions
description: Google AI Python SDK

<div itemscope itemtype="http://developers.google.com/ReferenceObject">
<meta itemprop="name" content="google.generativeai" />
<meta itemprop="path" content="Stable" />
<meta itemprop="property" content="__version__"/>
<meta itemprop="property" content="annotations"/>
</div>

# Module: google.generativeai

<!-- Insert buttons and diff -->

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
  <a target="_blank" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/__init__.py">
    <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
    View source on GitHub
  </a>
</td>
</table>

Google AI Python SDK

## Setup

```posix-terminal
pip install google-generativeai
```

## GenerativeModel

Use `genai.GenerativeModel` to access the API:

```python
import google.generativeai as genai
import os

genai.configure(api_key=os.environ['API_KEY'])

model = genai.GenerativeModel(model_name='gemini-1.5-flash')
response = model.generate_content('Teach me about how an LLM works')

print(response.text)
```

See the [python quickstart](https://ai.google.dev/tutorials/python_quickstart) for more details.
## Modules

[`protos`](../google/generativeai/protos.md) module: This module provides low-level access to the protobuf "Message" classes used by the API.

[`types`](../google/generativeai/types.md) module: A collection of type definitions used throughout the library.

## Classes

[`class ChatSession`](../google/generativeai/ChatSession.md): Contains an ongoing conversation with the model.

[`class GenerationConfig`](../google/generativeai/types/GenerationConfig.md): A simple dataclass used to configure the generation parameters of <a href="../google/generativeai/GenerativeModel.md#generate_content"><code>GenerativeModel.generate_content</code></a>.

[`class GenerativeModel`](../google/generativeai/GenerativeModel.md): The `genai.GenerativeModel` class wraps default parameters for calls to <a href="../google/generativeai/GenerativeModel.md#generate_content"><code>GenerativeModel.generate_content</code></a>, <a href="../google/generativeai/GenerativeModel.md#count_tokens"><code>GenerativeModel.count_tokens</code></a>, and <a href="../google/generativeai/GenerativeModel.md#start_chat"><code>GenerativeModel.start_chat</code></a>.

## Functions

[`chat(...)`](../google/generativeai/chat.md): Calls the API to initiate a chat with a model using the provided parameters.

[`chat_async(...)`](../google/generativeai/chat_async.md): The async version of `chat`; calls the API to initiate a chat with a model using the provided parameters.

[`configure(...)`](../google/generativeai/configure.md): Captures default client configuration.

[`count_message_tokens(...)`](../google/generativeai/count_message_tokens.md): Calls the API to calculate the number of tokens used in the prompt.

[`count_text_tokens(...)`](../google/generativeai/count_text_tokens.md): Calls the API to count the number of tokens in the text prompt.

[`create_tuned_model(...)`](../google/generativeai/create_tuned_model.md): Calls the API to initiate a tuning process that optimizes a model for specific data, returning an operation object to track and manage the tuning progress.

[`delete_file(...)`](../google/generativeai/delete_file.md): Calls the API to permanently delete a specified file using a supported file service.

[`delete_tuned_model(...)`](../google/generativeai/delete_tuned_model.md): Calls the API to delete a specified tuned model.

[`embed_content(...)`](../google/generativeai/embed_content.md): Calls the API to create embeddings for the content passed in.

[`embed_content_async(...)`](../google/generativeai/embed_content_async.md): The async version of `embed_content`; calls the API to create embeddings for the content passed in.

[`generate_embeddings(...)`](../google/generativeai/generate_embeddings.md): Calls the API to create an embedding for the text passed in.

[`generate_text(...)`](../google/generativeai/generate_text.md): Calls the API to generate text based on the provided prompt.

[`get_base_model(...)`](../google/generativeai/get_base_model.md): Calls the API to fetch a base model by name.

[`get_file(...)`](../google/generativeai/get_file.md): Calls the API to retrieve a specified file using a supported file service.

[`get_model(...)`](../google/generativeai/get_model.md): Calls the API to fetch a model by name.

[`get_operation(...)`](../google/generativeai/get_operation.md): Calls the API to get a specific operation.

[`get_tuned_model(...)`](../google/generativeai/get_tuned_model.md): Calls the API to fetch a tuned model by name.

[`list_files(...)`](../google/generativeai/list_files.md): Calls the API to list files using a supported file service.

[`list_models(...)`](../google/generativeai/list_models.md): Calls the API to list all available models.

[`list_operations(...)`](../google/generativeai/list_operations.md): Calls the API to list all operations.

[`list_tuned_models(...)`](../google/generativeai/list_tuned_models.md): Calls the API to list all tuned models.

[`update_tuned_model(...)`](../google/generativeai/update_tuned_model.md): Calls the API to push updates to a specified tuned model; only certain attributes are updatable.

[`upload_file(...)`](../google/generativeai/upload_file.md): Calls the API to upload a file using a supported file service.
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Other Members</h2></th></tr>

<tr>
<td>
__version__<a id="__version__"></a>
</td>
<td>
`'0.7.2'`
</td>
</tr><tr>
<td>
annotations<a id="annotations"></a>
</td>
<td>
Instance of `__future__._Feature`
</td>
</tr>
</table>
docs/api/google/generativeai/ChatSession.md

Lines changed: 222 additions & 0 deletions
description: Contains an ongoing conversation with the model.

<div itemscope itemtype="http://developers.google.com/ReferenceObject">
<meta itemprop="name" content="google.generativeai.ChatSession" />
<meta itemprop="path" content="Stable" />
<meta itemprop="property" content="__init__"/>
<meta itemprop="property" content="rewind"/>
<meta itemprop="property" content="send_message"/>
<meta itemprop="property" content="send_message_async"/>
</div>

# google.generativeai.ChatSession

<!-- Insert buttons and diff -->

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
  <a target="_blank" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/generative_models.py#L481-L875">
    <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
    View source on GitHub
  </a>
</td>
</table>

Contains an ongoing conversation with the model.

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>google.generativeai.ChatSession(
    model: GenerativeModel,
    history: (Iterable[content_types.StrictContentType] | None) = None,
    enable_automatic_function_calling: bool = False
)
</code></pre>

<!-- Placeholder for "Used in" -->

```python
>>> model = genai.GenerativeModel('models/gemini-pro')
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> print(response.text)
>>> response = chat.send_message("Hello again")
>>> print(response.text)
>>> response = chat.send_message(...
```

This `ChatSession` object collects the messages sent and received in its
<a href="../../google/generativeai/ChatSession.md#history"><code>ChatSession.history</code></a> attribute.
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Arguments</h2></th></tr>

<tr>
<td>
`model`<a id="model"></a>
</td>
<td>
The model to use in the chat.
</td>
</tr><tr>
<td>
`history`<a id="history"></a>
</td>
<td>
A chat history to initialize the object with.
</td>
</tr>
</table>

<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Attributes</h2></th></tr>

<tr>
<td>
`history`<a id="history"></a>
</td>
<td>
The chat history.
</td>
</tr><tr>
<td>
`last`<a id="last"></a>
</td>
<td>
Returns the last received `genai.GenerateContentResponse`.
</td>
</tr>
</table>
## Methods

<h3 id="rewind"><code>rewind</code></h3>

<a target="_blank" class="external" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/generative_models.py#L785-L794">View source</a>

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>rewind() -> tuple[protos.Content, protos.Content]
</code></pre>

Removes the last request/response pair from the chat history and returns the removed pair.
<h3 id="send_message"><code>send_message</code></h3>

<a target="_blank" class="external" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/generative_models.py#L512-L604">View source</a>

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>send_message(
    content: content_types.ContentType,
    *,
    generation_config: generation_types.GenerationConfigType = None,
    safety_settings: safety_types.SafetySettingOptions = None,
    stream: bool = False,
    tools: (content_types.FunctionLibraryType | None) = None,
    tool_config: (content_types.ToolConfigType | None) = None,
    request_options: (helper_types.RequestOptionsType | None) = None
) -> generation_types.GenerateContentResponse
</code></pre>

Sends the conversation history with the added message and returns the model's response.

Appends the request and response to the conversation history.

```python
>>> model = genai.GenerativeModel('models/gemini-pro')
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> print(response.text)
"Hello! How can I assist you today?"
>>> len(chat.history)
2
```

Call it with `stream=True` to receive response chunks as they are generated:

```python
>>> chat = model.start_chat()
>>> response = chat.send_message("Explain quantum physics", stream=True)
>>> for chunk in response:
...     print(chunk.text, end='')
```

Once iteration over the chunks is complete, the `response` and `ChatSession` are in the same states as in the
`stream=False` case; some properties are not available until iteration is complete.

Like <a href="../../google/generativeai/GenerativeModel.md#generate_content"><code>GenerativeModel.generate_content</code></a>, this method lets you override the model's `generation_config` and
`safety_settings`.
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Arguments</th></tr>

<tr>
<td>
`content`
</td>
<td>
The message contents.
</td>
</tr><tr>
<td>
`generation_config`
</td>
<td>
Overrides for the model's generation config.
</td>
</tr><tr>
<td>
`safety_settings`
</td>
<td>
Overrides for the model's safety settings.
</td>
</tr><tr>
<td>
`stream`
</td>
<td>
If True, yield response chunks as they are generated.
</td>
</tr>
</table>
<h3 id="send_message_async"><code>send_message_async</code></h3>

<a target="_blank" class="external" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/generative_models.py#L671-L733">View source</a>

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>send_message_async(
    content,
    *,
    generation_config=None,
    safety_settings=None,
    stream=False,
    tools=None,
    tool_config=None,
    request_options=None
)
</code></pre>

The async version of <a href="../../google/generativeai/ChatSession.md#send_message"><code>ChatSession.send_message</code></a>.