# Google AI Python SDK for the Gemini API

[![PyPI version](https://badge.fury.io/py/google-generativeai.svg)](https://badge.fury.io/py/google-generativeai)

The Google AI Python SDK is the easiest way for Python developers to build with the Gemini API. The Gemini API gives you access to Gemini [models](https://ai.google.dev/models/gemini) created by [Google DeepMind](https://deepmind.google/technologies/gemini/#introduction). Gemini models are built from the ground up to be multimodal, so you can reason seamlessly across text, images, and code.
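
For example, with just a few lines of code you can use Gemini's multimodal capabilities to generate text from mixed text-and-image input. The block below is a minimal sketch: it assumes the package is installed and your API key is exported as the `GOOGLE_API_KEY` environment variable (both covered in the usage example below), that an image file `cookie.png` exists in the working directory, and that the `gemini-pro-vision` model is available to your key.

```python
import os
from pathlib import Path

import google.generativeai as genai

# Reads the key created in Google AI Studio from the environment.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# A vision-capable model accepts a mix of text and image parts.
model = genai.GenerativeModel('gemini-pro-vision')

cookie_picture = {
    'mime_type': 'image/png',
    'data': Path('cookie.png').read_bytes(),
}
prompt = "Give me a recipe for this:"

response = model.generate_content(contents=[prompt, cookie_picture])
print(response.text)
```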

## Get started with the Gemini API
1. Go to [Google AI Studio](https://aistudio.google.com/).
2. Log in with your Google account.
3. [Create](https://aistudio.google.com/app/apikey) an API key. A quick way to verify the key is shown after this list.
4. Try a Python SDK [quickstart](https://github.com/google-gemini/gemini-api-cookbook/blob/main/quickstarts/Prompting.ipynb) in the [Gemini API Cookbook](https://github.com/google-gemini/gemini-api-cookbook/).
5. For detailed instructions, try the [Python SDK tutorial](https://ai.google.dev/tutorials/python_quickstart) on [ai.google.dev](https://ai.google.dev).
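
Once you have a key, one way to confirm that it works and to see which Gemini models it can call is to list the available models. This is a minimal sketch rather than part of the official quickstart; it assumes the package is installed (see below) and the key is exported as `GOOGLE_API_KEY`.

```python
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Print every model that supports the generateContent method used in this README.
for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)
```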

## Usage example
See the [Gemini API Cookbook](https://github.com/google-gemini/gemini-api-cookbook/) or [ai.google.dev](https://ai.google.dev) for complete code.

Install from [PyPI](https://pypi.org/project/google-generativeai).

`pip install -U google-generativeai`

Import the SDK and configure your API key.

```python
import google.generativeai as genai
import os

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
```

Create a model and run a prompt.

```python
model = genai.GenerativeModel('gemini-1.0-pro-latest')
response = model.generate_content("The opposite of hot is")
print(response.text)
```
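
The same `GenerativeModel` can also hold a multi-turn conversation. The sketch below assumes the SDK is configured as above; the exact replies will vary.

```python
model = genai.GenerativeModel('gemini-1.0-pro-latest')
chat = model.start_chat()

response = chat.send_message("Hello, what should I have for dinner?")
print(response.text)

response = chat.send_message("How do I cook the first one?")
print(response.text)
```

The [Python SDK tutorial](https://ai.google.dev/tutorials/python_quickstart) also covers streaming, embeddings, counting tokens, and controlling responses.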

## Documentation

See the [Gemini API Cookbook](https://github.com/google-gemini/gemini-api-cookbook/) or [ai.google.dev](https://ai.google.dev) for complete documentation.

## Contributing

See [Contributing](https://github.com/google/generative-ai-python/blob/main/CONTRIBUTING.md) for more information on contributing to the Google AI Python SDK.

## License

The contents of this repository are licensed under the [Apache License, version 2.0](http://www.apache.org/licenses/LICENSE-2.0).