Merged
Changes from 9 commits
34 changes: 28 additions & 6 deletions .gemini/styleguide.md
Original file line number Diff line number Diff line change
@@ -53,13 +53,36 @@ LOCATION = "global"
LOCATION = "us-central1"
```

- Don't restart the kernel or use `!pip`; use `%pip` when installing packages

**Correct**

```sh
%pip install
```

**Incorrect**

```sh
!pip install
```

```sh
!pip3 install
```

```py
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```

## Golden Rule: Use the Correct and Current SDK

Always use the **Google GenAI SDK** (`google-genai`), which is the unified
Always use the **Google Gen AI SDK** (`google-genai`), which is the unified
standard library for all Gemini API requests (AI Studio/Gemini Developer API
and Vertex AI) as of 2025. Do not use legacy libraries and SDKs.
and Vertex AI) as of 2026. Do not use legacy libraries and SDKs.

- **Library Name:** Google GenAI SDK
- **Library Name:** Google Gen AI SDK
- **Python Package:** `google-genai`
- **Legacy Library:** `google-generativeai` is deprecated.

@@ -110,11 +133,10 @@ and Vertex AI) as of 2025. Do not use legacy libraries and SDKs.

- It is also acceptable to use the following models if explicitly requested by the user:
- **Gemini 2.0 Series**: `gemini-2.0-flash`, `gemini-2.0-flash-lite`
- **Gemini 2.5 Series**: `gemini-2.5-flash`, `gemini-2.5-pro`

- Do not use the following deprecated models (or their variants like
`gemini-1.5-flash-latest`):
- **Prohibited:** `gemini-1.5-flash`
- **Prohibited:** `gemini-1.5-pro`
- **Gemini 2.0 Series**: `gemini-2.0-flash`, `gemini-2.0-flash-lite`
- **Gemini 1.5 Series**: `gemini-1.5-flash`, `gemini-1.5-pro`
- **Prohibited:** `gemini-pro`
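The model policy in this hunk could be enforced with a small checker. This is a hypothetical sketch, not part of the PR; the `PREFERRED` set is illustrative, since the preferred-model list sits above the visible hunk, while the allowed-on-request and prohibited lists are taken from the diff:

```python
# Hypothetical model-ID policy checker mirroring the style-guide lists.
# PREFERRED is an assumption (the actual list is outside this hunk).
PREFERRED = {"gemini-2.5-flash", "gemini-2.5-pro"}
ALLOWED_ON_REQUEST = {
    "gemini-2.0-flash",
    "gemini-2.0-flash-lite",
    "gemini-1.5-flash",
    "gemini-1.5-pro",
}
PROHIBITED = {"gemini-pro"}


def model_policy(model_id: str) -> str:
    """Classify a model ID against the allow/deny lists above."""
    base = model_id.split("@")[0]  # drop version suffixes like "@default"
    if base in PROHIBITED:
        return "prohibited"
    if base in PREFERRED:
        return "preferred"
    if base in ALLOWED_ON_REQUEST:
        return "allowed-on-request"
    return "unknown"
```

For example, `model_policy("gemini-pro")` returns `"prohibited"`, while an unrecognized ID falls through to `"unknown"` rather than being silently accepted.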
3 changes: 3 additions & 0 deletions .github/actions/spelling/allow.txt
@@ -458,6 +458,8 @@ flac
Flahs
Flatform
Flipkart
Flirble
Flirbles
floormat
FLX
fmeasure
@@ -1017,6 +1019,7 @@ nunique
nvidia
NVIDIA
NVL
NYT
oai
objc
ODb
1 change: 1 addition & 0 deletions .github/actions/spelling/excludes.txt
@@ -115,3 +115,4 @@ ignore$
py\.typed$
^\Qaudio/speech/use-cases/storytelling/macbeth_the_sitcom.json\E$
.ruff.toml
^\Q.gemini/styleguide.md\E$
7 changes: 2 additions & 5 deletions .github/actions/spelling/line_forbidden.patterns
@@ -211,10 +211,10 @@
# Should be Gemini
\sgemini\s\w

# Should be `Gemini Version Size` (e.g. `Gemini 2.0 Flash`)
# Should be `Gemini Version Size` (e.g. `Gemini 3 Flash`)
\bGemini\s(Pro|Flash|Ultra)\s?\d\.\d\b

# Gemini Size should be capitalized (e.g. `Gemini 2.0 Flash`)
# Gemini Size should be capitalized (e.g. `Gemini 3 Flash`)
\bGemini\s?\d\.\d\s(pro|flash|ultra)\b

# Don't say "Google Gemini" or "Google Gemini"
@@ -325,6 +325,3 @@ gemini-1\.[05]
# Use the Google Gen AI SDK `google-genai`
google-generativeai
from google import generativeai

# Don't restart the kernel, use `%pip` when installing
app\.kernel\.do_shutdown\(True\)
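The legacy-SDK forbidden patterns kept in this file can be exercised with a small script. This is a hypothetical sketch of how such a lint might work, not the repository's actual spell-check tooling; only the two regexes are taken from the diff:

```python
import re

# Patterns mirroring line_forbidden.patterns: flag the deprecated SDK.
LEGACY_PATTERNS = [
    re.compile(r"google-generativeai"),
    re.compile(r"from google import generativeai"),
]


def find_legacy_sdk(lines: list[str]) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that mention the deprecated SDK."""
    hits = []
    for i, line in enumerate(lines, start=1):
        if any(p.search(line) for p in LEGACY_PATTERNS):
            hits.append((i, line))
    return hits
```

Note that `google-generativeai` does not match the replacement package name `google-genai`, so compliant lines pass cleanly.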
12 changes: 7 additions & 5 deletions audio/speech/use-cases/podcast/multi-speaker-podcast.ipynb
@@ -29,7 +29,7 @@
"id": "JAPoU8Sm5E6e"
},
"source": [
"# Create a Multi-Speaker Podcast with Gemini 2.0 & Text-to-Speech\n",
"# Create a Multi-Speaker Podcast with Gemini 3 & Text-to-Speech\n",
"\n",
"<table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
@@ -111,7 +111,7 @@
"The steps performed include:\n",
"\n",
"- Load a PDF file from a Google Cloud Storage bucket or public URL\n",
"- Summarize the content using Gemini 2.0 Flash\n",
"- Summarize the content using Gemini 3 Flash\n",
"- Return a pre-defined JSON schema using [Controlled Generation](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/control-generated-output)\n",
"- Create a multi-speaker conversation from the JSON script using Text-to-Speech.\n",
"- Generate the audio as MP3 file.\n",
@@ -246,7 +246,9 @@
"\n",
"from google import genai\n",
"\n",
"# fmt: off\n",
"PROJECT_ID = \"[your-project-id]\" # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
"# fmt: on\n",
"if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
" PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
"\n",
@@ -300,7 +302,7 @@
"id": "ac0c378bb46a"
},
"source": [
"### Load the Gemini 2.0 Flash model\n",
"### Load the Gemini 3 Flash model\n",
"\n",
"To learn more about all [Gemini models on Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#gemini-models)."
]
@@ -313,7 +315,7 @@
},
"outputs": [],
"source": [
"MODEL_ID = \"gemini-2.0-flash-001\" # @param {type: \"string\"}"
"MODEL_ID = \"gemini-3-flash-preview\" # @param {type: \"string\"}"
]
},
{
@@ -404,7 +406,7 @@
" return []\n",
"\n",
"\n",
"def synthesize_podcast(dialogue: list[dict], output_file: str):\n",
"def synthesize_podcast(dialogue: list[dict], output_file: str) -> None:\n",
" \"\"\"Synthesizes speech for a podcast using MultiSpeakerMarkup.\"\"\"\n",
" tts_client = texttospeech.TextToSpeechClient(\n",
" client_options=ClientOptions(\n",
6 changes: 3 additions & 3 deletions gemini/agent-engine/intro_agent_engine.ipynb
@@ -333,7 +333,7 @@
"\n",
"<img width=\"40%\" src=\"https://storage.googleapis.com/github-repo/generative-ai/gemini/agent-engine/images/agent-stack-1.png\" alt=\"Components of an agent in Agent Engine on Vertex AI\" />\n",
"\n",
"Here you'll use the Gemini 2.0 model:"
"Here you'll use the Gemini 3 model:"
]
},
{
@@ -344,7 +344,7 @@
},
"outputs": [],
"source": [
"model = \"gemini-2.0-flash\""
"model = \"gemini-3-flash-preview\""
]
},
{
@@ -729,7 +729,7 @@
"outputs": [],
"source": [
"## Model variant and version\n",
"model = \"gemini-2.0-flash\"\n",
"model = \"gemini-3-flash-preview\"\n",
"\n",
"## Model safety settings\n",
"from langchain_google_vertexai import HarmBlockThreshold, HarmCategory\n",
38 changes: 17 additions & 21 deletions gemini/batch-prediction/intro_batch_prediction.ipynb
@@ -186,24 +186,6 @@
"### Import libraries\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "fe00fa0b8bb7"
},
"outputs": [],
"source": [
"from datetime import datetime\n",
"import time\n",
"\n",
"import fsspec\n",
"from google import genai\n",
"from google.cloud import bigquery\n",
"from google.genai.types import CreateBatchJobConfig\n",
"import pandas as pd"
]
},
{
"cell_type": "markdown",
"metadata": {
@@ -226,8 +208,18 @@
"outputs": [],
"source": [
"import os\n",
"import time\n",
"from datetime import datetime\n",
"\n",
"import fsspec\n",
"import pandas as pd\n",
"from google import genai\n",
"from google.cloud import bigquery\n",
"from google.genai.types import CreateBatchJobConfig\n",
"\n",
"# fmt: off\n",
"PROJECT_ID = \"[your-project-id]\" # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
"# fmt: on\n",
"if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
" PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
"\n",
@@ -312,7 +304,9 @@
},
"outputs": [],
"source": [
"INPUT_DATA = \"gs://cloud-samples-data/generative-ai/batch/batch_requests_for_multimodal_input_2.jsonl\" # @param {type:\"string\"}"
"# fmt: off\n",
"INPUT_DATA = \"gs://cloud-samples-data/generative-ai/batch/batch_requests_for_multimodal_input_2.jsonl\" # @param {type:\"string\"}\n",
"# fmt: on"
]
},
{
@@ -471,7 +465,7 @@
"Example output:\n",
"\n",
"```json\n",
"{\"status\": \"\", \"processed_time\": \"2024-11-13T14:04:28.376+00:00\", \"request\": {\"contents\": [{\"parts\": [{\"file_data\": null, \"text\": \"List objects in this image.\"}, {\"file_data\": {\"file_uri\": \"gs://cloud-samples-data/generative-ai/image/gardening-tools.jpeg\", \"mime_type\": \"image/jpeg\"}, \"text\": null}], \"role\": \"user\"}], \"generationConfig\": {\"temperature\": 0.4}}, \"response\": {\"candidates\": [{\"avgLogprobs\": -0.10394711927934126, \"content\": {\"parts\": [{\"text\": \"Here's a list of the objects in the image:\\n\\n* **Watering can:** A green plastic watering can with a white rose head.\\n* **Plant:** A small plant (possibly oregano) in a terracotta pot.\\n* **Terracotta pots:** Two terracotta pots, one containing the plant and another empty, stacked on top of each other.\\n* **Gardening gloves:** A pair of striped gardening gloves.\\n* **Gardening tools:** A small trowel and a hand cultivator (hoe). Both are green with black handles.\"}], \"role\": \"model\"}, \"finishReason\": \"STOP\"}], \"modelVersion\": \"gemini-2.0-flash@default\", \"usageMetadata\": {\"candidatesTokenCount\": 110, \"promptTokenCount\": 264, \"totalTokenCount\": 374}}}\n",
"{\"status\": \"\", \"processed_time\": \"2024-11-13T14:04:28.376+00:00\", \"request\": {\"contents\": [{\"parts\": [{\"file_data\": null, \"text\": \"List objects in this image.\"}, {\"file_data\": {\"file_uri\": \"gs://cloud-samples-data/generative-ai/image/gardening-tools.jpeg\", \"mime_type\": \"image/jpeg\"}, \"text\": null}], \"role\": \"user\"}], \"generationConfig\": {\"temperature\": 0.4}}, \"response\": {\"candidates\": [{\"avgLogprobs\": -0.10394711927934126, \"content\": {\"parts\": [{\"text\": \"Here's a list of the objects in the image:\\n\\n* **Watering can:** A green plastic watering can with a white rose head.\\n* **Plant:** A small plant (possibly oregano) in a terracotta pot.\\n* **Terracotta pots:** Two terracotta pots, one containing the plant and another empty, stacked on top of each other.\\n* **Gardening gloves:** A pair of striped gardening gloves.\\n* **Gardening tools:** A small trowel and a hand cultivator (hoe). Both are green with black handles.\"}], \"role\": \"model\"}, \"finishReason\": \"STOP\"}], \"modelVersion\": \"gemini-3-flash-preview@default\", \"usageMetadata\": {\"candidatesTokenCount\": 110, \"promptTokenCount\": 264, \"totalTokenCount\": 374}}}\n",
"```\n"
]
},
@@ -550,7 +544,9 @@
},
"outputs": [],
"source": [
"INPUT_DATA = \"bq://storage-samples.generative_ai.batch_requests_for_multimodal_input_2\" # @param {type:\"string\"}"
"# fmt: off\n",
"INPUT_DATA = \"bq://storage-samples.generative_ai.batch_requests_for_multimodal_input_2\" # @param {type:\"string\"}\n",
"# fmt: on"
]
},
{
14 changes: 8 additions & 6 deletions gemini/code-execution/intro_code_execution.ipynb
@@ -29,7 +29,7 @@
"id": "JAPoU8Sm5E6e"
},
"source": [
"# Intro to Generating and Executing Python Code with Gemini 2.0\n",
"# Intro to Generating and Executing Python Code with Gemini 3\n",
"\n",
"<table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
@@ -98,7 +98,7 @@
"source": [
"## Overview\n",
"\n",
"This notebook introduces the code execution capabilities of the [Gemini 2.0 Flash model](https://cloud.google.com/vertex-ai/generative-ai/docs/gemini-v2), a new multimodal generative AI model from Google [DeepMind](https://deepmind.google/). Gemini 2.0 Flash offers improvements in speed, quality, and advanced reasoning capabilities including enhanced understanding, coding, and instruction following.\n",
"This notebook introduces the code execution capabilities of the [Gemini 3 Flash model](https://cloud.google.com/vertex-ai/generative-ai/docs/gemini-v2), a new multimodal generative AI model from Google [DeepMind](https://deepmind.google/). Gemini 3 Flash offers improvements in speed, quality, and advanced reasoning capabilities including enhanced understanding, coding, and instruction following.\n",
"\n",
"## Code Execution\n",
"\n",
@@ -108,7 +108,7 @@
"\n",
"## Objectives\n",
"\n",
"In this tutorial, you will learn how to generate and execute code using the Gemini API in Vertex AI and the Google Gen AI SDK for Python with the Gemini 2.0 Flash model.\n",
"In this tutorial, you will learn how to generate and execute code using the Gemini API in Vertex AI and the Google Gen AI SDK for Python with the Gemini 3 Flash model.\n",
"\n",
"You will complete the following tasks:\n",
"\n",
@@ -235,7 +235,9 @@
},
"outputs": [],
"source": [
"# fmt: off\n",
"PROJECT_ID = \"[your-project-id]\" # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
"# fmt: on\n",
"if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
" PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
"\n",
@@ -259,11 +261,11 @@
"id": "x1vpnyk-q-fz"
},
"source": [
"## Working with code execution in Gemini 2.0\n",
"## Working with code execution in Gemini 3\n",
"\n",
"### Load the Gemini model\n",
"\n",
"The following code loads the Gemini 2.0 Flash model. You can learn about all Gemini models on Vertex AI by visiting the [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models):"
"The following code loads the Gemini 3 Flash model. You can learn about all Gemini models on Vertex AI by visiting the [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models):"
]
},
{
@@ -274,7 +276,7 @@
},
"outputs": [],
"source": [
"MODEL_ID = \"gemini-2.0-flash-001\" # @param {type: \"string\"}"
"MODEL_ID = \"gemini-3-flash-preview\" # @param {type: \"string\"}"
]
},
{
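The `PROJECT_ID` fallback repeated across the setup cells in this PR can be factored into one helper. A minimal sketch, assuming the same sentinel convention the notebooks use (including their `str(...)` wrapping, which yields the string `"None"` when the environment variable is absent):

```python
import os
from typing import Mapping

# Sentinel value used by the notebooks' @param form fields.
SENTINEL = "[your-project-id]"


def resolve_project_id(raw: str, env: Mapping[str, str] = os.environ) -> str:
    """Mirror the notebooks' fallback to the GOOGLE_CLOUD_PROJECT env var."""
    if not raw or raw == SENTINEL:
        # Faithful to the original: str() of a missing value gives "None".
        return str(env.get("GOOGLE_CLOUD_PROJECT"))
    return raw
```

Passing `env` explicitly keeps the helper testable; in a notebook it would simply be called as `PROJECT_ID = resolve_project_id(PROJECT_ID)`.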
8 changes: 6 additions & 2 deletions gemini/context-caching/intro_context_caching.ipynb
@@ -207,7 +207,9 @@
"source": [
"import os\n",
"\n",
"# fmt: off\n",
"PROJECT_ID = \"[your-project-id]\" # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
"# fmt: on\n",
"LOCATION = \"us-central1\" # @param {type:\"string\"}\n",
"\n",
"if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
@@ -291,7 +293,9 @@
},
"outputs": [],
"source": [
"MODEL_ID = \"gemini-2.5-flash\" # @param [\"gemini-2.0-flash-001\", \"gemini-2.5-flash\", \"gemini-2.5-pro\"] {\"allow-input\":true, isTemplate: true}"
"# fmt: off\n",
"MODEL_ID = \"gemini-2.5-flash\" # @param [\"gemini-3-flash-preview\", \"gemini-2.5-flash\", \"gemini-2.5-pro\"] {\"allow-input\":true, isTemplate: true}\n",
"# fmt: on"
]
},
{
@@ -304,7 +308,7 @@
"\n",
"Implicit caching directly passes cache cost savings to developers without the need to create an explicit cache. Now, when you send a request to one of the Gemini 2.5 models, if the request shares a common prefix with one of your previous requests, then it's eligible for a cache hit.\n",
"\n",
"**Note** that implicit caching is enabled by default for all Gemini 2.0 and 2.5 models but cost savings only apply to Gemini 2.5 models. The minimum input token count for implicit caching is 2,048 for 2.5 Flash and 2.5 Pro."
"**Note** that implicit caching is enabled by default for all Gemini 3 and 2.5 models but cost savings only apply to Gemini 2.5 models. The minimum input token count for implicit caching is 2,048 for 2.5 Flash and 2.5 Pro."
]
},
{
8 changes: 5 additions & 3 deletions gemini/function-calling/forced_function_calling.ipynb
@@ -241,7 +241,9 @@
"source": [
"import os\n",
"\n",
"# fmt: off\n",
"PROJECT_ID = \"[your-project-id]\" # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
"# fmt: on\n",
"if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
" PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
"\n",
@@ -269,8 +271,8 @@
},
"outputs": [],
"source": [
"from IPython.display import Markdown, display\n",
"import arxiv\n",
"from IPython.display import Markdown, display\n",
"from google.genai.types import (\n",
" FunctionCallingConfig,\n",
" FunctionCallingConfigMode,\n",
@@ -305,7 +307,7 @@
},
"outputs": [],
"source": [
"MODEL_ID = \"gemini-2.0-flash-001\" # @param {type: \"string\"}"
"MODEL_ID = \"gemini-3-flash-preview\" # @param {type: \"string\"}"
]
},
{
@@ -599,7 +601,7 @@
" )\n",
"\n",
" results = arxiv_client.results(search)\n",
" results = str([r for r in results])"
" results = str(list(results))"
]
},
{
@@ -227,7 +227,9 @@
"source": [
"import os\n",
"\n",
"# fmt: off\n",
"PROJECT_ID = \"[your-project-id]\" # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
"# fmt: on\n",
"if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
" PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
"\n",
@@ -277,7 +279,7 @@
},
"outputs": [],
"source": [
"MODEL_ID = \"gemini-2.0-flash\""
"MODEL_ID = \"gemini-3-flash-preview\""
]
},
{