Commit 89835cd

gemini 3 updates (#1519)
1 parent 05b57ed commit 89835cd

File tree

1 file changed (+93 -31 lines)


src/oss/python/integrations/chat/google_generative_ai.mdx

Lines changed: 93 additions & 31 deletions
@@ -101,6 +101,20 @@ print(ai_msg.content)
 J'adore la programmation.
 ```
 
+<Note>
+Gemini 3 series models will always return a list of content blocks to capture [thought signatures](#thought-signatures). Use the `.text` property to recover string content.
+
+```python
+from langchain_google_genai import ChatGoogleGenerativeAI
+
+llm = ChatGoogleGenerativeAI(model="gemini-3-pro-preview")
+response = llm.invoke("Hello")
+
+response.content  # [{"type": "text", "text": "Hello!", "extras": {"signature": "EpQFCp...lKx64r"}}]
+response.text  # "Hello!"
+```
+</Note>
+
 ## Multimodal usage
 
 Gemini models can accept multimodal inputs (text, images, audio, video) and, for some models, generate multimodal outputs.
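[Editor's note] The note added above shows Gemini 3 returning content as a list of blocks rather than a string. What the `.text` accessor recovers can be sketched offline; this is a minimal illustration over hypothetical sample data mirroring the block shape in the note, not the library's actual implementation:

```python
# Hypothetical sample data mirroring the block shape shown in the note above.
content = [
    {"type": "text", "text": "Hello!", "extras": {"signature": "EpQFCp...lKx64r"}}
]

def text_of(content):
    """Join the text of all text-type blocks; pass plain strings through unchanged."""
    if isinstance(content, str):
        return content
    return "".join(
        block.get("text", "") for block in content if block.get("type") == "text"
    )

print(text_of(content))  # Hello!
```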
@@ -390,49 +404,73 @@ Usage Metadata:
 
 ## Built-in tools
 
-Google Gemini supports a variety of built-in tools ([google search](https://ai.google.dev/gemini-api/docs/grounding/search-suggestions), [code execution](https://ai.google.dev/gemini-api/docs/code-execution?lang=python)), which can be bound to the model in the usual way.
+Google Gemini supports a variety of built-in tools, which can be bound to the model in the usual way.
+
+### Google search
+
+See the [Gemini docs](https://ai.google.dev/gemini-api/docs/grounding/search-suggestions) for details.
 
 ```python
-from google.ai.generativelanguage_v1beta.types import Tool as GenAITool
+from langchain_google_genai import ChatGoogleGenerativeAI
 
-resp = llm.invoke(
-    "When is the next total solar eclipse in US?",
-    tools=[GenAITool(google_search={})],
-)
+model = ChatGoogleGenerativeAI(model="gemini-2.5-flash-lite")
 
-print(resp.content)
-```
+model_with_search = model.bind_tools([{"google_search": {}}])
+response = model_with_search.invoke("When is the next total solar eclipse in US?")
 
+response.content_blocks
+```
 ```output
-The next total solar eclipse visible in the United States will occur on August 23, 2044. However, the path of totality will only pass through Montana, North Dakota, and South Dakota.
-
-For a total solar eclipse that crosses a significant portion of the continental U.S., you'll have to wait until August 12, 2045. This eclipse will start in California and end in Florida.
+[{'type': 'text',
+  'text': 'The next total solar eclipse visible in the contiguous United States will occur on...',
+  'annotations': [{'type': 'citation',
+                   'id': 'abc123',
+                   'url': '<url for source 1>',
+                   'title': '<source 1 title>',
+                   'start_index': 0,
+                   'end_index': 99,
+                   'cited_text': 'The next total solar eclipse...',
+                   'extras': {'google_ai_metadata': {'web_search_queries': ['next total solar eclipse in US'],
+                                                     'grounding_chunk_index': 0,
+                                                     'confidence_scores': []}}},
+                  {'type': 'citation',
+                   'id': 'abc234',
+                   'url': '<url for source 2>',
+                   'title': '<source 2 title>',
+                   'start_index': 0,
+                   'end_index': 99,
+                   'cited_text': 'The next total solar eclipse...',
+                   'extras': {'google_ai_metadata': {'web_search_queries': ['next total solar eclipse in US'],
+                                                     'grounding_chunk_index': 1,
+                                                     'confidence_scores': []}}}]}]
 ```
 
-```python
-from google.ai.generativelanguage_v1beta.types import Tool as GenAITool
+### Code execution
 
-resp = llm.invoke(
-    "What is 2*2, use python",
-    tools=[GenAITool(code_execution={})],
-)
+See the [Gemini docs](https://ai.google.dev/gemini-api/docs/code-execution?lang=python) for details.
 
-for c in resp.content:
-    if isinstance(c, dict):
-        if c["type"] == "code_execution_result":
-            print(f"Code execution result: {c['code_execution_result']}")
-        elif c["type"] == "executable_code":
-            print(f"Executable code: {c['executable_code']}")
-    else:
-        print(c)
-```
+```python
+from langchain_google_genai import ChatGoogleGenerativeAI
 
-```output
-Executable code: print(2*2)
+model = ChatGoogleGenerativeAI(model="gemini-2.5-flash-lite")
 
-Code execution result: 4
+model_with_code_interpreter = model.bind_tools([{"code_execution": {}}])
+response = model_with_code_interpreter.invoke("Use Python to calculate 3^3.")
 
-2*2 is 4.
+response.content_blocks
+```
+```output
+[{'type': 'server_tool_call',
+  'name': 'code_interpreter',
+  'args': {'code': 'print(3**3)', 'language': <Language.PYTHON: 1>},
+  'id': '...'},
+ {'type': 'server_tool_result',
+  'tool_call_id': '',
+  'status': 'success',
+  'output': '27\n',
+  'extras': {'block_type': 'code_execution_result',
+             'outcome': <Outcome.OUTCOME_OK: 1>}},
+ {'type': 'text', 'text': 'The calculation of 3 to the power of 3 is 27.'}]
 ```
 
 ## Thinking Support
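[Editor's note] The grounded-search example output above attaches citation annotations to text blocks. Post-processing them needs no API call; a minimal sketch, assuming the block and annotation shapes shown in that example (the sample data below is hypothetical):

```python
# Hypothetical sample data following the content_blocks shape in the
# Google search example output above (fields abbreviated for clarity).
blocks = [
    {
        "type": "text",
        "text": "The next total solar eclipse...",
        "annotations": [
            {"type": "citation", "id": "abc123", "url": "<url for source 1>", "title": "<source 1 title>"},
            {"type": "citation", "id": "abc234", "url": "<url for source 2>", "title": "<source 2 title>"},
        ],
    }
]

def citations(blocks):
    """Flatten citation annotations out of all text-type blocks."""
    return [
        ann
        for block in blocks
        if block.get("type") == "text"
        for ann in block.get("annotations", [])
        if ann.get("type") == "citation"
    ]

for c in citations(blocks):
    print(c["id"], c["title"])
```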
@@ -444,7 +482,8 @@ from langchain_google_genai import ChatGoogleGenerativeAI
 
 llm = ChatGoogleGenerativeAI(
     model="models/gemini-2.5-flash",
-    thinking_budget=1024
+    thinking_budget=1024,
+    include_thoughts=True,
 )
 
 response = llm.invoke("How many O's are in Google? How did you verify your answer?")
@@ -454,6 +493,29 @@ print("Response:", response.content)
 print("Reasoning tokens used:", reasoning_score)
 ```
 
+### Thought signatures
+
+[Thought signatures](https://ai.google.dev/gemini-api/docs/thought-signatures) are encrypted representations of the model's reasoning processes. Gemini 2.5 and 3 series models may return thought signatures in their responses.
+
+<Note>
+Gemini 3 may raise 4xx errors if thought signatures are not passed back with tool call responses. Upgrade to `langchain-google-genai >= 3.1.0` to ensure this is handled correctly.
+</Note>
+
+```python
+from langchain_google_genai import ChatGoogleGenerativeAI
+
+llm = ChatGoogleGenerativeAI(
+    model="models/gemini-3-pro-preview",
+    thinking_budget=1024,
+    include_thoughts=True,
+)
+
+response = llm.invoke("How many O's are in Google? How did you verify your answer?")
+
+response.content_blocks[-1]
+# {"type": "text", "text": "...", "extras": {"signature": "EtgVCt...mc0w=="}}
+```
+
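[Editor's note] The signature shown in the example above rides along in a block's `extras`. Reading it back off the blocks can be sketched offline; a minimal illustration, assuming the extras shape in that example (hypothetical sample data, no API call):

```python
# Hypothetical sample data mirroring the thought-signature example above.
content_blocks = [
    {"type": "reasoning", "reasoning": "Count the O's in 'Google'..."},
    {"type": "text", "text": "There are two O's in Google.",
     "extras": {"signature": "EtgVCt...mc0w=="}},
]

def last_signature(blocks):
    """Return the signature attached to the final block, or None if absent."""
    if not blocks:
        return None
    return blocks[-1].get("extras", {}).get("signature")

print(last_signature(content_blocks))  # EtgVCt...mc0w==
```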
 ## Safety settings
 
 Gemini models have default safety settings that can be overridden. If you are receiving lots of "Safety Warnings" from your models, you can try tweaking the `safety_settings` attribute of the model. For example, to turn off safety blocking for dangerous content, you can construct your LLM as follows:
