## Safety settings

Gemini models have default safety settings that can be overridden. If you are receiving lots of "Safety Warnings" from your models, you can try tweaking the `safetySettings` attribute of the model. For example, to turn off safety blocking for dangerous content, you can import enums from the `@google/generative-ai` package, then construct your LLM as follows:
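A minimal sketch of that construction (the model name is illustrative, and a `GOOGLE_API_KEY` environment variable is assumed to be set):

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HarmBlockThreshold, HarmCategory } from "@google/generative-ai";

// Sketch only: relax blocking for the "dangerous content" category.
// The model name is a placeholder; GOOGLE_API_KEY is assumed to be
// available in the environment.
const llm = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-pro",
  safetySettings: [
    {
      category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
      threshold: HarmBlockThreshold.BLOCK_NONE,
    },
  ],
});
```

Note that relaxing a safety threshold applies per category, so you can keep the defaults for every other category while overriding only the one that is triggering warnings.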

### Code execution

Google Generative AI also supports code execution. Using the built-in `CodeExecutionTool`, you can make the model generate code, execute it, and use the results in a final completion:
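One way this might look (a sketch, not the canonical example: the `bindTools([{ codeExecution: {} }])` pattern and the model name are assumptions, and a live `GOOGLE_API_KEY` is required):

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

// Sketch only: bind the built-in code execution tool, then ask a
// question the model should answer by running code. The model name is
// a placeholder; GOOGLE_API_KEY is assumed to be set.
const model = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-pro",
}).bindTools([{ codeExecution: {} }]);

const result = await model.invoke(
  "Use code execution to compute the 8th Fibonacci number."
);
// The response interleaves the generated code, its execution result,
// and the model's final answer.
console.log(result.content);
```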

## Context caching

Context caching allows you to pass some content to the model once, cache the input tokens, and then refer to the cached tokens in subsequent requests to reduce cost. You can create a `CachedContent` object using the `GoogleAICacheManager` class and then pass it to your `ChatGoogleGenerativeAI` model with the `enableCachedContent()` method.
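A sketch of the flow, assuming `GoogleAICacheManager.create` accepts a model name and `contents` payload (the model name, the `longDocument` placeholder, and the environment setup are all illustrative):

```typescript
import { GoogleAICacheManager } from "@google/generative-ai/server";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

// Sketch only: `longDocument` stands in for real content large enough
// to meet the minimum cacheable token count; GOOGLE_API_KEY is assumed
// to be set in the environment.
const longDocument = "..."; // placeholder for a large document

const cacheManager = new GoogleAICacheManager(process.env.GOOGLE_API_KEY ?? "");
const cachedContent = await cacheManager.create({
  model: "models/gemini-1.5-flash-001",
  contents: [{ role: "user", parts: [{ text: longDocument }] }],
});

// Later invocations reuse the cached input tokens instead of
// resending the document on every request.
const model = new ChatGoogleGenerativeAI({ model: "gemini-1.5-flash-001" });
model.enableCachedContent(cachedContent);

await model.invoke("Summarize the document");
```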


- The minimum input token count for context caching is 32,768, and the maximum is the same as the maximum for the given model.

## Gemini prompting FAQs

As of the time this doc was written (2023/12/12), Gemini has some restrictions on the types and structure of prompts it accepts. Specifically:

## Thinking support

See the [Gemini API docs](https://ai.google.dev/gemini-api/docs/thinking) for more info.

2 changes: 1 addition & 1 deletion src/oss/python/integrations/llms/google_ai.mdx

### Safety settings

Gemini models have default safety settings that can be overridden. If you are receiving lots of "Safety Warnings" from your models, you can try tweaking the `safety_settings` attribute of the model. For example, to turn off safety blocking for dangerous content, you can construct your LLM as follows:
