Backfill fixes for hybrid docs #9197

Merged

merged 3 commits on Aug 8, 2025
8 changes: 4 additions & 4 deletions docs-devsite/ai.chromeadapter.md
@@ -24,15 +24,15 @@ export interface ChromeAdapter

| Method | Description |
| --- | --- |
-| [generateContent(request)](./ai.chromeadapter.md#chromeadaptergeneratecontent) | Generates content using on-device inference.<p>This is comparable to [GenerativeModel.generateContent()](./ai.generativemodel.md#generativemodelgeneratecontent) for generating content using in-cloud inference.</p> |
-| [generateContentStream(request)](./ai.chromeadapter.md#chromeadaptergeneratecontentstream) | Generates a content stream using on-device inference.<p>This is comparable to [GenerativeModel.generateContentStream()](./ai.generativemodel.md#generativemodelgeneratecontentstream) for generating a content stream using in-cloud inference.</p> |
+| [generateContent(request)](./ai.chromeadapter.md#chromeadaptergeneratecontent) | Generates content using on-device inference. |
+| [generateContentStream(request)](./ai.chromeadapter.md#chromeadaptergeneratecontentstream) | Generates a content stream using on-device inference. |
| [isAvailable(request)](./ai.chromeadapter.md#chromeadapterisavailable) | Checks if the on-device model is capable of handling a given request. |

## ChromeAdapter.generateContent()

Generates content using on-device inference.

-<p>This is comparable to [GenerativeModel.generateContent()](./ai.generativemodel.md#generativemodelgeneratecontent) for generating content using in-cloud inference.</p>
+This is comparable to [GenerativeModel.generateContent()](./ai.generativemodel.md#generativemodelgeneratecontent) for generating content using in-cloud inference.

<b>Signature:</b>

@@ -54,7 +54,7 @@ Promise<Response>

Generates a content stream using on-device inference.

-<p>This is comparable to [GenerativeModel.generateContentStream()](./ai.generativemodel.md#generativemodelgeneratecontentstream) for generating a content stream using in-cloud inference.</p>
+This is comparable to [GenerativeModel.generateContentStream()](./ai.generativemodel.md#generativemodelgeneratecontentstream) for generating a content stream using in-cloud inference.

<b>Signature:</b>

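The docs above pair each on-device method with its in-cloud counterpart. A minimal sketch of how a caller might choose between the two, with hypothetical names throughout: `OnDeviceAdapter`, `generateHybrid`, and `cloudGenerate` are illustrations, not SDK exports, and plain strings stand in for the real `GenerateContentRequest` and response types.

```typescript
// Hypothetical stand-in for the real ChromeAdapter interface (simplified
// to plain strings instead of GenerateContentRequest / Response).
interface OnDeviceAdapter {
  isAvailable(request: string): Promise<boolean>;
  generateContent(request: string): Promise<string>;
}

// Prefer on-device inference when the adapter reports it can handle the
// request; otherwise fall back to in-cloud inference.
async function generateHybrid(
  adapter: OnDeviceAdapter,
  cloudGenerate: (request: string) => Promise<string>,
  request: string
): Promise<string> {
  if (await adapter.isAvailable(request)) {
    return adapter.generateContent(request);
  }
  return cloudGenerate(request);
}
```

With a fake adapter that only accepts short prompts, short requests take the on-device path and long ones fall back to the cloud path.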
4 changes: 2 additions & 2 deletions docs-devsite/ai.requestoptions.md
@@ -22,12 +22,12 @@ export interface RequestOptions

| Property | Type | Description |
| --- | --- | --- |
-| [baseUrl](./ai.requestoptions.md#requestoptionsbaseurl) | string | Base url for endpoint. Defaults to https://firebasevertexai.googleapis.com |
+| [baseUrl](./ai.requestoptions.md#requestoptionsbaseurl) | string | Base url for endpoint. Defaults to https://firebasevertexai.googleapis.com, which is the [Firebase AI Logic API](https://console.cloud.google.com/apis/library/firebasevertexai.googleapis.com?project=_) (used regardless of your chosen Gemini API provider). |
| [timeout](./ai.requestoptions.md#requestoptionstimeout) | number | Request timeout in milliseconds. Defaults to 180 seconds (180000ms). |

## RequestOptions.baseUrl

-Base url for endpoint. Defaults to https://firebasevertexai.googleapis.com
+Base url for endpoint. Defaults to https://firebasevertexai.googleapis.com, which is the [Firebase AI Logic API](https://console.cloud.google.com/apis/library/firebasevertexai.googleapis.com?project=_) (used regardless of your chosen Gemini API provider).

<b>Signature:</b>

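The documented defaults (the Firebase AI Logic endpoint and the 180-second timeout) can be sketched as a small options-resolution helper. This is illustrative only; `resolveRequestOptions` is a hypothetical name, not a real SDK export.

```typescript
// Mirrors the RequestOptions interface documented above.
interface RequestOptions {
  /** Request timeout in milliseconds. */
  timeout?: number;
  /** Base URL for the endpoint. */
  baseUrl?: string;
}

const DEFAULT_BASE_URL = "https://firebasevertexai.googleapis.com";
const DEFAULT_TIMEOUT_MS = 180_000; // 180 seconds

// Fill in the documented defaults for any fields the caller omitted.
function resolveRequestOptions(
  options: RequestOptions = {}
): Required<RequestOptions> {
  return {
    timeout: options.timeout ?? DEFAULT_TIMEOUT_MS,
    baseUrl: options.baseUrl ?? DEFAULT_BASE_URL,
  };
}
```

Overriding one field leaves the other default intact, which is the usual contract for options objects like this one.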
44 changes: 23 additions & 21 deletions packages/ai/src/methods/chrome-adapter.ts
@@ -61,17 +61,17 @@ export class ChromeAdapterImpl implements ChromeAdapter {
/**
* Checks if a given request can be made on-device.
*
- * <ol>Encapsulates a few concerns:
- * <li>the mode</li>
- * <li>API existence</li>
- * <li>prompt formatting</li>
- * <li>model availability, including triggering download if necessary</li>
- * </ol>
+ * Encapsulates a few concerns:
+ * the mode
+ * API existence
+ * prompt formatting
+ * model availability, including triggering download if necessary
*
- * <p>Pros: callers needn't be concerned with details of on-device availability.</p>
- * <p>Cons: this method spans a few concerns and splits request validation from usage.
+ * Pros: callers needn't be concerned with details of on-device availability.
+ * Cons: this method spans a few concerns and splits request validation from usage.
 * If instance variables weren't already part of the API, we could consider a better
- * separation of concerns.</p>
+ * separation of concerns.
*/
async isAvailable(request: GenerateContentRequest): Promise<boolean> {
if (!this.mode) {
@@ -129,8 +129,9 @@ export class ChromeAdapterImpl implements ChromeAdapter {
/**
* Generates content on device.
*
- * <p>This is comparable to {@link GenerativeModel.generateContent} for generating content in
- * Cloud.</p>
+ * @remarks
+ * This is comparable to {@link GenerativeModel.generateContent} for generating content in
+ * Cloud.
* @param request - a standard Firebase AI {@link GenerateContentRequest}
* @returns {@link Response}, so we can reuse common response formatting.
*/
@@ -149,8 +149,9 @@ export class ChromeAdapterImpl implements ChromeAdapter {
/**
* Generates content stream on device.
*
- * <p>This is comparable to {@link GenerativeModel.generateContentStream} for generating content in
- * Cloud.</p>
+ * @remarks
+ * This is comparable to {@link GenerativeModel.generateContentStream} for generating content in
+ * Cloud.
* @param request - a standard Firebase AI {@link GenerateContentRequest}
* @returns {@link Response}, so we can reuse common response formatting.
*/
@@ -228,11 +230,11 @@ export class ChromeAdapterImpl implements ChromeAdapter {
/**
* Triggers out-of-band download of an on-device model.
*
- * <p>Chrome only downloads models as needed. Chrome knows a model is needed when code calls
- * LanguageModel.create.</p>
+ * Chrome only downloads models as needed. Chrome knows a model is needed when code calls
+ * LanguageModel.create.
 *
- * <p>Since Chrome manages the download, the SDK can only avoid redundant download requests by
- * tracking if a download has previously been requested.</p>
+ * Since Chrome manages the download, the SDK can only avoid redundant download requests by
+ * tracking if a download has previously been requested.
*/
private download(): void {
if (this.isDownloading) {
@@ -302,12 +304,12 @@ export class ChromeAdapterImpl implements ChromeAdapter {
/**
* Abstracts Chrome session creation.
*
- * <p>Chrome uses a multi-turn session for all inference. Firebase AI uses single-turn for all
+ * Chrome uses a multi-turn session for all inference. Firebase AI uses single-turn for all
 * inference. To map the Firebase AI API to Chrome's API, the SDK creates a new session for all
- * inference.</p>
+ * inference.
 *
- * <p>Chrome will remove a model from memory if it's no longer in use, so this method ensures a
- * new session is created before an old session is destroyed.</p>
+ * Chrome will remove a model from memory if it's no longer in use, so this method ensures a
+ * new session is created before an old session is destroyed.
*/
private async createSession(): Promise<LanguageModel> {
if (!this.languageModelProvider) {
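The doc comments in this file describe two patterns worth seeing in isolation: deduplicating the out-of-band download (Chrome starts a download when `create()` is called, so redundant calls must be avoided) and creating a new session before destroying the old one so the model is never evicted from memory. The sketch below uses hypothetical `Provider` and `Session` interfaces as stand-ins for Chrome's actual `LanguageModel` API; it shows the patterns, not the SDK's real implementation.

```typescript
// Hypothetical stand-ins for Chrome's session API; these are test doubles,
// not the real LanguageModel interface.
interface Session {
  destroy(): void;
}
interface Provider {
  create(): Promise<Session>;
}

class SessionManager {
  private downloadPromise: Promise<Session> | undefined;
  private oldSession: Session | undefined;

  constructor(private provider: Provider) {}

  // Chrome starts a model download when create() is called, so keep the
  // first call's promise to avoid requesting redundant downloads.
  download(): Promise<Session> {
    if (!this.downloadPromise) {
      this.downloadPromise = this.provider.create();
    }
    return this.downloadPromise;
  }

  // Create the replacement session before destroying the previous one, so
  // there is always a live session and the model stays in memory.
  async createSession(): Promise<Session> {
    const newSession = await this.provider.create();
    this.oldSession?.destroy();
    this.oldSession = newSession;
    return newSession;
  }
}
```

With a fake provider that logs calls, repeated `download()` calls trigger a single `create()`, and each old session's `destroy()` fires only after its replacement exists.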
10 changes: 6 additions & 4 deletions packages/ai/src/types/chrome-adapter.ts
@@ -37,17 +37,19 @@ export interface ChromeAdapter {
/**
* Generates content using on-device inference.
*
- * <p>This is comparable to {@link GenerativeModel.generateContent} for generating
- * content using in-cloud inference.</p>
+ * @remarks
+ * This is comparable to {@link GenerativeModel.generateContent} for generating
+ * content using in-cloud inference.
* @param request - a standard Firebase AI {@link GenerateContentRequest}
*/
generateContent(request: GenerateContentRequest): Promise<Response>;

/**
* Generates a content stream using on-device inference.
*
- * <p>This is comparable to {@link GenerativeModel.generateContentStream} for generating
- * a content stream using in-cloud inference.</p>
+ * @remarks
+ * This is comparable to {@link GenerativeModel.generateContentStream} for generating
+ * a content stream using in-cloud inference.
* @param request - a standard Firebase AI {@link GenerateContentRequest}
*/
generateContentStream(request: GenerateContentRequest): Promise<Response>;
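Both interface methods return a standard `Response` so that common response formatting can be reused across the on-device and in-cloud paths. As a rough illustration of why that helps, the helper below wraps on-device text in a cloud-style candidates body; the wire shape here is an assumption for illustration, not the SDK's actual format.

```typescript
// Hypothetical wire shape: wrap on-device output in a cloud-style
// "candidates" body so shared parsing code can handle either source.
function toResponseBody(text: string): string {
  return JSON.stringify({
    candidates: [{ content: { role: "model", parts: [{ text }] } }],
  });
}

// Package the body as a standard fetch Response (available in browsers
// and Node 18+), matching what the in-cloud path returns.
function toResponse(text: string): Response {
  return new Response(toResponseBody(text), {
    headers: { "Content-Type": "application/json" },
  });
}
```

Downstream code can then parse on-device results with the same candidate-extraction logic it already uses for cloud responses.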
5 changes: 4 additions & 1 deletion packages/ai/src/types/requests.ts
@@ -165,7 +165,10 @@ export interface RequestOptions {
*/
timeout?: number;
/**
-   * Base url for endpoint. Defaults to https://firebasevertexai.googleapis.com
+   * Base url for endpoint. Defaults to
+   * https://firebasevertexai.googleapis.com, which is the
+   * {@link https://console.cloud.google.com/apis/library/firebasevertexai.googleapis.com?project=_ | Firebase AI Logic API}
+   * (used regardless of your chosen Gemini API provider).
*/
baseUrl?: string;
}