docs-devsite/ai.generativemodel.md (0 additions, 11 deletions)
@@ -29,7 +29,6 @@ export declare class GenerativeModel extends AIModel
| Property | Modifiers | Type | Description |
| --- | --- | --- | --- |
- | [DEFAULT\_HYBRID\_IN\_CLOUD\_MODEL](./ai.generativemodel.md#generativemodeldefault_hybrid_in_cloud_model) | <code>static</code> | string | Defines the name of the default in-cloud model to use for hybrid inference. |
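The row removed above documented `DEFAULT_HYBRID_IN_CLOUD_MODEL`, a static string naming the in-cloud fallback model used for hybrid inference. A minimal sketch of the fallback behavior such a default enables (the helper function and the default model value below are illustrative assumptions, not the SDK's implementation):

```typescript
// Illustrative sketch only: how a default in-cloud model name can back
// a hybrid-inference configuration. The constant mirrors the static
// property this diff removes; its value here is an assumption.
const DEFAULT_HYBRID_IN_CLOUD_MODEL = "gemini-2.0-flash-lite";

// Hypothetical helper: use the caller's model name when given,
// otherwise fall back to the default in-cloud model.
function resolveInCloudModel(requested?: string): string {
  return requested ?? DEFAULT_HYBRID_IN_CLOUD_MODEL;
}

console.log(resolveInCloudModel()); // falls back to the default
console.log(resolveInCloudModel("gemini-2.0-flash")); // caller's choice wins
```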
docs-devsite/ai.md (18 additions, 15 deletions)
@@ -82,24 +82,24 @@ The Firebase AI Web SDK.
|[GroundingChunk](./ai.groundingchunk.md#groundingchunk_interface)| Represents a chunk of retrieved data that supports a claim in the model's response. This is part of the grounding information provided when grounding is enabled. |
|[GroundingMetadata](./ai.groundingmetadata.md#groundingmetadata_interface)| Metadata returned when grounding is enabled.<!---->Currently, only Grounding with Google Search is supported (see [GoogleSearchTool](./ai.googlesearchtool.md#googlesearchtool_interface)<!---->).<!---->Important: If using Grounding with Google Search, you are required to comply with the "Grounding with Google Search" usage requirements for your chosen API provider: [Gemini Developer API](https://ai.google.dev/gemini-api/terms#grounding-with-google-search) or Vertex AI Gemini API (see [Service Terms](https://cloud.google.com/terms/service-terms) section within the Service Specific Terms). |
|[GroundingSupport](./ai.groundingsupport.md#groundingsupport_interface)| Provides information about how a specific segment of the model's response is supported by the retrieved grounding chunks. |
|[ImagenGCSImage](./ai.imagengcsimage.md#imagengcsimage_interface)| <b><i>(Public Preview)</i></b> An image generated by Imagen, stored in a Cloud Storage for Firebase bucket.<!---->This feature is not available yet. |
|[ImagenGenerationConfig](./ai.imagengenerationconfig.md#imagengenerationconfig_interface)| <b><i>(Public Preview)</i></b> Configuration options for generating images with Imagen.<!---->See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images-imagen) for more details. |
|[ImagenGenerationResponse](./ai.imagengenerationresponse.md#imagengenerationresponse_interface)| <b><i>(Public Preview)</i></b> The response from a request to generate images with Imagen. |
|[ImagenInlineImage](./ai.imageninlineimage.md#imageninlineimage_interface)| <b><i>(Public Preview)</i></b> An image generated by Imagen, represented as inline data. |
|[ImagenModelParams](./ai.imagenmodelparams.md#imagenmodelparams_interface)| <b><i>(Public Preview)</i></b> Parameters for configuring an [ImagenModel](./ai.imagenmodel.md#imagenmodel_class)<!---->. |
|[ImagenSafetySettings](./ai.imagensafetysettings.md#imagensafetysettings_interface)| <b><i>(Public Preview)</i></b> Settings for controlling the aggressiveness of filtering out sensitive content.<!---->See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images) for more details. |
|[InlineDataPart](./ai.inlinedatapart.md#inlinedatapart_interface)| Content part interface if the part represents an image. |
|[LanguageModelCreateCoreOptions](./ai.languagemodelcreatecoreoptions.md#languagemodelcreatecoreoptions_interface)|(EXPERIMENTAL) Used to configure the creation of an on-device language model session.|
+ |[LanguageModelCreateOptions](./ai.languagemodelcreateoptions.md#languagemodelcreateoptions_interface)|(EXPERIMENTAL) Used to configure the creation of an on-device language model session.|
+ |[LanguageModelExpected](./ai.languagemodelexpected.md#languagemodelexpected_interface)|(EXPERIMENTAL) Options for an on-device language model expected inputs.|
+ |[LanguageModelMessage](./ai.languagemodelmessage.md#languagemodelmessage_interface)|(EXPERIMENTAL) An on-device language model message.|
+ |[LanguageModelMessageContent](./ai.languagemodelmessagecontent.md#languagemodelmessagecontent_interface)|(EXPERIMENTAL) An on-device language model content object.|
+ |[LanguageModelPromptOptions](./ai.languagemodelpromptoptions.md#languagemodelpromptoptions_interface)|(EXPERIMENTAL) Options for an on-device language model prompt.|
|[ModalityTokenCount](./ai.modalitytokencount.md#modalitytokencount_interface)| Represents token counting info for a single modality. |
|[ModelParams](./ai.modelparams.md#modelparams_interface)| Params passed to [getGenerativeModel()](./ai.md#getgenerativemodel_c63f46a)<!---->. |
|[ObjectSchemaRequest](./ai.objectschemarequest.md#objectschemarequest_interface)| Interface for JSON parameters in a schema of [SchemaType](./ai.md#schematype) "object" when not using the <code>Schema.object()</code> helper. |
- |[OnDeviceParams](./ai.ondeviceparams.md#ondeviceparams_interface)| Encapsulates configuration for on-device inference. |
+ |[OnDeviceParams](./ai.ondeviceparams.md#ondeviceparams_interface)|(EXPERIMENTAL) Encapsulates configuration for on-device inference. |
|[PromptFeedback](./ai.promptfeedback.md#promptfeedback_interface)| If the prompt was blocked, this will be populated with <code>blockReason</code> and the relevant <code>safetyRatings</code>. |
|[RequestOptions](./ai.requestoptions.md#requestoptions_interface)| Params passed to [getGenerativeModel()](./ai.md#getgenerativemodel_c63f46a)<!---->. |
|[ImagenAspectRatio](./ai.md#imagenaspectratio)| <b><i>(Public Preview)</i></b> Aspect ratios for Imagen images.<!---->To specify an aspect ratio for generated images, set the <code>aspectRatio</code> property in your [ImagenGenerationConfig](./ai.imagengenerationconfig.md#imagengenerationconfig_interface)<!---->.<!---->See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images) for more details and examples of the supported aspect ratios. |
|[ImagenPersonFilterLevel](./ai.md#imagenpersonfilterlevel)| <b><i>(Public Preview)</i></b> A filter level controlling whether generation of images containing people or faces is allowed.<!---->See the <a href="http://firebase.google.com/docs/vertex-ai/generate-images">personGeneration</a> documentation for more details. |
|[ImagenSafetyFilterLevel](./ai.md#imagensafetyfilterlevel)| <b><i>(Public Preview)</i></b> A filter level controlling how aggressively to filter sensitive content.<!---->Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, <code>violence</code>, <code>sexual</code>, <code>derogatory</code>, and <code>toxic</code>). This filter level controls how aggressively to filter out potentially harmful content from responses. See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images) and the [Responsible AI and usage guidelines](https://cloud.google.com/vertex-ai/generative-ai/docs/image/responsible-ai-imagen#safety-filters) for more details. |
|[InferenceMode](./ai.md#inferencemode)|(EXPERIMENTAL) Determines whether inference happens on-device or in-cloud. |
|[Modality](./ai.md#modality)| Content part modality. |
|[POSSIBLE\_ROLES](./ai.md#possible_roles)| Possible roles. |
|[ResponseModality](./ai.md#responsemodality)| <b><i>(Public Preview)</i></b> Generation modalities to be returned in generation responses. |
@@ -160,10 +160,10 @@ The Firebase AI Web SDK.
|[ImagenAspectRatio](./ai.md#imagenaspectratio)| <b><i>(Public Preview)</i></b> Aspect ratios for Imagen images.<!---->To specify an aspect ratio for generated images, set the <code>aspectRatio</code> property in your [ImagenGenerationConfig](./ai.imagengenerationconfig.md#imagengenerationconfig_interface)<!---->.<!---->See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images) for more details and examples of the supported aspect ratios. |
|[ImagenPersonFilterLevel](./ai.md#imagenpersonfilterlevel)| <b><i>(Public Preview)</i></b> A filter level controlling whether generation of images containing people or faces is allowed.<!---->See the <a href="http://firebase.google.com/docs/vertex-ai/generate-images">personGeneration</a> documentation for more details. |
|[ImagenSafetyFilterLevel](./ai.md#imagensafetyfilterlevel)| <b><i>(Public Preview)</i></b> A filter level controlling how aggressively to filter sensitive content.<!---->Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, <code>violence</code>, <code>sexual</code>, <code>derogatory</code>, and <code>toxic</code>). This filter level controls how aggressively to filter out potentially harmful content from responses. See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images) and the [Responsible AI and usage guidelines](https://cloud.google.com/vertex-ai/generative-ai/docs/image/responsible-ai-imagen#safety-filters) for more details. |
|[InferenceMode](./ai.md#inferencemode)|(EXPERIMENTAL) Determines whether inference happens on-device or in-cloud. |
+ |[LanguageModelMessageContentValue](./ai.md#languagemodelmessagecontentvalue)|(EXPERIMENTAL) Content formats that can be provided as on-device message content.|
+ |[LanguageModelMessageRole](./ai.md#languagemodelmessagerole)|(EXPERIMENTAL) Allowable roles for on-device language model usage.|
+ |[LanguageModelMessageType](./ai.md#languagemodelmessagetype)|(EXPERIMENTAL) Allowable types for on-device language model messages.|
|[Modality](./ai.md#modality)| Content part modality. |
|[Part](./ai.md#part)| Content part - includes text, image/video, or function call/response part types. |
|[ResponseModality](./ai.md#responsemodality)| <b><i>(Public Preview)</i></b> Generation modalities to be returned in generation responses. |
@@ -504,7 +504,7 @@ ImagenSafetyFilterLevel: {
## InferenceMode
- EXPERIMENTAL FEATURE Determines whether inference happens on-device or in-cloud.
+ (EXPERIMENTAL) Determines whether inference happens on-device or in-cloud.
<b>Signature:</b>
@@ -724,7 +724,7 @@ export type ImagenSafetyFilterLevel = (typeof ImagenSafetyFilterLevel)[keyof typ
## InferenceMode
- EXPERIMENTAL FEATURE Determines whether inference happens on-device or in-cloud.
+ (EXPERIMENTAL) Determines whether inference happens on-device or in-cloud.
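Elsewhere in ai.md, enum-like values such as `ImagenSafetyFilterLevel` are declared as a constant object plus a derived union type (`export type ImagenSafetyFilterLevel = (typeof ImagenSafetyFilterLevel)[keyof typeof ImagenSafetyFilterLevel]`), and the `InferenceMode` signature follows the same shape. A self-contained sketch of that pattern (the member names and string values below are assumptions for illustration, not the SDK's exact definition):

```typescript
// Sketch of the const-object pattern the SDK uses for enum-like types
// such as InferenceMode. Member names and values are illustrative.
const InferenceMode = {
  PREFER_ON_DEVICE: "prefer_on_device",
  ONLY_ON_DEVICE: "only_on_device",
  ONLY_IN_CLOUD: "only_in_cloud",
} as const;

// The companion type is the union of the object's string values,
// mirroring the `(typeof X)[keyof typeof X]` signatures in ai.md.
type InferenceMode = (typeof InferenceMode)[keyof typeof InferenceMode];

function describeMode(mode: InferenceMode): string {
  return mode === InferenceMode.ONLY_IN_CLOUD
    ? "inference always happens in-cloud"
    : "inference may happen on-device";
}

console.log(describeMode(InferenceMode.PREFER_ON_DEVICE));
```

Unlike a TypeScript `enum`, this pattern keeps the runtime values plain strings, so callers can pass either the named member or its literal value.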