Commit 52bca1b

Add File API quickstart example for Gemini API (#861)
1 parent 1e02ac7 commit 52bca1b

File tree

2 files changed: +368 -1 lines changed


quickstarts-js/File_API.js

Lines changed: 367 additions & 0 deletions
@@ -0,0 +1,367 @@
/*
 * Copyright 2025 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

/* Markdown (render)
# Gemini API: File API Quickstart

The Gemini API supports prompting with text, image, and audio data, also known as *multimodal* prompting. For small images, you can point the Gemini model directly to a local file when providing a prompt. For larger text files, images, videos, and audio, upload the files with the [File API](https://ai.google.dev/api/rest/v1beta/files) before including them in prompts.

The File API lets you store up to 20GB of files per project, with each file not exceeding 2GB in size. Files are stored for 48 hours and can be accessed with your API key for generation within that time period. It is available at no cost in all regions where the [Gemini API is available](https://ai.google.dev/available_regions).

For information on valid file formats (MIME types) and supported models, see the documentation on [supported file formats](https://ai.google.dev/tutorials/prompting_with_media#supported_file_formats) and view the text examples at the end of this guide.

This guide shows how to use the File API to upload a media file and include it in a `GenerateContent` call to the Gemini API.

*/

/* Markdown (render)
## Setup
### Install the SDK and set up the client

### API Key Configuration

To ensure security, avoid hardcoding the API key in frontend code. Instead, set it as an environment variable on the server or local machine.

When using the Gemini API client libraries, the key will be automatically detected if set as either `GEMINI_API_KEY` or `GOOGLE_API_KEY`. If both are set, `GOOGLE_API_KEY` takes precedence.

For instructions on setting environment variables across different operating systems, refer to the official documentation: [Set API Key as Environment Variable](https://ai.google.dev/gemini-api/docs/api-key#set-api-env-var)

In code, the key can then be accessed as:

```js
ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
```
*/

// [CODE STARTS]
// Pin a specific SDK version here if needed, e.g. https://esm.sh/@google/genai@<version>
module = await import("https://esm.sh/@google/genai");
GoogleGenAI = module.GoogleGenAI;
ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
// [CODE ENDS]
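
/* Markdown (render)
If you prefer to run this quickstart in Node.js instead of the browser-style environment above, a roughly equivalent setup is sketched below. It assumes the `@google/genai` package is installed from npm and that your project uses ES modules; adapt as needed.

```js
// Node.js sketch (assumes: npm install @google/genai, "type": "module" in package.json)
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
```
*/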

/* Markdown (render)
### Choose a model

Now select the model you want to use in this guide, either by selecting one in the list or writing it down. Keep in mind that some models, like the 2.5 ones, are thinking models and thus take slightly more time to respond (see the [thinking notebook](https://github.com/google-gemini/cookbook/blob/main/quickstarts-js/Get_started_thinking.ipynb) for more details, and in particular to learn how to switch thinking off).

For more information about all Gemini models, check the [documentation](https://ai.google.dev/gemini-api/docs/models/gemini) for extended information on each of them.
*/

// [CODE STARTS]
MODEL_ID = "gemini-2.5-flash"; // "gemini-2.5-flash", "gemini-2.5-pro", "gemini-2.0-flash", "gemini-2.5-flash-lite-preview-06-17"
// [CODE ENDS]

/* Markdown (render)
## Upload file

The File API lets you upload a variety of multimodal MIME types, including images and audio formats. The File API handles inputs that can be used to generate content with [`models.generateContent`](https://ai.google.dev/api/generate-content#method:-models.generatecontent) or [`models.streamGenerateContent`](https://ai.google.dev/api/generate-content#method:-models.streamgeneratecontent).

The File API accepts files under 2GB in size and can store up to 20GB of files per project. Files last for 2 days and cannot be downloaded from the API.

First, you will prepare a sample image to upload to the API.
*/

// [CODE STARTS]
imageFile = await fetch("https://storage.googleapis.com/generativeai-downloads/images/jetpack.jpg")
  .then(res => res.blob());

imageDataUrl = await new Promise((resolve) => {
  reader = new FileReader();
  reader.onloadend = () => resolve(reader.result.split(',')[1]); // Get only base64 string
  reader.readAsDataURL(imageFile);
});
console.image(imageDataUrl);
// [CODE ENDS]

/* Output Sample

<img src="https://storage.googleapis.com/generativeai-downloads/images/jetpack.jpg" style="height:auto; width:100%;" />
*/

/* Markdown (render)
Next, you will upload that file to the File API.
*/

// [CODE STARTS]
uploadedFile = await ai.files.upload({ file: imageFile });

console.log(`Uploaded file '${uploadedFile.name}' as: ${uploadedFile.uri}`);
// [CODE ENDS]

/* Output Sample

Uploaded file 'files/wrcset8sfug1' as: https://generativelanguage.googleapis.com/v1main/files/wrcset8sfug1
*/

/* Markdown (render)

The returned `uploadedFile` object includes a generated `name` along with a `uri` that can be used to reference the file in future Gemini API calls. You can use it to track how uploaded files are associated with their URIs.

Depending on your use case, you might want to store the URIs in structures like a plain JavaScript object or a database, as sketched below.

*/
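
/* Markdown (render)
For example, a minimal sketch of such a lookup structure (the `fileRegistry` object and `registerUpload` helper below are illustrative names, not part of the SDK):

```js
// Keep track of uploaded files under labels you choose.
fileRegistry = {};

registerUpload = (label, file) => {
  // `file` is the object returned by ai.files.upload(...)
  fileRegistry[label] = { name: file.name, uri: file.uri, mimeType: file.mimeType };
};

registerUpload("jetpack-sketch", uploadedFile);
console.log(fileRegistry["jetpack-sketch"].uri);
```
*/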

/* Markdown (render)
## Get file

After uploading the file, you can verify that the API has successfully received it by calling `files.get`.

It lets you get the metadata of files that have been uploaded to the File API and are associated with the Cloud project your API key belongs to. Only the `name` (and by extension, the `uri`) is unique. Only use the `displayName` to identify files if you manage uniqueness yourself.
*/

// [CODE STARTS]
retrievedFile = await ai.files.get({ name: uploadedFile.name });
console.log(`Retrieved file '${retrievedFile.name}' as: ${retrievedFile.uri}`);
// [CODE ENDS]

/* Output Sample

Retrieved file 'files/wrcset8sfug1' as: https://generativelanguage.googleapis.com/v1main/files/wrcset8sfug1
*/
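
/* Markdown (render)
For larger media such as audio or video, the uploaded file may need a short processing period before it can be used in a prompt. A minimal polling sketch, assuming the returned file object exposes a `state` field that moves from `PROCESSING` to `ACTIVE` (or `FAILED`), as described in the Files API reference:

```js
// Illustrative helper: poll files.get until the file is ready to use in prompts.
waitForActive = async (file) => {
  while (file.state === "PROCESSING") {
    await new Promise((r) => setTimeout(r, 5000)); // wait 5 seconds between checks
    file = await ai.files.get({ name: file.name });
  }
  if (file.state === "FAILED") {
    throw new Error(`Processing failed for ${file.name}`);
  }
  return file;
};
```
*/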

/* Markdown (render)
## Generate content

After uploading the file, you can make `GenerateContent` requests that reference it by its URI. With the JS SDK, you pass the file's `uri` and `mimeType` in a `fileData` part of the request.

Here you create a prompt that includes the uploaded image along with a text instruction.
*/

// [CODE STARTS]
response = await ai.models.generateContent({
  model: MODEL_ID,
  contents: [
    {
      fileData: {
        fileUri: uploadedFile.uri,
        mimeType: "image/jpeg"
      }
    },
    "Describe the image with a creative description."
  ]
});

console.log(response.text);
// [CODE ENDS]

/* Output Sample

Behold, a visionary concept scribbled with bold blue ink on classic ruled notebook paper: the "JETPACK BACKPACK."

This ingenious invention, proudly titled at the top, presents itself as an unassuming rounded backpack, sleek in its sketched simplicity. But the arrows pointing to its various features tell a tale of discreet power and futuristic practicality.

On the left, we learn of its ergonomic design, promising "PADDED STRAP SUPPORT" for comfort, even when soaring above the daily commute. Below, the future of charging is here, with "USB-C CHARGING," though adventurers will need to plan their flights carefully, as it offers a "15-MIN BATTERY LIFE."

To the right, the backpack's true magic is revealed. It's designed to be "LIGHTWEIGHT" and, crucially, "LOOKS LIKE A NORMAL BACKPACK," allowing its wearer to blend seamlessly into any crowd before making a grand exit. It's capacious too, designed to fit an "18" LAPTOP." The core of its power lies beneath, with "RETRACTABLE BOOSTERS" ready to emerge when needed, unleashing whimsical, swirling plumes of "STEAM-POWERED" propulsion – a promise of "GREEN/CLEAN" flight that leaves nothing but a charming vapor trail in its wake.

This handwritten blueprint captures the dream of everyday flight, balancing the fantastical with thoughtful, if ambitious, specifications.

*/
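
/* Markdown (render)
If you prefer to display the answer as it is produced instead of waiting for the full response, you can use the streaming variant of the call. A sketch, assuming `generateContentStream` returns an async iterable of response chunks, as in recent `@google/genai` releases:

```js
// Stream the description chunk by chunk (illustrative).
stream = await ai.models.generateContentStream({
  model: MODEL_ID,
  contents: [
    { fileData: { fileUri: uploadedFile.uri, mimeType: "image/jpeg" } },
    "Describe the image with a creative description."
  ]
});
for await (const chunk of stream) {
  console.log(chunk.text);
}
```
*/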

/* Markdown (render)
## Delete files

Files are automatically deleted after 2 days, or you can manually delete them using `files.delete()`.
*/

// [CODE STARTS]
await ai.files.delete({ name: uploadedFile.name });
console.log(`Deleted ${uploadedFile.name}.`);
// [CODE ENDS]

/* Output Sample

Deleted files/wrcset8sfug1.
*/
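
/* Markdown (render)
If you ever need to clean up everything stored for your project, you can combine listing and deleting. A sketch, assuming `ai.files.list()` returns an async-iterable pager, as in recent `@google/genai` releases:

```js
// Illustrative: list all files for this project and delete them one by one.
pager = await ai.files.list();
for await (const f of pager) {
  await ai.files.delete({ name: f.name });
  console.log(`Deleted ${f.name}`);
}
```
*/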

/* Markdown (render)
## Supported text types

As well as supporting media uploads, the File API can be used to embed text files, such as Python code or Markdown files, into your prompts.

This example shows you how to load a Markdown file into a prompt using the File API.
*/

// [CODE STARTS]
fileUrl = "https://raw.githubusercontent.com/google-gemini/cookbook/main/CONTRIBUTING.md";

response = await fetch(fileUrl);
blob = await response.blob();
file = new File([blob], "contrib.md", { type: "text/markdown" });

uploadedFile = await ai.files.upload({
  file,
  config: {
    displayName: "CONTRIBUTING.md",
    mimeType: "text/markdown"
  }
});
console.log("Uploaded:", uploadedFile.uri);

modelResponse = await ai.models.generateContent({
  model: MODEL_ID,
  contents: [
    {
      text: "What should I do before I start writing, when following these guidelines?"
    },
    {
      fileData: {
        fileUri: uploadedFile.uri,
        mimeType: "text/markdown"
      }
    }
  ]
});

console.log(modelResponse.text);

await ai.files.delete({ name: uploadedFile.name });
console.log(`Deleted ${uploadedFile.name}`);

// [CODE ENDS]

/* Output Sample

Uploaded: https://generativelanguage.googleapis.com/v1main/files/qvq94l4c5jhz

Before you start writing, when contributing to the Gemini API Cookbook, you should do the following:

1. **Sign the Contributor License Agreement (CLA):** All contributions require you (or your employer) to sign a CLA. Visit [https://cla.developers.google.com/](https://cla.developers.google.com/) to check your current agreements or sign a new one.
2. **Review Style Guides:**
   * Take a look at the [technical writing style guide](https://developers.google.com/style), specifically reading the [highlights](https://developers.google.com/style/highlights) to understand common feedback points.
   * Check out the relevant [style guide](https://google.github.io/styleguide/) for the language you will be using (e.g., Python).
3. **File an Issue:** For any new content submission (beyond small fixes), you **must** file an [issue](https://github.com/google-gemini/cookbook/issues) on GitHub. This allows for discussion of your idea, guidance on structure, and ensures your concept has support before you invest time in writing.

Deleted files/qvq94l4c5jhz

*/

/* Markdown (render)
Some common text formats are automatically detected, such as `text/x-python`, `text/html`, and `text/markdown`. If you are using a file that you know is text but is not automatically detected by the API as such, you can specify the MIME type as `text/plain` explicitly.
*/

// [CODE STARTS]
fileUrl = "https://raw.githubusercontent.com/google/gemma.cpp/main/examples/hello_world/run.cc";

response = await fetch(fileUrl);
blob = await response.blob();
cppFile = new File([blob], "gemma.cpp", { type: "text/plain" }); // Forced MIME

uploadedCpp = await ai.files.upload({
  file: cppFile,
  config: {
    displayName: "gemma.cpp",
    mimeType: "text/plain"
  }
});
console.log("Uploaded:", uploadedCpp.uri);

modelResponse = await ai.models.generateContent({
  model: MODEL_ID,
  contents: [
    { text: "What does this program do?" },
    {
      fileData: {
        fileUri: uploadedCpp.uri,
        mimeType: "text/plain"
      }
    }
  ]
});

console.log(modelResponse.text);

await ai.files.delete({ name: uploadedCpp.name });
console.log(`Deleted ${uploadedCpp.name}`);

// [CODE ENDS]

/* Output Sample

Uploaded: https://generativelanguage.googleapis.com/v1main/files/ih9sjrj7h21x

This C++ program is a minimal example demonstrating how to load and use Google's **Gemma large language model** for text generation, with a specific focus on **constrained decoding**.

Here's a breakdown of what it does:

1. **Argument Parsing:**
   * It processes command-line arguments using `gcpp::LoaderArgs`, `gcpp::InferenceArgs`, and `gcpp::AppArgs`. These likely configure aspects like the model path, batch size, number of threads, etc.
   * It specifically looks for a `"--reject"` flag. Any integer arguments following `--reject` are interpreted as token IDs that should *never* be generated by the model. This is the core of its constrained decoding demonstration.

2. **Model Initialization:**
   * It sets up the necessary environment for the Gemma model, including:
     * `gcpp::BoundedTopology` and `gcpp::NestedPools`: For managing threading and parallel computations (likely using Highway, a SIMD library).
     * `gcpp::MatMulEnv`: An environment for matrix multiplication, essential for neural networks.
     * `gcpp::Gemma model`: The actual Gemma model instance, loaded based on `loader` arguments.
     * `gcpp::KVCache kv_cache`: A Key-Value Cache, crucial for efficient LLM inference by storing past activations, avoiding recomputation.

3. **Prompt Definition and Tokenization:**
   * It defines a fixed input prompt: `"Write a greeting to the world."`
   * It then uses the model's tokenizer (`model.Tokenizer()`) to convert this string prompt into a sequence of integer token IDs.

4. **Text Generation with Callbacks:**
   * **Random Number Generator:** Initializes a `std::mt19937` (Mersenne Twister) random number generator, seeded by `std::random_device`, which is used for probabilistic token sampling during generation (e.g., when `temperature` > 0).
   * **`stream_token` Callback:** A lambda function is defined to be called every time the model generates a new token.
     * If the token is part of the initial prompt (before generation truly starts), it currently does nothing (the placeholder comment `// print feedback` suggests it *could* be used for that).
     * Once actual generation begins, it decodes the integer token ID back into human-readable text using `model.Tokenizer().Decode()` and prints it immediately to `std::cout` using `std::flush`. This provides a streaming output experience.
   * **`accept_token` Callback (Constrained Decoding):** Another lambda function is defined. This is the core of the `--reject` functionality.
     * Before the model chooses a token, this callback is invoked for potential candidate tokens.
     * It checks if the candidate token is present in the `reject_tokens` set (populated from the `--reject` command-line argument).
     * If the token *is* in `reject_tokens`, it returns `false`, effectively preventing the model from ever outputting that specific token. This "constrains" the output.
   * **`model.Generate()`:** The program then calls `model.Generate()`, passing in:
     * `runtime_config`: Containing generation parameters like `max_generated_tokens` (1024), `temperature` (1.0), the random generator, the `stream_token` callback, and crucially, the `accept_token` callback.
     * `tokens`: The initial prompt tokens.
     * `kv_cache`: For efficient inference.

**In Summary:**

This program loads the Gemma large language model, initializes it with various parameters (some configurable via command-line), tokenizes a hardcoded prompt ("Write a greeting to the world."), and then generates a response. Its primary demonstration feature is **constrained decoding**, allowing the user to specify (via the `--reject` flag) certain token IDs that the model must *never* output during the generation process. The generated text is streamed to standard output token by token.

Deleted files/ih9sjrj7h21x

*/

/* Markdown (render)
## Next Steps
### Useful API references:

For more information about the File API, check its [API reference](https://ai.google.dev/api/files). You will also find more code samples [in this folder](https://github.com/google-gemini/cookbook/blob/main/quickstarts-js/).

### Related examples

Check these examples using the File API for more ideas on how to use this very useful feature:
* Share [Voice memos](https://github.com/google-gemini/cookbook/blob/main/examples/Voice_memos.ipynb) with the Gemini API and brainstorm ideas
* Analyze videos to [classify](https://github.com/google-gemini/cookbook/blob/main/examples/Analyze_a_Video_Classification.ipynb) or [summarize](https://github.com/google-gemini/cookbook/blob/main/examples/Analyze_a_Video_Summarization.ipynb) them

### Continue your discovery of the Gemini API

If you're not already familiar with it, learn how [tokens are counted](https://github.com/google-gemini/cookbook/blob/main/quickstarts-js/Counting_Tokens.js). Then check how to use the File API with [Audio](https://github.com/google-gemini/cookbook/blob/main/quickstarts-js/Audio.js) or [Video understanding](https://github.com/google-gemini/cookbook/blob/main/quickstarts-js/Video_understanding.js) files in the Gemini API.

*/

quickstarts-js/README.md

Lines changed: 1 addition & 1 deletion
@@ -18,5 +18,5 @@ Stay tuned, more JavaScript notebooks are on the way!
| --- | --- | --- | --- | --- |
| Get Started | A comprehensive introduction to the Gemini JS/TS SDK, demonstrating features such as text and multimodal prompting, token counting, system instructions, safety filters, multi-turn chat, output control, function calling, content streaming, file uploads, and using URL or YouTube video context. | Explore core Gemini capabilities in JS/TS | [![Open in AI Studio](https://storage.googleapis.com/generativeai-downloads/images/Open_in_AIStudio.svg)](https://aistudio.google.com/apps/bundled/get_started?showPreview=true) | <img src="https://cdn.jsdelivr.net/gh/devicons/devicon/icons/javascript/javascript-original.svg" alt="JS" width="20"/> [Get_Started.js](./Get_Started.js) |
| Image Output | Generate and iterate on images using Gemini’s multimodal capabilities. Learn to use text+image responses, edit images mid-conversation, and handle multiple image outputs with chat-style prompting. | Image generation, multimodal output, image editing, iterative refinement | [![Open in AI Studio](https://storage.googleapis.com/generativeai-downloads/images/Open_in_AIStudio.svg)](https://aistudio.google.com/apps/bundled/get_started_image_out?showPreview=true) | <img src="https://cdn.jsdelivr.net/gh/devicons/devicon/icons/javascript/javascript-original.svg" alt="JS" width="20"/> [ImageOutput.js](./ImageOutput.js) |
| File API | Learn how to upload, use, retrieve, and delete files (text, image, audio, code) with the Gemini File API for multimodal prompts. | File upload, multimodal prompts, text/code/media files | [![Open in AI Studio](https://storage.googleapis.com/generativeai-downloads/images/Open_in_AIStudio.svg)](https://aistudio.google.com/apps/bundled/file_api?showPreview=true) | <img src="https://cdn.jsdelivr.net/gh/devicons/devicon/icons/javascript/javascript-original.svg" alt="JS" width="20"/> [File_API.js](./File_API.js) |
