|
108 | 108 | "- Try-on multiple clothing items from locally stored images\n", |
109 | 109 | "- Try-on a clothing item in Cloud Storage with an Imagen generated person\n", |
110 | 110 | "\n", |
111 | | - "Learn more about [quotas](https://cloud.google.com/vertex-ai/generative-ai/docs/models/imagen/virtual-try-on-preview-08-04) and [pricing](https://cloud.google.com/vertex-ai/generative-ai/pricing#imagen-models) for Virtual Try-On in the product documentation. " |
| 111 | + "Learn more about [quotas](https://cloud.google.com/vertex-ai/generative-ai/docs/models/imagen/virtual-try-on-preview-08-04) and [pricing](https://cloud.google.com/vertex-ai/generative-ai/pricing#imagen-models) for Virtual Try-On in the product documentation." |
112 | 112 | ] |
113 | 113 | }, |
114 | 114 | { |
|
418 | 418 | "### Send the request\n", |
419 | 419 | "\n", |
420 | 420 | "With the Virtual Try-On model, you can only specify one clothing item to try on at a time. Since this example has two clothing items, you'll need to make two separate requests. In each call, you can specify the following parameters in addition to the `person_image` and `product_images`:\n", |
421 | | - " - **Base steps:** An integer that controls image generation, with higher steps trading higher quality for increased latency.\n", |
422 | 421 | " - **Number of images:** 1 - 4\n", |
423 | 422 | "\n", |
424 | 423 | "You'll save the output image locally so that it can be referenced in the next step." |
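The one-item-per-request constraint described in this cell can be sketched as a simple loop. Here `try_on` is a hypothetical stand-in for the `client.models.recontext_image` call made in the code cells of this notebook; this sketch only illustrates the request pattern, not the real API:

```python
# Sketch (hypothetical helper): Virtual Try-On accepts a single clothing
# item per request, so trying on N items means issuing N separate requests.
def try_on(person_image: str, product_image: str) -> str:
    # In the notebook this would send one recontext_image request and
    # return a generated image; here it just labels the pairing.
    return f"{person_image}+{product_image}"

person = "person.png"
products = ["sweater.png", "pants.png"]  # two items -> two requests
results = [try_on(person, item) for item in products]
print(results)  # one result per clothing item
```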
|
441 | 440 | " ],\n", |
442 | 441 | " ),\n", |
443 | 442 | " config=RecontextImageConfig(\n", |
444 | | - " base_steps=32,\n", |
| 443 | + " output_mime_type=\"image/jpeg\",\n", |
445 | 444 | " number_of_images=1,\n", |
446 | 445 | " safety_filter_level=\"BLOCK_LOW_AND_ABOVE\",\n", |
447 | | - " person_generation=\"ALLOW_ADULT\",\n", |
448 | 446 | " ),\n", |
449 | 447 | ")\n", |
450 | 448 | "\n", |
|
458 | 456 | "id": "cP_gkDUVx-3Z" |
459 | 457 | }, |
460 | 458 | "source": [ |
461 | | - "When generating images of people you can also set the `safety_filter_level` and `person_generation` parameters accordingly:\n", |
462 | | - "\n", |
463 | | - "- `person_generation`\n", |
464 | | - " - `DONT_ALLOW`\n", |
465 | | - " - `ALLOW_ADULT`\n", |
466 | | - " - `ALLOW_ALL`\n", |
| 459 | + "When generating images, you can also set the `safety_filter_level` parameter:\n", |
467 | 460 | "- `safety_filter_level`\n", |
468 | 461 | " - `BLOCK_LOW_AND_ABOVE`\n", |
469 | 462 | " - `BLOCK_MEDIUM_AND_ABOVE`\n", |
|
488 | 481 | " ],\n", |
489 | 482 | " ),\n", |
490 | 483 | " config=RecontextImageConfig(\n", |
491 | | - " base_steps=32,\n", |
| 484 | + " output_mime_type=\"image/jpeg\",\n", |
492 | 485 | " number_of_images=1,\n", |
493 | 486 | " safety_filter_level=\"BLOCK_LOW_AND_ABOVE\",\n", |
494 | | - " person_generation=\"ALLOW_ADULT\",\n", |
495 | 487 | " ),\n", |
496 | 488 | ")\n", |
497 | 489 | "display_image(response.generated_images[0].image)" |
|
535 | 527 | " model=image_generation,\n", |
536 | 528 | " prompt=prompt,\n", |
537 | 529 | " config=GenerateImagesConfig(\n", |
| 530 | + " output_mime_type=\"image/jpeg\",\n", |
538 | 531 | " number_of_images=1,\n", |
539 | 532 | " image_size=\"2K\",\n", |
540 | 533 | " safety_filter_level=\"BLOCK_MEDIUM_AND_ABOVE\",\n", |
541 | | - " person_generation=\"ALLOW_ADULT\",\n", |
542 | 534 | " ),\n", |
543 | 535 | ")\n", |
544 | 536 | "display_image(image.generated_images[0].image)" |
|
609 | 601 | " ],\n", |
610 | 602 | " ),\n", |
611 | 603 | " config=RecontextImageConfig(\n", |
612 | | - " base_steps=32,\n", |
| 604 | + " output_mime_type=\"image/jpeg\",\n", |
613 | 605 | " number_of_images=1,\n", |
614 | 606 | " safety_filter_level=\"BLOCK_LOW_AND_ABOVE\",\n", |
615 | | - " person_generation=\"ALLOW_ADULT\",\n", |
616 | 607 | " ),\n", |
617 | 608 | ")\n", |
618 | 609 | "display_image(response.generated_images[0].image)" |
|