|
40 | 40 | "# Evaluating content safety with ShieldGemma 2 and Hugging Face Transformers" |
41 | 41 | ] |
42 | 42 | }, |
| 43 | + { |
| 44 | + "cell_type": "markdown", |
| 45 | + "metadata": { |
| 46 | + "id": "2b40722aa1a9" |
| 47 | + }, |
| 48 | + "source": [ |
| 49 | + "<table class=\"tfo-notebook-buttons\" align=\"left\">\n", |
| 50 | + " <td>\n", |
| 51 | + " <a target=\"_blank\" href=\"https://ai.google.dev/responsible/docs/safeguards/shieldgemma2_on_huggingface\"><img src=\"https://ai.google.dev/static/site-assets/images/docs/notebook-site-button.png\" height=\"32\" width=\"32\" />View on ai.google.dev</a>\n", |
| 52 | + " </td>\n", |
| 53 | + " <td>\n", |
| 54 | + " <a target=\"_blank\" href=\"https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/responsible/docs/safeguards/shieldgemma2_on_huggingface.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n", |
| 55 | + " </td>\n", |
| 56 | + " <td>\n", |
| 57 | + " <a target=\"_blank\" href=\"https://github.com/google/generative-ai-docs/blob/main/site/en/responsible/docs/safeguards/shieldgemma2_on_huggingface.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n", |
| 58 | + " </td>\n", |
| 59 | + "</table>" |
| 60 | + ] |
| 61 | + }, |
43 | 62 | { |
44 | 63 | "cell_type": "markdown", |
45 | 64 | "metadata": { |
|
78 | 97 | "source": [ |
79 | 98 | "# Supported Use Case\n", |
80 | 99 | "\n", |
81 | | - "**We recommend using `ShieldGemma 2` as an input filter to vision language models or as an output filter of image generation systems or both.** ShieldGemma 2 offers the following key advantages:\n", |
|     | 100 | + "ShieldGemma 2 should be used as an input filter to vision language models, as an output filter of image generation systems, or both. ShieldGemma 2 offers the following key advantages:\n",
82 | 101 | "\n", |
83 | 102 | "* **Policy-Aware Classification**: ShieldGemma 2 accepts both a user-defined safety policy and an image as input, providing classifications for both real and generated images, tailored to the specific policy guidelines.\n", |
84 | 103 | "* **Probability-Based Output and Thresholding**: ShieldGemma 2 outputs a probability score for its predictions, allowing downstream users to flexibly tune the classification threshold based on their specific use cases and risk tolerance. This enables a more nuanced and adaptable approach to safety classification.\n", |
85 | 104 | "\n", |
86 | 105 | "The input and output formats are as follows:\n",
87 | 106 | "* **Input**: Image + Prompt Instruction with policy definition\n", |
88 | | - "* **Output**: Probability of 'Yes'/'No' tokens, 'Yes' meaning that the image violated the specific policy. The higher the score, the higher the model's confidence that the image violates the specified policy." |
|     | 107 | + "* **Output**: Probability of 'Yes'/'No' tokens, with 'Yes' meaning that the image violates the specified policy. The higher the score for the 'Yes' token, the higher the model's confidence that the image violates the specified policy."
89 | 108 | ] |
90 | 109 | }, |
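The thresholding idea described above can be sketched independently of the model API: given the model's logits for the 'Yes' and 'No' candidate tokens, a softmax over just those two tokens yields P('Yes'), which is then compared against a caller-chosen threshold. This is a minimal illustrative sketch; the function and argument names are hypothetical and not part of the ShieldGemma 2 or Transformers API.

```python
import math

def violation_probability(yes_logit: float, no_logit: float) -> float:
    """Softmax over the 'Yes'/'No' token logits, returning P('Yes'),
    i.e. the model's confidence that the image violates the policy."""
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(yes_logit, no_logit)
    e_yes = math.exp(yes_logit - m)
    e_no = math.exp(no_logit - m)
    return e_yes / (e_yes + e_no)

def is_violation(yes_logit: float, no_logit: float, threshold: float = 0.5) -> bool:
    """Flag the image when P('Yes') meets the chosen threshold.
    Lower thresholds catch more violations at the cost of false positives."""
    return violation_probability(yes_logit, no_logit) >= threshold
```

A stricter deployment might set `threshold=0.2` for an output filter on generated images, while a more permissive input filter might use `0.8`; the probability output exists precisely so each integration can tune this trade-off.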
91 | 110 | { |
|
153 | 172 | { |
154 | 173 | "cell_type": "code", |
155 | 174 | "execution_count": null, |
156 | | - "metadata": {}, |
| 175 | + "metadata": { |
| 176 | + "id": "a436de5a4e95" |
| 177 | + }, |
157 | 178 | "outputs": [], |
158 | 179 | "source": [ |
159 | 180 | "from PIL import Image\n", |
|
209 | 230 | "metadata": { |
210 | 231 | "accelerator": "GPU", |
211 | 232 | "colab": { |
212 | | - "gpuType": "A100", |
213 | | - "machine_shape": "hm", |
214 | | - "provenance": [] |
| 233 | + "name": "shieldgemma2_on_huggingface.ipynb", |
| 234 | + "toc_visible": true |
215 | 235 | }, |
216 | 236 | "kernelspec": { |
217 | 237 | "display_name": "Python 3", |
218 | 238 | "name": "python3" |
219 | | - }, |
220 | | - "language_info": { |
221 | | - "name": "python" |
222 | 239 | } |
223 | 240 | }, |
224 | 241 | "nbformat": 4, |
|