|
48 | 48 | "source": [ |
49 | 49 | "<table class=\"tfo-notebook-buttons\" align=\"left\">\n", |
50 | 50 | " <td>\n", |
51 | | - " <a target=\"_blank\" href=\"https://developers.generativeai.google/examples/chat_calculator\"><img src=\"https://developers.generativeai.google/static/site-assets/images/docs/notebook-site-button.png\" height=\"32\" width=\"32\" />View on Generative AI</a>\n", |
| 51 | + " <a target=\"_blank\" href=\"https://ai.google.dev/examples/chat_calculator\"><img src=\"https://ai.google.dev/static/site-assets/images/docs/notebook-site-button.png\" height=\"32\" width=\"32\" />View on Generative AI</a>\n", |
52 | 52 | " </td>\n", |
53 | 53 | " <td>\n", |
54 | 54 | " <a target=\"_blank\" href=\"https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/examples/chat_calculator.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n", |
|
66 | 66 | }, |
67 | 67 | "source": [ |
68 | 68 | "For some use cases, you may want to stop the generation from a model to insert specific results. For example, language models may have trouble with complicated arithmetic problems like word problems.\n", |
69 | | - "This tutorial shows an example of using an external tool with the `palm.chat` method to output the correct answer to a word problem.\n", |
| 69 | + "This tutorial shows an example of using an external tool with the `genai.chat` method to output the correct answer to a word problem.\n", |
70 | 70 | "\n", |
71 | 71 | "This particular example uses the [`numexpr`](https://github.com/pydata/numexpr) tool to perform the arithmetic but you can use this same procedure to integrate other tools specific to your use case. The following is an outline of the steps:\n", |
72 | 72 | "\n", |
73 | 73 | "1. Determine a `start` and `end` tag to demarcate the text to send the tool.\n", |
74 | 74 | "1. Create a prompt instructing the model how to use the tags in its response.\n", |
75 | 75 | "1. From the model response, take the text between the `start` and `end` tags as input to the tool.\n", |
76 | 76 | "1. Drop everything after the `end` tag.\n", |
77 | | - "1. Run the tool and add it's output as your reply.\n", |
| 77 | + "1. Run the tool and add its output as your reply.\n", |
78 | 78 | "1. The model will take the tool's output into account in its reply." |
79 | 79 | ] |
80 | 80 | }, |
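The tag-handling part of the outline (steps 3 and 4: take the text between the tags, drop everything after the end tag) can be sketched in plain Python. The `[calc]`/`[/calc]` tag names and the `truncate_and_extract` helper below are illustrative choices for this sketch, not part of the library's API; the returned expression is what you would then hand to a tool such as `numexpr.evaluate`:

```python
def truncate_and_extract(text, start="[calc]", end="[/calc]"):
    """Implement steps 3-4 of the outline: return (truncated_reply, expression).

    If the model's reply contains no tag pair, the tool was not invoked
    and the expression is None.
    """
    if start not in text or end not in text:
        return text, None
    # Drop everything after the end tag (step 4).
    head, _, _ = text.partition(end)
    reply = head + end
    # The text between the tags is the input for the tool (step 3).
    expression = head.split(start, 1)[1].strip()
    return reply, expression

# Example: a model reply that embeds an expression for the tool.
reply, expr = truncate_and_extract(
    "The total is [calc]2 + 3*4[/calc] so the answer is ..."
)
# reply == "The total is [calc]2 + 3*4[/calc]", expr == "2 + 3*4"
```

With `expr` in hand, evaluating it (e.g. `numexpr.evaluate(expr)`) produces the value you send back as your next chat message.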
|
84 | 84 | "metadata": { |
85 | 85 | "id": "v8d0FtO2KJ3O" |
86 | 86 | }, |
87 | | - "outputs": [ |
88 | | - { |
89 | | - "name": "stdout", |
90 | | - "output_type": "stream", |
91 | | - "text": [ |
92 | | - "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m122.2/122.2 kB\u001b[0m \u001b[31m2.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", |
93 | | - "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m113.3/113.3 kB\u001b[0m \u001b[31m5.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", |
94 | | - "\u001b[?25h" |
95 | | - ] |
96 | | - } |
97 | | - ], |
| 87 | + "outputs": [], |
98 | 88 | "source": [ |
99 | 89 | "pip install -q google.generativeai" |
100 | 90 | ] |
|
122 | 112 | "\n", |
123 | 113 | "@retry.Retry()\n", |
124 | 114 | "def retry_chat(**kwargs):\n", |
125 | | - " return palm.chat(**kwargs)\n", |
| 115 | + " return genai.chat(**kwargs)\n", |
126 | 116 | "\n", |
127 | 117 | "@retry.Retry()\n", |
128 | 118 | "def retry_reply(self, arg):\n", |
|
137 | 127 | }, |
138 | 128 | "outputs": [], |
139 | 129 | "source": [ |
140 | | - "import google.generativeai as palm\n", |
141 | | - "palm.configure(api_key=\"YOUR API KEY\")" |
| 130 | + "import google.generativeai as genai\n", |
| 131 | + "genai.configure(api_key=\"YOUR API KEY\")" |
142 | 132 | ] |
143 | 133 | }, |
144 | 134 | { |
|
149 | 139 | }, |
150 | 140 | "outputs": [], |
151 | 141 | "source": [ |
152 | | - "models = [m for m in palm.list_models() if 'generateMessage' in m.supported_generation_methods]\n", |
| 142 | + "models = [m for m in genai.list_models() if 'generateMessage' in m.supported_generation_methods]\n", |
153 | 143 | "model = models[0].name\n", |
154 | 144 | "print(model)" |
155 | 145 | ] |
|