
Commit 6475319

Merge branch 'google:main' into main
2 parents: 14b57cf + d560fa9

21 files changed (+8445, -1493 lines)

.github/workflows/notebooks.yml

Lines changed: 1 addition & 1 deletion
@@ -65,7 +65,7 @@ jobs:
 python3 -m tensorflow_docs.tools.nblint \
   --styles=google,tensorflow \
   --arg=repo:google/generative-ai-docs --arg=branch:main \
-  --arg=base_url:https://developers.generativeai.google/ \
+  --arg=base_url:https://ai.google.dev/ \
   --exclude_lint=tensorflow::button_download \
   "${changed_notebooks[@]}"
 fi

demos/palm/python/docs-agent/README.md

Lines changed: 4 additions & 0 deletions
@@ -603,6 +603,10 @@ To launch the Docs Agent chat app, do the following:
 already running on port 5000 on your host machine, you can use the `-p` flag to specify
 a different port (for example, `poetry run ./chatbot/launch.sh -p 5050`).
 
+**Note**: If this `poetry run ./chatbot/launch.sh` command fails to run, check the `HOSTNAME` environment
+variable on your host machine (for example, `echo $HOSTNAME`). If this variable is unset, try setting it to
+`localhost` by running `export HOSTNAME=localhost`.
+
 Once the app starts running, this command prints output similar to the following:
 
 ```
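
The added note is a troubleshooting step for an unset `HOSTNAME`. As a purely hypothetical illustration (not code from the Docs Agent sources), the failure mode it guards against looks like a launcher that reads `HOSTNAME` and ends up with an empty bind address, which the suggested `export HOSTNAME=localhost` avoids:

```
# Hypothetical sketch only -- not taken from the Docs Agent repository.
import os

# An unset HOSTNAME yields None here; fall back to localhost, as the
# README note suggests.
hostname = os.environ.get("HOSTNAME") or "localhost"
print(f"Chat app would bind to http://{hostname}:5000")
```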

site/en/docs/semantic_retriever.ipynb

Lines changed: 998 additions & 0 deletions
Large diffs are not rendered by default.

site/en/examples/anomaly_detection.ipynb

Lines changed: 1439 additions & 687 deletions
Large diffs are not rendered by default.

site/en/examples/chat_calculator.ipynb

Lines changed: 8 additions & 18 deletions
@@ -48,7 +48,7 @@
 "source": [
 "<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
 " <td>\n",
-" <a target=\"_blank\" href=\"https://developers.generativeai.google/examples/chat_calculator\"><img src=\"https://developers.generativeai.google/static/site-assets/images/docs/notebook-site-button.png\" height=\"32\" width=\"32\" />View on Generative AI</a>\n",
+" <a target=\"_blank\" href=\"https://ai.google.dev/examples/chat_calculator\"><img src=\"https://ai.google.dev/static/site-assets/images/docs/notebook-site-button.png\" height=\"32\" width=\"32\" />View on Generative AI</a>\n",
 " </td>\n",
 " <td>\n",
 " <a target=\"_blank\" href=\"https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/examples/chat_calculator.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
@@ -66,15 +66,15 @@
 },
 "source": [
 "For some use cases, you may want to stop the generation from a model to insert specific results. For example, language models may have trouble with complicated arithmetic problems like word problems.\n",
-"This tutorial shows an example of using an external tool with the `palm.chat` method to output the correct answer to a word problem.\n",
+"This tutorial shows an example of using an external tool with the `genai.chat` method to output the correct answer to a word problem.\n",
 "\n",
 "This particular example uses the [`numexpr`](https://github.com/pydata/numexpr) tool to perform the arithmetic but you can use this same procedure to integrate other tools specific to your use case. The following is an outline of the steps:\n",
 "\n",
 "1. Determine a `start` and `end` tag to demarcate the text to send the tool.\n",
 "1. Create a prompt instructing the model how to use the tags in its response.\n",
 "1. From the model response, take the text between the `start` and `end` tags as input to the tool.\n",
 "1. Drop everything after the `end` tag.\n",
-"1. Run the tool and add it's output as your reply.\n",
+"1. Run the tool and add its output as your reply.\n",
 "1. The model will take into account the tools's output in its reply."
 ]
 },
@@ -84,17 +84,7 @@
 "metadata": {
 "id": "v8d0FtO2KJ3O"
 },
-"outputs": [
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m122.2/122.2 kB\u001b[0m \u001b[31m2.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
-"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m113.3/113.3 kB\u001b[0m \u001b[31m5.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
-"\u001b[?25h"
-]
-}
-],
+"outputs": [],
 "source": [
 "pip install -q google.generativeai"
 ]
@@ -122,7 +112,7 @@
 "\n",
 "@retry.Retry()\n",
 "def retry_chat(**kwargs):\n",
-" return palm.chat(**kwargs)\n",
+" return genai.chat(**kwargs)\n",
 "\n",
 "@retry.Retry()\n",
 "def retry_reply(self, arg):\n",
@@ -137,8 +127,8 @@
 },
 "outputs": [],
 "source": [
-"import google.generativeai as palm\n",
-"palm.configure(api_key=\"YOUR API KEY\")"
+"import google.generativeai as genai\n",
+"genai.configure(api_key=\"YOUR API KEY\")"
 ]
 },
 {
@@ -149,7 +139,7 @@
 },
 "outputs": [],
 "source": [
-"models = [m for m in palm.list_models() if 'generateMessage' in m.supported_generation_methods]\n",
+"models = [m for m in genai.list_models() if 'generateMessage' in m.supported_generation_methods]\n",
 "model = models[0].name\n",
 "print(model)"
 ]

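The `chat_calculator.ipynb` hunks above amount to renaming the client alias from `palm` to `genai` while leaving the call signatures alone. As a minimal sketch of the migrated cells, assuming the legacy PaLM chat surface of `google.generativeai` (`configure`, `chat`, `list_models`) and the `google.api_core.retry` helper implied by the `@retry.Retry()` decorators, with the API key and prompt as placeholders:

```
# Sketch of the migrated notebook snippets shown in the diff above; assumes a
# google.generativeai version that still exposes the PaLM chat API.
import google.generativeai as genai
from google.api_core import retry

genai.configure(api_key="YOUR API KEY")  # placeholder key


@retry.Retry()
def retry_chat(**kwargs):
    # Retry transient failures around genai.chat, as the notebook does.
    return genai.chat(**kwargs)


# Pick the first model that supports the generateMessage method.
models = [m for m in genai.list_models()
          if "generateMessage" in m.supported_generation_methods]
model = models[0].name
print(model)

# Placeholder word problem; response.last holds the model's latest reply.
response = retry_chat(model=model, messages="I have 77 houses, each with 31 cats.")
print(response.last)
```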