|
41 | 41 | "\n", |
42 | 42 | "This quickstart assumes authentication and authorization using Microsoft Entra ID and role assignments. It also assumes that you run this code from your local device.\n", |
43 | 43 | "\n", |
44 | | - "- On Azure AI Search, create a role assignment for the Azure OpenAI system managed identity. Required roles: **Search Index Data Reader**, **Search Service Contributor**.\n", |
| 44 | + "1. To create and load the sample index on Azure AI Search, you must have role assignments for: **Search Index Data Reader**, **Search Index Data Contributor**, **Search Service Contributor**.\n", |
45 | 45 | "\n", |
46 | | - "- Make sure you also have a role assignment that gives you permissions to create and query objects: Required roles: **Search Index Data Reader**, **Search Index Data Contributor**, **Search Service Contributor**.\n", |
47 | | - "\n", |
48 | | - "- On Azure OpenAI, create a role assigment for yourself to send requests from your local device: Required role: **Cognitive Services OpenAI User**.\n", |
| 46 | + "1. To send the query and results from the search engine to Azure OpenAI, the search system identity must have **Cognitive Services OpenAI User** permissions on Azure OpenAI.\n", |
49 | 47 | "\n", |
50 | 48 | "## Create the sample index\n", |
51 | 49 | "\n", |
|
72 | 70 | "Now that you have your Azure resources, an index, and model in place, you can run the script to chat with the index." |
73 | 71 | ] |
74 | 72 | }, |
| 73 | + { |
| 74 | + "cell_type": "markdown", |
| 75 | + "metadata": {}, |
| 76 | + "source": [ |
| 77 | + "## Run the code\n", |
| 78 | + "\n", |
| 79 | + "1. Create a virtual environment. In Visual Studio Code, press Ctrl-shift-P to open the command palette, search for \"Python: Create Environment\", and then select `Venv` to create a virtual environment in the current workspace.\n", |
| 80 | + "\n", |
| 81 | + "1. Select Quickstart-RAG\\requirements.txt for the dependencies.\n", |
| 82 | + "\n", |
| 83 | + "It takes several minutes to create the environment. When the environment is ready, continue to the next step." |
| 84 | + ] |
| 85 | + }, |
75 | 86 | { |
76 | 87 | "cell_type": "code", |
77 | | - "execution_count": null, |
| 88 | + "execution_count": 6, |
78 | 89 | "metadata": {}, |
79 | 90 | "outputs": [], |
80 | 91 | "source": [ |
81 | 92 | "# Package install for quickstart\n", |
82 | | - "! pip install azure-search-documents==11.6.0b4 --quiet\n", |
83 | | - "! pip install azure-identity==1.16.0 --quiet\n", |
84 | | - "! pip install openai --quiet" |
| 93 | + "! pip install -r requirements.txt --quiet" |
85 | 94 | ] |
86 | 95 | }, |
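For reference, the individual installs that this change replaces pinned `azure-search-documents==11.6.0b4` and `azure-identity==1.16.0` and took the latest `openai`, so a `Quickstart-RAG/requirements.txt` along these lines would be consistent with the diff (the actual pins in the repo may differ):

```text
# Hypothetical requirements.txt, reconstructed from the removed pip commands
azure-search-documents==11.6.0b4
azure-identity==1.16.0
openai
```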
87 | 96 | { |
88 | 97 | "cell_type": "code", |
89 | | - "execution_count": null, |
| 98 | + "execution_count": 7, |
90 | 99 | "metadata": {}, |
91 | 100 | "outputs": [], |
92 | 101 | "source": [ |
93 | | - "# Set endpoints and deployment model\n", |
94 | | - "AZURE_SEARCH_SERVICE: str = \"PUT YOUR SEARCH SERVICE ENDPOINT HERE\"\n", |
95 | | - "AZURE_OPENAI_ACCOUNT: str = \"PUT YOUR AZURE OPENAI ENDPOINT HERE\"\n", |
96 | | - "AZURE_DEPLOYMENT_MODEL: str = \"gpt-35-turbo\"" |
| 102 | + "# Set endpoints and deployment model (provide the name of the deployment)\n", |
| 103 | + " AZURE_SEARCH_SERVICE: str = \"PUT YOUR SEARCH SERVICE ENDPOINT HERE\"\n", |
| 104 | + " AZURE_OPENAI_ACCOUNT: str = \"PUT YOUR AZURE OPENAI ENDPOINT HERE\"\n", |
| 105 | + " AZURE_DEPLOYMENT_MODEL: str = \"gpt-35-turbo\"" |
97 | 106 | ] |
98 | 107 | }, |
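As a rough sketch of how these variables are typically consumed, the two endpoints and the deployment name feed a `SearchClient` and an `AzureOpenAI` client authenticated with Microsoft Entra ID. The notebook's own helper code sits in the collapsed hunks (and uses the async `azure.identity.aio` credential), so the index name, API version, and synchronous style here are assumptions for illustration only:

```python
# Sketch only: keyless (Entra ID) client setup; not the notebook's actual helper code.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from azure.search.documents import SearchClient
from openai import AzureOpenAI

credential = DefaultAzureCredential()

# Query the search index with the signed-in identity.
search_client = SearchClient(
    endpoint=AZURE_SEARCH_SERVICE,
    index_name="hotels-sample-index",  # assumption: the quickstart's sample index
    credential=credential,
)

# Call the chat deployment with a bearer token instead of an API key.
openai_client = AzureOpenAI(
    azure_endpoint=AZURE_OPENAI_ACCOUNT,
    azure_ad_token_provider=get_bearer_token_provider(
        credential, "https://cognitiveservices.azure.com/.default"
    ),
    api_version="2024-06-01",  # assumption: use any API version your resource supports
)
```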
99 | 108 | { |
100 | 109 | "cell_type": "code", |
101 | | - "execution_count": null, |
| 110 | + "execution_count": 8, |
102 | 111 | "metadata": {}, |
103 | 112 | "outputs": [], |
104 | 113 | "source": [ |
|
110 | 119 | }, |
111 | 120 | { |
112 | 121 | "cell_type": "code", |
113 | | - "execution_count": null, |
| 122 | + "execution_count": 16, |
114 | 123 | "metadata": {}, |
115 | 124 | "outputs": [], |
116 | 125 | "source": [ |
|
203 | 212 | }, |
204 | 213 | { |
205 | 214 | "cell_type": "code", |
206 | | - "execution_count": null, |
| 215 | + "execution_count": 19, |
207 | 216 | "metadata": {}, |
208 | | - "outputs": [], |
| 217 | + "outputs": [ |
| 218 | + { |
| 219 | + "name": "stdout", |
| 220 | + "output_type": "stream", |
| 221 | + "text": [ |
| 222 | + "Based on your preferences, I recommend the following hotels: \n", |
| 223 | + "\n", |
| 224 | + "- Ocean Air Motel: This hotel is oceanfront with beach access, two pools, and a private balcony with ocean views. \n", |
| 225 | + "- Marquis Plaza & Suites: This hotel has a view of the ocean and amenities like free Wi-Fi, a full kitchen, and a free breakfast buffet. \n", |
| 226 | + "- Trails End Motel: Though not directly on the ocean, this hotel does have a view and is only 8 miles from downtown. Amenities include an on-site bar/restaurant, free hot breakfast buffet, and free Wi-Fi.\n" |
| 227 | + ] |
| 228 | + } |
| 229 | + ], |
209 | 230 | "source": [ |
210 | 231 | "# Instantiate the chat thread and run the conversation\n", |
211 | 232 | "import azure.identity.aio\n", |
|
219 | 240 | " query=\"Can you recommend a few hotels near the ocean with beach access and good views\",\n", |
220 | 241 | " search_type=SearchType(search_type),\n", |
221 | 242 | " use_semantic_reranker=use_semantic_reranker,\n", |
222 | | - " sources_to_include=sources_to_include,\n", |
223 | | - " k=k)\n", |
| 243 | + " sources_to_include=sources_to_include\n", |
| 244 | + " )\n", |
224 | 245 | " await chat_thread.get_openai_response(openai_client=openai_client, model=chat_deployment)\n", |
225 | 246 | "\n", |
226 | 247 | "print(chat_thread.get_last_message()[\"content\"])" |
227 | 248 | ] |
| 249 | + }, |
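For orientation, the retrieve-then-ground loop that `chat_thread` wraps can be approximated with the plain SDK clients from the sketch above. This is a simplified, synchronous stand-in, not the notebook's helper; the field names assume the hotels sample index:

```python
# Minimal RAG loop sketch: search first, then ground the chat completion in the results.
query = "Can you recommend a few hotels near the ocean with beach access and good views"

# Retrieve a handful of candidate documents from the index.
results = search_client.search(
    search_text=query, top=5, select=["HotelName", "Description", "Tags"]
)
sources = "\n".join(f"{doc['HotelName']}: {doc['Description']}" for doc in results)

# Ask the deployed chat model to answer using only the retrieved sources.
response = openai_client.chat.completions.create(
    model=AZURE_DEPLOYMENT_MODEL,
    messages=[
        {"role": "system", "content": "Recommend hotels using only the provided sources."},
        {"role": "user", "content": f"{query}\n\nSources:\n{sources}"},
    ],
)
print(response.choices[0].message.content)
```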
| 250 | + { |
| 251 | + "cell_type": "markdown", |
| 252 | + "metadata": {}, |
| 253 | + "source": [ |
| 254 | + "If you get an authorization error message, wait a few minutes and try again. It can take several minutes for role assignments to become operational." |
| 255 | + ] |
228 | 256 | } |
229 | 257 | ], |
230 | 258 | "metadata": { |
|