articles/postgresql/flexible-server/generative-ai-azure-overview.md
title: Azure AI Extension
description: Azure AI Extension in Azure Database for PostgreSQL - Flexible Server.
author: mulander
ms.author: adamwolk
ms.date: 02/02/2024
ms.service: postgresql
ms.subservice: flexible-server
ms.custom:
The extension also allows calling Azure OpenAI and Azure Cognitive Services.

Configuring the extension requires you to provide the endpoints to connect to the Azure AI services and the API keys required for authentication. Service settings are stored using the following functions:
### Permissions

Your Azure AI access keys are similar to a root password for your account. Always be careful to protect your access keys. Use Azure Key Vault to manage and rotate your keys securely.

To manage the service keys used by the extension, users must be granted the `azure_ai_settings_manager` role. The following functions require the role:

* `azure_ai.set_setting`
* `azure_ai.get_setting`

The `azure_ai_settings_manager` role is granted to the `azure_pg_admin` role by default.
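
For illustration, here's a minimal sketch of storing and reading back a service setting, assuming you hold the `azure_ai_settings_manager` role; the `azure_openai.endpoint` and `azure_openai.subscription_key` setting names and the endpoint value are placeholders for this example:

```sql
-- Store the endpoint and key used to reach the Azure OpenAI service
-- (setting names and values are placeholders for illustration).
SELECT azure_ai.set_setting('azure_openai.endpoint', 'https://<your-resource>.openai.azure.com');
SELECT azure_ai.set_setting('azure_openai.subscription_key', '<your-api-key>');

-- Read a setting back to confirm it was stored.
SELECT azure_ai.get_setting('azure_openai.endpoint');
```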
The [pg_azure_storage extension](./concepts-storage-extension.md) allows you to import or export data in multiple file formats directly between Azure blob storage and your Azure Database for PostgreSQL flexible server instance. Containers with the access level "Private" or "Blob" require adding a private access key.

Before you can enable `azure_storage` on your Azure Database for PostgreSQL flexible server instance, you need to add the extension to your allowlist as described in [how to use PostgreSQL extensions](./concepts-extensions.md#how-to-use-postgresql-extensions) and check whether it was added correctly by running `SHOW azure.extensions;`.
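
For example, you can verify the allowlist from your client before proceeding:

```sql
-- azure_storage should appear in the returned list if the allowlist was updated correctly.
SHOW azure.extensions;
```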
Then you can install the extension by connecting to your target database and running the [CREATE EXTENSION](https://www.postgresql.org/docs/current/static/sql-createextension.html) command. You need to repeat the command separately for every database you want the extension to be available in.

```sql
CREATE EXTENSION azure_storage;
```
## Permissions

Your Azure blob storage (ABS) access keys are similar to a root password for your storage account. Always be careful to protect your access keys. Use Azure Key Vault to manage and rotate your keys securely. The account key is stored in a table that is accessible only by the superuser.

Users granted the `azure_storage_admin` role can interact with this table using the following functions:

* `account_add`
* `account_list`
* `account_remove`
* `account_user_add`
* `account_user_remove`

The `azure_storage_admin` role is granted to the `azure_pg_admin` role by default.

## azure_storage.account_add

This function allows adding access to a storage account.
An Azure blob storage (ABS) account contains all of your ABS objects: blobs, files, …

#### account_key_p

Your Azure blob storage (ABS) access keys are similar to a root password for your storage account. Always be careful to protect your access keys. Use Azure Key Vault to manage and rotate your keys securely. The account key is stored in a table that is accessible only by the superuser. Users granted the `azure_storage_admin` role can interact with this table via functions. To see which storage accounts exist, use the function `account_list`.
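
As an illustration, here's a minimal sketch of registering a storage account and checking what is registered; the account name matches the `pgquickstart` sample used in the examples below, and the access key is a placeholder:

```sql
-- Register a storage account and its access key so the extension can reach private containers.
SELECT azure_storage.account_add('pgquickstart', '<storage-account-access-key>');

-- List the storage accounts the extension currently knows about.
SELECT azure_storage.account_list();
```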
## azure_storage.account_remove
Size of the file object in bytes.

#### last_modified

Describes when the file content was last modified.

#### etag

An ETag property is used for optimistic concurrency during updates. It isn't a timestamp…

The Blob object represents a blob, which is a file-like object of immutable, raw data. It can be read as text or binary data, or converted into a ReadableStream so that its methods can be used for processing the data. Blobs can represent data that isn't necessarily in a JavaScript-native format.

#### content_encoding

Azure Storage allows you to define the Content-Encoding property on a blob. For compressed content, you could set the property to GZIP. When the browser accesses the content, it automatically decompresses it.

#### content_hash

This hash is used to verify the integrity of the blob during transport. When this header is specified, the storage service compares the provided hash with one computed from the content. If the two hashes don't match, the operation fails with error code 400 (Bad Request).

> There are four utility functions, called as a parameter within blob_get, that help build values for it. Each utility function is designated for the decoder matching its name.
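
As an example, here's a sketch of passing decoder options built by one of those utility functions to `blob_get`; the `options_csv_get` helper name, the container name, the file name, and the column list are assumptions made for this illustration:

```sql
-- Read a CSV blob from the sample account, treating the first line as a header
-- (container, file name, and column definitions are assumed for illustration).
SELECT *
FROM azure_storage.blob_get(
       'pgquickstart',
       'mycontainer',
       'events.csv',
       options := azure_storage.options_csv_get(header := 'true')
     ) AS res (event_id int, event_name text);
```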
Returns jsonb;

#### delimiter

Specifies the character that separates columns within each row (line) of the file. The default is a tab character in text format, a comma in CSV format. It must be a single 1-byte character.

#### null_string

Specifies the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty string in CSV format. You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings.

#### header

Specifies that the file contains a header line with the names of each column in the file. On output, the initial line contains the column names from the table.

#### quote

Specifies the quoting character to be used when a data value is quoted. The default is double-quote. It must be a single 1-byte character.

#### escape

Specifies the character that should appear before a data character that matches the QUOTE value. The default is the same as the QUOTE value (so that the quoting character is doubled if it appears in the data). It must be a single 1-byte character.

#### force_not_null

Don't match the specified columns' values against the null string. In the default case where the null string is empty, it means that empty values are read as zero-length strings rather than nulls, even when they aren't quoted.

#### force_null

Match the specified columns' values against the null string, even if quoted, and if a match is found, set the value to NULL. In the default case where the null string is empty, it converts a quoted empty string into NULL.

#### content_encoding

Specifies that the file is encoded in the encoding_name. If the option is omitted, the current client encoding is used.
Returns jsonb;

#### delimiter

Specifies the character that separates columns within each row (line) of the file. The default is a tab character in text format, a comma in CSV format. It must be a single 1-byte character.

#### null_string

Specifies the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty string in CSV format. You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings.

#### header

Specifies that the file contains a header line with the names of each column in the file. On output, the initial line contains the column names from the table.

#### quote

Specifies the quoting character to be used when a data value is quoted. The default is double-quote. It must be a single 1-byte character.

#### escape

Specifies the character that should appear before a data character that matches the QUOTE value. The default is the same as the QUOTE value (so that the quoting character is doubled if it appears in the data). It must be a single 1-byte character.

#### force_quote

#### force_null

Match the specified columns' values against the null string, even if quoted, and if a match is found, set the value to NULL. In the default case where the null string is empty, it converts a quoted empty string into NULL.

#### content_encoding

Specifies that the file is encoded in the encoding_name. If the option is omitted, the current client encoding is used.
Returns jsonb;

#### delimiter

Specifies the character that separates columns within each row (line) of the file. The default is a tab character in text format, a comma in CSV format. It must be a single 1-byte character.

#### null_string

Specifies the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty string in CSV format. You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings.

#### content_encoding

Specifies that the file is encoded in the encoding_name. If the option is omitted, the current client encoding is used.
Returns jsonb;

### Arguments

#### content_encoding

Specifies that the file is encoded in the encoding_name. If this option is omitted, the current client encoding is used.

### Return Type

jsonb
## Examples

The examples make use of a sample Azure storage account (`pgquickstart`) with custom files uploaded to cover different use cases. We can start by creating the table used across the set of examples.

> [!NOTE]
> You can list containers set to Private and Blob access levels for a storage account but only as a user with the `azure_storage_admin` role granted to it. If you create a new user named support, it won't be allowed to access container contents by default.
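
For illustration, here's a sketch of how such a role could later be allowed to use a registered account through the functions listed under Permissions; the role name follows the `support` example above, and the call must be made by a member of `azure_storage_admin`:

```sql
-- Create the new role; by default it can't access container contents.
CREATE USER support;

-- As a member of azure_storage_admin, allow "support" to use the registered sample account.
SELECT azure_storage.account_user_add('pgquickstart', 'support');
```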