articles/ai-services/includes/reference/sdk/javascript.md (1 addition, 1 deletion)
@@ -15,7 +15,7 @@ ms.author: lajanuar
| Service | Description | Reference documentation |
| --- | --- | --- |
|[Azure AI Search](/azure/search/)| Bring AI-powered cloud search to your mobile and web apps. |• [Azure AI Search SDK for JavaScript](/javascript/api/overview/azure/search-documents-readme?view=azure-node-latest&preserve-view=true) <br><br>• [Azure AI Search npm package](https://www.npmjs.com/package/@azure/search-documents/v/12.0.0?activeTab=readme)|
- |[Azure OpenAI](../../../openai/index.yml)| Perform a wide variety of natural language tasks. |• [Azure OpenAI SDK for JavaScript](/javascript/api/@azure/openai/?view=azure-node-preview&preserve-view=true&branch=main)<br><br>• [Azure OpenAI npm package](https://www.npmjs.com/package/@azure/openai/v/1.0.0-beta.11)|
+ |[Azure OpenAI](../../../openai/index.yml)| Perform a wide variety of natural language tasks. |• [Azure OpenAI SDK for JavaScript](/javascript/api/overview/azure/openai?view=azure-node-latest&preserve-view=true)<br><br>• [Azure OpenAI npm package](https://www.npmjs.com/package/@azure/openai/v/1.0.0-beta.11)|
|[Bot Service](/composer/)| Create bots and connect them across channels. |• [Bot Service SDK for JavaScript](https://github.com/Microsoft/botbuilder-js?tab=readme-ov-file)<br><br>• [Bot Builder npm package](https://github.com/Microsoft/botbuilder-js#packages)|
|[Content Safety](../../../content-safety/index.yml)| Detect harmful content in applications and services.|• [Content Safety SDK for JavaScript](/javascript/api/%40azure-rest/ai-content-safety/?view=azure-node-latest&preserve-view=true)<br><br>• [Content Safety npm package](https://www.npmjs.com/package/@azure-rest/ai-content-safety/v/1.0.0-beta.1)|
|[Custom Vision](../../../custom-vision-service/index.yml)| Customize image recognition for your applications and models. |• [Custom Vision SDK for JavaScript (prediction)](/javascript/api/%40azure/cognitiveservices-customvision-prediction/?view=azure-node-latest&preserve-view=true) <br><br>• [Custom Vision npm package (prediction)](https://www.npmjs.com/package/@azure/cognitiveservices-customvision-prediction) <br><br>• [Custom Vision SDK for JavaScript (training)](/javascript/api/%40azure/cognitiveservices-customvision-training/?view=azure-node-latest&preserve-view=true)<br><br>• [Custom Vision npm package (training)](https://www.npmjs.com/package/@azure/cognitiveservices-customvision-training)|
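
To try any of the SDKs in this table, install the corresponding packages from npm. The following shell sketch uses the package names taken from the npm links above; the `@azure/openai` version pin matches the beta link in the table and may change as the package evolves.

```bash
# Install the JavaScript SDK packages listed in the table above.
# Package names come from the table's npm links; pin the versions
# that match the reference docs you target.
npm install @azure/search-documents
npm install @azure/openai@1.0.0-beta.11
npm install @azure-rest/ai-content-safety
npm install @azure/cognitiveservices-customvision-prediction
npm install @azure/cognitiveservices-customvision-training
```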
articles/ai-studio/how-to/configure-private-link.md (22 additions, 9 deletions)
@@ -23,7 +23,7 @@ You get several hub default resources in your resource group. You need to config
- Disable public network access of hub default resources such as Azure Storage, Azure Key Vault, and Azure Container Registry.
- Establish a private endpoint connection to hub default resources. You need both a blob and a file private endpoint for the default storage account.
- - [Managed identity configurations](#managed-identity-configuration) to allow hubs access to your storage account if it's private.
+ - If your storage account is private, [assign roles](#private-storage-configuration) to allow access.
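
The bullets above require both a blob and a file private endpoint for the default storage account. A minimal Azure CLI sketch of that setup follows; every resource name here (storage account, resource group, virtual network, subnet) is a hypothetical placeholder, and `--group-id` selects which storage sub-resource each endpoint serves.

```bash
# Hypothetical sketch: create blob and file private endpoints for the
# default storage account. All names below are placeholders.
STORAGE_ID=$(az storage account show \
  --name mystorageaccount --resource-group my-rg --query id --output tsv)

for SUBRESOURCE in blob file; do
  az network private-endpoint create \
    --name "pe-storage-${SUBRESOURCE}" \
    --resource-group my-rg \
    --vnet-name my-vnet \
    --subnet my-subnet \
    --private-connection-resource-id "$STORAGE_ID" \
    --group-id "$SUBRESOURCE" \
    --connection-name "storage-${SUBRESOURCE}-connection"
done
```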
## Prerequisites
@@ -234,15 +234,28 @@ az extension add --name ml
---
- ## Managed identity configuration
- A manged identity configuration is required if you make your storage account private. Our services need to read/write data in your private storage account using [Allow Azure services on the trusted services list to access this storage account](/azure/storage/common/storage-network-security#grant-access-to-trusted-azure-services) with following managed identity configurations. Enable the system assigned managed identity of Azure AI Service and Azure AI Search, then configure role-based access control for each managed identity.
- |`Storage File Data Privileged Contributor`| Azure AI Foundry project | Storage Account | Read/Write prompt flow data. |[Prompt flow doc](/azure/machine-learning/prompt-flow/how-to-secure-prompt-flow#secure-prompt-flow-with-workspace-managed-virtual-network)|
- |`Storage Blob Data Contributor`| Azure AI Service | Storage Account | Read from input container, write to pre-process result to output container. |[Azure OpenAI Doc](../../ai-services/openai/how-to/managed-identity.md)|
- |`Storage Blob Data Contributor`| Azure AI Search | Storage Account | Read blob and write knowledge store |[Search doc](/azure/search/search-howto-managed-identities-data-sources). |
+ If your storage account is private (uses a private endpoint to communicate with your project), you perform the following steps:
+ 1. Our services need to read/write data in your private storage account using [Allow Azure services on the trusted services list to access this storage account](/azure/storage/common/storage-network-security#grant-access-to-trusted-azure-services) with following managed identity configurations. Enable the system assigned managed identity of Azure AI Service and Azure AI Search, then configure role-based access control for each managed identity.
+ |`Reader`| Azure AI Foundry project | Private endpoint of the storage account | Read data from the private storage account. |
+ |`Storage File Data Privileged Contributor`| Azure AI Foundry project | Storage Account | Read/Write prompt flow data. |[Prompt flow doc](/azure/machine-learning/prompt-flow/how-to-secure-prompt-flow#secure-prompt-flow-with-workspace-managed-virtual-network)|
+ |`Storage Blob Data Contributor`| Azure AI Service | Storage Account | Read from input container, write to preprocess result to output container. |[Azure OpenAI Doc](../../ai-services/openai/how-to/managed-identity.md)|
+ |`Storage Blob Data Contributor`| Azure AI Search | Storage Account | Read blob and write knowledge store |[Search doc](/azure/search/search-howto-managed-identities-data-sources). |
+ > [!TIP]
+ > Your storage account may have multiple private endpoints. You need to assign the `Reader` role to each private endpoint.
+ 1. Assign the `Storage Blob Data reader` role to your developers. This role allows them to read data from the storage account.
+ 1. Verify that the project's connection to the storage account uses Microsoft Entra ID for authentication. To view the connection information, go to the __Management center__, select __Connected resources__, and then select the storage account connections. If the credential type isn't Entra ID, select the pencil icon to update the connection and set the __Authentication method__ to __Microsoft Entra ID__.
+ For information on securing playground chat, see [Securely use playground chat](secure-data-playground.md).
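
To make the role assignments in the added table concrete, here is a hedged Azure CLI sketch of two of them. The role names come from the table; the principal ID, developer account, and storage account names are hypothetical placeholders.

```bash
# Hypothetical sketch of the role assignments described above.
# PRINCIPAL_ID would be the object ID of a system-assigned managed
# identity (for example, the Azure AI Search service's identity).
STORAGE_ID=$(az storage account show \
  --name mystorageaccount --resource-group my-rg --query id --output tsv)

# Grant the service identity blob read/write access, per the table.
az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role "Storage Blob Data Contributor" \
  --scope "$STORAGE_ID"

# Grant a developer read access to blob data, per the step above.
az role assignment create \
  --assignee "developer@contoso.com" \
  --role "Storage Blob Data Reader" \
  --scope "$STORAGE_ID"
```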
## Custom DNS configuration
@@ -265,7 +278,7 @@ If you need to configure custom DNS server without DNS forwarding, use the follo
> * Compute instances can be accessed only from within the virtual network.
> * The IP address for this FQDN is **not** the IP of the compute instance. Instead, use the private IP address of the workspace private endpoint (the IP of the `*.api.azureml.ms` entries.)
- * `<instance-name>.<region>.instances.azureml.ms` - Only used by the `az ml compute connect-ssh` command to connect to computers in a managed virtual network. Not needed if you are not using a managed network or SSH connections.
+ * `<instance-name>.<region>.instances.azureml.ms` - Only used by the `az ml compute connect-ssh` command to connect to computers in a managed virtual network. Not needed if you aren't using a managed network or SSH connections.
* `<managed online endpoint name>.<region>.inference.ml.azure.com` - Used by managed online endpoints
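
The `az ml compute connect-ssh` command from the `instances.azureml.ms` entry above resolves that FQDN when it opens the SSH session. A hedged example invocation follows; the instance, resource group, and workspace names are placeholders, and the exact flag set may differ by `ml` extension version, so treat it as an assumption.

```bash
# Hypothetical invocation; all names are placeholders. Requires the
# Azure ML CLI extension (az extension add --name ml).
az ml compute connect-ssh \
  --name my-compute-instance \
  --resource-group my-rg \
  --workspace-name my-workspace
```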
articles/ai-studio/how-to/deploy-models-timegen-1.md (3 additions, 4 deletions)
@@ -134,12 +134,12 @@ For more information about use of the APIs, visit the [reference](#reference-for
#### Forecast API
- Use the method `POST` to send the request to the `/forecast_multi_series` route:
+ Use the method `POST` to send the request to the `/forecast` route:
__Request__
```rest
- POST /forecast_multi_series HTTP/1.1
+ POST /forecast HTTP/1.1
Host: <DEPLOYMENT_URI>
Authorization: Bearer <TOKEN>
Content-type: application/json
@@ -151,8 +151,7 @@ The Payload JSON formatted string contains these parameters:
| Key | Type | Default | Description |
|-----|-----|-----|-----|
- |**DataFrame (`df`)**|`DataFrame`| No default. This value must be specified. | The DataFrame on which the function operates. Expected to contain at least these columns:<br><br>`time_col`: Column name in `df` that contains the time indices of the time series. This column is typically a datetime column with regular intervals - for example, hourly, daily, monthly data points.<br><br>`target_col`: Column name in `df` that contains the target variable of the time series, in other words, the variable we wish to predict or analyze.<br><br>Additionally, you can pass multiple time series (stacked in the dataframe) considering another column:<br><br>`id_col`: Column name in `df` that identifies unique time series. Each unique value in this column corresponds to a unique time series.|
- |**Forecast Horizon (`h`)**|`int`| No default. This value must be specified. | Forecast horizon |
+ |**Forecast Horizon (`fh`)**|`int`| No default. This value must be specified. | Forecast horizon |
|**Frequency (`freq`)**|`str`| None |Frequency of the data. By default, the frequency is inferred automatically. For more information, visit [pandas available frequencies](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases). |
|**Identifying Column (`id_col`)**|`str`|`unique_id`| Column that identifies each series.|
|**Time Column (`time_col`)**|`str`|`ds`| Column that identifies each timestep; its values can be timestamps or integers. |
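
Putting the renamed route and the parameter table together, a hedged `curl` sketch of a `/forecast` call follows. `<DEPLOYMENT_URI>` and `<TOKEN>` are the placeholders from the REST example above; the payload shape (sending `df` as an array of row objects keyed by the default `unique_id`, `ds`, and a `y` target column) is an assumption, not confirmed by this diff.

```bash
# Hypothetical request to the /forecast route described above.
# <DEPLOYMENT_URI> and <TOKEN> are placeholders; the df row format
# (unique_id / ds / y keys) is an assumption based on the defaults
# in the parameter table.
curl -X POST "https://<DEPLOYMENT_URI>/forecast" \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
        "fh": 12,
        "freq": "D",
        "df": [
          {"unique_id": "series-1", "ds": "2024-01-01", "y": 10.2},
          {"unique_id": "series-1", "ds": "2024-01-02", "y": 11.7}
        ]
      }'
```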
articles/machine-learning/tutorial-develop-feature-set-with-custom-source.md (13 additions, 13 deletions)
@@ -9,7 +9,7 @@ ms.subservice: core
ms.topic: tutorial
author: fbsolo-ms1
ms.author: franksolomon
- ms.date: 11/28/2023
+ ms.date: 11/21/2024
ms.reviewer: yogipandey
ms.custom:
- sdkv2
@@ -20,7 +20,7 @@ ms.custom:
# Tutorial 5: Develop a feature set with a custom source
- An Azure Machine Learning managed feature store lets you discover, create, and operationalize features. Features serve as the connective tissue in the machine learning lifecycle, starting from the prototyping phase, where you experiment with various features. That lifecycle continues to the operationalization phase, where you deploy your models, and inference steps look up the feature data. For more information about feature stores, see [feature store concepts](./concept-what-is-managed-feature-store.md).
+ An Azure Machine Learning managed feature store lets you discover, create, and operationalize features. Features serve as the connective tissue in the machine learning lifecycle, starting from the prototyping phase, where you experiment with various features. That lifecycle continues to the operationalization phase, where you deploy your models, and inference steps look up the feature data. For more information about feature stores, visit the [feature store concepts](./concept-what-is-managed-feature-store.md) resource.
Part 1 of this tutorial series showed how to create a feature set specification with custom transformations, enable materialization and perform a backfill. Part 2 showed how to experiment with features in the experimentation and training flows. Part 3 explained recurrent materialization for the `transactions` feature set, and showed how to run a batch inference pipeline on the registered model. Part 4 described how to run batch inference.
@@ -36,27 +36,27 @@ In this tutorial, you'll
> [!NOTE]
> This tutorial uses an Azure Machine Learning notebook with **Serverless Spark Compute**.
- * Make sure you complete the previous tutorials in this series. This tutorial reuses feature store and other resources created in those earlier tutorials.
+ * Be sure to complete the previous tutorials in this series. This tutorial reuses the feature store and other resources created in those earlier tutorials.
## Set up
- This tutorial uses the Python feature store core SDK (`azureml-featurestore`). The Python SDK is used for create, read, update, and delete (CRUD) operations, on feature stores, feature sets, and feature store entities.
+ This tutorial uses the Python feature store core SDK (`azureml-featurestore`). The Python SDK is used for create, read, update, and delete (CRUD) operations on feature stores, feature sets, and feature store entities.
You don't need to explicitly install these resources for this tutorial, because in the set-up instructions shown here, the `conda.yml` file covers them.
### Configure the Azure Machine Learning Spark notebook
- You can create a new notebook and execute the instructions in this tutorial step by step. You can also open and run the existing notebook *featurestore_sample/notebooks/sdk_only/5.Develop-feature-set-custom-source.ipynb*. Keep this tutorial open and refer to it for documentation links and more explanation.
+ You can create a new notebook and execute the instructions in this tutorial, step by step. You can also open and run the existing notebook *featurestore_sample/notebooks/sdk_only/5.Develop-feature-set-custom-source.ipynb*. Keep this tutorial open and refer to it for documentation links and more explanation.
1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**.
- 2. Configure the session:
+ 1. Configure the session:
- 1. Select **Configure session** in the top status bar.
- 2. Select the **Python packages** tab, s
- 3. Select **Upload Conda file**.
- 4. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
- 5. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.
+ 1. Select **Configure session** in the top status bar
+ 1. Select the **Python packages** tab, select **Upload Conda file**
+ 1. Select **Upload Conda file**
+ 1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment)
+ 1. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns
## Set up the root directory for the samples
This code cell sets up the root directory for the samples. It needs about 10 minutes to install all dependencies and start the Spark session.
@@ -118,14 +118,14 @@ Next, define a feature window, and display the feature values in this feature wi
- To register the feature set specification with the feature store, first save that specification in a specific format. Review the generated `transactions_custom_source` feature set specification. Open this file from the file tree to see the specification: `featurestore/featuresets/transactions_custom_source/spec/FeaturesetSpec.yaml`.
+ To register the feature set specification with the feature store, first save that specification in a specific format. Review the generated `transactions_custom_source` feature set specification. Open this file from the file tree to view the specification: `featurestore/featuresets/transactions_custom_source/spec/FeaturesetSpec.yaml`.
The specification has these elements:
- `features`: A list of features and their datatypes.
- `index_columns`: The join keys required to access values from the feature set.
- To learn more about the specification, see [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) and [CLI (v2) feature set YAML schema](./reference-yaml-feature-set.md).
+ For more information about the specification, visit the [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) and [CLI (v2) feature set YAML schema](./reference-yaml-feature-set.md) resources.
Feature set specification persistence offers another benefit: the feature set specification can be source controlled.