**articles/ai-services/personalizer/whats-new.md** (+1 −1)

@@ -19,7 +19,7 @@ Learn what's new in Azure AI Personalizer. These items may include release notes

 ### September 2022

 * Personalizer Inference Explainability is now available as a Public Preview. Enabling inference explainability returns feature scores on every Rank API call, providing insight into how influential each feature is to the actions chosen by your Personalizer model. [Learn more about Inference Explainability](how-to-inference-explainability.md).
-* Personalizer SDK now available in [Java](https://search.maven.org/artifact/com.azure/azure-ai-personalizer/1.0.0-beta.1/jar) and [Javascript](https://www.npmjs.com/package/@azure-rest/ai-personalizer).
+* Personalizer SDK now available in [Java](https://search.maven.org/artifact/com.azure/azure-ai-personalizer/1.0.0-beta.1/jar) and [JavaScript](https://www.npmjs.com/package/@azure-rest/ai-personalizer).

 ### April 2022

 * Local inference SDK (Preview): Personalizer now supports near-realtime (sub-10ms) inference without the need to wait for network API calls. Your Personalizer models can be used locally for lightning fast Rank calls using the [C# SDK (Preview)](https://www.nuget.org/packages/Azure.AI.Personalizer/2.0.0-beta.2), empowering your applications to personalize quickly and efficiently. Your model continues to train in Azure while your local model is seamlessly updated.
**articles/machine-learning/component-reference/component-reference.md** (+1 −1)

@@ -63,7 +63,7 @@ For help with choosing algorithms, see

 | Functionality | Description | component |
 | --- |--- | --- |
-| Model Training | Run data through the algorithm. |[Train Clustering Model](train-clustering-model.md) <br/> [Train Model](train-model.md) <br/> [Train Pytorch Model](train-pytorch-model.md) <br/> [Tune Model Hyperparameters](tune-model-hyperparameters.md)|
+| Model Training | Run data through the algorithm. |[Train Clustering Model](train-clustering-model.md) <br/> [Train Model](train-model.md) <br/> [Train PyTorch Model](train-pytorch-model.md) <br/> [Tune Model Hyperparameters](tune-model-hyperparameters.md)|
 | Model Scoring and Evaluation | Measure the accuracy of the trained model. |[Apply Transformation](apply-transformation.md) <br/> [Assign Data to Clusters](assign-data-to-clusters.md) <br/> [Cross Validate Model](cross-validate-model.md) <br/> [Evaluate Model](evaluate-model.md) <br/> [Score Image Model](score-image-model.md) <br/> [Score Model](score-model.md)|
 | Python Language | Write code and embed it in a component to integrate Python with your pipeline. |[Create Python Model](create-python-model.md) <br/> [Execute Python Script](execute-python-script.md)|
 | R Language | Write code and embed it in a component to integrate R with your pipeline. |[Execute R Script](execute-r-script.md)|
**articles/machine-learning/component-reference/densenet.md** (+4 −4)

@@ -21,7 +21,7 @@ This classification algorithm is a supervised learning method, and requires a la

 > [!NOTE]
 > This component does not support labeled dataset generated from *Data Labeling* in the studio, but only support labeled image directory generated from [Convert to Image Directory](convert-to-image-directory.md) component.

-You can train the model by providing the model and the labeled image directory as inputs to [Train Pytorch Model](train-pytorch-model.md). The trained model can then be used to predict values for the new input examples using [Score Image Model](score-image-model.md).
+You can train the model by providing the model and the labeled image directory as inputs to [Train PyTorch Model](train-pytorch-model.md). The trained model can then be used to predict values for the new input examples using [Score Image Model](score-image-model.md).

 ### More about DenseNet

@@ -37,14 +37,14 @@ For more information on DenseNet, see the research paper, [Densely Connected Con

 4. For **Memory efficient**, specify whether to use checkpointing, which is much more memory-efficient but slower. For more information, see the research paper, [Memory-Efficient Implementation of DenseNets](https://arxiv.org/pdf/1707.06990.pdf).

-5. Connect the output of **DenseNet** component, training, and validation image dataset component to the [Train Pytorch Model](train-pytorch-model.md).
+5. Connect the output of **DenseNet** component, training, and validation image dataset component to the [Train PyTorch Model](train-pytorch-model.md).

 6. Submit the pipeline.

 ## Results

-After pipeline run is completed, to use the model for scoring, connect the [Train Pytorch Model](train-pytorch-model.md) to [Score Image Model](score-image-model.md), to predict values for new input examples.
+After pipeline run is completed, to use the model for scoring, connect the [Train PyTorch Model](train-pytorch-model.md) to [Score Image Model](score-image-model.md), to predict values for new input examples.

 ## Technical notes

@@ -60,7 +60,7 @@ After pipeline run is completed, to use the model for scoring, connect the [Trai
**articles/machine-learning/concept-compute-target.md** (+1 −1)

@@ -202,7 +202,7 @@ If you use the GPU-enabled compute targets, it's important to ensure that the co

 In addition to ensuring the CUDA version and hardware are compatible, also ensure that the CUDA version is compatible with the version of the machine learning framework you're using:

-- For PyTorch, you can check the compatibility by visiting [Pytorch's previous versions page](https://pytorch.org/get-started/previous-versions/).
+- For PyTorch, you can check the compatibility by visiting [PyTorch's previous versions page](https://pytorch.org/get-started/previous-versions/).
 - For TensorFlow, you can check the compatibility by visiting [TensorFlow's build from source page](https://www.tensorflow.org/install/source#gpu).
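A quick runtime sanity check on the compute target itself can complement the compatibility tables above. This sketch uses standard PyTorch introspection calls (the authoritative pairing of wheel and CUDA toolkit versions remains the linked PyTorch page):

```python
import torch

# The CUDA toolkit version this PyTorch wheel was built against
# (None for CPU-only builds), and whether a usable GPU is visible at runtime.
print("PyTorch version:", torch.__version__)
print("Built for CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```

If `torch.cuda.is_available()` returns `False` on a GPU compute target, the driver on the image is usually too old for the CUDA version the framework build expects.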
**articles/machine-learning/how-to-convert-custom-model-to-mlflow.md** (+1 −1)

@@ -24,7 +24,7 @@ With Azure Machine Learning, MLflow models get the added benefits of:

 - Portability as an open source standard format
 - Ability to deploy both locally and on cloud

-MLflow provides support for various [machine learning frameworks](https://mlflow.org/docs/latest/models.html#built-in-model-flavors), such as scikit-learn, Keras, and Pytorch. MLflow might not cover every use case. For example, you might want to create an MLflow model with a framework that MLflow doesn't natively support. You might want to change the way your model does preprocessing or post-processing when running jobs. To learn more about MLflow models, see [From artifacts to models in MLflow](concept-mlflow-models.md).
+MLflow provides support for various [machine learning frameworks](https://mlflow.org/docs/latest/models.html#built-in-model-flavors), such as scikit-learn, Keras, and PyTorch. MLflow might not cover every use case. For example, you might want to create an MLflow model with a framework that MLflow doesn't natively support. You might want to change the way your model does preprocessing or post-processing when running jobs. To learn more about MLflow models, see [From artifacts to models in MLflow](concept-mlflow-models.md).

 If you didn't train your model with MLflow and want to use Azure Machine Learning's MLflow no-code deployment offering, you need to convert your custom model to MLflow. For more information, see [Custom Python Models](https://mlflow.org/docs/latest/models.html#custom-python-models).
**articles/machine-learning/how-to-interactive-jobs.md** (+1 −1)

@@ -26,7 +26,7 @@ Interactive training is supported on **Azure Machine Learning Compute Clusters**

 - Review [getting started with training on Azure Machine Learning](./how-to-train-model.md).
 - For more information, see this link for [VS Code](how-to-setup-vs-code.md) to set up the Azure Machine Learning extension.
 - Make sure your job environment has the `openssh-server` and `ipykernel ~=6.0` packages installed (all Azure Machine Learning curated training environments have these packages installed by default).
-- Interactive applications can't be enabled on distributed training runs where the distribution type is anything other than Pytorch, TensorFlow, or MPI. Custom distributed training setup (configuring multi-node training without using the above distribution frameworks) isn't currently supported.
+- Interactive applications can't be enabled on distributed training runs where the distribution type is anything other than PyTorch, TensorFlow, or MPI. Custom distributed training setup (configuring multi-node training without using the above distribution frameworks) isn't currently supported.
 - To use SSH, you need an SSH key pair. You can use the `ssh-keygen -f "<filepath>"` command to generate a public and private key pair.

-* For the full notebook to run the Pytorch example, see [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/pytorch/distributed-training/distributed-cifar10.ipynb).
+* For the full notebook to run the PyTorch example, see [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/pytorch/distributed-training/distributed-cifar10.ipynb).

-DeepSpeed can be enabled using either Pytorch distribution or MPI for running distributed training. Azure Machine Learning supports the DeepSpeed launcher to launch distributed training as well as autotuning to get optimal `ds` configuration.
+DeepSpeed can be enabled using either PyTorch distribution or MPI for running distributed training. Azure Machine Learning supports the DeepSpeed launcher to launch distributed training as well as autotuning to get optimal `ds` configuration.

-You can use a [curated environment](resource-curated-environments.md) for an out of the box environment with the latest state of art technologies including DeepSpeed, ORT, MSSCCL, and Pytorch for your DeepSpeed training jobs.
+You can use a [curated environment](resource-curated-environments.md) for an out of the box environment with the latest state of art technologies including DeepSpeed, ORT, MSSCCL, and PyTorch for your DeepSpeed training jobs.
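The `ds` configuration mentioned above is the DeepSpeed JSON config file passed to the launcher. An illustrative fragment using standard DeepSpeed config keys (the specific values are placeholders, not recommendations from the article, and autotuning can derive better ones):

```json
{
  "train_batch_size": 64,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 1 },
  "steps_per_print": 100
}
```

Whether this file is consumed via the PyTorch distribution or MPI, the same config drives DeepSpeed's optimizer sharding and mixed-precision behavior across nodes.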