
Commit 76208e3
Merge pull request #101231 from marktab/master
Acrolinx improvements; some required changes
2 parents: cfe3995 + 03daf76

63 files changed: +741 -717 lines

articles/machine-learning/team-data-science-process/agile-development.md

Lines changed: 5 additions & 5 deletions
@@ -2,12 +2,12 @@
 title: Agile development of data science projects - Team Data Science Process
 description: Execute a data science project in a systematic, version controlled, and collaborative way within a project team by using the Team Data Science Process.
 author: marktab
-manager: cgronlun
-editor: cgronlun
+manager: marktab
+editor: marktab
 ms.service: machine-learning
 ms.subservice: team-data-science-process
 ms.topic: article
-ms.date: 09/05/2019
+ms.date: 01/10/2020
 ms.author: tdsp
 ms.custom: seodec18, previous-author=deguhath, previous-ms.author=deguhath
 ---
@@ -82,13 +82,13 @@ After your project and project code repository are created, you can add a Featur
 
 You can also link the Feature to the project's Azure Repos code repository by selecting **Add link** under the **Development** section.
 
-After you finish editing the Feature, select **Save & Close**.
+After you edit the Feature, select **Save & Close**.
 
 ![Edit Feature and select Save & Close](./media/agile-development/3a-add-link-repo.png)
 
 ## <a name='AddStoryunderfeature-4'></a>Add a User Story to the Feature
 
-Under the Feature, you can add User Stories to describe major steps needed to finish the project.
+Under the Feature, you can add User Stories to describe major steps needed to complete the project.
 
 To add a new User Story to a Feature:
 

articles/machine-learning/team-data-science-process/apps-anomaly-detection-api.md

Lines changed: 12 additions & 12 deletions
@@ -3,12 +3,12 @@ title: Azure Machine Learning Anomaly Detection API - Team Data Science Process
 description: Anomaly Detection API is an example built with Microsoft Azure Machine Learning that detects anomalies in time series data with numerical values that are uniformly spaced in time.
 services: machine-learning
 author: marktab
-manager: cgronlun
-editor: cgronlun
+manager: marktab
+editor: marktab
 ms.service: machine-learning
 ms.subservice: team-data-science-process
 ms.topic: article
-ms.date: 06/05/2017
+ms.date: 01/10/2020
 ms.author: tdsp
 ms.custom: seodec18, previous-author=alokkirpal, previous-ms.author=alok
 ---
@@ -44,7 +44,7 @@ The Anomaly Detection offering comes with useful tools to get you started.
 In order to use the API, you must deploy it to your Azure subscription where it will be hosted as an Azure Machine Learning web service. You can do this from the [Azure AI Gallery](https://gallery.cortanaintelligence.com/MachineLearningAPI/Anomaly-Detection-2). This will deploy two Azure Machine Learning Studio (classic) Web Services (and their related resources) to your Azure subscription - one for anomaly detection with seasonality detection, and one without seasonality detection. Once the deployment has completed, you will be able to manage your APIs from the [Azure Machine Learning Studio (classic) web services](https://services.azureml.net/webservices/) page. From this page, you will be able to find your endpoint locations, API keys, as well as sample code for calling the API. More detailed instructions are available [here](https://docs.microsoft.com/azure/machine-learning/machine-learning-manage-new-webservice).
 
 ## Scaling the API
-By default, your deployment will have a free Dev/Test billing plan which includes 1,000 transactions/month and 2 compute hours/month. You can upgrade to another plan as per your needs. Details on the pricing of different plans are available [here](https://azure.microsoft.com/pricing/details/machine-learning/) under "Production Web API pricing".
+By default, your deployment will have a free Dev/Test billing plan that includes 1,000 transactions/month and 2 compute hours/month. You can upgrade to another plan as per your needs. Details on the pricing of different plans are available [here](https://azure.microsoft.com/pricing/details/machine-learning/) under "Production Web API pricing".
 
 ## Managing AML Plans
 You can manage your billing plan [here](https://services.azureml.net/plans/). The plan name will be based on the resource group name you chose when deploying the API, plus a string that is unique to your subscription. Instructions on how to upgrade your plan are available [here](https://docs.microsoft.com/azure/machine-learning/machine-learning-manage-new-webservice) under the "Managing billing plans" section.
@@ -53,7 +53,7 @@ You can manage your billing plan [here](https://services.azureml.net/plans/). T
 The web service provides a REST-based API over HTTPS that can be consumed in different ways including a web or mobile application, R, Python, Excel, etc. You send your time series data to this service via a REST API call, and it runs a combination of the three anomaly types described below.
 
 ## Calling the API
-In order to call the API, you will need to know the endpoint location and API key. Both of these, along with sample code for calling the API, are available from the [Azure Machine Learning Studio (classic) web services](https://services.azureml.net/webservices/) page. Navigate to the desired API, and then click the "Consume" tab to find them. Note that you can call the API as a Swagger API (i.e. with the URL parameter `format=swagger`) or as a non-Swagger API (i.e. without the `format` URL parameter). The sample code uses the Swagger format. Below is an example request and response in non-Swagger format. These examples are to the seasonality endpoint. The non-seasonality endpoint is similar.
+In order to call the API, you will need to know the endpoint location and API key. These two requirements, along with sample code for calling the API, are available from the [Azure Machine Learning Studio (classic) web services](https://services.azureml.net/webservices/) page. Navigate to the desired API, and then click the "Consume" tab to find them. You can call the API as a Swagger API (that is, with the URL parameter `format=swagger`) or as a non-Swagger API (that is, without the `format` URL parameter). The sample code uses the Swagger format. Below is an example request and response in non-Swagger format. These examples are to the seasonality endpoint. The non-seasonality endpoint is similar.
 
 ### Sample Request Body
 The request contains two objects: `Inputs` and `GlobalParameters`. In the example request below, some parameters are sent explicitly while others are not (scroll down for a full list of parameters for each endpoint). Parameters that are not sent explicitly in the request will use the default values given below.
@@ -78,7 +78,7 @@ The request contains two objects: `Inputs` and `GlobalParameters`. In the examp
 }
 
 ### Sample Response
-Note that, in order to see the `ColumnNames` field, you must include `details=true` as a URL parameter in your request. See the tables below for the meaning behind each of these fields.
+In order to see the `ColumnNames` field, you must include `details=true` as a URL parameter in your request. See the tables below for the meaning behind each of these fields.
 
 {
 "Results": {
@@ -100,18 +100,18 @@ Note that, in order to see the `ColumnNames` field, you must include `details=tr
 
 ## Score API
 The Score API is used for running anomaly detection on non-seasonal time series data. The API runs a number of anomaly detectors on the data and returns their anomaly scores.
-The figure below shows an example of anomalies that the Score API can detect. This time series has 2 distinct level changes, and 3 spikes. The red dots show the time at which the level change is detected, while the black dots show the detected spikes.
+The figure below shows an example of anomalies that the Score API can detect. This time series has two distinct level changes, and three spikes. The red dots show the time at which the level change is detected, while the black dots show the detected spikes.
 ![Score API][1]
 
 ### Detectors
-The anomaly detection API supports detectors in 3 broad categories. Details on specific input parameters and outputs for each detector can be found in the following table.
+The anomaly detection API supports detectors in three broad categories. Details on specific input parameters and outputs for each detector can be found in the following table.
 
 | Detector Category | Detector | Description | Input Parameters | Outputs |
 | --- | --- | --- | --- | --- |
 | Spike Detectors |TSpike Detector |Detect spikes and dips based on how far the values are from the first and third quartiles |*tspikedetector.sensitivity:* takes integer value in the range 1-10, default: 3; Higher values will catch more extreme values thus making it less sensitive |TSpike: binary values – ‘1’ if a spike/dip is detected, ‘0’ otherwise |
 | Spike Detectors | ZSpike Detector |Detect spikes and dips based on how far the datapoints are from their mean |*zspikedetector.sensitivity:* takes integer value in the range 1-10, default: 3; Higher values will catch more extreme values making it less sensitive |ZSpike: binary values – ‘1’ if a spike/dip is detected, ‘0’ otherwise |
-| Slow Trend Detector |Slow Trend Detector |Detect slow positive trend as per the set sensitivity |*trenddetector.sensitivity:* threshold on detector score (default: 3.25, 3.25 – 5 is a reasonable range to select this from; The higher the less sensitive) |tscore: floating number representing anomaly score on trend |
-| Level Change Detectors | Bidirectional Level Change Detector |Detect both upward and downward level change as per the set sensitivity |*bileveldetector.sensitivity:* threshold on detector score (default: 3.25, 3.25 – 5 is a reasonable range to select this from; The higher the less sensitive) |rpscore: floating number representing anomaly score on upward and downward level change |
+| Slow Trend Detector |Slow Trend Detector |Detect slow positive trend as per the set sensitivity |*trenddetector.sensitivity:* threshold on detector score (default: 3.25, 3.25 – 5 is a reasonable range to select from; The higher the less sensitive) |tscore: floating number representing anomaly score on trend |
+| Level Change Detectors | Bidirectional Level Change Detector |Detect both upward and downward level change as per the set sensitivity |*bileveldetector.sensitivity:* threshold on detector score (default: 3.25, 3.25 – 5 is a reasonable range to select from; The higher the less sensitive) |rpscore: floating number representing anomaly score on upward and downward level change |
 
 ### Parameters
 More detailed information on these input parameters is listed in the table below:
@@ -142,7 +142,7 @@ The API runs all detectors on your time series data and returns anomaly scores a
 
 ## ScoreWithSeasonality API
 The ScoreWithSeasonality API is used for running anomaly detection on time series that have seasonal patterns. This API is useful to detect deviations in seasonal patterns.
-The following figure shows an example of anomalies detected in a seasonal time series. The time series has one spike (the 1st black dot), two dips (the 2nd black dot and one at the end), and one level change (red dot). Note that both the dip in the middle of the time series and the level change are only discernable after seasonal components are removed from the series.
+The following figure shows an example of anomalies detected in a seasonal time series. The time series has one spike (the first black dot), two dips (the second black dot and one at the end), and one level change (red dot). Both the dip in the middle of the time series and the level change are only discernable after seasonal components are removed from the series.
 ![Seasonality API][2]
 
 ### Detectors
@@ -176,7 +176,7 @@ The API runs all detectors on your time series data and returns anomaly scores a
 | --- | --- |
 | Time |Timestamps from raw data, or aggregated (and/or) imputed data if aggregation (and/or) missing data imputation is applied |
 | OriginalData |Values from raw data, or aggregated (and/or) imputed data if aggregation (and/or) missing data imputation is applied |
-| ProcessedData |Either of the following: <ul><li>Seasonally adjusted time series if significant seasonality has been detected and deseason option selected;</li><li>seasonally adjusted and detrended time series if significant seasonality has been detected and deseasontrend option selected</li><li>otherwise, this is the same as OriginalData</li> |
+| ProcessedData |Either of the following options: <ul><li>Seasonally adjusted time series if significant seasonality has been detected and deseason option selected;</li><li>seasonally adjusted and detrended time series if significant seasonality has been detected and deseasontrend option selected</li><li>otherwise, this option is the same as OriginalData</li></ul> |
 | TSpike |Binary indicator to indicate whether a spike is detected by TSpike Detector |
 | ZSpike |Binary indicator to indicate whether a spike is detected by ZSpike Detector |
 | BiLevelChangeScore |A floating number representing anomaly score on level change |
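
The `Inputs`/`GlobalParameters` request shape this article describes can be sketched in Python. This is a minimal illustration, not the article's sample code: the endpoint URL and API key are placeholders you would copy from the "Consume" tab, and `build_request` is a helper name invented for this example.

```python
import json

# Placeholders -- copy the real values from the "Consume" tab of your
# deployed web service. Neither value below is real.
ENDPOINT = "https://<region>.services.azureml.net/<workspace-path>/execute?api-version=2.0"
API_KEY = "<your-api-key>"

def build_request(timestamps, values, spike_sensitivity=3):
    """Build a non-Swagger request body (helper invented for illustration).

    Parameters left out of GlobalParameters fall back to the service
    defaults (for example, spike sensitivity defaults to 3).
    """
    return {
        "Inputs": {
            "input1": {
                "ColumnNames": ["Time", "Data"],
                "Values": [[t, str(v)] for t, v in zip(timestamps, values)],
            }
        },
        "GlobalParameters": {
            "tspikedetector.sensitivity": str(spike_sensitivity),
            "zspikedetector.sensitivity": str(spike_sensitivity),
        },
    }

body = build_request(["5/30/2010 18:07:00", "5/30/2010 18:47:00"], [1, 5])
payload = json.dumps(body)

# Posting to a live endpoint would look roughly like this (requires the
# `requests` package and a deployed service, so it is left commented out):
# import requests
# resp = requests.post(
#     ENDPOINT + "&details=true",  # details=true exposes the ColumnNames field
#     headers={"Authorization": "Bearer " + API_KEY,
#              "Content-Type": "application/json"},
#     data=payload,
# )
# print(resp.json()["Results"])
```

The same body works against either endpoint; only the URL changes between the seasonality and non-seasonality services.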

articles/machine-learning/team-data-science-process/automated-data-pipeline-cheat-sheet.md

Lines changed: 4 additions & 4 deletions
@@ -3,20 +3,20 @@ title: Azure Machine Learning data pipeline cheat sheet - Team Data Science Proc
 description: A printable cheat sheet that shows you how to set up an automated data pipeline to your Azure Machine Learning web service whether your data is on-premises, streaming, in Azure, or in a third-party cloud service.
 services: machine-learning
 author: marktab
-manager: cgronlun
-editor: cgronlun
+manager: marktab
+editor: marktab
 ms.service: machine-learning
 ms.subservice: team-data-science-process
 ms.topic: article
-ms.date: 03/14/2017
+ms.date: 01/10/2020
 ms.author: tdsp
 ms.custom: seodec18, previous-author=garyericson, previous-ms.author=garye
 ---
 # Cheat sheet for an automated data pipeline for Azure Machine Learning predictions
 The **Microsoft Azure Machine Learning automated data pipeline cheat sheet** helps you navigate through the
 technology you can use to get your data to your Machine Learning web service where it can be scored by your predictive analytics model.
 
-Depending on whether your data is on-premises, in the cloud, or streaming real-time, there are different mechanisms available to move the data to your web service endpoint for scoring.
+Depending on whether your data is on-premises, in the cloud, or real-time streaming, there are different mechanisms available to move the data to your web service endpoint for scoring.
 This cheat sheet walks you through the decisions you need to make, and it offers links to articles that can help you develop your solution.
 
 ## Download the Machine Learning automated data pipeline cheat sheet

articles/machine-learning/team-data-science-process/ci-cd-flask.md

Lines changed: 4 additions & 4 deletions
@@ -3,18 +3,18 @@ title: Create a CI/CD pipeline with Azure Pipelines - Team Data Science Process
 description: "Create a continuous integration and continuous delivery pipeline for Artificial Intelligence (AI) applications using Docker and Kubernetes."
 services: machine-learning
 author: marktab
-manager: cgronlun
-editor: cgronlun
+manager: marktab
+editor: marktab
 ms.service: machine-learning
 ms.subservice: team-data-science-process
 ms.topic: article
-ms.date: 09/06/2019
+ms.date: 01/10/2020
 ms.author: tdsp
 ms.custom: seodec18, previous-author=jainr, previous-ms.author=jainr
 ---
 # Create CI/CD pipelines for AI apps using Azure Pipelines, Docker, and Kubernetes
 
-An Artificial Intelligence (AI) application is application code embedded with a pretrained machine learning (ML) model. There are always two streams of work for an AI application: Data scientists build the ML model, and app developers build the app and expose it to end users to consume. This article describes how to implement a continuous integration and continuous delivery (CI/CD) pipeline for an AI application that embeds the ML model into the app source code. The sample code and tutorial use a simple Python Flask web application, and fetch a pretrained model from a private Azure blob storage account. You could also use an AWS S3 storage account.
+An Artificial Intelligence (AI) application is application code embedded with a pretrained machine learning (ML) model. There are always two streams of work for an AI application: Data scientists build the ML model, and app developers build the app and expose it to end users to consume. This article describes how to implement a continuous integration and continuous delivery (CI/CD) pipeline for an AI application that embeds the ML model into the app source code. The sample code and tutorial use a Python Flask web application, and fetch a pretrained model from a private Azure blob storage account. You could also use an AWS S3 storage account.
 
 > [!NOTE]
 > The following process is one of several ways to do CI/CD. There are alternatives to this tooling and the prerequisites.
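
The app-developer side of the pattern this article describes can be sketched as a minimal Flask app that loads a pretrained model once at startup and exposes it for scoring. This is not the tutorial's actual sample code: the blob-storage fetch appears only as a commented outline with placeholder container and blob names, and a trivial stand-in scorer keeps the sketch runnable without Azure credentials.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def load_model():
    """Load the pretrained model once at app startup.

    In the pattern described above, the model would be fetched from a
    private Azure blob storage account, roughly like this outline
    (connection string, container, and blob names are placeholders):

        from azure.storage.blob import BlobClient
        blob = BlobClient.from_connection_string(CONN_STR, "models", "model.pkl")
        with open("model.pkl", "wb") as f:
            f.write(blob.download_blob().readall())

    Here a trivial threshold scorer stands in for the real model.
    """
    return lambda xs: [1 if x > 0 else 0 for x in xs]

model = load_model()

@app.route("/score", methods=["POST"])
def score():
    # Expects a JSON body like {"data": [1.5, -0.2, ...]}.
    xs = request.get_json()["data"]
    return jsonify({"predictions": model(xs)})

# To serve locally: app.run(host="0.0.0.0", port=5000)
```

Because the model is loaded at import time, a CI/CD pipeline that rebuilds the Docker image picks up a new model on each deployment.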

0 commit comments
