articles/machine-learning/team-data-science-process/agile-development.md
title: Agile development of data science projects - Team Data Science Process
description: Execute a data science project in a systematic, version controlled, and collaborative way within a project team by using the Team Data Science Process.
articles/machine-learning/team-data-science-process/apps-anomaly-detection-api.md

title: Azure Machine Learning Anomaly Detection API - Team Data Science Process
description: Anomaly Detection API is an example built with Microsoft Azure Machine Learning that detects anomalies in time series data with numerical values that are uniformly spaced in time.
The Anomaly Detection offering comes with useful tools to get you started.
To use the API, you must deploy it to your Azure subscription, where it's hosted as an Azure Machine Learning web service. You can deploy it from the [Azure AI Gallery](https://gallery.cortanaintelligence.com/MachineLearningAPI/Anomaly-Detection-2). Deployment creates two Azure Machine Learning Studio (classic) web services (and their related resources) in your Azure subscription: one for anomaly detection with seasonality detection, and one without. Once the deployment completes, you can manage your APIs from the [Azure Machine Learning Studio (classic) web services](https://services.azureml.net/webservices/) page. From this page, you can find your endpoint locations, API keys, and sample code for calling the API. More detailed instructions are available [here](https://docs.microsoft.com/azure/machine-learning/machine-learning-manage-new-webservice).
## Scaling the API
By default, your deployment has a free Dev/Test billing plan that includes 1,000 transactions/month and 2 compute hours/month. You can upgrade to another plan to meet your needs. Details on the pricing of the different plans are available [here](https://azure.microsoft.com/pricing/details/machine-learning/) under "Production Web API pricing".
## Managing AML Plans
You can manage your billing plan [here](https://services.azureml.net/plans/). The plan name will be based on the resource group name you chose when deploying the API, plus a string that is unique to your subscription. Instructions on how to upgrade your plan are available [here](https://docs.microsoft.com/azure/machine-learning/machine-learning-manage-new-webservice) under the "Managing billing plans" section.
The web service provides a REST-based API over HTTPS that can be consumed in different ways, including from a web or mobile application, R, Python, or Excel. You send your time series data to this service via a REST API call, and it runs a combination of the three anomaly types described below.
## Calling the API
To call the API, you need to know the endpoint location and API key. Both, along with sample code for calling the API, are available from the [Azure Machine Learning Studio (classic) web services](https://services.azureml.net/webservices/) page. Navigate to the desired API, and then select the "Consume" tab to find them. You can call the API as a Swagger API (that is, with the URL parameter `format=swagger`) or as a non-Swagger API (that is, without the `format` URL parameter). The sample code uses the Swagger format. Below is an example request and response in non-Swagger format. These examples use the seasonality endpoint; the non-seasonality endpoint is similar.
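As a concrete sketch, the helper below assembles the URL and headers for one scoring call. The endpoint URL and key are placeholders (substitute the values from your own "Consume" tab), and the `api-version` query parameter and bearer-style `Authorization` header are assumptions about the classic Studio web services, not details quoted from this article:

```python
from urllib.parse import urlencode

# Placeholder values: substitute the endpoint and key from your "Consume" tab.
ENDPOINT = "https://services.azureml.net/workspaces/WORKSPACE/services/SERVICE/execute"
API_KEY = "YOUR-API-KEY"

def build_request(endpoint, api_key, swagger=False, details=False):
    """Assemble the URL and headers for one scoring call."""
    params = {"api-version": "2.0"}   # assumed version parameter
    if swagger:
        params["format"] = "swagger"  # Swagger response format
    if details:
        params["details"] = "true"    # include ColumnNames in the response
    headers = {
        "Authorization": "Bearer " + api_key,  # assumed auth scheme
        "Content-Type": "application/json",
    }
    return endpoint + "?" + urlencode(params), headers

url, headers = build_request(ENDPOINT, API_KEY, details=True)
```

You can then hand `url`, `headers`, and a JSON body to any HTTP client, for example `requests.post(url, headers=headers, data=body)`.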
### Sample Request Body
The request contains two objects: `Inputs` and `GlobalParameters`. In the example request below, some parameters are sent explicitly while others are not (scroll down for a full list of parameters for each endpoint). Parameters that are not sent explicitly in the request will use the default values given below.
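A minimal sketch of such a body follows. The `input1` wrapper, column names, and timestamp format are illustrative assumptions (check the sample code on your "Consume" tab for the exact schema); the `GlobalParameters` names come from the parameter tables in this article:

```python
import json

# Illustrative request body: explicit sensitivity for the spike detectors,
# documented defaults for every parameter that is omitted.
request_body = {
    "Inputs": {
        "input1": {  # assumed input name
            "ColumnNames": ["Time", "Data"],
            "Values": [
                ["5/30/2010 18:07:00", "1.0"],
                ["5/30/2010 18:08:00", "1.4"],
                ["5/30/2010 18:09:00", "9.2"],
            ],
        }
    },
    "GlobalParameters": {
        "tspikedetector.sensitivity": "3",
        "zspikedetector.sensitivity": "3",
    },
}

payload = json.dumps(request_body)  # serialized body for the POST request
```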
### Sample Response
In order to see the `ColumnNames` field, you must include `details=true` as a URL parameter in your request. See the tables below for the meaning behind each of these fields.
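To make the field layout concrete, here is a pared-down, illustrative response (the `Results`/`output1`/`value` nesting is an assumption about the classic web service envelope, not quoted from this article) and one way to pair values with their column names:

```python
import json

# Pared-down illustrative response; real responses carry many more columns.
raw = """{
  "Results": {
    "output1": {
      "value": {
        "ColumnNames": ["Time", "OriginalData", "TSpike", "ZSpike"],
        "Values": [
          ["5/30/2010 18:07:00", "1.0", "0", "0"],
          ["5/30/2010 18:08:00", "9.2", "1", "1"]
        ]
      }
    }
  }
}"""

value = json.loads(raw)["Results"]["output1"]["value"]
# Zip each row with ColumnNames to get one dict per time point.
rows = [dict(zip(value["ColumnNames"], v)) for v in value["Values"]]
spike_times = [r["Time"] for r in rows if r["TSpike"] == "1"]
```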
## Score API
The Score API is used for running anomaly detection on non-seasonal time series data. The API runs a number of anomaly detectors on the data and returns their anomaly scores.
The figure below shows an example of anomalies that the Score API can detect. This time series has two distinct level changes and three spikes. The red dots show the time at which the level change is detected, while the black dots show the detected spikes.
![Score API][1]
### Detectors
The anomaly detection API supports detectors in three broad categories. Details on specific input parameters and outputs for each detector can be found in the following table.
| Category | Detector | Description | Parameters | Output |
| --- | --- | --- | --- | --- |
| Spike Detectors |TSpike Detector |Detects spikes and dips based on how far the values are from the first and third quartiles |*tspikedetector.sensitivity:* takes an integer value in the range 1-10, default: 3; higher values catch more extreme values, making the detector less sensitive |TSpike: binary values – ‘1’ if a spike/dip is detected, ‘0’ otherwise |
| Spike Detectors |ZSpike Detector |Detects spikes and dips based on how far the data points are from their mean |*zspikedetector.sensitivity:* takes an integer value in the range 1-10, default: 3; higher values catch more extreme values, making the detector less sensitive |ZSpike: binary values – ‘1’ if a spike/dip is detected, ‘0’ otherwise |
| Slow Trend Detector |Slow Trend Detector |Detects a slow positive trend as per the set sensitivity |*trenddetector.sensitivity:* threshold on the detector score (default: 3.25; 3.25 – 5 is a reasonable range to select from; the higher the threshold, the less sensitive) |tscore: floating-point number representing the anomaly score on trend |
| Level Change Detectors |Bidirectional Level Change Detector |Detects both upward and downward level changes as per the set sensitivity |*bileveldetector.sensitivity:* threshold on the detector score (default: 3.25; 3.25 – 5 is a reasonable range to select from; the higher the threshold, the less sensitive) |rpscore: floating-point number representing the anomaly score on upward and downward level change |
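The sensitivity conventions in the table above can be captured in a small, hypothetical validation helper. The ranges and defaults below are transcribed from the table; the helper itself is not part of the API:

```python
# Documented ranges and defaults, transcribed from the detector table above:
# (lower bound, upper bound, default) for each sensitivity parameter.
SENSITIVITY_SPECS = {
    "tspikedetector.sensitivity": (1, 10, 3),
    "zspikedetector.sensitivity": (1, 10, 3),
    "trenddetector.sensitivity": (3.25, 5.0, 3.25),
    "bileveldetector.sensitivity": (3.25, 5.0, 3.25),
}

def sensitivity_or_default(name, value=None):
    """Return a validated sensitivity, falling back to the documented default."""
    lo, hi, default = SENSITIVITY_SPECS[name]
    if value is None:
        return default
    if not lo <= value <= hi:
        raise ValueError(f"{name} should be in [{lo}, {hi}]")
    return value
```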
### Parameters
More detailed information on these input parameters is listed in the table below:
The API runs all detectors on your time series data and returns anomaly scores.
## ScoreWithSeasonality API
The ScoreWithSeasonality API is used for running anomaly detection on time series that have seasonal patterns. This API is useful for detecting deviations in seasonal patterns.
The following figure shows an example of anomalies detected in a seasonal time series. The time series has one spike (the first black dot), two dips (the second black dot and one at the end), and one level change (red dot). Both the dip in the middle of the time series and the level change are only discernible after seasonal components are removed from the series.
![Seasonality API][2]
### Detectors
The API runs all detectors on your time series data and returns anomaly scores.

| Field | Description |
| --- | --- |
| Time |Timestamps from raw data, or aggregated and/or imputed data if aggregation or missing-data imputation is applied |
| OriginalData |Values from raw data, or aggregated and/or imputed data if aggregation or missing-data imputation is applied |
| ProcessedData |Either of the following: <ul><li>Seasonally adjusted time series, if significant seasonality has been detected and the deseason option selected</li><li>Seasonally adjusted and detrended time series, if significant seasonality has been detected and the deseasontrend option selected</li><li>Otherwise, the same as OriginalData</li></ul> |
| TSpike |Binary indicator of whether a spike is detected by the TSpike Detector |
| ZSpike |Binary indicator of whether a spike is detected by the ZSpike Detector |
| BiLevelChangeScore |A floating-point number representing the anomaly score on level change |
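As an illustrative bit of post-processing (not part of the API itself), the score columns can be thresholded the same way the detectors do internally: a point whose level-change score exceeds the configured sensitivity threshold (3.25 by default, per the parameter table) counts as an anomaly:

```python
def flag_level_changes(rows, threshold=3.25):
    """Return the Time of each point whose BiLevelChangeScore exceeds the threshold."""
    return [r["Time"] for r in rows if float(r["BiLevelChangeScore"]) > threshold]

# Illustrative rows shaped like the output table above.
scored = [
    {"Time": "5/30/2010 18:07:00", "BiLevelChangeScore": "0.8"},
    {"Time": "5/30/2010 18:08:00", "BiLevelChangeScore": "4.1"},
]
```

Raising the threshold toward 5 makes the check less sensitive, matching the convention described for *bileveldetector.sensitivity*.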
articles/machine-learning/team-data-science-process/automated-data-pipeline-cheat-sheet.md
title: Azure Machine Learning data pipeline cheat sheet - Team Data Science Process
description: A printable cheat sheet that shows you how to set up an automated data pipeline to your Azure Machine Learning web service whether your data is on-premises, streaming, in Azure, or in a third-party cloud service.
# Cheat sheet for an automated data pipeline for Azure Machine Learning predictions
The **Microsoft Azure Machine Learning automated data pipeline cheat sheet** helps you navigate through the technology you can use to get your data to your Machine Learning web service, where it can be scored by your predictive analytics model.
Depending on whether your data is on-premises, in the cloud, or real-time streaming, there are different mechanisms available to move the data to your web service endpoint for scoring.
This cheat sheet walks you through the decisions you need to make, and it offers links to articles that can help you develop your solution.
## Download the Machine Learning automated data pipeline cheat sheet
articles/machine-learning/team-data-science-process/ci-cd-flask.md
title: Create a CI/CD pipeline with Azure Pipelines - Team Data Science Process
description: "Create a continuous integration and continuous delivery pipeline for Artificial Intelligence (AI) applications using Docker and Kubernetes."
# Create CI/CD pipelines for AI apps using Azure Pipelines, Docker, and Kubernetes
An Artificial Intelligence (AI) application is application code embedded with a pretrained machine learning (ML) model. There are always two streams of work for an AI application: Data scientists build the ML model, and app developers build the app and expose it to end users to consume. This article describes how to implement a continuous integration and continuous delivery (CI/CD) pipeline for an AI application that embeds the ML model into the app source code. The sample code and tutorial use a Python Flask web application, and fetch a pretrained model from a private Azure blob storage account. You could also use an AWS S3 storage account.
> [!NOTE]
> The following process is one of several ways to do CI/CD. There are alternatives to this tooling and the prerequisites.