articles/cognitive-services/big-data/cognitive-services-for-big-data.md (15 additions, 15 deletions)
@@ -1,6 +1,6 @@
 ---
-title: "Cognitive Services for Big Data"
-description: Learn how to leverage Azure Cognitive Services on large datasets using Python, Java, and Scala. With Cognitive Services for Big Data you can embed continuously improving, intelligent models directly into Apache Spark™ and SQL computations.
+title: "Cognitive Services for big data"
+description: Learn how to leverage Azure Cognitive Services on large datasets using Python, Java, and Scala. With Cognitive Services for big data you can embed continuously improving, intelligent models directly into Apache Spark™ and SQL computations.
 services: cognitive-services
 author: mhamilton723
 manager: nitinme
@@ -10,17 +10,17 @@ ms.date: 10/28/2021
 ms.author: marhamil
 ---
 
-# Azure Cognitive Services for Big Data
+# Azure Cognitive Services for big data
 
 
-The Azure Cognitive Services for Big Data lets users channel terabytes of data through Cognitive Services using [Apache Spark™](/dotnet/spark/what-is-spark). With the Cognitive Services for Big Data, it's easy to create large-scale intelligent applications with any datastore.
+The Azure Cognitive Services for big data lets users channel terabytes of data through Cognitive Services using [Apache Spark™](/dotnet/spark/what-is-spark). With the Cognitive Services for big data, it's easy to create large-scale intelligent applications with any datastore.
 
-With Cognitive Services for Big Data you can embed continuously improving, intelligent models directly into Apache Spark™ and SQL computations. These tools liberate developers from low-level networking details, so that they can focus on creating smart, distributed applications.
+With Cognitive Services for big data you can embed continuously improving, intelligent models directly into Apache Spark™ and SQL computations. These tools liberate developers from low-level networking details, so that they can focus on creating smart, distributed applications.
 
 ## Features and benefits
 
-Cognitive Services for Big Data can use services from any region in the world, as well as [containerized Cognitive Services](../cognitive-services-container-support.md). Containers support low or no connectivity deployments with ultra-low latency responses. Containerized Cognitive Services can be run locally, directly on the worker nodes of your Spark cluster, or on an external orchestrator like Kubernetes.
+Cognitive Services for big data can use services from any region in the world, as well as [containerized Cognitive Services](../cognitive-services-container-support.md). Containers support low or no connectivity deployments with ultra-low latency responses. Containerized Cognitive Services can be run locally, directly on the worker nodes of your Spark cluster, or on an external orchestrator like Kubernetes.
 
 ## Supported services
 
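As context for the paragraphs above (embedding models directly into Spark and SQL computations, and running against containerized endpoints), here is a minimal PySpark sketch of the pattern. It is not part of this change: the `TextSentiment` transformer comes from the `mmlspark.cognitive` package referenced later in this pull request, the `setUrl` setter is assumed to behave as described in the containers guidance, and the key, region, and container address are placeholders.

```python
from mmlspark.cognitive import TextSentiment

# Placeholder key; containerized endpoints still require a billing-enabled key.
service_key = "ADD-SUBSCRIPTION-KEY-HERE"

# Against a cloud region: the model is a SparkML Transformer, so it drops straight
# into DataFrame and SQL pipelines via .transform(df).
cloud_sentiment = (TextSentiment()
    .setSubscriptionKey(service_key)
    .setLocation("eastus")  # assumed region; use your resource's region
    .setTextCol("text")
    .setLanguageCol("language")
    .setOutputCol("sentiment"))

# Against a containerized Text Analytics endpoint running next to the workers.
# The URL and API path are illustrative; point this at your container's address.
container_sentiment = (TextSentiment()
    .setSubscriptionKey(service_key)
    .setUrl("http://localhost:5000/text/analytics/v3.0/sentiment")
    .setTextCol("text")
    .setLanguageCol("language")
    .setOutputCol("sentiment"))
```

Either transformer is then applied with `.transform(df)` on a Spark DataFrame, as in the getting-started sample further down.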
@@ -57,9 +57,9 @@ Cognitive Services for Big Data can use services from any region in the world, a
 |:-----------|:------------------|
 |[Bing Image Search](/azure/cognitive-services/bing-image-search "Bing Image Search")|The Bing Image Search service returns a display of images determined to be relevant to the user's query.|
 
-## Supported programming languages for Cognitive Services for Big Data
+## Supported programming languages for Cognitive Services for big data
 
-The Cognitive Services for Big Data are built on Apache Spark. Apache Spark is a distributed computing library that supports Java, Scala, Python, R, and many other languages. These languages are currently supported.
+The Cognitive Services for big data are built on Apache Spark. Apache Spark is a distributed computing library that supports Java, Scala, Python, R, and many other languages. These languages are currently supported.
 
 ### Python
@@ -71,7 +71,7 @@ We provide a Scala and Java-based Spark API in the `com.microsoft.ml.spark.cogni
 
 ## Supported platforms and connectors
 
-The Cognitive Services for Big Data requires Apache Spark. There are several Apache Spark platforms that support the Cognitive Services for Big Data.
+The Cognitive Services for big data requires Apache Spark. There are several Apache Spark platforms that support the Cognitive Services for big data.
 
 ### Azure Databricks
@@ -100,15 +100,15 @@ The basis of Spark is the DataFrame: a tabular collection of data distributed ac
 - Do SQL-style computations such as join and filter tables.
 - Apply functions to large datasets using MapReduce style parallelism.
 - Apply Distributed Machine Learning using Microsoft Machine Learning for Apache Spark.
-- Use the Cognitive Services for Big Data to enrich your data with ready-to-use intelligent services.
+- Use the Cognitive Services for big data to enrich your data with ready-to-use intelligent services.
 
 ### Microsoft Machine Learning for Apache Spark (MMLSpark)
 
-[Microsoft Machine Learning for Apache Spark](https://mmlspark.blob.core.windows.net/website/index.html#install) (MMLSpark) is an open-source, distributed machine learning library (ML) built on Apache Spark. The Cognitive Services for Big Data is included in this package. Additionally, MMLSpark contains several other ML tools for Apache Spark, such as LightGBM, Vowpal Wabbit, OpenCV, LIME, and more. With MMLSpark, you can build powerful predictive and analytical models from any Spark datasource.
+[Microsoft Machine Learning for Apache Spark](https://mmlspark.blob.core.windows.net/website/index.html#install) (MMLSpark) is an open-source, distributed machine learning library (ML) built on Apache Spark. The Cognitive Services for big data is included in this package. Additionally, MMLSpark contains several other ML tools for Apache Spark, such as LightGBM, Vowpal Wabbit, OpenCV, LIME, and more. With MMLSpark, you can build powerful predictive and analytical models from any Spark datasource.
 
 ### HTTP on Spark
 
-Cognitive Services for Big Data is an example of how we can integrate intelligent web services with big data. Web services power many applications across the globe and most services communicate through the Hypertext Transfer Protocol (HTTP). To work with *arbitrary* web services at large scales, we provide HTTP on Spark. With HTTP on Spark, you can pass terabytes of data through any web service. Under the hood, we use this technology to power Cognitive Services for Big Data.
+Cognitive Services for big data is an example of how we can integrate intelligent web services with big data. Web services power many applications across the globe and most services communicate through the Hypertext Transfer Protocol (HTTP). To work with *arbitrary* web services at large scales, we provide HTTP on Spark. With HTTP on Spark, you can pass terabytes of data through any web service. Under the hood, we use this technology to power Cognitive Services for big data.
 
 ## Developer samples
 
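The bullet list in the hunk above summarizes what Spark DataFrames offer before the Cognitive Services layer is added. A short, self-contained PySpark illustration of the first two bullets (SQL-style filter and join, then MapReduce-style per-row functions with aggregation); the column names and values are made up for the example:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataframe-basics").getOrCreate()

readings = spark.createDataFrame(
    [("dev-01", 21.5), ("dev-02", 98.7), ("dev-03", 22.1)],
    ["device_id", "temperature"])
devices = spark.createDataFrame(
    [("dev-01", "factory-a"), ("dev-02", "factory-b"), ("dev-03", "factory-a")],
    ["device_id", "site"])

# SQL-style computations: filter one table and join it to another.
hot_readings = readings.filter(F.col("temperature") > 30).join(devices, "device_id")
hot_readings.show()

# MapReduce-style parallelism: apply a function to every row, then aggregate per group.
hot_by_site = (readings.join(devices, "device_id")
    .withColumn("is_hot", (F.col("temperature") > 30).cast("int"))
    .groupBy("site")
    .agg(F.sum("is_hot").alias("hot_reading_count")))
hot_by_site.show()
```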
@@ -126,11 +126,11 @@ Cognitive Services for Big Data is an example of how we can integrate intelligen
 
 - [The Azure Cognitive Services on Spark: Clusters with Embedded Intelligent Services](https://databricks.com/session/the-azure-cognitive-services-on-spark-clusters-with-embedded-intelligent-services)
 - [Spark Summit Keynote: Scalable AI for Good](https://databricks.com/session_eu19/scalable-ai-for-good)
-- [The Cognitive Services for Big Data in Cosmos DB](https://medius.studios.ms/Embed/Video-nc/B19-BRK3004?latestplayer=true&l=2571.208093)
+- [The Cognitive Services for big data in Cosmos DB](https://medius.studios.ms/Embed/Video-nc/B19-BRK3004?latestplayer=true&l=2571.208093)
 - [Lightning Talk on Large Scale Intelligent Microservices](https://www.youtube.com/watch?v=BtuhmdIy9Fk&t=6s)
 
 ## Next steps
 
-- [Getting Started with the Cognitive Services for Big Data](getting-started.md)
+- [Getting Started with the Cognitive Services for big data](getting-started.md)
articles/cognitive-services/big-data/getting-started.md (57 additions, 41 deletions)
@@ -1,12 +1,12 @@
 ---
-title: "Get started with Cognitive Services for Big Data"
-description: Set up your MMLSpark pipeline with Cognitive Services in Azure Databricks and run a sample.
+title: "Get started with Cognitive Services for big data"
+description: Set up your SynapseML or MMLSpark pipeline with Cognitive Services in Azure Databricks and run a sample.
 services: cognitive-services
 author: mhamilton723
 manager: nitinme
 ms.service: cognitive-services
 ms.topic: how-to
-ms.date: 10/28/2021
+ms.date: 08/16/2022
 ms.author: marhamil
 ms.devlang: python
 ms.custom: mode-other
@@ -16,15 +16,16 @@ ms.custom: mode-other
 
 Setting up your environment is the first step to building a pipeline for your data. After your environment is ready, running a sample is quick and easy.
 
-In this article, we'll perform these steps to get you started:
+In this article, you'll perform these steps to get started:
 
-1. [Create a Cognitive Services resource](#create-a-cognitive-services-resource)
-1. [Create an Apache Spark Cluster](#create-an-apache-spark-cluster)
-1. [Try a sample](#try-a-sample)
+> [!div class="checklist"]
+> * [Create a Cognitive Services resource](#create-a-cognitive-services-resource)
+> * [Create an Apache Spark cluster](#create-an-apache-spark-cluster)
+> * [Try a sample](#try-a-sample)
 
 ## Create a Cognitive Services resource
 
-To use the Big Data Cognitive Services, you must first create a Cognitive Service for your workflow. There are two main types of Cognitive Services: cloud services hosted in Azure and containerized services managed by users. We recommend starting with the simpler cloud-based Cognitive Services.
+To work with big data in Cognitive Services, first create a Cognitive Services resource for your workflow. There are two main types of Cognitive Services: cloud services hosted in Azure and containerized services managed by users. We recommend starting with the simpler cloud-based Cognitive Services.
 
 ### Cloud services
 
@@ -46,21 +47,30 @@ Follow [this guide](../cognitive-services-container-support.md?tabs=luis) to cre
 
 ## Create an Apache Spark cluster
 
-[Apache Spark™](http://spark.apache.org/) is a distributed computing framework designed for big-data data processing. Users can work with Apache Spark in Azure with services like Azure Databricks, Azure Synapse Analytics, HDInsight, and Azure Kubernetes Services. To use the Big Data Cognitive Services, you must first create a cluster. If you already have a Spark cluster, feel free to try an example.
+[Apache Spark™](http://spark.apache.org/) is a distributed computing framework designed for big-data data processing. Users can work with Apache Spark in Azure with services like Azure Databricks, Azure Synapse Analytics, HDInsight, and Azure Kubernetes Services. To use the big data Cognitive Services, you must first create a cluster. If you already have a Spark cluster, feel free to try an example.
 
 ### Azure Databricks
 
-Azure Databricks is an Apache Spark-based analytics platform with a one-click setup, streamlined workflows, and an interactive workspace. It's often used to collaborate between data scientists, engineers, and business analysts. To use the Big Data Cognitive Services on Azure Databricks, follow these steps:
+Azure Databricks is an Apache Spark-based analytics platform with a one-click setup, streamlined workflows, and an interactive workspace. It's often used to collaborate between data scientists, engineers, and business analysts. To use the big data Cognitive Services on Azure Databricks, follow these steps:
 
 1. [Create an Azure Databricks workspace](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal#create-an-azure-databricks-workspace)
+
 1. [Create a Spark cluster in Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal#create-a-spark-cluster-in-databricks)
-1. Install the Big Data Cognitive Services
+
+1. Install the SynapseML open-source library (or MMLSpark library if you're supporting a legacy application):
+
    * Create a new library in your databricks workspace
     <img src="media/install-library.png" alt="Install Library on Cluster" width="50%"/>
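The install step above goes through the Databricks Libraries UI. On a cluster you manage yourself, the equivalent is to resolve the package when the Spark session starts. This is a sketch only: the Maven coordinates, version, and resolver URL below are assumptions, so confirm them against the SynapseML (or legacy MMLSpark) installation page before using them.

```python
from pyspark.sql import SparkSession

# Coordinates, version, and repository are assumptions; check the SynapseML or
# MMLSpark install instructions for values that match your Spark and Scala versions.
spark = (SparkSession.builder
    .appName("cognitive-services-for-big-data")
    .config("spark.jars.packages", "com.microsoft.azure:synapseml_2.12:0.9.5")
    .config("spark.jars.repositories", "https://mmlspark.azureedge.net/maven")
    .getOrCreate())
```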
@@ -69,9 +79,10 @@ Azure Databricks is an Apache Spark-based analytics platform with a one-click se
 Optionally, you can use Synapse Analytics to create a spark cluster. Azure Synapse Analytics brings together enterprise data warehousing and big data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources at scale. To get started using Azure Synapse Analytics, follow these steps:
 
 1. [Create a Synapse Workspace (preview)](../../synapse-analytics/quickstart-create-workspace.md).
+
 1. [Create a new serverless Apache Spark pool (preview) using the Azure portal](../../synapse-analytics/quickstart-create-apache-spark-pool-portal.md).
 
-In Azure Synapse Analytics, Big Data for Cognitive Services is installed by default.
+In Azure Synapse Analytics, big data for Cognitive Services is installed by default.
 
 ### Azure Kubernetes Service
 
@@ -80,12 +91,14 @@ If you're using containerized Cognitive Services, one popular option for deployi
 To get started on Azure Kubernetes Service, follow these steps:
 
 1. [Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md)
+
 1. [Install the Apache Spark 2.4.0 helm chart](https://hub.helm.sh/charts/microsoft/spark)
+
 1. [Install a cognitive service container using Helm](../computer-vision/deploy-computer-vision-on-premises.md)
 
 ## Try a sample
 
-After you set up your Spark cluster and environment, you can run a short sample. This section demonstrates how to use the Big Data for Cognitive Services in Azure Databricks.
+After you set up your Spark cluster and environment, you can run a short sample. This sample assumes Azure Databricks and the `mmlspark.cognitive` package.
 
 First, you can create a notebook in Azure Databricks. For other Spark cluster providers, use their notebooks or Spark Submit.
 
@@ -101,36 +114,39 @@ First, you can create a notebook in Azure Databricks. For other Spark cluster pr
 1. Paste this code snippet into your new notebook.
 
-   ```python
-   from mmlspark.cognitive import *
-   from pyspark.sql.functions import col
-
-   # Add your subscription key from the Language service (or a general Cognitive Service key)
-   service_key = "ADD-SUBSCRIPTION-KEY-HERE"
-
-   df = spark.createDataFrame([
-     ("I am so happy today, its sunny!", "en-US"),
-     ("I am frustrated by this rush hour traffic", "en-US"),
-     ("The cognitive services on spark aint bad", "en-US"),

 1. Get your subscription key from the **Keys and Endpoint** menu from your Language resource in the Azure portal.
+
 1. Replace the subscription key placeholder in your Databricks notebook code with your subscription key.
+
 1. Select the play, or triangle, symbol in the upper right of your notebook cell to run the sample. Optionally, select **Run All** at the top of your notebook to run all cells. The answers will display below the cell in a table.
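The replacement code block added by this hunk is collapsed in the rendered diff, so it isn't reproduced here. For orientation only, and not as the pull request's exact code, the following is a runnable sketch of the same sentiment sample against the `mmlspark.cognitive` API; the region, the output-column handling, and the use of the Databricks `display` helper are assumptions.

```python
from mmlspark.cognitive import TextSentiment
from pyspark.sql.functions import col

# Subscription key from your Language (Text Analytics) resource -- placeholder value.
service_key = "ADD-SUBSCRIPTION-KEY-HERE"

# `spark` is the session predefined in a Databricks or Synapse notebook.
df = spark.createDataFrame([
    ("I am so happy today, its sunny!", "en-US"),
    ("I am frustrated by this rush hour traffic", "en-US"),
    ("The cognitive services on spark aint bad", "en-US"),
], ["text", "language"])

sentiment = (TextSentiment()
    .setTextCol("text")
    .setLocation("eastus")  # assumed region; match your resource
    .setSubscriptionKey(service_key)
    .setOutputCol("sentiment")
    .setErrorCol("error")
    .setLanguageCol("language"))

# Each row gets an array of results; pull out the sentiment label for display.
results = sentiment.transform(df).select(
    "text", col("sentiment")[0].getItem("sentiment").alias("sentiment"))
display(results)  # or results.show() outside Databricks
```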
articles/cognitive-services/big-data/recipes/anomaly-detection.md (3 additions, 3 deletions)
@@ -1,7 +1,7 @@
 ---
-title: "Recipe: Predictive maintenance with the Cognitive Services for Big Data"
+title: "Recipe: Predictive maintenance with the Cognitive Services for big data"
 titleSuffix: Azure Cognitive Services
-description: This quickstart shows how to perform distributed anomaly detection with the Cognitive Services for Big Data
+description: This quickstart shows how to perform distributed anomaly detection with the Cognitive Services for big data
 services: cognitive-services
 author: mhamilton723
 manager: nitinme
@@ -14,7 +14,7 @@ ms.devlang: python
 ms.custom: devx-track-python
 ---
 
-# Recipe: Predictive maintenance with the Cognitive Services for Big Data
+# Recipe: Predictive maintenance with the Cognitive Services for big data
 
 This recipe shows how you can use Azure Synapse Analytics and Cognitive Services on Apache Spark for predictive maintenance of IoT devices. We'll follow along with the [CosmosDB and Synapse Link](https://github.com/Azure-Samples/cosmosdb-synapse-link-samples) sample. To keep things simple, in this recipe we'll read the data straight from a CSV file rather than getting streamed data through CosmosDB and Synapse Link. We strongly encourage you to look over the Synapse Link sample.
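For readers skimming this diff, the recipe retitled here applies the Anomaly Detector service to IoT telemetry read from a CSV file. A rough sketch of that pattern follows; the `SimpleDetectAnomalies` transformer, its setters, the column names, and the storage path are assumptions based on the recipe, not content of this pull request.

```python
from mmlspark.cognitive import SimpleDetectAnomalies
from pyspark.sql.functions import col

# Placeholder Anomaly Detector key; `spark` is the notebook's predefined session.
anomaly_key = "ADD-ANOMALY-DETECTOR-KEY-HERE"

# The recipe reads telemetry from CSV instead of streaming it through Cosmos DB and Synapse Link.
df_signals = (spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://data@youraccount.dfs.core.windows.net/iot_signals.csv"))  # hypothetical path

anomaly_detector = (SimpleDetectAnomalies()
    .setSubscriptionKey(anomaly_key)
    .setLocation("eastus")         # assumed region
    .setTimestampCol("timestamp")  # assumed column names in the CSV
    .setValueCol("value")
    .setGroupbyCol("device_id")
    .setGranularity("monthly")
    .setOutputCol("anomalies"))

df_anomaly = anomaly_detector.transform(df_signals)
df_anomaly.select("device_id", "timestamp", "value",
                  col("anomalies.isAnomaly").alias("is_anomaly")).show()
```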