Commit 99e5880

Merge pull request #78112 from aahill/aahi-shthowse
[Cogsvcs][Text Analytics] Adding Node.js SDK article
2 parents 7b132fe + 242cdcd commit 99e5880

2 files changed: +285 -0 lines changed
Lines changed: 282 additions & 0 deletions
@@ -0,0 +1,282 @@
---
title: 'Quickstart: Using Node.js to call the Text Analytics API'
titleSuffix: Azure Cognitive Services
description: Get information and code samples to help you quickly get started with using the Text Analytics API.
services: cognitive-services
author: raymondl
manager: nitinme

ms.service: cognitive-services
ms.subservice: text-analytics
ms.topic: quickstart
ms.date: 06/11/2019
ms.author: shthowse
---

# Quickstart: Using Node.js to call the Text Analytics Cognitive Service
<a name="HOLTop"></a>

Use this quickstart to begin analyzing language with the Text Analytics SDK for Node.js. While the [Text Analytics](//go.microsoft.com/fwlink/?LinkID=759711) REST API is compatible with most programming languages, the SDK provides an easy way to integrate the service into your applications. The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-node-sdk-samples/blob/master/Samples/textAnalytics.js).

Refer to the [API definitions](//go.microsoft.com/fwlink/?LinkID=759346) for technical documentation of the APIs.

## Prerequisites

* [Node.js](https://nodejs.org/)
* The Text Analytics [SDK for Node.js](https://www.npmjs.com/package/azure-cognitiveservices-textanalytics). You can install the SDK with:

    `npm install azure-cognitiveservices-textanalytics`

[!INCLUDE [cognitive-services-text-analytics-signup-requirements](../../../../includes/cognitive-services-text-analytics-signup-requirements.md)]

You must also have the [endpoint and access key](../How-tos/text-analytics-how-to-access-key.md) that were generated for you during sign-up.

## Create a Node.js application and install the SDK

After installing Node.js, create a Node.js project. Create a new directory for your app, and navigate to it.

```mkdir myapp && cd myapp```

Run ```npm init``` to create a Node.js application with a package.json file. Then install the `ms-rest-azure` and `azure-cognitiveservices-textanalytics` NPM packages:

```npm install azure-cognitiveservices-textanalytics ms-rest-azure```

Your app's package.json file will be updated with the dependencies.

## Authenticate your credentials

Create a new file `index.js` in the project root and import the installed libraries.

```javascript
const CognitiveServicesCredentials = require("ms-rest-azure").CognitiveServicesCredentials;
const TextAnalyticsAPIClient = require("azure-cognitiveservices-textanalytics");
```

Create a variable for your Text Analytics subscription key.

```javascript
let credentials = new CognitiveServicesCredentials(
  "enter-your-key-here"
);
```

> [!Tip]
> For secure deployment of secrets in production systems, we recommend using [Azure Key Vault](https://docs.microsoft.com/azure/key-vault/quick-create-net).
>

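If you'd rather not hard-code the key while developing, one lightweight option is to read it from an environment variable instead. The snippet below is a minimal sketch that replaces the hard-coded credentials above; the variable name `TEXT_ANALYTICS_SUBSCRIPTION_KEY` is an arbitrary choice for this example, not something the SDK requires.

```javascript
// A minimal sketch: read the subscription key from an environment variable.
// The name TEXT_ANALYTICS_SUBSCRIPTION_KEY is an assumed name for this example.
const key = process.env.TEXT_ANALYTICS_SUBSCRIPTION_KEY;
if (!key) {
  throw new Error("Set the TEXT_ANALYTICS_SUBSCRIPTION_KEY environment variable before running.");
}
let credentials = new CognitiveServicesCredentials(key);
```
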
## Create a Text Analytics client

Create a new `TextAnalyticsAPIClient` object with `credentials` as a parameter. Use the correct Azure region for your Text Analytics subscription.

```javascript
// Replace 'westus' with the correct region for your Text Analytics subscription.
let client = new TextAnalyticsAPIClient(
  credentials,
  "https://westus.api.cognitive.microsoft.com/"
);
```

## Sentiment analysis

Create a list of objects containing the documents you want to analyze. The payload to the API consists of a list of `documents`, which contain an `id`, `language`, and `text` attribute. The `text` attribute stores the text to be analyzed, `language` is the language of the document, and the `id` can be any value.

```javascript
const inputDocuments = {
  documents: [
    { language: "en", id: "1", text: "I had the best day of my life." },
    { language: "en", id: "2", text: "This was a waste of my time. The speaker put me to sleep." },
    { language: "es", id: "3", text: "No tengo dinero ni nada que dar..." },
    { language: "it", id: "4", text: "L'hotel veneziano era meraviglioso. È un bellissimo pezzo di architettura." }
  ]
};
```

Call `client.sentiment()` and get the result, then print the returned documents to the console. Each result contains the document's ID and a sentiment score. A score closer to 0 indicates a negative sentiment, while a score closer to 1 indicates a positive sentiment.

```javascript
const operation = client.sentiment({ multiLanguageBatchInput: inputDocuments });
operation
  .then(result => {
    console.log(result.documents);
  })
  .catch(err => {
    throw err;
  });
```

Run your code with `node index.js` in your console window.

### Output

```console
[ { id: '1', score: 0.8723785877227783 },
  { id: '2', score: 0.1059873104095459 },
  { id: '3', score: 0.43635445833206177 },
  { id: '4', score: 1 } ]
```

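If you want per-document output instead of printing the raw array, you can attach another handler to the same `operation` promise and iterate over `result.documents` yourself. This is a small sketch under the same assumptions as the code above; the 0.5 cutoff used to label a document as positive or negative is only an illustrative choice.

```javascript
// A sketch: print each document's ID and score with a simple label.
// The 0.5 threshold is an arbitrary choice for illustration.
operation
  .then(result => {
    result.documents.forEach(document => {
      const label = document.score >= 0.5 ? "positive" : "negative";
      console.log(`ID: ${document.id} Score: ${document.score} (${label})`);
    });
  })
  .catch(err => {
    throw err;
  });
```
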
## Language detection

Create a list of objects containing your documents. The payload to the API consists of a list of `documents`, which contain an `id` and `text` attribute. The `text` attribute stores the text to be analyzed, and the `id` can be any value.

```javascript
// The documents to be submitted for language detection. The ID can be any value.
const inputDocuments = {
  documents: [
    { id: "1", text: "This is a document written in English." },
    { id: "2", text: "Este es un document escrito en Español." },
    { id: "3", text: "这是一个用中文写的文件" }
  ]
};
```

Call `client.detectLanguage()` and get the result. Then iterate through the results and print each document's ID and each detected language.

```javascript
const operation = client.detectLanguage({
  languageBatchInput: inputDocuments
});
operation
  .then(result => {
    result.documents.forEach(document => {
      console.log(`ID: ${document.id}`);
      document.detectedLanguages.forEach(language =>
        console.log(`\tLanguage: ${language.name}`)
      );
    });
  })
  .catch(err => {
    throw err;
  });
```

Run your code with `node index.js` in your console window.

### Output

```console
ID: 1
    Language: English
ID: 2
    Language: Spanish
ID: 3
    Language: Chinese_Simplified
```

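Each detected language also carries a confidence score and an ISO 639-1 code. The sketch below reuses the `operation` promise from the example above and assumes the `iso6391Name` and `score` properties on each detected language; if your SDK version exposes different property names, adjust accordingly.

```javascript
// A sketch that also prints the ISO 639-1 code and the confidence score.
// It assumes each detected language exposes `iso6391Name` and `score`.
operation
  .then(result => {
    result.documents.forEach(document => {
      document.detectedLanguages.forEach(language =>
        console.log(
          `ID: ${document.id} Language: ${language.name} (${language.iso6391Name}), score: ${language.score}`
        )
      );
    });
  })
  .catch(err => {
    throw err;
  });
```
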
## Entity recognition

Create a list of objects containing your documents. The payload to the API consists of a list of `documents`, which contain an `id`, `language`, and `text` attribute. The `text` attribute stores the text to be analyzed, `language` is the language of the document, and the `id` can be any value.

```javascript
const inputDocuments = {
  documents: [
    { language: "en", id: "1", text: "Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800" },
    { language: "es", id: "2", text: "La sede principal de Microsoft se encuentra en la ciudad de Redmond, a 21 kilómetros de Seattle." }
  ]
};
```

Call `client.entities()` and get the result. Then iterate through the results and print each document's ID. For each detected entity, print its Wikipedia name and type, as well as its locations and match scores in the original text.

```javascript
const operation = client.entities({
  multiLanguageBatchInput: inputDocuments
});
operation
  .then(result => {
    result.documents.forEach(document => {
      console.log(`Document ID: ${document.id}`);
      document.entities.forEach(e => {
        console.log(`\tName: ${e.name} Type: ${e.type}`);
        e.matches.forEach(match =>
          console.log(`\t\tOffset: ${match.offset} Length: ${match.length} Score: ${match.entityTypeScore}`)
        );
      });
    });
  })
  .catch(err => {
    throw err;
  });
```

Run your code with `node index.js` in your console window.

### Output

```console
Document ID: 1
    Name: Microsoft Type: Organization
        Offset: 0 Length: 9 Score: 1
    Name: Bill Gates Type: Person
        Offset: 25 Length: 10 Score: 0.999786376953125
    Name: Paul Allen Type: Person
        Offset: 40 Length: 10 Score: 0.9988105297088623
    Name: April 4 Type: Other
        Offset: 54 Length: 7 Score: 0.8
    Name: April 4, 1975 Type: DateTime
        Offset: 54 Length: 13 Score: 0.8
    Name: BASIC Type: Other
        Offset: 89 Length: 5 Score: 0.8
    Name: Altair 8800 Type: Other
        Offset: 116 Length: 11 Score: 0.8
Document ID: 2
    Name: Microsoft Type: Organization
        Offset: 21 Length: 9 Score: 0.999755859375
    Name: Redmond (Washington) Type: Location
        Offset: 60 Length: 7 Score: 0.9911284446716309
    Name: 21 kilómetros Type: Quantity
        Offset: 71 Length: 13 Score: 0.8
    Name: Seattle Type: Location
        Offset: 88 Length: 7 Score: 0.9998779296875
```

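Entities that are linked to a known reference source can also carry extra metadata. The following is a small sketch that reuses the `operation` promise from the entity recognition example above; it assumes a `wikipediaUrl` property is populated for linked entities in your SDK version, which you should verify before relying on it.

```javascript
// A sketch: print the Wikipedia link for entities that have one.
// It assumes linked entities expose a `wikipediaUrl` property; others are skipped.
operation
  .then(result => {
    result.documents.forEach(document => {
      document.entities
        .filter(e => e.wikipediaUrl)
        .forEach(e => console.log(`${e.name}: ${e.wikipediaUrl}`));
    });
  })
  .catch(err => {
    throw err;
  });
```
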
## Key phrase extraction

Create a list of objects containing your documents. The payload to the API consists of a list of `documents`, which contain an `id`, `language`, and `text` attribute. The `text` attribute stores the text to be analyzed, `language` is the language of the document, and the `id` can be any value.

```javascript
let inputLanguage = {
  documents: [
    { language: "ja", id: "1", text: "猫は幸せ" },
    { language: "de", id: "2", text: "Fahrt nach Stuttgart und dann zum Hotel zu Fu." },
    { language: "en", id: "3", text: "My cat might need to see a veterinarian." },
    { language: "es", id: "4", text: "A mi me encanta el fútbol!" }
  ]
};
```

Call `client.keyPhrases()` and get the result, then print the returned documents to the console. Each result contains the document's ID and a list of detected key phrases.

```javascript
let operation = client.keyPhrases({
  multiLanguageBatchInput: inputLanguage
});
operation
  .then(result => {
    console.log(result.documents);
  })
  .catch(err => {
    throw err;
  });
```

Run your code with `node index.js` in your console window.

### Output

```console
[
  { id: '1', keyPhrases: [ '幸せ' ] },
  { id: '2', keyPhrases: [ 'Stuttgart', 'Hotel', 'Fahrt', 'Fu' ] },
  { id: '3', keyPhrases: [ 'cat', 'veterinarian' ] },
  { id: '4', keyPhrases: [ 'fútbol' ] }
]
```

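The client methods shown in this quickstart return Promises, so you can also drive them with `async`/`await` instead of `.then()` chains. The following is a minimal sketch that wraps the key phrase call above; the `run` function name and the error message are illustrative choices, not part of the SDK.

```javascript
// A minimal async/await variant of the key phrase call above.
// The function name `run` is an arbitrary choice for this sketch.
async function run() {
  try {
    const result = await client.keyPhrases({ multiLanguageBatchInput: inputLanguage });
    result.documents.forEach(document => {
      console.log(`ID: ${document.id} Key phrases: ${document.keyPhrases.join(", ")}`);
    });
  } catch (err) {
    console.error("The key phrase request failed:", err);
  }
}

run();
```
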
## Next steps

> [!div class="nextstepaction"]
> [Text Analytics With Power BI](../tutorials/tutorial-power-bi-key-phrases.md)

## See also

[Text Analytics overview](../overview.md)

[Frequently asked questions (FAQ)](../text-analytics-resource-faq.md)

articles/cognitive-services/text-analytics/toc.yml

Lines changed: 3 additions & 0 deletions

@@ -8,6 +8,7 @@
 - name: Example user scenarios
   href: text-analytics-user-scenarios.md
 - name: Quickstarts
+  expanded: true
   items:
   - name: REST API
     items:
@@ -33,6 +34,8 @@
     href: quickstarts/go-sdk.md
   - name: Ruby
     href: quickstarts/ruby-sdk.md
+  - name: Node.js
+    href: quickstarts/nodejs-sdk.md

 - name: Tutorials
   items:
