
Commit 6eacbe0

Merge pull request #108506 from erhopf/tls-patches
[CogSvcs] Documentation updates for security + TLS
2 parents 6d66717 + dd25e61

File tree

17 files changed (+52, -19 lines)

articles/cognitive-services/Anomaly-Detector/overview.md

Lines changed: 4 additions & 2 deletions
@@ -14,6 +14,8 @@ ms.author: aahi
 
 # What is the Anomaly Detector API?
 
+[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
+
 The Anomaly Detector API enables you to monitor and detect abnormalities in your time series data with machine learning. The Anomaly Detector API adapts by automatically identifying and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API determines boundaries for anomaly detection, expected values, and which data points are anomalies.
 
 ![Detect pattern changes in service requests](./media/anomaly_detection2.png)
@@ -22,7 +24,7 @@ Using the Anomaly Detector doesn't require any prior experience in machine learn
 
 ## Features
 
-With the Anomaly Detector, you can automatically detect anomalies throughout your time series data, or as they occur in real-time.
+With the Anomaly Detector, you can automatically detect anomalies throughout your time series data, or as they occur in real-time.
 
 |Feature |Description |
 |---------|---------|
@@ -47,7 +49,7 @@ To run the Notebook, complete the following steps:
 1. Un-check the "public" option in the dialog box before completing the clone operation, otherwise your notebook, including any subscription keys, will be public.
 1. Click **Run on free compute**
 1. Select one of the notebooks.
-1. Add your valid Anomaly Detector API subscription key to the `subscription_key` variable.
+1. Add your valid Anomaly Detector API subscription key to the `subscription_key` variable.
 1. Change the `endpoint` variable to your endpoint. For example: `https://westus2.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/last/detect`
 1. On the top menu bar, click **Cell**, then **Run All**.
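The notebook steps above set a `subscription_key` and `endpoint` and then post a time series to the detect operation. A minimal Python sketch of that request shape, with placeholder key and an illustrative two-point series (the request is built but deliberately not sent):

```python
import json
import urllib.request

# Placeholder values -- substitute your own key and regional endpoint.
subscription_key = "YOUR_SUBSCRIPTION_KEY"
endpoint = ("https://westus2.api.cognitive.microsoft.com"
            "/anomalydetector/v1.0/timeseries/last/detect")

# A tiny illustrative series; real calls need enough points for the model.
body = {
    "granularity": "daily",
    "series": [
        {"timestamp": "2020-01-01T00:00:00Z", "value": 1.0},
        {"timestamp": "2020-01-02T00:00:00Z", "value": 1.1},
    ],
}

# Build the request without sending it, to show the required shape:
# POST, JSON body, key passed in the Ocp-Apim-Subscription-Key header.
request = urllib.request.Request(
    endpoint,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
    },
    method="POST",
)

print(request.get_method())  # POST
```

Sending it with `urllib.request.urlopen(request)` (over HTTPS, so subject to the TLS 1.2 enforcement announced above) returns a JSON verdict for the last point in the series.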
5355

articles/cognitive-services/Computer-vision/Home.md

Lines changed: 2 additions & 0 deletions
@@ -17,6 +17,8 @@ ms.custom: seodec18
 
 # What is Computer Vision?
 
+[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
+
 Azure's Computer Vision service provides developers with access to advanced algorithms that process images and return information, depending on the visual features you're interested in. For example, Computer Vision can determine if an image contains adult content, or it can find all of the human faces in an image.
 
 You can use Computer Vision in your application through a native SDK or by invoking the REST API directly. This page broadly covers what you can do with Computer Vision.

articles/cognitive-services/Content-Moderator/overview.md

Lines changed: 3 additions & 1 deletion
@@ -17,6 +17,8 @@ ms.author: pafarley
 
 # What is Azure Content Moderator?
 
+[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
+
 Azure Content Moderator is a cognitive service that checks text, image, and video content for material that is potentially offensive, risky, or otherwise undesirable. When this material is found, the service applies appropriate labels (flags) to the content. Your app can then handle flagged content in order to comply with regulations or maintain the intended environment for users. See the [Moderation APIs](#moderation-apis) section to learn more about what the different content flags indicate.
 
 ## Where it's used
@@ -73,4 +75,4 @@ As with all of the Cognitive Services, developers using the Content Moderator se
 
 ## Next steps
 
-Get started using the Content Moderator service by following the instructions in [Try Content Moderator on the web](quick-start.md).
+Get started using the Content Moderator service by following the instructions in [Try Content Moderator on the web](quick-start.md).

articles/cognitive-services/Custom-Vision-Service/home.md

Lines changed: 2 additions & 0 deletions
@@ -16,6 +16,8 @@ ms.author: pafarley
 
 # What is Custom Vision?
 
+[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
+
 Custom Vision is a cognitive service that lets you build, deploy, and improve your own image classifiers. An image classifier is an AI service that applies labels (which represent _classes_) to images, according to their visual characteristics. Unlike the [Computer Vision](https://docs.microsoft.com/azure/cognitive-services/computer-vision/home) service, Custom Vision allows you to determine the labels to apply.
 
 ## What it does

articles/cognitive-services/Face/Overview.md

Lines changed: 2 additions & 0 deletions
@@ -15,6 +15,8 @@ ms.author: pafarley
 
 # What is the Azure Face service?
 
+[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
+
 The Azure Cognitive Services Face service provides algorithms that are used to detect, recognize, and analyze human faces in images. The ability to process human face information is important in many different software scenarios. Example scenarios are security, natural user interface, image content analysis and management, mobile apps, and robotics.
 
 The Face service provides several different functions. Each function is outlined in the following sections. Read on to learn more about them.
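The detection function described in this overview is exposed as a REST operation. A hedged Python sketch of the request shape, using a hypothetical key and image URL (the request is constructed but not sent):

```python
import json
import urllib.request

# Hypothetical resource host and key -- substitute your own.
endpoint = "https://westus2.api.cognitive.microsoft.com"
subscription_key = "YOUR_SUBSCRIPTION_KEY"

# Face detection is a POST to the detect operation; the JSON body carries
# the image URL, and query parameters select which attributes to return.
request = urllib.request.Request(
    endpoint + "/face/v1.0/detect?returnFaceAttributes=age,emotion",
    data=json.dumps({"url": "https://example.com/photo.jpg"}).encode("utf-8"),
    headers={
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
    },
    method="POST",
)

print(request.get_method())  # POST
```

Because the call goes over HTTPS, it is covered by the TLS 1.2 enforcement announced in the include above.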

articles/cognitive-services/Face/QuickStarts/CSharp.md

Lines changed: 0 additions & 5 deletions
@@ -74,11 +74,6 @@ Add the following code to the **Main** method of the **Program** class. This cod
 ```csharp
 static void Main(string[] args)
 {
-
-    // Explicitly set TLS 1.2.
-    ServicePointManager.SecurityProtocol = ServicePointManager.SecurityProtocol |
-        SecurityProtocolType.Tls12;
-
     // Get the path and filename to process from the user.
     Console.WriteLine("Detect faces:");
     Console.Write(
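The removed lines had pinned TLS 1.2 explicitly through `ServicePointManager`; with the enforcement announcement, the quickstart relies on the platform's default protocol negotiation instead. For comparison only (not part of the quickstart), a minimal sketch of enforcing the same TLS 1.2 floor in Python with the standard `ssl` module:

```python
import ssl

# Create a default client context and refuse anything older than TLS 1.2,
# mirroring what the removed C# ServicePointManager lines enforced.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```

Any connection opened with this context (for example via `urllib.request.urlopen(url, context=context)`) will fail the handshake if the server offers only TLS 1.0 or 1.1.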

articles/cognitive-services/LUIS/what-is-luis.md

Lines changed: 3 additions & 1 deletion
@@ -7,6 +7,8 @@ ms.date: 02/23/2020
 
 # What is Language Understanding (LUIS)?
 
+[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
+
 Language Understanding (LUIS) is a cloud-based API service that applies custom machine-learning intelligence to a user's conversational, natural language text to predict overall meaning, and pull out relevant, detailed information.
 
 A client application for LUIS is any conversational application that communicates with a user in natural language to complete a task. Examples of client applications include social media apps, chat bots, and speech-enabled desktop applications.
@@ -124,4 +126,4 @@ Samples using LUIS:
 [flow]: https://docs.microsoft.com/connectors/luis/
 [authoring-apis]: https://go.microsoft.com/fwlink/?linkid=2092087
 [endpoint-apis]: https://go.microsoft.com/fwlink/?linkid=2092356
-[qnamaker]: https://qnamaker.ai/
+[qnamaker]: https://qnamaker.ai/

articles/cognitive-services/QnAMaker/Overview/overview.md

Lines changed: 2 additions & 0 deletions
@@ -15,6 +15,8 @@ ms.author: diberry
 
 # What is the QnA Maker service?
 
+[!INCLUDE [TLS 1.2 enforcement](../../../../includes/cognitive-services-tls-announcement.md)]
+
 QnA Maker is a cloud-based Natural Language Processing (NLP) service that easily creates a natural conversational layer over your data. It can be used to find the most appropriate answer for any given natural language input, from your custom knowledge base (KB) of information.
 
 A client application for QnA Maker is any conversational application that communicates with a user in natural language to answer a question. Examples of client applications include social media apps, chat bots, and speech-enabled desktop applications.

articles/cognitive-services/Speech-Service/overview.md

Lines changed: 2 additions & 0 deletions
@@ -14,6 +14,8 @@ ms.author: dapine
 
 # What is the Speech service?
 
+[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
+
 The Speech service is the unification of speech-to-text, text-to-speech, and speech-translation into a single Azure subscription. It's easy to speech enable your applications, tools, and devices with the [Speech SDK](speech-sdk-reference.md), [Speech Devices SDK](https://aka.ms/sdsdk-quickstart), or [REST APIs](rest-apis.md).
 
 > [!IMPORTANT]

articles/cognitive-services/Speech-Service/speech-to-text.md

Lines changed: 2 additions & 0 deletions
@@ -14,6 +14,8 @@ ms.author: dapine
 
 # What is speech-to-text?
 
+[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
+
 Speech-to-text from the Speech service, also known as speech recognition, enables real-time transcription of audio streams into text. Your applications, tools, or devices can consume, display, and take action on this text as command input. This service is powered by the same recognition technology that Microsoft uses for Cortana and Office products. It seamlessly works with the <a href="./speech-translation.md" target="_blank">translation <span class="docon docon-navigate-external x-hidden-focus"></span></a> and <a href="./text-to-speech.md" target="_blank">text-to-speech <span class="docon docon-navigate-external x-hidden-focus"></span></a> service offerings. For a full list of available speech-to-text languages, see [supported languages](language-support.md#speech-to-text).
 
 The speech-to-text service defaults to using the Universal language model. This model was trained using Microsoft-owned data and is deployed in the cloud. It's optimal for conversational and dictation scenarios. When using speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models. Customization is helpful for addressing ambient noise or industry-specific vocabulary.
