
Commit b208ffd

Merge pull request #96023 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)
2 parents aa9c058 + 4f523ae commit b208ffd

File tree

13 files changed (+79, -51 lines changed)


articles/aks/concepts-clusters-workloads.md

Lines changed: 11 additions & 11 deletions
@@ -14,7 +14,7 @@ ms.author: mlearned
 
 As application development moves towards a container-based approach, the need to orchestrate and manage resources is important. Kubernetes is the leading platform that provides the ability to provide reliable scheduling of fault-tolerant application workloads. Azure Kubernetes Service (AKS) is a managed Kubernetes offering that further simplifies container-based application deployment and management.
 
-This article introduces the core Kubernetes infrastructure components such as the *cluster master*, *nodes*, and *node pools*. Workload resources such as *pods*, *deployments*, and *sets* are also introduced, along with how to group resources into *namespaces*.
+This article introduces the core Kubernetes infrastructure components such as the *control plane*, *nodes*, and *node pools*. Workload resources such as *pods*, *deployments*, and *sets* are also introduced, along with how to group resources into *namespaces*.
 
 ## What is Kubernetes?
 

@@ -24,41 +24,41 @@ You can build and run modern, portable, microservices-based applications that be
 
 As an open platform, Kubernetes allows you to build your applications with your preferred programming language, OS, libraries, or messaging bus. Existing continuous integration and continuous delivery (CI/CD) tools can integrate with Kubernetes to schedule and deploy releases.
 
-Azure Kubernetes Service (AKS) provides a managed Kubernetes service that reduces the complexity for deployment and core management tasks, including coordinating upgrades. The AKS cluster masters are managed by the Azure platform, and you only pay for the AKS nodes that run your applications. AKS is built on top of the open-source Azure Kubernetes Service Engine ([aks-engine][aks-engine]).
+Azure Kubernetes Service (AKS) provides a managed Kubernetes service that reduces the complexity for deployment and core management tasks, including coordinating upgrades. The AKS control plane is managed by the Azure platform, and you only pay for the AKS nodes that run your applications. AKS is built on top of the open-source Azure Kubernetes Service Engine ([aks-engine][aks-engine]).
 
 ## Kubernetes cluster architecture
 
 A Kubernetes cluster is divided into two components:
 
-- *Cluster master* nodes provide the core Kubernetes services and orchestration of application workloads.
+- *Control plane* nodes provide the core Kubernetes services and orchestration of application workloads.
 - *Nodes* run your application workloads.
 
-![Kubernetes cluster master and node components](media/concepts-clusters-workloads/cluster-master-and-nodes.png)
+![Kubernetes control plane and node components](media/concepts-clusters-workloads/control-plane-and-nodes.png)
 
-## Cluster master
+## Control plane
 
-When you create an AKS cluster, a cluster master is automatically created and configured. This cluster master is provided as a managed Azure resource abstracted from the user. There's no cost for the cluster master, only the nodes that are part of the AKS cluster.
+When you create an AKS cluster, a control plane is automatically created and configured. This control plane is provided as a managed Azure resource abstracted from the user. There's no cost for the control plane, only the nodes that are part of the AKS cluster.
 
-The cluster master includes the following core Kubernetes components:
+The control plane includes the following core Kubernetes components:
 
 - *kube-apiserver* - The API server is how the underlying Kubernetes APIs are exposed. This component provides the interaction for management tools, such as `kubectl` or the Kubernetes dashboard.
 - *etcd* - To maintain the state of your Kubernetes cluster and configuration, the highly available *etcd* is a key value store within Kubernetes.
 - *kube-scheduler* - When you create or scale applications, the Scheduler determines what nodes can run the workload and starts them.
 - *kube-controller-manager* - The Controller Manager oversees a number of smaller Controllers that perform actions such as replicating pods and handling node operations.
 
-AKS provides a single-tenant cluster master, with a dedicated API server, Scheduler, etc. You define the number and size of the nodes, and the Azure platform configures the secure communication between the cluster master and nodes. Interaction with the cluster master occurs through Kubernetes APIs, such as `kubectl` or the Kubernetes dashboard.
+AKS provides a single-tenant control plane, with a dedicated API server, Scheduler, etc. You define the number and size of the nodes, and the Azure platform configures the secure communication between the control plane and nodes. Interaction with the control plane occurs through Kubernetes APIs, such as `kubectl` or the Kubernetes dashboard.
 
-This managed cluster master means that you don't need to configure components like a highly available *etcd* store, but it also means that you can't access the cluster master directly. Upgrades to Kubernetes are orchestrated through the Azure CLI or Azure portal, which upgrades the cluster master and then the nodes. To troubleshoot possible issues, you can review the cluster master logs through Azure Monitor logs.
+This managed control plane means that you don't need to configure components like a highly available *etcd* store, but it also means that you can't access the control plane directly. Upgrades to Kubernetes are orchestrated through the Azure CLI or Azure portal, which upgrades the control plane and then the nodes. To troubleshoot possible issues, you can review the control plane logs through Azure Monitor logs.
 
-If you need to configure the cluster master in a particular way or need direct access to them, you can deploy your own Kubernetes cluster using [aks-engine][aks-engine].
+If you need to configure the control plane in a particular way or need direct access to it, you can deploy your own Kubernetes cluster using [aks-engine][aks-engine].
 
 For associated best practices, see [Best practices for cluster security and upgrades in AKS][operator-best-practices-cluster-security].
 
 ## Nodes and node pools
 
 To run your applications and supporting services, you need a Kubernetes *node*. An AKS cluster has one or more nodes, which is an Azure virtual machine (VM) that runs the Kubernetes node components and container runtime:
 
-- The `kubelet` is the Kubernetes agent that processes the orchestration requests from the cluster master and scheduling of running the requested containers.
+- The `kubelet` is the Kubernetes agent that processes the orchestration requests from the control plane and scheduling of running the requested containers.
 - Virtual networking is handled by the *kube-proxy* on each node. The proxy routes network traffic and manages IP addressing for services and pods.
 - The *container runtime* is the component that allows containerized applications to run and interact with additional resources such as the virtual network and storage. In AKS, Moby is used as the container runtime.

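The diff above renames *cluster master* to *control plane* in the AKS concepts article and lists the components it contains, including *kube-scheduler*, which "determines what nodes can run the workload". As a rough illustration of that kind of placement decision, here is a toy Python sketch; the node names, capacities, and selection logic are invented for illustration and are not the real kube-scheduler algorithm:

```python
# Toy sketch of a kube-scheduler-style placement decision:
# pick a node with enough free CPU for the requested workload.
# Node data below is hypothetical.

def pick_node(nodes, requested_cpu):
    """Return the name of the least-loaded node that fits, or None."""
    candidates = [n for n in nodes if n["free_cpu"] >= requested_cpu]
    if not candidates:
        return None  # no feasible node: the workload stays Pending
    # Prefer the node with the most free CPU (one common strategy).
    return max(candidates, key=lambda n: n["free_cpu"])["name"]

nodes = [
    {"name": "aks-nodepool1-0", "free_cpu": 500},   # millicores
    {"name": "aks-nodepool1-1", "free_cpu": 1500},
]
print(pick_node(nodes, 1000))  # -> aks-nodepool1-1
print(pick_node(nodes, 2000))  # -> None
```

The real scheduler filters on many more predicates (memory, taints, affinity) and then scores feasible nodes; this sketch only shows the filter-then-pick shape.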
Binary file not shown (17.4 KB)

articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/cpp/windows.md

Lines changed: 8 additions & 7 deletions
@@ -18,6 +18,7 @@ zone_pivot_groups: programming-languages-set-two
 Before you get started, make sure to:
 
 > [!div class="checklist"]
+>
 > * [Create an Azure Speech Resource](../../../../get-started.md)
 > * [Create a LUIS application and get an endpoint key](../../../../quickstarts/create-luis.md)
 > * [Setup your development environment](../../../../quickstarts/setup-platform.md?tabs=windows)
@@ -37,16 +38,16 @@ Let's add some code that works as a skeleton for our project. Make note that you
 
 ## Create a Speech configuration
 
-Before you can initialize a `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoing key and region. Insert this code in the `recognizeIntent()` method.
+Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoint key and region. Insert this code in the `recognizeIntent()` method.
 
 This sample uses the `FromSubscription()` method to build the `SpeechConfig`. For a full list of available methods, see [SpeechConfig Class](https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig).
 
 > [!NOTE]
-> It is important to use the LUIS Endpoint key and not the Starter or Authroing keys as only the Endpoint key is valid for speech to intent recognition. See [Create a LUIS application and get an endpoint key](~/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md) for instructions on how to get the correct key.
+> It is important to use the LUIS Endpoint key and not the Starter or Authoring keys as only the Endpoint key is valid for speech to intent recognition. See [Create a LUIS application and get an endpoint key](~/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md) for instructions on how to get the correct key.
 
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=25)]
 
-## Initialize a IntentRecognizer
+## Initialize an IntentRecognizer
 
 Now, let's create an `IntentRecognizer`. Insert this code in the `recognizeIntent()` method, right below your Speech configuration.
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=28)]
@@ -58,8 +59,8 @@ You now need to associate a `LanguageUnderstandingModel` with the intent recogni
 
 ## Recognize an intent
 
-From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop reconizing speech.
-For similicity we'll wait on the future returned to complete.
+From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech.
+For simplicity we'll wait on the future returned to complete.
 
 Inside the using statement, add this code:
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=44)]
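The hunk above fixes typos in the description of `RecognizeOnceAsync()`, which returns a future the quickstart then blocks on. That block-on-a-future pattern can be sketched language-neutrally with Python's `concurrent.futures` standing in for `std::future`; the `recognize_once` function here is a stand-in, not the Speech SDK:

```python
# Sketch of "wait on the future returned to complete".
# recognize_once is a hypothetical stand-in for the SDK call.
from concurrent.futures import ThreadPoolExecutor
import time

def recognize_once(phrase):
    """Stand-in for an asynchronous single-shot recognition call."""
    time.sleep(0.1)  # simulate the network round-trip to the service
    return {"text": phrase, "reason": "RecognizedIntent"}

executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(recognize_once, "Turn on the lights")
# For simplicity, block until the single recognition completes,
# mirroring the quickstart's wait on the returned future.
result = future.result()
print(result["reason"])  # -> RecognizedIntent
executor.shutdown()
```

Blocking is fine for a one-shot console sample; a real app would more likely attach a continuation or use continuous recognition instead.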
@@ -73,15 +74,15 @@ Inside the using statement, below `RecognizeOnceAsync()`, add this code:
 
 ## Check your code
 
-At this point, your code should look like this:
+At this point, your code should look like this:
 (We've added some comments to this version)
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=6-81)]
 
 ## Build and run your app
 
 Now you're ready to build your app and test our speech recognition using the Speech service.
 
-1. **Compile the code** - From the menu bar of Visual Stuio, choose **Build** > **Build Solution**.
+1. **Compile the code** - From the menu bar of Visual Studio, choose **Build** > **Build Solution**.
 2. **Start your app** - From the menu bar, choose **Debug** > **Start Debugging** or press **F5**.
 3. **Start recognition** - It'll prompt you to speak a phrase in English. Your speech is sent to the Speech service, transcribed as text, and rendered in the console.

articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/csharp/dotnet.md

Lines changed: 7 additions & 6 deletions
@@ -18,6 +18,7 @@ zone_pivot_groups: programming-languages-set-two
 Before you get started, make sure to:
 
 > [!div class="checklist"]
+>
 > * [Create an Azure Speech Resource](../../../../get-started.md)
 > * [Create a LUIS application and get an endpoint key](../../../../quickstarts/create-luis.md)
 > * [Setup your development environment](../../../../quickstarts/setup-platform.md?tabs=dotnet)
@@ -37,18 +38,18 @@ Let's add some code that works as a skeleton for our project. Make note that you
 
 ## Create a Speech configuration
 
-Before you can initialize a `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoing key and region. Insert this code in the `RecognizeIntentAsync()` method.
+Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoint key and region. Insert this code in the `RecognizeIntentAsync()` method.
 
 This sample uses the `FromSubscription()` method to build the `SpeechConfig`. For a full list of available methods, see [SpeechConfig Class](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig?view=azure-dotnet).
 
 > [!NOTE]
-> It is important to use the LUIS Endpoint key and not the Starter or Authroing keys as only the Endpoint key is valid for speech to intent recognition. See [Create a LUIS application and get an endpoint key](~/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md) for instructions on how to get the correct key.
+> It is important to use the LUIS Endpoint key and not the Starter or Authoring keys as only the Endpoint key is valid for speech to intent recognition. See [Create a LUIS application and get an endpoint key](~/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md) for instructions on how to get the correct key.
 
 [!code-csharp[](~/samples-cognitive-services-speech-sdk/quickstart/csharp/dotnet/intent-recognition/helloworld/Program.cs?range=26)]
 
-## Initialize a IntentRecognizer
+## Initialize an IntentRecognizer
 
-Now, let's create a `IntentRecognizer`. This object is created inside of a using statement to ensure the proper release of unmanaged resources. Insert this code in the `RecognizeIntentAsync()` method, right below your Speech configuration.
+Now, let's create an `IntentRecognizer`. This object is created inside of a using statement to ensure the proper release of unmanaged resources. Insert this code in the `RecognizeIntentAsync()` method, right below your Speech configuration.
 [!code-csharp[](~/samples-cognitive-services-speech-sdk/quickstart/csharp/dotnet/intent-recognition/helloworld/Program.cs?range=28-30,76)]
 
 ## Add a LanguageUnderstandingModel and Intents
@@ -58,7 +59,7 @@ You now need to associate a `LanguageUnderstandingModel` with the intent recogni
 
 ## Recognize an intent
 
-From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop reconizing speech.
+From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech.
 
 Inside the using statement, add this code:
 [!code-csharp[](~/samples-cognitive-services-speech-sdk/quickstart/csharp/dotnet/intent-recognition/helloworld/Program.cs?range=46)]
@@ -72,7 +73,7 @@ Inside the using statement, below `RecognizeOnceAsync()`, add this code:
 
 ## Check your code
 
-At this point, your code should look like this:
+At this point, your code should look like this:
 (We've added some comments to this version)
 [!code-csharp[](~/samples-cognitive-services-speech-sdk/quickstart/csharp/dotnet/intent-recognition/helloworld/Program.cs?range=5-86)]
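The C# quickstart above notes that the `IntentRecognizer` "is created inside of a using statement to ensure the proper release of unmanaged resources". The analogous pattern in Python is a context manager; here is a sketch with a hypothetical stand-in class, not the Speech SDK's actual type:

```python
# Sketch of deterministic resource release, mirroring C#'s using statement.
# IntentRecognizer here is a hypothetical stand-in class.
class IntentRecognizer:
    def __init__(self):
        self.closed = False
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        self.closed = True   # release native/unmanaged resources here
        return False         # don't swallow exceptions from the block

with IntentRecognizer() as recognizer:
    assert not recognizer.closed  # still usable inside the block
# On leaving the block, __exit__ has run, even if an exception occurred.
print(recognizer.closed)  # -> True
```

Like `using` in C#, the `with` statement guarantees cleanup at block exit rather than leaving it to the garbage collector.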

articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/header.md

Lines changed: 3 additions & 2 deletions
@@ -15,7 +15,8 @@ zone_pivot_groups: programming-languages-set-two
 
 In this quickstart you will use the [Speech SDK](~/articles/cognitive-services/speech-service/speech-sdk.md) to interactively recognize speech from audio data captured from a microphone. After satisfying a few prerequisites, recognizing speech from a microphone only takes four steps:
 > [!div class="checklist"]
+>
 > * Create a ````SpeechConfig```` object from your subscription key and region.
-> * Create a ````IntentRecognizer```` object using the ````SpeechConfig```` object from above.
+> * Create an ````IntentRecognizer```` object using the ````SpeechConfig```` object from above.
 > * Using the ````IntentRecognizer```` object, start the recognition process for a single utterance.
-> * Inspect the ````IntentRecognitionResult```` returned.
+> * Inspect the ````IntentRecognitionResult```` returned.

articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/java/jre.md

Lines changed: 7 additions & 6 deletions
@@ -18,6 +18,7 @@ zone_pivot_groups: programming-languages-set-two
 Before you get started, make sure to:
 
 > [!div class="checklist"]
+>
 > * [Create an Azure Speech Resource](../../../../get-started.md)
 > * [Create a LUIS application and get an endpoint key](../../../../quickstarts/create-luis.md)
 > * [Setup your development environment](../../../../quickstarts/setup-platform.md?tabs=jre)
@@ -34,18 +35,18 @@ Let's add some code that works as a skeleton for our project.
 
 ## Create a Speech configuration
 
-Before you can initialize a `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoing key and region. Insert this code in the try / catch block in main
+Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoint key and region. Insert this code in the try / catch block in main
 
 This sample uses the `FromSubscription()` method to build the `SpeechConfig`. For a full list of available methods, see [SpeechConfig Class](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig?view=azure-dotnet).
 
 > [!NOTE]
-> It is important to use the LUIS Endpoint key and not the Starter or Authroing keys as only the Endpoint key is valid for speech to intent recognition. See [Create a LUIS application and get an endpoint key](~/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md) for instructions on how to get the correct key.
+> It is important to use the LUIS Endpoint key and not the Starter or Authoring keys as only the Endpoint key is valid for speech to intent recognition. See [Create a LUIS application and get an endpoint key](~/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md) for instructions on how to get the correct key.
 
 [!code-java[](~/samples-cognitive-services-speech-sdk/quickstart/java/jre/intent-recognition/src/speechsdk/quickstart/Main.java?range=27)]
 
-## Initialize a IntentRecognizer
+## Initialize an IntentRecognizer
 
-Now, let's create a `IntentRecognizer`. Insert this code right below your Speech configuration.
+Now, let's create an `IntentRecognizer`. Insert this code right below your Speech configuration.
 [!code-java[](~/samples-cognitive-services-speech-sdk/quickstart/java/jre/intent-recognition/src/speechsdk/quickstart/Main.java?range=30)]
 
 ## Add a LanguageUnderstandingModel and Intents
@@ -55,7 +56,7 @@ You now need to associate a `LanguageUnderstandingModel` with the intent recogni
 
 ## Recognize an intent
 
-From the `IntentRecognizer` object, you're going to call the `recognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop reconizing speech.
+From the `IntentRecognizer` object, you're going to call the `recognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech.
 
 [!code-java[](~/samples-cognitive-services-speech-sdk/quickstart/java/jre/intent-recognition/src/speechsdk/quickstart/Main.java?range=41)]

@@ -73,7 +74,7 @@ It's important to release the speech resources when you're done using them. Inse
 
 ## Check your code
 
-At this point, your code should look like this:
+At this point, your code should look like this:
 (We've added some comments to this version)
 [!code-java[](~/samples-cognitive-services-speech-sdk/quickstart/java/jre/intent-recognition/src/speechsdk/quickstart/Main.java?range=6-76)]
