articles/ai-services/speech-service/batch-transcription.md (1 addition, 1 deletion)
@@ -36,7 +36,7 @@ To use the batch transcription REST API:
  1. [Get batch transcription results](batch-transcription-get.md) - Check transcription status and retrieve transcription results asynchronously.

  > [!IMPORTANT]
- > Batch transcription jobs are scheduled on a best-effort basis. At pick hours it may take up to 30 minutes or longer for a transcription job to start processing. See how to check the current status of a batch transcription job in [this section](batch-transcription-get.md#get-transcription-status).
+ > Batch transcription jobs are scheduled on a best-effort basis. At peak hours it may take up to 30 minutes or longer for a transcription job to start processing. See how to check the current status of a batch transcription job in [this section](batch-transcription-get.md#get-transcription-status).
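The note above tells readers to check job status while they wait; a minimal, hedged sketch of such a status poll, assuming the Speech to text REST API v3.1 transcriptions endpoint — the region, key variable, and transcription ID below are placeholders:

```shell
# Hedged sketch: poll the status of a batch transcription job.
# REGION, SPEECH_KEY, and TRANSCRIPTION_ID are placeholder assumptions.
REGION="eastus"
TRANSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
URL="https://${REGION}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/${TRANSCRIPTION_ID}"
echo "$URL"
# With a real resource key, the job status (NotStarted, Running, Succeeded, Failed)
# can be read from the JSON response:
# curl -s -H "Ocp-Apim-Subscription-Key: ${SPEECH_KEY}" "$URL" | jq -r '.status'
```

Because jobs are scheduled best-effort, a consumer would typically re-run this check on a backoff timer rather than assume the job starts immediately.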
articles/aks/azure-cni-overlay.md (1 addition, 1 deletion)
@@ -169,7 +169,7 @@ The `--pod-cidr` parameter is required when upgrading from legacy CNI because th
  [!INCLUDE [preview features callout](includes/preview/preview-callout.md)]

- You must register the `Microsoft.ContainerService` `AzureOverlayDualStackPreview` feature flag.
+ You must have the latest aks-preview Azure CLI extension installed and register the `Microsoft.ContainerService` `AzureOverlayDualStackPreview` feature flag.

  Update an existing Kubenet cluster to use Azure CNI Overlay using the [`az aks update`][az-aks-update] command.
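A hedged sketch of the two prerequisites the updated line names — installing the aks-preview extension and registering the feature flag — assuming the Azure CLI is installed and signed in to the right subscription (feature registration can take several minutes to complete):

```shell
# Sketch only: prerequisites for the dual-stack overlay preview, per the text above.
az extension add --name aks-preview --upgrade   # install/refresh the aks-preview extension
az feature register --namespace Microsoft.ContainerService \
  --name AzureOverlayDualStackPreview
# Re-check until the state reads "Registered", then refresh the resource provider:
az feature show --namespace Microsoft.ContainerService \
  --name AzureOverlayDualStackPreview --query properties.state
az provider register --namespace Microsoft.ContainerService
```

Only after the feature shows as registered would the `az aks update` command below pick up the preview behavior.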
articles/cosmos-db/analytical-store-change-data-capture.md (4 additions, 4 deletions)
@@ -29,7 +29,7 @@ In addition to providing incremental data feed from analytical store to diverse
  - Supports applying filters, projections and transformations on the Change feed via source query
  - Multiple change feeds on the same container can be consumed simultaneously
  - Each change in container appears exactly once in the change data capture feed, and the checkpoints are managed internally for you
- - Changes can be synchronized "from the Beginning" or "from a given timestamp" or "from now"
+ - Changes can be synchronized "from the beginning" or "from a given timestamp" or "from now"
  - There's no limitation around the fixed data retention period for which changes are available

  ## Efficient incremental data capture with internally managed checkpoints
@@ -46,7 +46,7 @@ Change data capture in Azure Cosmos DB analytical store supports the following k
  ### Capturing changes from the beginning

- When the `Start from beginning` option is selected, the initial load includes a full snapshot of container data in the first run, and changed or incremental data is captured in subsequent runs. This is limited by the `analytical TTL` property and documents TTL-removed from analytical store are not included in the change feed. Example: Imagine a container with `analytical TTL` set to 31536000 seconds, what is equivalent to 1 year. If you create a CDC process for this container, only documents newer than 1 year will be included in the initial load.
+ When the `Start from beginning` option is selected, the initial load includes a full snapshot of container data in the first run, and changed or incremental data is captured in subsequent runs. This is limited by the `analytical TTL` property and documents TTL-removed from analytical store are not included in the change feed. Example: Imagine a container with `analytical TTL` set to 31536000 seconds, which is equivalent to 1 year. If you create a CDC process for this container, only documents newer than 1 year will be included in the initial load.
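The 1-year figure in the TTL example above is easy to sanity-check with shell arithmetic, assuming a non-leap 365-day year:

```shell
# 365 days x 24 hours x 60 minutes x 60 seconds = 31536000 seconds
SECONDS_PER_YEAR=$((365 * 24 * 60 * 60))
echo "$SECONDS_PER_YEAR"   # prints 31536000
```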
### Capturing changes from a given timestamp
@@ -85,7 +85,7 @@ You can create multiple processes to consume CDC in analytical store. This appro
  ### Throughput isolation, lower latency and lower TCO

- Operations on Cosmos DB analytical store don't consume the provisioned RUs and so don't affect your transactional workloads. change data capture with analytical store also has lower latency and lower TCO. The lower latency is attributed to analytical store enabling better parallelism for data processing and reduces the overall TCO enabling you to drive cost efficiencies in these rapidly shifting economic conditions.
+ Operations on Cosmos DB analytical store don't consume the provisioned RUs and so don't affect your transactional workloads. Change data capture with analytical store also has lower latency and lower TCO. The lower latency is attributed to analytical store enabling better parallelism for data processing and reduces the overall TCO enabling you to drive cost efficiencies in these rapidly shifting economic conditions.
## Scenarios
@@ -111,7 +111,7 @@ Change data capture capability enables an end-to-end analytical solution providi
  The linked service interface for the API for MongoDB isn't available within Azure Data Factory data flows yet. You can use your API for MongoDB's account endpoint with the **Azure Cosmos DB for NoSQL** linked service interface as a workaround until the Mongo linked service is directly supported.

- In the interface for a new NoSQL linked service, select **Enter Manually** to provide the Azure Cosmos DB account information. Here, use the account's NoSQL document endpoint (ex: `https://<account-name>.documents.azure.com:443/`) instead of the Mongo DB endpoint (ex: `mongodb://<account-name>.mongo.cosmos.azure.com:10255/`)
+ In the interface for a new NoSQL linked service, select **Enter Manually** to provide the Azure Cosmos DB account information. Here, use the account's NoSQL document endpoint (Example: `https://<account-name>.documents.azure.com:443/`) instead of the Mongo DB endpoint (Example: `mongodb://<account-name>.mongo.cosmos.azure.com:10255/`)
articles/machine-learning/how-to-setup-authentication.md (1 addition, 1 deletion)
@@ -35,7 +35,7 @@ Microsoft Entra Conditional Access can be used to further control or restrict ac
  ## Prerequisites

  * Create an [Azure Machine Learning workspace](how-to-manage-workspace.md).
- * [Configure your development environment](how-to-configure-environment.md) or use a [Azure Machine Learning compute instance](how-to-create-compute-instance.md) and install the [Azure Machine Learning SDK v2](https://aka.ms/sdk-v2-install).
+ * [Configure your development environment](how-to-configure-environment.md) or use an [Azure Machine Learning compute instance](how-to-create-compute-instance.md) and install the [Azure Machine Learning SDK v2](https://aka.ms/sdk-v2-install).

  * Install the [Azure CLI](/cli/azure/install-azure-cli).
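Once the prerequisites above are in place, a hedged sketch of verifying that the signed-in identity can reach a workspace from the CLI — the `ml` extension name is real, but the workspace and resource group names below are placeholders:

```shell
# Sketch only: confirm Microsoft Entra sign-in and workspace access before using the SDK.
az login                     # interactive Microsoft Entra sign-in
az extension add --name ml   # Azure ML CLI v2 extension
az ml workspace show \
  --name my-workspace \              # hypothetical workspace name
  --resource-group my-resource-group # hypothetical resource group
```

If this succeeds, SDK v2 code running in the same environment can typically authenticate with the same credentials.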
articles/sentinel/sap/deploy-data-connector-agent-container.md (3 additions, 0 deletions)
@@ -231,6 +231,9 @@ In this section, you deploy the data connector agent. After you deploy the agent
  :::image type="content" source="media/deploy-data-connector-agent-container/finish-agent-deployment.png" alt-text="Screenshot of the final stage of the agent deployment.":::

  1. Under **Just one step before we finish**, select **Copy** :::image type="content" source="media/deploy-data-connector-agent-container/copy-icon.png" alt-text="Screenshot of the Copy icon." border="false"::: next to **Agent command**.
+
+    The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl). You can supply additional parameters to the script to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
+
  1. In your target VM (the VM where you plan to install the agent), open a terminal and run the command you copied in the previous step.

  The relevant agent information is deployed into Azure Key Vault, and the new agent is visible in the table under **Add an API based collector agent**.
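The added paragraph lists the utilities the kickstart script installs or relies on; a quick, hedged pre-check on the target VM before running the copied command (the tool list is taken from that paragraph — the script itself installs missing ones):

```shell
# Report which of the tools named above are already present on this VM.
for tool in docker jq nc curl az; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok:      $tool"
  else
    echo "missing: $tool"
  fi
done
```

A "missing" entry isn't necessarily an error, since the kickstart script is described as installing these itself; the check just shows what the script will need to add.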