
Commit 67aab8a

Merge pull request #290084 from MicrosoftDocs/main
11/7/2024 AM Publish
2 parents 46b52e3 + 35fd9e2 commit 67aab8a

28 files changed (+56 −66 lines)

articles/app-service/configure-gateway-required-vnet-integration.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -35,7 +35,7 @@ You can't use gateway-required virtual network integration:
 
 To create a gateway:
 
-1. [Create the VPN gateway and subnet](../vpn-gateway/point-to-site-certificate-gateway.md#creategw). Select a route-based VPN type.
+1. [Create the VPN gateway and subnet](../vpn-gateway/tutorial-create-gateway-portal.md). Select a route-based VPN type.
 
 1. [Set the point-to-site addresses](../vpn-gateway/point-to-site-certificate-gateway.md#addresspool). If the gateway isn't in the basic SKU, then IKEV2 must be disabled in the point-to-site configuration and SSTP must be selected. The point-to-site address space must be in the RFC 1918 address blocks 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.
 
```
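The changed step now links to the portal tutorial. For readers scripting the same setup, a rough Azure CLI equivalent follows; the resource names are placeholders, and the SKU and address range are illustrative assumptions rather than values from the article.

```azurecli
# Sketch: create a route-based VPN gateway (names and SKU are placeholders).
az network vnet-gateway create \
  --resource-group <resource-group> \
  --name <gateway-name> \
  --vnet <vnet-name> \
  --public-ip-address <public-ip-name> \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1

# Set the point-to-site address pool (must come from the RFC 1918 blocks) and,
# per the doc, select SSTP when IKEv2 is disabled on non-basic SKUs.
az network vnet-gateway update \
  --resource-group <resource-group> \
  --name <gateway-name> \
  --address-prefixes 172.16.201.0/24 \
  --client-protocol SSTP
```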

articles/azure-vmware/configure-vsan.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -88,8 +88,8 @@ Run the `Set-vSANCompressDedupe` cmdlet to set preferred space efficiency model.
 >[!NOTE]
 >Setting Compression to False and Deduplication to True sets vSAN to Dedupe and Compression.
 >Setting Compression to False and Dedupe to False, disables all space efficiency.
->Azure VMware Solution default is Dedupe and Compression
->Compression only provides slightly better performance
+>Azure VMware Solution default is Dedupe and Compression.
+>Compression only provides slightly better performance.
 >Disabling both compression and deduplication offers the greatest performance gains, however at the cost of space utilization.
 
 1. Check **Notifications** to see the progress.
```
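`Set-vSANCompressDedupe` runs through the Azure VMware Solution Run Command feature. As a hedged sketch, the same cmdlet can also be invoked from the CLI with `az vmware script-execution create`; the cmdlet path, parameter names, and values below are assumptions to verify against your private cloud's script packages.

```azurecli
# Sketch only: invoke Set-vSANCompressDedupe as a run command via CLI.
# The script-cmdlet-id path and parameter names are assumptions.
az vmware script-execution create \
  --resource-group <resource-group> \
  --private-cloud <private-cloud-name> \
  --name setVsanSpaceEfficiency \
  --timeout P0Y0M0DT0H60M60S \
  --script-cmdlet-id "/subscriptions/<sub-id>/resourceGroups/<resource-group>/providers/Microsoft.AVS/privateClouds/<private-cloud-name>/scriptPackages/<package>/scriptCmdlets/Set-vSANCompressDedupe" \
  --parameter name=Compression type=Value value=true \
  --parameter name=Dedupe type=Value value=true
```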

articles/communication-services/concepts/call-automation/call-automation.md

Lines changed: 6 additions & 0 deletions
```diff
@@ -197,6 +197,12 @@ Operation Callback URI is an optional parameter in some mid-call APIs that use e
 | `Recognize` | `RecognizeCompleted` / `RecognizeFailed` / `RecognizeCanceled` |
 | `StopContinuousDTMFRecognition` | `ContinuousDtmfRecognitionStopped` |
 | `SendDTMF` | `ContinuousDtmfRecognitionToneReceived` / `ContinuousDtmfRecognitionToneFailed` |
+| `Hold` | `HoldFailed` |
+| `StartMediaStreaming` | `MediaStreamingStarted` / `MediaStreamingFailed` |
+| `StopMediaStreaming` | `MediaStreamingStopped` / `MediaStreamingFailed` |
+| `StartTranscription` | `TranscriptionStarted` / `TranscriptionFailed` |
+| `UpdateTranscription` | `TranscriptionUpdated` / `TranscriptionFailed` |
+| `StopTranscription` | `TranscriptionStopped` / `TranscriptionFailed` |
 
 ## Next steps
 
```
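The new rows list the events that arrive on the per-operation callback URI when one is supplied. As an illustration, a raw REST call to the `Hold` action might pass the override as below; the endpoint shape, api-version, and payload fields are assumptions to verify against the Call Automation REST reference.

```bash
# Hypothetical sketch: override the callback URI for a single Hold operation.
curl -X POST \
  "https://<acs-resource>.communication.azure.com/calling/callConnections/<call-connection-id>:hold?api-version=2024-04-15" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{
        "targetParticipant": { "rawId": "8:acs:<user-id>" },
        "operationCallbackUri": "https://contoso.example/hold-events"
      }'
# HoldFailed (or success) events for this operation would then be delivered to
# https://contoso.example/hold-events instead of the default callback URI.
```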

articles/cost-management-billing/dataset-schema/cost-usage-details-focus.md

Lines changed: 4 additions & 0 deletions
```diff
@@ -20,6 +20,10 @@ To learn more about FOCUS, see [FOCUS: A new specification for cloud cost transp
 
 You can view the latest changes to the FOCUS cost and usage details file schema in the [FinOps Open Cost and Usage Specification changelog](https://github.com/FinOps-Open-Cost-and-Usage-Spec/FOCUS_Spec/blob/working_draft/CHANGELOG.md).
 
+#### Note: Version 1.0r2
+
+FOCUS 1.0r2 is a follow-up release to the FOCUS 1.0 dataset that changes how date columns are formatted, which may impact anyone who is parsing and especially modifying these values. The 1.0r2 dataset is still aligned with the FOCUS 1.0 specification. The "r2" indicates this is the second release of that 1.0 specification. The only change in this release is that all date columns now include seconds to more closely adhere to the FOCUS 1.0 specification. As an example, a 1.0 export may use "2024-01-01T00:00Z" and a 1.0r2 export would use "2024-01-01T00:00:00Z". The only difference is the extra ":00" for seconds at the end of the time segment of the ISO formatted date string.
+
 ## Version 1.0
 
 | Column | Fields | Description |
```
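The added note boils down to one formatting change. A small shell helper (hypothetical, not part of any dataset tooling) shows how a parser might normalize 1.0-style timestamps to the 1.0r2 form with seconds:

```bash
#!/usr/bin/env bash
# Hypothetical helper: pad a FOCUS 1.0 timestamp ("2024-01-01T00:00Z")
# to the 1.0r2 form that includes seconds ("2024-01-01T00:00:00Z").
normalize_focus_date() {
  local ts="$1"
  if [[ "$ts" =~ T[0-9]{2}:[0-9]{2}Z$ ]]; then
    echo "${ts%Z}:00Z"   # append the missing ":00" seconds
  else
    echo "$ts"           # already includes seconds; leave unchanged
  fi
}

normalize_focus_date "2024-01-01T00:00Z"      # -> 2024-01-01T00:00:00Z
normalize_focus_date "2024-01-01T00:00:00Z"   # -> 2024-01-01T00:00:00Z
```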

articles/event-hubs/event-processor-balance-partition-load.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,7 +10,7 @@ ms.date: 07/31/2024
 
 To scale your event processing application, you can run multiple instances of the application and have the load balanced among themselves. In the older and deprecated versions, `EventProcessorHost` allowed you to balance the load between multiple instances of your program and checkpoint events when receiving the events. In the newer versions (5.0 onwards), **EventProcessorClient** (.NET and Java), or **EventHubConsumerClient** (Python and JavaScript) allows you to do the same. The development model is made simpler by using events. You can subscribe to the events that you're interested in by registering an event handler. If you're using the old version of the client library, see the following migration guides: [.NET](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md), [Java](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/servicebus/azure-messaging-servicebus/migration-guide.md), [Python](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/servicebus/azure-servicebus/migration_guide.md), and [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/servicebus/service-bus/migrationguide.md).
 
-This article describes a sample scenario for using multiple instances of client `applications to read events from an event hub. It also gives you details about features of event processor client, which allows you to receive events from multiple partitions at once and load balance with other consumers that use the same event hub and consumer group.
+This article describes a sample scenario for using multiple instances of client applications to read events from an event hub. It also gives you details about features of event processor client, which allows you to receive events from multiple partitions at once and load balance with other consumers that use the same event hub and consumer group.
 
 > [!NOTE]
 > The key to scale for Event Hubs is the idea of partitioned consumers. In contrast to the [competing consumers](/previous-versions/msp-n-p/dn568101(v=pandp.10)) pattern, the partitioned consumer pattern enables high scale by removing the contention bottleneck and facilitating end to end parallelism.
```
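As the corrected paragraph notes, load balancing happens among processor instances that share the same event hub and consumer group, so each processing application typically gets its own consumer group. A quick sketch with placeholder names:

```azurecli
# Create a dedicated consumer group for the processor application
# (resource names are placeholders).
az eventhubs eventhub consumer-group create \
  --resource-group <resource-group> \
  --namespace-name <namespace> \
  --eventhub-name <event-hub> \
  --name my-processor-group
```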

articles/iot-operations/deploy-iot-ops/howto-deploy-iot-operations.md

Lines changed: 0 additions & 6 deletions
````diff
@@ -52,12 +52,6 @@ A cluster host:
 
 If you deployed Azure IoT Operations to your cluster previously, uninstall those resources before continuing. For more information, see [Update Azure IoT Operations](./howto-manage-update-uninstall.md#upgrade).
 
-* Verify that your cluster host is configured correctly for deployment by using the [verify-host](/cli/azure/iot/ops#az-iot-ops-verify-host) command on the cluster host:
-
-  ```azurecli
-  az iot ops verify-host
-  ```
-
 * (Optional) Prepare your cluster for observability before deploying Azure IoT Operations: [Configure observability](../configure-observability-monitoring/howto-configure-observability.md).
 
 ## Deploy
````
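With the verify-host step removed, the prerequisites flow straight into deployment. For orientation, a minimal deployment sketch follows; the command split and flags vary across versions of the azure-iot-ops CLI extension, so treat the exact parameters as assumptions.

```azurecli
# Sketch: install foundational services, then create the instance.
# Flags vary by extension version; verify against `az iot ops --help`.
az iot ops init --cluster <cluster-name> --resource-group <resource-group>
az iot ops create --cluster <cluster-name> --resource-group <resource-group> \
  --name <instance-name>
```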

articles/openshift/confidential-containers-deploy.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,7 +10,7 @@ ms.date: 11/04/2024
 ms.custom: template-how-to
 ---
 
-# Deploy Confidential Containers in an Azure Red Hat OpenShift (ARO) cluster
+# Deploy Confidential Containers in an Azure Red Hat OpenShift (ARO) cluster (Preview)
 
 This article describes the steps required to deploy Confidential Containers for an ARO cluster. This process involves two main parts and multiple steps:
 
```

articles/openshift/confidential-containers-overview.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -7,7 +7,7 @@ ms.service: azure-redhat-openshift
 ms.topic: conceptual
 ms.date: 11/04/2024
 ---
-# Confidential Containers with Azure Red Hat OpenShift
+# Confidential Containers with Azure Red Hat OpenShift (Preview)
 
 Confidential Containers offer a robust solution to protect sensitive data within cloud environments. By using hardware-based trusted execution environments (TEEs), Confidential Containers provide a secure enclave within the host system, isolating applications and their data from potential threats. This isolation ensures that even if the host system is compromised, the confidential data remains protected.
 
```

articles/sentinel/sap/cross-workspace.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -66,7 +66,7 @@ To do this, use the following steps:
 
 - **Use Log Analytics in Azure Monitor to manage access to data by resource**. For more information, see [Manage access to Microsoft Sentinel data by resource](../resource-context-rbac.md).
 
-- **Associate SAP resources with an Azure resource ID**. Specify the required `azure_resource_id` field in the connector configuration section on the data collector that you use to ingest data from the SAP system into Microsoft Sentinel. For more information, see [Connector configuration](reference-systemconfig-json.md#connector-configuration).
+- **Associate SAP resources with an Azure resource ID**. This option is supported only for a data connector agent deployed via CLI. Specify the required `azure_resource_id` field in the connector configuration section on the data collector that you use to ingest data from the SAP system into Microsoft Sentinel. For more information, see [Deploy an SAP data connector agent from the command line](deploy-command-line.md) and [Connector configuration](reference-systemconfig-json.md#connector-configuration).
 
 :::image type="content" source="media/cross-workspace/sap-cross-workspace-combined.png" alt-text="Diagram that shows how to work with the Microsoft Sentinel solution for SAP applications by using the same workspace for SAP and SOC data." border="false":::
 
```
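Because the CLI-deployed agent keeps its connector configuration in `systemconfig.json`, one quick way to confirm the field is set is to search the generated file; the path below comes from the deploy-command-line article in this same commit and may differ in your layout.

```bash
# Check that azure_resource_id is present in the agent's generated config.
grep -i "azure_resource_id" /sapcon-app/sapcon/config/system/systemconfig.json
```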

articles/sentinel/sap/deploy-command-line.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -147,7 +147,7 @@ This procedure describes how to create a new agent and connect it to your SAP sy
    docker update --restart unless-stopped <container-name>
    ```
 
-The deployment procedure generates a **systemconfig.json** file that contains the configuration details for the SAP data connector agent. For more information, see [SAP data connector agent configuration file](deployment-overview.md#sap-data-connector-agent-configuration-file).
+The deployment procedure generates a [**systemconfig.json**](reference-systemconfig-json.md) file that contains the configuration details for the SAP data connector agent. The file is located in the `/sapcon-app/sapcon/config/system` directory on your VM.
 
 ## Deploy the data connector using a configuration file
 
@@ -230,7 +230,7 @@ Azure Key Vault is the recommended method to store your authentication credentia
    docker update --restart unless-stopped <container-name>
    ```
 
-The deployment procedure generates a **systemconfig.json** file that contains the configuration details for the SAP data connector agent. For more information, see [SAP data connector agent configuration file](deployment-overview.md#sap-data-connector-agent-configuration-file).
+The deployment procedure generates a [**systemconfig.json**](reference-systemconfig-json.md) file that contains the configuration details for the SAP data connector agent. The file is located in the `/sapcon-app/sapcon/config/system` directory on your VM.
 
 ## Prepare the kickstart script for secure communication with SNC
````
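After either procedure, you can confirm the container restart policy took effect and inspect the generated configuration at the documented path; the container name below is a placeholder.

```bash
# List the agent container, confirm its restart policy, then inspect the config.
docker ps --format '{{.Names}}\t{{.Status}}'
docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' <container-name>
cat /sapcon-app/sapcon/config/system/systemconfig.json
```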
