
Commit 35bf4a6

Merge pull request #260582 from MicrosoftDocs/main
Release pre post scripts--scheduled release at 9:30PM of 12/06
2 parents 34a6fe7 + 717e64d commit 35bf4a6

52 files changed (+1165 −55 lines)


.openpublishing.redirection.json

Lines changed: 5 additions & 0 deletions
@@ -25737,6 +25737,11 @@
     "redirect_url": "/azure/update-manager/workbooks",
     "redirect_document_id": false
   },
+  {
+    "source_path": "articles/update-manager/whats-upcoming.md",
+    "redirect_url": "/azure/update-manager/whats-new",
+    "redirect_document_id": false
+  },
   {
     "source_path_from_root": "/articles/orbital/delete-contact.md",
     "redirect_url": "/azure/orbital/spacecraft-object",

articles/azure-monitor/essentials/data-collection-transformations.md

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ The following table describes the different goals that you can achieve by using
 You can apply transformations to the following tables in a Log Analytics workspace:
 
 - Any Azure table listed in [Tables that support transformations in Azure Monitor Logs](../logs/tables-feature-support.md)
-- Any custom table
+- Any custom table created for the Azure Monitor Agent. (MMA custom tables can't use transformations.)
 
 ## How transformations work
 Transformations are performed in Azure Monitor in the [data ingestion pipeline](../essentials/data-collection.md) after the data source delivers the data and before it's sent to the destination. The data source might perform its own filtering before sending data but then rely on the transformation for further manipulation before it's sent to the destination.
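
Concretely, a transformation is a KQL query set in the `transformKql` property of a data collection rule (DCR). As a minimal sketch (not part of this commit; the rule file and all resource names are illustrative), such a rule might be deployed with the Azure CLI:

```azurecli-interactive
# Sketch: deploy a DCR whose dataFlows entry includes a transformation, for example
#   "transformKql": "source | where SeverityText != 'Verbose'"
# in dcr-with-transform.json. All resource names here are placeholders.
az monitor data-collection rule create \
    --name "myTransformRule" \
    --resource-group "myResourceGroup" \
    --location "eastus" \
    --rule-file "dcr-with-transform.json"
```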

articles/cosmos-db/mongodb/vcore/how-to-private-link.md

Lines changed: 98 additions & 1 deletion
@@ -32,8 +32,22 @@ To establish a connection, Azure Cosmos DB for MongoDB vCore with Private Link s
 - An existing Azure Cosmos DB for MongoDB vCore cluster.
   - If you don't have an Azure subscription, [create an account for free](https://azure.microsoft.com/free).
   - If you have an existing Azure subscription, [create a new Azure Cosmos DB for MongoDB vCore cluster](quickstart-portal.md).
-- Access to an active Virtual network and Subnet
+- Access to an active virtual network and subnet.
   - If you don’t have a Virtual network, [create a virtual network using the Azure portal](../../../virtual-network/quick-create-portal.md)
+- Verify your access to Azure Cosmos DB for MongoDB vCore Private Endpoint.
+  - If you don't have access, you can request it by following the steps in the next section.
+
+## Request access to Azure Cosmos DB for MongoDB vCore Private Endpoint via the Azure portal
+
+To request access to a private endpoint for an existing Azure Cosmos DB for MongoDB vCore cluster, follow these steps in the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com), and search for **Preview Features** in the search bar.
+
+1. Choose **Azure Cosmos DB for MongoDB vCore Private Endpoint** from the list of available options, and select **Register**.
+
+1. You receive a notification when access to the Private Endpoint is granted.
+
 
 ## Create a private endpoint by using the Azure portal

@@ -91,6 +105,89 @@ Follow these steps to create a private endpoint for an existing Azure Cosmos DB
 
 When you have an approved Private Endpoint for an Azure Cosmos DB account, in the Azure portal, the **All networks** option in the **Firewall and virtual networks** pane is unavailable.
 
+## Create a private endpoint by using the Azure CLI
+
+Run the following Azure CLI script to create a private endpoint named *myPrivateEndpoint* for an existing Azure Cosmos DB for MongoDB vCore cluster. Replace the variable values with the details for your environment.
+
+```azurecli-interactive
+# Resource group where the Azure Cosmos DB for MongoDB vCore cluster and virtual network resources are located
+ResourceGroupName="myResourceGroup"
+
+# Name of the existing Azure Cosmos DB for MongoDB vCore cluster
+MongovCoreClusterName="myMongoCluster"
+
+# Subscription ID where the cluster and virtual network resources are located
+SubscriptionId="<your Azure subscription ID>"
+
+# Group ID (subresource type) for Azure Cosmos DB for MongoDB vCore
+CosmosDbSubResourceType="MongoCluster"
+
+# Name of the virtual network to create
+VNetName="myVnet"
+
+# Name of the subnet to create
+SubnetName="mySubnet"
+
+# Name of the private endpoint to create
+PrivateEndpointName="myPrivateEndpoint"
+
+# Name of the private endpoint connection to create
+PrivateConnectionName="myConnection"
+
+az network vnet create \
+    --name $VNetName \
+    --resource-group $ResourceGroupName \
+    --subnet-name $SubnetName
+
+az network vnet subnet update \
+    --name $SubnetName \
+    --resource-group $ResourceGroupName \
+    --vnet-name $VNetName \
+    --disable-private-endpoint-network-policies true
+
+az network private-endpoint create \
+    --name $PrivateEndpointName \
+    --resource-group $ResourceGroupName \
+    --vnet-name $VNetName \
+    --subnet $SubnetName \
+    --private-connection-resource-id "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.DocumentDB/mongoClusters/$MongovCoreClusterName" \
+    --group-ids $CosmosDbSubResourceType \
+    --connection-name $PrivateConnectionName
+```
+
+### Integrate the private endpoint with a private DNS zone
+
+After you create the private endpoint, you can integrate it with a private DNS zone by using the following Azure CLI script:
+
+```azurecli-interactive
+# Zone name differs based on the API type and group ID you're using.
+zoneName="privatelink.mongocluster.azure.com"
+
+az network private-dns zone create \
+    --resource-group $ResourceGroupName \
+    --name $zoneName
+
+az network private-dns link vnet create \
+    --resource-group $ResourceGroupName \
+    --zone-name $zoneName \
+    --name <dns-link-name> \
+    --virtual-network $VNetName \
+    --registration-enabled false
+
+# Create a DNS zone group
+az network private-endpoint dns-zone-group create \
+    --resource-group $ResourceGroupName \
+    --endpoint-name $PrivateEndpointName \
+    --name <zone-group-name> \
+    --private-dns-zone $zoneName \
+    --zone-name mongocluster
+```
+
+## MongoClusters commands on Private Link
+
+To list the Private Link resources supported by a MongoDB vCore cluster, run:
+
+```azurecli-interactive
+az network private-link-resource list \
+    -g <rg-name> \
+    -n <resource-name> \
+    --type Microsoft.DocumentDB/mongoClusters
+```
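
As a quick sanity check after the scripts above (a sketch, not part of this commit; it reuses the same variable names, and the `--query` path is an assumption about the private endpoint's output shape), you can confirm that the endpoint provisioned and its connection was approved:

```azurecli-interactive
# Show provisioning state and connection approval status for the new endpoint
az network private-endpoint show \
    --name $PrivateEndpointName \
    --resource-group $ResourceGroupName \
    --query "{state:provisioningState, connection:privateLinkServiceConnections[0].privateLinkServiceConnectionState.status}"
```
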
## View private endpoints by using the Azure portal

Follow these steps to view a private endpoint for an existing Azure Cosmos DB account by using the Azure portal:

articles/data-manager-for-agri/concepts-ingest-satellite-imagery.md

Lines changed: 2 additions & 1 deletion
@@ -7,6 +7,7 @@ ms.service: data-manager-for-agri
 ms.topic: conceptual
 ms.date: 11/17/2023
 ms.custom: template-concept
+show_latex: true
 ---
 
 # Using satellite imagery in Azure Data Manager for Agriculture
@@ -124,4 +125,4 @@ The image names and resolutions supported by APIs used to ingest and read satell
 
 ## Next steps
 
-* Test our APIs [here](/rest/api/data-manager-for-agri).
+* Test our APIs [here](/rest/api/data-manager-for-agri).

articles/hdinsight/hdinsight-administer-use-portal-linux.md

Lines changed: 5 additions & 5 deletions
@@ -4,14 +4,14 @@ description: Learn how to create and manage Azure HDInsight clusters using the A
 ms.service: hdinsight
 ms.topic: conceptual
 ms.custom: hdinsightactive
-ms.date: 11/11/2022
+ms.date: 12/06/2023
 ---
 
 # Manage Apache Hadoop clusters in HDInsight by using the Azure portal
 
 [!INCLUDE [selector](includes/hdinsight-portal-management-selector.md)]
 
-Using the [Azure portal](https://portal.azure.com), you can manage [Apache Hadoop](https://hadoop.apache.org/) clusters in Azure HDInsight. Use the tab selector above for information on managing Hadoop clusters in HDInsight using other tools.
+Using the [Azure portal](https://portal.azure.com), you can manage [Apache Hadoop](https://hadoop.apache.org/) clusters in Azure HDInsight. Use the tab selector for information on managing Hadoop clusters in HDInsight using other tools.
 
 ## Prerequisites
 
@@ -23,13 +23,13 @@ Sign in to [https://portal.azure.com](https://portal.azure.com).
 
 ## <a name="showClusters"></a> List and show clusters
 
-The **HDInsight clusters** page will list your existing clusters. From the portal:
+The **HDInsight clusters** page lists your existing clusters. From the portal:
 1. Select **All services** from the left menu.
 2. Select **HDInsight clusters** under **ANALYTICS**.
 
 ## <a name="homePage"></a> Cluster home page
 
-Select your cluster name from the [**HDInsight clusters**](#showClusters) page. This will open the **Overview** view, which looks similar to the following image:
+Select your cluster name from the [**HDInsight clusters**](#showClusters) page. This opens the **Overview** view, which looks similar to the following image:
 
 :::image type="content" source="./media/hdinsight-administer-use-portal-linux/hdinsight-essentials2.png" alt-text="Azure portal HDInsight cluster essentials":::
 
@@ -98,7 +98,7 @@ From the [cluster home page](#homePage), under **Settings** select **Properties
 |CLUSTER URL|The URL for the Ambari web interface.|
 |Private Endpoint|The private endpoint for the cluster.|
 |Secure shell (SSH)|The username and host name to use in accessing the cluster via SSH.|
-|STATUS|One of: Aborted, Accepted, ClusterStorageProvisioned, AzureVMConfiguration, HDInsightConfiguration, Operational, Running, Error, Deleting, Deleted, Timedout, DeleteQueued, DeleteTimedout, DeleteError, PatchQueued, CertRolloverQueued, ResizeQueued, or ClusterCustomization.|
+|STATUS|One of: Aborted, Accepted, ClusterStorageProvisioned, AzureVMConfiguration, HDInsightConfiguration, Operational, Running, Error, Deleting, Deleted, Timeout, DeleteQueued, DeleteTimeout, DeleteError, PatchQueued, CertRolloverQueued, ResizeQueued, or ClusterCustomization.|
 |REGION|Azure location. For a list of supported Azure locations, see the **Region** drop-down list box on [HDInsight pricing](https://azure.microsoft.com/pricing/details/hdinsight/).|
 |DATE CREATED|The date the cluster was deployed.|
 |OPERATING SYSTEM|Either **Windows** or **Linux**.|
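
The portal steps in this article have a CLI counterpart. As a sketch (not part of this commit; the JMESPath property names are assumptions about the cluster resource's shape), you can list clusters together with a few of the properties from the preceding table:

```azurecli-interactive
# List HDInsight clusters with name, state, and region columns
az hdinsight list \
    --query "[].{name:name, state:properties.clusterState, region:location}" \
    --output table
```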

articles/hdinsight/hdinsight-scaling-best-practices.md

Lines changed: 8 additions & 8 deletions
@@ -6,7 +6,7 @@ author: yeturis
 ms.service: hdinsight
 ms.topic: how-to
 ms.custom: seoapr2020
-ms.date: 11/17/2022
+ms.date: 12/07/2023
 ---
 
 # Manually scale Azure HDInsight clusters
@@ -15,7 +15,7 @@ HDInsight provides elasticity with options to scale up and scale down the number
 
 Scale up your cluster before periodic batch processing so the cluster has adequate resources. After processing completes, and usage goes down, scale down the HDInsight cluster to fewer worker nodes.
 
-You can scale a cluster manually using one of the methods outlined below. You can also use [autoscale](hdinsight-autoscale-clusters.md) options to automatically scale up and down in response to certain metrics.
+You can scale a cluster manually using one of the following methods. You can also use [autoscale](hdinsight-autoscale-clusters.md) options to automatically scale up and down in response to certain metrics.
 
 > [!NOTE]
 > Only clusters with HDInsight version 3.1.3 or higher are supported. If you are unsure of the version of your cluster, you can check the Properties page.
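
One of those methods is the Azure CLI. As a minimal sketch (not part of this commit; the cluster and resource group names are placeholders), a manual resize looks like this:

```azurecli-interactive
# Scale the cluster to four worker nodes
az hdinsight resize \
    --name myHDInsightCluster \
    --resource-group myResourceGroup \
    --workernode-count 4
```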
@@ -42,9 +42,9 @@ Using any of these methods, you can scale your HDInsight cluster up or down with
 
 ## Impact of scaling operations
 
-When you **add** nodes to your running HDInsight cluster (scale up), jobs won't be affected. New jobs can be safely submitted while the scaling process is running. If the scaling operation fails, the failure will leave your cluster in a functional state.
+When you **add** nodes to your running HDInsight cluster (scale up), jobs remain unaffected. New jobs can be safely submitted while the scaling process is running. If the scaling operation fails, the failure leaves your cluster in a functional state.
 
-If you **remove** nodes (scale down), pending or running jobs will fail when the scaling operation completes. This failure is because of some of the services restarting during the scaling process. Your cluster may get stuck in safe mode during a manual scaling operation.
+If you **remove** nodes (scale down), pending or running jobs fail when the scaling operation completes. This failure is because some of the services restart during the scaling process. Your cluster may get stuck in safe mode during a manual scaling operation.
 
 The impact of changing the number of data nodes varies for each type of cluster supported by HDInsight:
 
@@ -72,7 +72,7 @@ The impact of changing the number of data nodes varies for each type of cluster
 
 * Apache Hive LLAP
 
-  After scaling to `N` worker nodes, HDInsight will automatically set the following configurations and restart Hive.
+  After scaling to `N` worker nodes, HDInsight automatically sets the following configurations and restarts Hive.
 
   * Maximum Total Concurrent Queries: `hive.server2.tez.sessions.per.default.queue = min(N, 32)`
   * Number of nodes used by Hive's LLAP: `num_llap_nodes = N`
@@ -152,7 +152,7 @@ The following sections describe these options.
 
 Stop all Hive jobs before scaling down to one worker node. If your workload is scheduled, then execute your scale-down after Hive work is done.
 
-Stopping the Hive jobs before scaling, helps minimize the number of scratch files in the tmp folder (if any).
+Stopping the Hive jobs before scaling helps minimize the number of scratch files in the tmp folder (if any).
 
 #### Manually clean up Hive's scratch files
 
@@ -199,11 +199,11 @@ If Hive has left behind temporary files, then you can manually clean up those fi
 
 If your clusters get stuck in safe mode frequently when scaling down to fewer than three worker nodes, then keep at least three worker nodes.
 
-Having three worker nodes is more costly than scaling down to only one worker node. However, this action will prevent your cluster from getting stuck in safe mode.
+Having three worker nodes is more costly than scaling down to only one worker node. However, this action prevents your cluster from getting stuck in safe mode.
 
 ### Scale HDInsight down to one worker node
 
-Even when the cluster is scaled down to one node, worker node 0 will still survive. Worker node 0 can never be decommissioned.
+Even when the cluster is scaled down to one node, worker node 0 still survives. Worker node 0 can never be decommissioned.
 
 #### Run the command to leave safe mode
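
The diff truncates before the command itself. For context, the minimal form of the safe-mode exit command (run from an SSH session to the cluster; the article's exact invocation may differ) is:

```bash
# Ask the HDFS name node to leave safe mode
hdfs dfsadmin -safemode leave
```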

articles/hdinsight/kafka/apache-kafka-connector-iot-hub.md

Lines changed: 11 additions & 10 deletions
@@ -5,7 +5,7 @@ description: Learn how to use Apache Kafka on HDInsight with Azure IoT Hub. The
 ms.service: hdinsight
 ms.topic: how-to
 ms.custom: hdinsightactive
-ms.date: 11/17/2022
+ms.date: 12/05/2023
 ---
 
 # Use Apache Kafka on HDInsight with Azure IoT Hub
@@ -44,11 +44,11 @@ For more information on the Connect API, see [https://kafka.apache.org/documenta
    sbt assembly
    ```
 
-   The build will take a few minutes to complete. The command creates a file named `kafka-connect-iothub-assembly_2.11-0.7.0.jar` in the `toketi-kafka-connect-iothub-master\target\scala-2.11` directory for the project.
+   The build takes a few minutes to complete. The command creates a file named `kafka-connect-iothub-assembly_2.11-0.7.0.jar` in the `toketi-kafka-connect-iothub-master\target\scala-2.11` directory for the project.
 
 ## Install the connector
 
-1. Upload the .jar file to the edge node of your Kafka on HDInsight cluster. Edit the command below by replacing `CLUSTERNAME` with the actual name of your cluster. The default values for the SSH user account and name of [edge node](../hdinsight-apps-use-edge-node.md#access-an-edge-node) are used below, modify as needed.
+1. Upload the .jar file to the edge node of your Kafka on HDInsight cluster. Edit the following command by replacing `CLUSTERNAME` with the actual name of your cluster. The default values for the SSH user account and the name of the [edge node](../hdinsight-apps-use-edge-node.md#access-an-edge-node) are used; modify them as needed.
 
    ```cmd
    scp kafka-connect-iothub-assembly*.jar sshuser@new-edgenode.CLUSTERNAME-ssh.azurehdinsight.net:
@@ -115,7 +115,7 @@ From your SSH connection to the edge node, use the following steps to configure
    |---|---|---|
    |`bootstrap.servers=localhost:9092`|Replace the `localhost:9092` value with the broker hosts from the previous step|Configures the standalone configuration for the edge node to find the Kafka brokers.|
    |`key.converter=org.apache.kafka.connect.json.JsonConverter`|`key.converter=org.apache.kafka.connect.storage.StringConverter`|This change allows you to test using the console producer included with Kafka. You may need different converters for other producers and consumers. For information on using other converter values, see [https://github.com/Azure/toketi-kafka-connect-iothub/blob/master/README_Sink.md](https://github.com/Azure/toketi-kafka-connect-iothub/blob/master/README_Sink.md).|
-   |`value.converter=org.apache.kafka.connect.json.JsonConverter`|`value.converter=org.apache.kafka.connect.storage.StringConverter`|Same as above.|
+   |`value.converter=org.apache.kafka.connect.json.JsonConverter`|`value.converter=org.apache.kafka.connect.storage.StringConverter`|Same as the previous entry.|
    |N/A|`consumer.max.poll.records=10`|Add to end of file. This change is to prevent timeouts in the sink connector by limiting it to 10 records at a time. For more information, see [https://github.com/Azure/toketi-kafka-connect-iothub/blob/master/README_Sink.md](https://github.com/Azure/toketi-kafka-connect-iothub/blob/master/README_Sink.md).|
 
 1. To save the file, use __Ctrl + X__, __Y__, and then __Enter__.
@@ -277,7 +277,7 @@ For more information on configuring the connector sink, see [https://github.com/
    > [!NOTE]
    > You may see several warnings as the connector starts. These warnings do not cause problems with receiving messages from IoT hub.
 
-1. Stop the connector after a few minutes using **Ctrl + C** twice. It will take a few minutes for the connector to stop.
+1. Stop the connector after a few minutes using **Ctrl + C** twice. It takes a few minutes for the connector to stop.
 
 ## Start the sink connector
@@ -337,14 +337,15 @@ To send messages through the connector, use the following steps:
 
    The schema for this JSON document is described in more detail at [https://github.com/Azure/toketi-kafka-connect-iothub/blob/master/README_Sink.md](https://github.com/Azure/toketi-kafka-connect-iothub/blob/master/README_Sink.md).
 
-   If you're using the simulated Raspberry Pi device, and it's running, the following message is logged by the device:
+   If you're using the simulated Raspberry Pi device, and it's running, the device logs the following message:
 
    ```output
    Receive message: Turn On
    ```
 
    Resend the JSON document, but change the value of the `"message"` entry. The new value is logged by the device.
 
 For more information on using the sink connector, see [https://github.com/Azure/toketi-kafka-connect-iothub/blob/master/README_Sink.md](https://github.com/Azure/toketi-kafka-connect-iothub/blob/master/README_Sink.md).
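
As a worked sketch of that resend step (not from this commit; the topic name, the `$KAFKABROKERS` variable, and the JSON field names are assumptions based on the connector's README), the modified document is piped to the Kafka console producer:

```bash
# Hypothetical resend: change the "message" value and send the document again.
# $KAFKABROKERS and the topic name are assumed from earlier article steps.
echo '{"messageId":"msg1","message":"Turn Off","deviceId":"myraspberry"}' | \
    /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
    --broker-list $KAFKABROKERS --topic iotin
```
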
## Next steps

articles/payment-hsm/inspect-traffic.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 ---
 title: Azure Payment HSM traffic inspection
-description: Guiance on how to bypass the UDR restriction and inspect traffic destined to an Azure Payment HSM.
+description: Guidance on how to bypass the UDR restriction and inspect traffic destined to an Azure Payment HSM.
 services: payment-hsm
 ms.service: payment-hsm
 author: davidsntg
