
Commit 1515065

Merge pull request #291195 from MicrosoftDocs/main
11/29 11:00 AM IST Publish
2 parents edd846c + d2997cf commit 1515065

7 files changed: +37 −34 lines changed


articles/automation/automation-network-configuration.md

Lines changed: 8 additions & 2 deletions
@@ -2,7 +2,7 @@
 title: Azure Automation network configuration details
 description: This article provides details of network information required by Azure Automation State Configuration, Azure Automation Hybrid Runbook Worker, Update Management, and Change Tracking and Inventory
 ms.topic: overview
-ms.date: 09/09/2024
+ms.date: 11/29/2024
 ---

 # Azure Automation network configuration details
@@ -20,7 +20,13 @@ The following port and URLs are required for the Hybrid Runbook Worker, and for

 ### Network planning for Hybrid Runbook Worker

-For either a system or user Hybrid Runbook Worker to connect to and register with Azure Automation, it must have access to the port number and URLs described in this section. The worker must also have access to the [ports and URLs required for the Log Analytics agent](/azure/azure-monitor/agents/log-analytics-agent) to connect to the Azure Monitor Log Analytics workspace.
+If you use a firewall to restrict access to the Internet, you must configure the firewall to permit access. The following port and URLs are required for the Hybrid Runbook Worker, and for [Automation State Configuration](./automation-dsc-overview.md) to communicate with Azure Automation.
+
+| Property | Description |
+| --- | --- |
+| Port | 443 for outbound internet access |
+| Global URL | *.azure-automation.net |
+| Global URL of US Gov Virginia | *.azure-automation.us |

 If you have an Automation account that's defined for a specific region, you can restrict Hybrid Runbook Worker communication to that regional datacenter. Review the [DNS records used by Azure Automation](how-to/automation-region-dns-records.md) for the required DNS records.
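As a quick way to sanity-check the requirements in this hunk from a worker machine, a minimal C# sketch (not part of this commit) can probe outbound TCP 443. The hostname below is a placeholder; substitute a DNS record for your Automation account's region from the article linked above.

```cs
using System;
using System.Net.Sockets;

// Minimal sketch: verify outbound TCP 443 to an Azure Automation URL.
// The hostname is a placeholder example; use your region's DNS record.
class ConnectivityCheck
{
    static void Main()
    {
        const string host = "eus2-jobruntimedata-prod-su1.azure-automation.net"; // placeholder
        const int port = 443; // required outbound port per the table above

        using var client = new TcpClient();
        try
        {
            client.Connect(host, port);
            Console.WriteLine($"{host}:{port} is reachable.");
        }
        catch (SocketException ex)
        {
            Console.WriteLine($"{host}:{port} is blocked: {ex.Message}");
        }
    }
}
```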

articles/automation/extension-based-hybrid-runbook-worker-install.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ description: This article provides information about deploying the extension-bas
 services: automation
 ms.subservice: process-automation
 ms.custom: devx-track-azurepowershell, devx-track-azurecli, devx-track-bicep, linux-related-content
-ms.date: 06/29/2024
+ms.date: 11/29/2024
 ms.topic: how-to
 #Customer intent: As a developer, I want to learn about extension so that I can efficiently deploy Hybrid Runbook Workers.
 ms.service: azure-automation

articles/azure-functions/flex-consumption-how-to.md

Lines changed: 2 additions & 2 deletions
@@ -270,8 +270,8 @@ You can't currently enable virtual networking when you use Visual Studio Code to

 For end-to-end examples of how to create apps in Flex Consumption with virtual network integration see these resources:

-+ [Flex Consumption: HTTP to Event Hubs using VNET Integration](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/blob/main/E2E/HTTP-VNET-EH/README.md)
-+ [Flex Consumption: triggered from Service Bus using VNET Integration](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/blob/main/E2E/SB-VNET/README.md)
++ [Flex Consumption: HTTP to Event Hubs using VNET Integration](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/blob/main/README.md)
++ [Flex Consumption: triggered from Service Bus using VNET Integration](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/blob/main/README.md)

 To modify or delete virtual network integration in an existing app:

articles/azure-functions/functions-infrastructure-as-code.md

Lines changed: 2 additions & 2 deletions
@@ -1982,8 +1982,8 @@ You can create your function app in a deployment where one or more of the resour
 ::: zone pivot="flex-consumption-plan"
 These projects provide Bicep-based examples of how to deploy your function apps in a virtual network, including with network access restrictions:

-+ [High-scale HTTP triggered function connects to an event hub secured by a virtual network](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/blob/main/E2E/HTTP-VNET-EH/README.md): An HTTP triggered function (.NET isolated worker mode) accepts calls from any source and then sends the body of those HTTP calls to a secure event hub running in a virtual network by using virtual network integration.
-+ [Function is triggered by a Service Bus queue secured in a virtual network](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/blob/main/E2E/SB-VNET/README.md): A Python function is triggered by a Service Bus queue secured in a virtual network. The queue is accessed in the virtual network using private endpoint. A virtual machine in the virtual network is used to send messages.
++ [High-scale HTTP triggered function connects to an event hub secured by a virtual network](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/blob/main/README.md): An HTTP triggered function (.NET isolated worker mode) accepts calls from any source and then sends the body of those HTTP calls to a secure event hub running in a virtual network by using virtual network integration.
++ [Function is triggered by a Service Bus queue secured in a virtual network](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/blob/main/README.md): A Python function is triggered by a Service Bus queue secured in a virtual network. The queue is accessed in the virtual network using private endpoint. A virtual machine in the virtual network is used to send messages.
 ::: zone-end
 ::: zone pivot="premium-plan,dedicated-plan"
 When creating a deployment that uses a secured storage account, you must both explicitly set the `WEBSITE_CONTENTSHARE` setting and create the file share resource named in this setting. Make sure you create a `Microsoft.Storage/storageAccounts/fileServices/shares` resource using the value of `WEBSITE_CONTENTSHARE`, as shown in this example ([ARM template](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-private-endpoints-storage-private-endpoints/azuredeploy.json#L467)|[Bicep file](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-private-endpoints-storage-private-endpoints/main.bicep#L351)). You'll also need to set the site property `vnetContentShareEnabled` to true.

articles/azure-functions/functions-scenarios.md

Lines changed: 0 additions & 6 deletions
@@ -55,7 +55,6 @@ public static async Task Run([BlobTrigger("catalog-uploads/{name}", Source = Blo
 }
 ```

-+ [Event-based Blob storage triggered function that converts PDF documents to text at scale](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/tree/main/E2E/BLOB-PDF)
 + [Upload and analyze a file with Azure Functions and Blob Storage](../storage/blobs/blob-upload-function-trigger.md?tabs=dotnet)
 + [Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md?tabs=dotnet)
 + [Trigger Azure Functions on blob containers using an event subscription](functions-event-grid-blob-trigger.md?pivots=programming-language-csharp)
@@ -111,7 +110,6 @@ public static async Task Run(
     await xformer.Transform(debatchedMessages, partitionContext.PartitionId, outputMessages);
 }
 ```
-+ [Service Bus trigger using virtual network integration](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/blob/main/E2E/SB-VNET/)
 + [Streaming at scale with Azure Event Hubs, Functions and Azure SQL](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-functions-azuresql)
 + [Streaming at scale with Azure Event Hubs, Functions and Cosmos DB](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-functions-cosmosdb)
 + [Streaming at scale with Azure Event Hubs with Kafka producer, Functions with Kafka trigger and Cosmos DB](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubskafka-functions-cosmosdb)
@@ -121,25 +119,21 @@ public static async Task Run(
 ::: zone-end

 ::: zone pivot="programming-language-python"
-+ [Service Bus trigger using virtual network integration](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/blob/main/E2E/SB-VNET/)
 + [Azure Event Hubs trigger for Azure Functions](functions-bindings-event-hubs-trigger.md?pivots=programming-language-python)
 + [Apache Kafka trigger for Azure Functions](functions-bindings-kafka-trigger.md?pivots=programming-language-python)
 ::: zone-end

 ::: zone pivot="programming-language-javascript"
-+ [Service Bus trigger using virtual network integration](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/blob/main/E2E/SB-VNET/)
 + [Azure Event Hubs trigger for Azure Functions](functions-bindings-event-hubs-trigger.md?pivots=programming-language-javascript)
 + [Apache Kafka trigger for Azure Functions](functions-bindings-kafka-trigger.md?pivots=programming-language-javascript)
 ::: zone-end

 ::: zone pivot="programming-language-powershell"
-+ [Service Bus trigger using virtual network integration](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/blob/main/E2E/SB-VNET/)
 + [Azure Event Hubs trigger for Azure Functions](functions-bindings-event-hubs-trigger.md?pivots=programming-language-powershell)
 + [Apache Kafka trigger for Azure Functions](functions-bindings-kafka-trigger.md?pivots=programming-language-powershell)
 ::: zone-end

 ::: zone pivot="programming-language-java"
-+ [Service Bus trigger using virtual network integration](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/blob/main/E2E/SB-VNET/)
 + [Azure Functions Kafka trigger Java Sample](https://github.com/azure/azure-functions-kafka-extension/tree/main/samples/WalletProcessing_KafkademoSample)
 + [Azure Event Hubs trigger for Azure Functions](functions-bindings-event-hubs-trigger.md?pivots=programming-language-java)
 + [Apache Kafka trigger for Azure Functions](functions-bindings-kafka-trigger.md?pivots=programming-language-java)

articles/azure-signalr/signalr-howto-scale-multi-instances.md

Lines changed: 23 additions & 20 deletions
@@ -96,17 +96,10 @@ The following example overrides the default negotiate behavior and selects the e
 private class CustomRouter : EndpointRouterDecorator
 {
     public override ServiceEndpoint GetNegotiateEndpoint(HttpContext context, IEnumerable<ServiceEndpoint> endpoints)
     {
-        // Override the negotiate behavior to get the endpoint from query string
-        var endpointName = context.Request.Query["endpoint"];
-        if (endpointName.Count == 0)
-        {
-            context.Response.StatusCode = 400;
-            var response = Encoding.UTF8.GetBytes("Invalid request");
-            context.Response.Body.Write(response, 0, response.Length);
-            return null;
-        }
-
-        return endpoints.FirstOrDefault(s => s.Name == endpointName && s.Online) // Get the endpoint with name matching the incoming request
+        // Sample code showing how to choose endpoints based on the incoming request endpoint query
+        var endpointName = context.Request.Query["endpoint"].FirstOrDefault() ?? "";
+        // Select from the available endpoints, don't construct a new ServiceEndpoint object here
+        return endpoints.FirstOrDefault(s => s.Name == endpointName && s.Online) // Get the endpoint with name matching the incoming request
             ?? base.GetNegotiateEndpoint(context, endpoints); // Or fallback to the default behavior to randomly select one from primary endpoints, or fallback to secondary when no primary ones are online
     }
 }
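For context, the router in this hunk only takes effect once it's registered; a minimal sketch of the usual wiring (not part of this commit; connection strings and endpoint names are placeholders) looks like this:

```cs
// Minimal sketch: register the CustomRouter from the diff above so the SDK uses it.
// IEndpointRouter and ServiceEndpoint come from Microsoft.Azure.SignalR;
// the connection strings and endpoint names below are placeholders.
services.AddSingleton<IEndpointRouter, CustomRouter>();
services.AddSignalR().AddAzureSignalR(options =>
{
    options.Endpoints = new[]
    {
        new ServiceEndpoint("<connection-string-east>", name: "east"),
        new ServiceEndpoint("<connection-string-west>", name: "west"),
    };
});
```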
@@ -129,6 +122,22 @@ services.AddSignalR()
 });
 ```

+`ServiceOptions.Endpoints` also supports hot reload. The following sample code shows how to load connection strings from one configuration section and the public URLs exposed by [reverse proxies](./signalr-howto-reverse-proxy-overview.md) from another. As long as the configuration source supports hot reload, the endpoints can be updated on the fly.
+```cs
+services.Configure<ServiceOptions>(o =>
+{
+    o.Endpoints = [
+        new ServiceEndpoint(Configuration["ConnectionStrings:AzureSignalR:East"], name: "east")
+        {
+            ClientEndpoint = new Uri(Configuration.GetValue<string>("PublicClientEndpoints:East"))
+        },
+        new ServiceEndpoint(Configuration["ConnectionStrings:AzureSignalR:West"], name: "west")
+        {
+            ClientEndpoint = new Uri(Configuration.GetValue<string>("PublicClientEndpoints:West"))
+        },
+    ];
+});
+```
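The hot-reload sample above assumes configuration keys under `ConnectionStrings:AzureSignalR` and `PublicClientEndpoints`. For illustration (not part of this commit; keys and URLs are placeholders), an equivalent in-memory configuration would look like this; in practice the values would come from a reloadable source such as appsettings.json with `reloadOnChange`:

```cs
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

// Illustrative sketch of the configuration shape the sample above reads.
// Keys and values are placeholders, not part of the commit.
var configuration = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string?>
    {
        ["ConnectionStrings:AzureSignalR:East"] = "<east-connection-string>",
        ["ConnectionStrings:AzureSignalR:West"] = "<west-connection-string>",
        ["PublicClientEndpoints:East"] = "https://east.example.com",
        ["PublicClientEndpoints:West"] = "https://west.example.com",
    })
    .Build();
```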
 ## For ASP.NET

 ### Add multiple endpoints from config
@@ -184,15 +193,9 @@ private class CustomRouter : EndpointRouterDecorator
 {
     public override ServiceEndpoint GetNegotiateEndpoint(IOwinContext context, IEnumerable<ServiceEndpoint> endpoints)
     {
-        // Override the negotiate behavior to get the endpoint from query string
-        var endpointName = context.Request.Query["endpoint"];
-        if (string.IsNullOrEmpty(endpointName))
-        {
-            context.Response.StatusCode = 400;
-            context.Response.Write("Invalid request.");
-            return null;
-        }
-
+        // Sample code showing how to choose endpoints based on the incoming request endpoint query
+        var endpointName = context.Request.Query["endpoint"] ?? "";
+        // Select from the available endpoints, don't construct a new ServiceEndpoint object here
         return endpoints.FirstOrDefault(s => s.Name == endpointName && s.Online) // Get the endpoint with name matching the incoming request
             ?? base.GetNegotiateEndpoint(context, endpoints); // Or fallback to the default behavior to randomly select one from primary endpoints, or fallback to secondary when no primary ones are online
     }

articles/storage/solution-integration/validated-partners/analytics/partner-overview.md

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ This article highlights Microsoft partner companies that are integrated with Azu
 ![Informatica company logo](./media/informatica-logo.png) |**Informatica**<br>Informatica’s enterprise-scale, cloud-native data management platform automates and accelerates the discovery, delivery, quality, and governance of enterprise data on Azure. AI-powered, metadata-driven data integration, and data quality and governance capabilities enable you to modernize analytics and accelerate your move to a data warehouse or to a data lake on Azure.|[Partner page](https://www.informatica.com/azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.annualiics?tab=Overview)|
 ![Qlik company logo](./media/qlik-logo.png) |**Qlik**<br>Qlik helps accelerate BI and ML initiatives with a scalable data integration and automation solution. Qlik also goes beyond migration tools to help drive agility throughout the data and analytics process with automated data pipelines and a governed, self-service catalog.|[Partner page](https://www.qlik.com/us/products/technology/qlik-microsoft-azure-migration)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik_data_integration_platform)|
 ![Starburst logo](./media/starburst-logo.jpg) |**Starburst**<br>Starburst unlocks the value of data by making it fast and easy to access anywhere. Starburst queries data across any database, making it actionable for data-driven organizations. With Starburst, teams can prevent vendor lock-in, and use the existing tools that work for their business.|[Partner page](https://www.starburst.io/platform/deployment-options/starburst-on-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/starburstdatainc1582306810515.starburst-enterprise)|
-![Striim company logo](./media/striim-logo.png) |**Striim**<br>Striim enables continuous data movement and in-stream transformations from a wide variety of sources into multiple Azure solutions including Azure Synapse Analytics, Azure Cosmos DB, and Azure cloud databases. The Striim solution enables Azure Data Lake Storage customers to quickly build streaming data pipelines. Customers can choose their desired data latency (real-time, micro-batch, or batch) and enrich the data with more context. These pipelines can then support any application or big data analytics solution, including Azure SQL Data Warehouse and Azure Databricks. |[Partner ](https://www.striim.com/partners/striim-and-microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/striim.azurestorageintegration?tab=overview)|
+![Striim company logo](./media/striim-logo.png) |**Striim**<br>Striim enables continuous data movement and in-stream transformations from a wide variety of sources into multiple Azure solutions including Azure Synapse Analytics, Azure Cosmos DB, and Azure cloud databases. The Striim solution enables Azure Data Lake Storage customers to quickly build streaming data pipelines. Customers can choose their desired data latency (real-time, micro-batch, or batch) and enrich the data with more context. These pipelines can then support any application or big data analytics solution, including Azure SQL Data Warehouse and Azure Databricks. |[Partner ](https://www.striim.com/partners/striim-and-microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/striim.striim_cloud_enterprise?tab=Overview)|
 ![Talend company logo](./media/talend-logo.png) |**Talend**<br>Talend Data Fabric is a platform that brings together multiple integration and governance capabilities. Using a single unified platform, Talend delivers complete, clean, and uncompromised data in real time. The Talend Trust Score helps assess the reliability of any data set. |[Partner page](https://www.talend.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/talend.talendclouddi)|
 ![Unravel](./media/unravel-logo.png) |**Unravel Data**<br>Unravel Data provides observability and automatic management through a single pane of glass. AI-powered recommendations proactively improve reliability, speed, and resource allocations of your data pipelines and jobs. Unravel connects easily with Azure Databricks, HDInsight, Azure Data Lake Storage, and more through the Azure Marketplace or Unravel SaaS service. Unravel Data also helps migrate to Azure by providing an assessment of your environment. This assessment uncovers usage details, dependency maps, cost, and effort needed for a fast move with less risk.|[Partner page](https://www.unraveldata.com/azure-databricks/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/unravel-data.unravel4databrickssubscriptionasaservice?tab=Overview)
 |![Wandisco company logo](./media/wandisco-logo.jpg) |**WANdisco**<br>WANdisco’s migration engine lets you migrate Hadoop data to Data Lake Storage while it remains in active use at any scale, with zero downtime and zero data loss.<br><br>Developed in partnership with Microsoft, [WANdisco LiveData Platform for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/wandisco.livedata-pipeline-azure-mp?tab=Overview) is tightly integrated with Azure. Besides having an Azure portal deployment experience, it also uses role-based access control, Microsoft Entra ID, Azure Policy enforcement, and Activity log integration. With Azure Billing integration, you don't need to add a vendor contract or get more vendor approvals.<br><br>Accelerate the replication of Hadoop data between multiple sources and targets for any data architecture. With LiveData Cloud Services, your data will be available for Azure Databricks, Synapse Analytics, and HDInsight as soon as it lands, with guaranteed 100% data consistency. |[Partner page](https://www.wandisco.com/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/wandisco.livedata-pipeline-azure-mp?tab=Overview)|
