diff --git a/docs/integrations/microsoft-azure/azure-app-service-plan.md b/docs/integrations/microsoft-azure/azure-app-service-plan.md
index f49976cdb3..d5affb7d91 100644
--- a/docs/integrations/microsoft-azure/azure-app-service-plan.md
+++ b/docs/integrations/microsoft-azure/azure-app-service-plan.md
@@ -61,6 +61,7 @@ In this section, you will configure a pipeline for shipping metrics from Azure M
1. Choose `Stream to an event hub` as destination.
1. Select `AllMetrics`.
1. Use the Event hub namespace created by the ARM template in Step 2 above. You can create a new Event hub or use the one created by ARM template. You can use the default policy `RootManageSharedAccessKey` as the policy name.
+4. Tag the location field in the source with the right location value.
### Configure logs collection
@@ -69,11 +70,11 @@ In this section, you will configure a pipeline for shipping metrics from Azure M
In this section, you will configure a pipeline for shipping diagnostic logs from [Azure Monitor](https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-get-started) to an Event Hub.
1. To set up the Azure Event Hubs source in Sumo Logic, refer to [Azure Event Hubs Source for Logs](/docs/send-data/collect-from-other-data-sources/azure-monitoring/ms-azure-event-hubs-source/).
-1. To create the **Diagnostic setting** in the Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform the below steps for each Azure Functions that you want to monitor.
+2. To create the **Diagnostic setting** in the Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform the steps below for each Azure App Service plan that you want to monitor.
1. Choose `Stream to an event hub` as the destination.
1. Select `AllMetrics`.
1. Use the Event Hub namespace and Event Hub name configured in previous step in destination details section. You can use the default policy `RootManageSharedAccessKey` as the policy name.
-1. Tag the location field in the source with right location value.
+3. Tag the location field in the source with the right location value.
#### Activity logs (optional)
diff --git a/docs/integrations/microsoft-azure/azure-application-gateway.md b/docs/integrations/microsoft-azure/azure-application-gateway.md
index 6780da6c0f..f7f41ac9dd 100644
--- a/docs/integrations/microsoft-azure/azure-application-gateway.md
+++ b/docs/integrations/microsoft-azure/azure-application-gateway.md
@@ -130,6 +130,7 @@ In this section, you will configure a pipeline for shipping metrics from Azure M
1. Choose `Stream to an event hub` as destination.
1. Select `AllMetrics`.
1. Use the Event hub namespace created by the ARM template in Step 2 above. You can create a new Event hub or use the one created by ARM template. You can use the default policy `RootManageSharedAccessKey` as the policy name.
+1. Tag the location field in the source with the right location value.
### Configure logs collection
@@ -141,8 +142,8 @@ In this section, you will configure a pipeline for shipping diagnostic logs from
1. To create the diagnostic settings in Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform the steps below for each Azure application gateway account that you want to monitor.
1. Choose **Stream to an event hub** as the destination.
1. Select `allLogs`.
- 1. Use the Event Hub namespace and Event Hub name configured in the previous step in the destination details section. You can use the default policy `RootManageSharedAccessKey` as the policy name.
-1. Tag the location field in the source with right location value.
+ 1. Use the Event Hub namespace and Event Hub name configured in the previous step in the destination details section. You can use the default policy `RootManageSharedAccessKey` as the policy name.
+1. Tag the location field in the source with the right location value.
:::note
WAF logs and metrics will be available for WAF V2 tier and only after a WAF Policy has been associated with application gateway. Refer to the azure docs for more information.
diff --git a/docs/integrations/microsoft-azure/azure-cache-for-redis.md b/docs/integrations/microsoft-azure/azure-cache-for-redis.md
index f9ea663520..84f9b6e924 100644
--- a/docs/integrations/microsoft-azure/azure-cache-for-redis.md
+++ b/docs/integrations/microsoft-azure/azure-cache-for-redis.md
@@ -131,12 +131,12 @@ In this section, you will configure a pipeline for shipping metrics from Azure M
1. Create hosted collector and tag tenant_name field.
2. [Configure an HTTP Source](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-1-configure-an-http-source).
-2. [Configure and deploy the ARM Template](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-2-configure-azure-resources-using-arm-template).
-3. [Export metrics to Event Hub](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-3-export-metrics-for-a-particular-resource-to-event-hub). Perform below steps for each Redis Cache resource that you want to monitor.
+3. [Configure and deploy the ARM Template](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-2-configure-azure-resources-using-arm-template).
+4. [Export metrics to Event Hub](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-3-export-metrics-for-a-particular-resource-to-event-hub). Perform the steps below for each Redis Cache resource that you want to monitor.
* Choose `Stream to an event hub` as destination.
* Select `AllMetrics`.
* Use the Event hub namespace created by the ARM template in Step 2 above. You can create a new Event hub or use the one created by ARM template. You can use the default policy `RootManageSharedAccessKey` as the policy name.
-
+5. Tag the location field in the source with the right location value.
### Configure logs collection
@@ -145,11 +145,12 @@ In this section, you will configure a pipeline for shipping metrics from Azure M
In this section, you will configure a pipeline for shipping diagnostic logs from Azure Monitor to an Event Hub.
1. To set up the Azure Event Hubs source in Sumo Logic, refer to the [Azure Event Hubs Source for Logs](/docs/send-data/collect-from-other-data-sources/azure-monitoring/ms-azure-event-hubs-source/).
-1. To create the diagnostic settings in Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform the steps below for each azure redis cache account that you want to monitor.
+2. To create the diagnostic settings in the Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform the steps below for each Azure Redis cache account that you want to monitor.
1. Choose **Stream to an event hub** as the destination.
1. Select `allLogs`.
1. Use the Event Hub namespace and Event Hub name configured in the previous step in the destination details section. You can use the default policy `RootManageSharedAccessKey` as the policy name.
-1. Tag the location field in the source with right location value.
+
+3. Tag the location field in the source with the right location value.
#### Activity Logs
@@ -171,57 +172,57 @@ import AppInstallNoDataSourceV2 from '../../reuse/apps/app-install-index-apps-v2
The **Azure Cache for Redis - Administrative Operations** dashboard provides details like distribution by operation type, by operation, recent delete operations, top 10 operations that caused most errors, and users/applications by operation type.
-
+
### Connections(Enterprise)
The **Azure Cache for Redis - Connections(Enterprise)** provides details like connections by location, total unique connected clients, total connections, event types, disconnection events, failure by operations, connected clients, cache read vs write, and hit vs misses.
-
+
### Connections(Non-Enterprise)
The **Azure Cache for Redis - Connections(Non-Enterprise)** dashboard provides details like connections by location, total unique connected clients, total connections, top 10 ip's by connection count, connections by resource name, connected clients (instance based), connected clients, cache read vs write, and hit vs misses.
-
+
### Geo Replication
The **Azure Cache for Redis - Geo Replication** dashboard provides details like geo-replication healthy - fetched from geo-secondary cache, geo-replication full sync events - fetched from geo-secondary cache, geo-replication data sync offset - fetched from geo-primary cache, and geo-replication connectivity lag - fetched from geo-secondary cache.
-
+
### MSEntra Authentication Audit
The **Azure Cache for Redis - MSEntra Authentication Audit** dashboard provides details like requests by location, requests by resource name, requests by username, and MSEntra authentication audit details.
-
+
### Policy and Recommendations
The **Azure Cache for Redis - Policy and Recommendations** dashboard provides details like total success policy events, total failed policy events, total recommendation events, and recent recommendation events.
-
+
### Resource Operations
The **Azure Cache for Redis - Resource Operations** dashboard provides details like total operations, ops per second (max), gets, sets, evicted key count, and expired key count.
-
+
### Resource Overview
The **Azure Cache for Redis - Resource Overview** dashboard provides details like max server load %, max CPU %, max bytes used, max number of connected clients, and errors.
-
+
### Resource Performance
The **Azure Cache for Redis - Resource Performance** dashboard provides details like cache hits, cache misses, cache write (max), cache read (max), cache latency microseconds, and 99th percentile latency (max).
-
+
## Troubleshooting
diff --git a/docs/integrations/microsoft-azure/azure-cosmos-db.md b/docs/integrations/microsoft-azure/azure-cosmos-db.md
index 3927d65cc5..cc80b1ce6e 100644
--- a/docs/integrations/microsoft-azure/azure-cosmos-db.md
+++ b/docs/integrations/microsoft-azure/azure-cosmos-db.md
@@ -132,12 +132,13 @@ resourceId=/SUBSCRIPTIONS/*/RESOURCEGROUPS/*/PROVIDERS/MICROSOFT.DOCUMENTDB/*/*
In this section, you will configure a pipeline for shipping metrics from Azure Monitor to an Event Hub, on to an Azure Function, and finally to an HTTP Source on a hosted collector in Sumo Logic.
1. Create hosted collector and tag `tenant_name` field.
-1. [Configure an HTTP Source](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-1-configure-an-http-source).
-1. [Configure and deploy the ARM Template](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-2-configure-azure-resources-using-arm-template).
-1. [Export metrics to Event Hub](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-3-export-metrics-for-a-particular-resource-to-event-hub). Perform the following steps for each Azure Cosmos DB account that you want to monitor.
+2. [Configure an HTTP Source](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-1-configure-an-http-source).
+3. [Configure and deploy the ARM Template](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-2-configure-azure-resources-using-arm-template).
+4. [Export metrics to Event Hub](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-3-export-metrics-for-a-particular-resource-to-event-hub). Perform the following steps for each Azure Cosmos DB account that you want to monitor.
1. Choose **Stream to an event hub** as destination.
1. Select all the metrics under **Metrics** section.
1. Use the Event Hub namespace created by the ARM template in the previous step. You can create a new Event Hub or use the one created by the ARM template. You can use the default policy `RootManageSharedAccessKey` as the policy name.
+5. Tag the location field in the source with the right location value.
:::note
Currently, only Azure Cosmos DB for NoSQL database account type supports exporting metrics using diagnostic settings.
@@ -148,12 +149,12 @@ Currently, only Azure Cosmos DB for NoSQL database account type supports exporti
In this section, you will configure a pipeline for shipping diagnostic logs from Azure Monitor to an Event Hub.
1. To set up the Azure Event Hubs source in Sumo Logic, refer to [Azure Event Hubs Source for Logs](/docs/send-data/collect-from-other-data-sources/azure-monitoring/ms-azure-event-hubs-source/).
-1. If you want to audit Azure Cosmos DB control plane operations, [disable the key based metadata write access](https://learn.microsoft.com/en-us/azure/cosmos-db/audit-control-plane-logs#disable-key-based-metadata-write-access).
-1. To create the diagnostic settings in Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-us/azure/cosmos-db/monitor-resource-logs?tabs=azure-portal#create-diagnostic-settings). Perform the following steps for each Azure Cosmos DB account that you want to monitor.
+2. If you want to audit Azure Cosmos DB control plane operations, [disable the key based metadata write access](https://learn.microsoft.com/en-us/azure/cosmos-db/audit-control-plane-logs#disable-key-based-metadata-write-access).
+3. To create the diagnostic settings in Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-us/azure/cosmos-db/monitor-resource-logs?tabs=azure-portal#create-diagnostic-settings). Perform the following steps for each Azure Cosmos DB account that you want to monitor.
1. Choose **Stream to an event hub** as the destination.
1. Select your preferred log categories depending upon your database API or select **allLogs**.
1. Use the Event Hub namespace and Event Hub name configured in the previous step in the destination details section. You can use the default policy `RootManageSharedAccessKey` as the policy name.
-1. Tag the location field in the source with right location value.
+4. Tag the location field in the source with the right location value.
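If you prefer scripting the diagnostic setting from step 3, it can also be created with the Azure CLI. This is a sketch only; every `<...>` value is a placeholder for your own subscription, resource group, Cosmos DB account, and Event Hub resources:

```shell
# All <...> values are placeholders -- substitute your own resource names.
az monitor diagnostic-settings create \
  --name sumo-cosmos-logs \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DocumentDB/databaseAccounts/<account>" \
  --event-hub <event-hub-name> \
  --event-hub-rule "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>/authorizationRules/RootManageSharedAccessKey" \
  --logs '[{"categoryGroup": "allLogs", "enabled": true}]'
```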
#### Activity logs (optional)
diff --git a/docs/integrations/microsoft-azure/azure-database-for-mysql.md b/docs/integrations/microsoft-azure/azure-database-for-mysql.md
index 7d2c2fe625..a69628c0ac 100644
--- a/docs/integrations/microsoft-azure/azure-database-for-mysql.md
+++ b/docs/integrations/microsoft-azure/azure-database-for-mysql.md
@@ -107,13 +107,13 @@ Rule Name: AzureObservabilityMetadataExtractionFlexibleMySQLServerLevel
-resourceId=/SUBSCRIPTIONS/*/RESOURCEGROUPS/*/PROVIDERS/*/FLEXIBLESERVERS/* tenant_name=*
+resourceId=/SUBSCRIPTIONS/*/RESOURCEGROUPS/*/PROVIDERS/MICROSOFT.DBFORMYSQL/FLEXIBLESERVERS/* tenant_name=*
```
-| Fields extracted | Metric rule |
-|:------------------|:----------------|
-| subscription_id | $resourceId._1 |
-| resource_group | $resourceId._2 |
-| provider_name | $resourceId._3 |
-| resource_type | FLEXIBLESERVERS |
-| resource_name | $resourceId._4 |
+| Fields extracted | Metric rule |
+|:------------------|:---------------------|
+| subscription_id | $resourceId._1 |
+| resource_group | $resourceId._2 |
+| provider_name | MICROSOFT.DBFORMYSQL |
+| resource_type | FLEXIBLESERVERS |
+| resource_name | $resourceId._3 |
### Configure metrics collection
@@ -127,7 +127,8 @@ In this section, you will configure a pipeline for shipping metrics from Azure M
1. Choose `Stream to an event hub` as destination.
1. Select `AllMetrics`.
1. Use the Event hub namespace created by the ARM template in Step 2 above. You can create a new Event hub or use the one created by ARM template. You can use the default policy `RootManageSharedAccessKey` as the policy name.
-
+4. Tag the location field in the source with the right location value.
+
### Configure logs collection
@@ -136,14 +137,12 @@ In this section, you will configure a pipeline for shipping metrics from Azure M
In this section, you will configure a pipeline for shipping diagnostic logs from Azure Monitor to an Event Hub.
1. To set up the Azure Event Hubs source in Sumo Logic, refer to the [Azure Event Hubs Source for Logs](/docs/send-data/collect-from-other-data-sources/azure-monitoring/ms-azure-event-hubs-source/).
-1. To create the diagnostic settings in Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform the steps below for each Azure Redis cache account that you want to monitor.
+2. To create the diagnostic settings in the Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform the steps below for each Azure Database for MySQL server that you want to monitor.
1. Choose **Stream to an event hub** as the destination.
1. Select `allLogs`.
1. Use the Event Hub namespace and Event Hub name configured in the previous step in the destination details section. You can use the default policy `RootManageSharedAccessKey` as the policy name.
-1. Tag the location field in the source with right location value.
-2. Enable slow query and error logs.
-
- Set Audit log related parameters as below:
+
+3. Set the audit log-related parameters as follows:
- audit_log_enabled: set to *ON*
- audit_log_events: Select the event types to be logged from the dropdown list.
@@ -155,6 +154,8 @@ In this section, you will configure a pipeline for shipping diagnostic logs from
- slow_query_log: set to *ON*
- long_query_time: Set the number of seconds a query can run before it's considered "slow". The default is 10 seconds.
- log_slow_admin_statements: set to *ON*
+4. Enable slow query and error logs.
+5. Tag the location field in the source with the right location value.
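The server parameters above can also be set with the Azure CLI. A sketch with placeholder names (`<rg>`, `<server>`); run one call per parameter:

```shell
# Placeholders: <rg> and <server>. Repeat for each parameter you need.
az mysql flexible-server parameter set \
  --resource-group <rg> --server-name <server> \
  --name audit_log_enabled --value ON
az mysql flexible-server parameter set \
  --resource-group <rg> --server-name <server> \
  --name slow_query_log --value ON
az mysql flexible-server parameter set \
  --resource-group <rg> --server-name <server> \
  --name long_query_time --value 10
```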
#### Activity Logs
@@ -182,63 +183,63 @@ import ViewDashboards from '../../reuse/apps/view-dashboards.md';
The **Azure Database for Mysql - Error Logs** dashboard provides details about Errors Count, Server Start and Shutdown Events Over Time, Stopped Servers, Error Log Type Over Time, Crash Recovery Attempts Over Time, Top Errors, Top Warnings, and Log Reduce.
-
+
### Administrative Operations
The **Azure Database for Mysql - Administrative Operations** dashboard provides details like distribution by operation type, by operation, recent delete operations, top 10 operations that caused most errors and users / applications by operation type.
-
+
### Connections
The **Azure Database for Mysql - Connections** dashboard provides details about Connections by Location, Total Connections, Active Connections, Aborted Connections, Total Queries, Connections, Queries, and Recent Disconnect Logs.
-
+
### Overview
The **Azure Database for Mysql - Overview** dashboard provides details about Connections by Location, Requests by DB Instance, Top 10 IPs, Requests by Event Type, Requests by Error Code, Top Users with Executed Queries, Disconnection Events, Performance Overview, Error Details, and Queries Executed.
-
+
### Performance
The **Azure Database for Mysql - Performance** dashboard provides details about Max CPU (%), Max Memory (%), Max IO Consumption (%), Slow Queries Count, Max CPU (%), Max Memory (%), Max IO Consumption (%), and Slow Queries.
-
+
### Policy and Recommendations
The **Azure Database for Mysql - Policy and Recommendations** dashboard provides details about Total Success Policy Events, Total Success Policy Events, Total Failed Policy Events, Failed Policy Events, Total Recommendation Events, and Recent Recommendation Events.
-
+
### Queries
The **Azure Database for Mysql - Queries** dashboard provides details about Queries by IP, Drop Table Count by Instance, Create Table Count by Instance, Create Database Count by Instance, Drop Database Count by Instance, Executed SQL Statements, Queries executed vs Slow Queries, Drop Statements, Create Statements, Drop Database Statements, and Drop Table Statements.
-
+
### Replication
The **Azure Database for Mysql - Replication** dashboard provides details about Average Replication Lag (Seconds), Average Replication Lag (Seconds), Average HA Replication Lag (Seconds), and Average HA Replication Lag (Seconds).
-
+
### Slow Queries
The **Azure Database for Mysql - Slow Queries** dashboard provides details about Top 10 IPs Firing Slow Queries, Top 10 Users Firing, Top 10 Hosts Firing Slow Queries, Excessive Slow Queries by Host, Top 10 Slow Queries by Average Execution Time, Top 10 Excessive Slow Queries by Frequency, Slow Queries Over Time, and Excessive Slow Queries Over Time.
-
+
### Storage Overview
The **Azure Database for Mysql - Storage Overview** dashboard provides details about Max Storage utilisation (MB), Max Data File Size (MB), Max System Tablespace Size (MB), Max System Tablespace Size (MB), Max Binlog Storage (MB), Max Other Storage (MB), Max Storage Limit (MB), Max Backup Storage Used (MB), and Max Storage (%).
-
+
## Troubleshooting
diff --git a/docs/integrations/microsoft-azure/azure-database-for-postgresql.md b/docs/integrations/microsoft-azure/azure-database-for-postgresql.md
index 0ab5faba66..dc8483f866 100644
--- a/docs/integrations/microsoft-azure/azure-database-for-postgresql.md
+++ b/docs/integrations/microsoft-azure/azure-database-for-postgresql.md
@@ -18,46 +18,252 @@ For Azure Database for PostgreSQL, you can collect the following logs and metric
* **PostgreSQL Logs**. These logs can be used to identify, troubleshoot, and repair configuration errors and suboptimal performance. To learn more about the log format, refer to the [Azure documentation](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-logging#log-format).
* **Audit Logs**. Audit logging of database activities is available through [pgAudit](https://www.pgaudit.org/) extension. By default, pgAudit log statements are emitted along with your regular log statements by using Postgres's standard logging facility. To learn more about the audit log format, refer to the [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#format).
-* **Metrics**. These metrics are available for a flexible server instance of Azure Database for PostgreSQL.For more information on supported metrics and instructions for enabling them, refer to the [Azure documentation](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-monitoring#metrics).
+* **Metrics**. These metrics are available for a flexible server instance of Azure Database for PostgreSQL. For more information on supported metrics and instructions for enabling them, refer to the [Azure documentation](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-monitoring#metrics).
## Setup
Azure service sends monitoring data to Azure Monitor, which can then [stream data to Eventhub](https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/stream-monitoring-data-event-hubs). Sumo Logic supports:
-* Logs collection from [Azure Monitor](https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-get-started) using our [Azure Event Hubs source](/docs/send-data/collect-from-other-data-sources/azure-monitoring/ms-azure-event-hubs-source/).
-* Metrics collection using our [HTTP Logs and Metrics source](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/) via Azure Functions deployed using the ARM template.
+* **PostgreSQL Audit Logs**. Azure Database for PostgreSQL flexible server lets you configure audit logging to track database-level activity, including connection, admin, DDL, and DML events. These logs are commonly used for compliance purposes. To learn more about the different log types and schemas collected for Azure Database for PostgreSQL, refer to the [Azure documentation](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-audit).
+* Logs collection from [Azure Monitor](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-monitoring#logs) using our [Azure Event Hubs source](/docs/send-data/collect-from-other-data-sources/azure-monitoring/ms-azure-event-hubs-source/).
+* Metrics collection using our [HTTP Logs and Metrics source](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/) via Azure Functions deployed using the ARM template.
You must explicitly enable diagnostic settings for each Azure Database for PostgreSQL server you want to monitor. You can forward logs to the same event hub provided they satisfy the limitations and permissions as described [here](https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal#destination-limitations).
When you configure the event hubs source or HTTP source, plan your source category to ease the querying process. A hierarchical approach allows you to make use of wildcards. For example: `Azure/DatabaseForPostgreSQL/Logs`, `Azure/DatabaseForPostgreSQL/Metrics`.
+
+### Configure field in field schema
+
+1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Logs > Fields**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the top menu select **Configuration**, and then under **Logs** select **Fields**. You can also click the **Go To...** menu at the top of the screen and select **Fields**.
+2. Search for the following fields:
+ - `tenant_name`. This field is tagged at the collector level. You can get the tenant name using the instructions in the [Microsoft Documentation](https://learn.microsoft.com/en-us/azure/active-directory-b2c/tenant-management-read-tenant-name#get-your-tenant-name).
+ - `location`. The region to which the resource belongs.
+ - `subscription_id`. ID associated with a subscription where the resource is present.
+ - `resource_group`. The resource group name where the Azure resource is present.
+ - `provider_name`. Azure resource provider name (for example, Microsoft.Network).
+ - `resource_type`. Azure resource type (for example, storage accounts).
+ - `resource_name`. The name of the resource (for example, storage account name).
+ - `service_type`. Type of the service that can be accessed with an Azure resource.
+ - `service_name`. Services that can be accessed with an Azure resource (for example, Azure SQL databases in Azure SQL Server).
+3. Create the fields if they are not present. Refer to [Manage fields](/docs/manage/fields/#manage-fields).
+
+### Configure Field Extraction Rules
+
+Create the following Field Extraction Rules (FERs) for Azure Database for PostgreSQL by following the instructions in [Create a Field Extraction Rule](/docs/manage/field-extractions/create-field-extraction-rule/).
+
+#### Azure location extraction FER
+
+ ```sql
+ Rule Name: AzureLocationExtractionFER
+ Applied at: Ingest Time
+ Scope (Specific Data): tenant_name=*
+ ```
+
+ ```sql title="Parse Expression"
+ json "location", "properties.resourceLocation", "properties.region" as location, resourceLocation, service_region nodrop
+ | replace(toLowerCase(resourceLocation), " ", "") as resourceLocation
+ | if (!isBlank(resourceLocation), resourceLocation, location) as location
+ | if (!isBlank(service_region), service_region, location) as location
+ | if (isBlank(location), "global", location) as location
+ | fields location
+ ```
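For orientation, the fallback chain in this parse expression can be mirrored in Python. This is an illustrative sketch only, not part of the Sumo Logic configuration; the input records are hypothetical:

```python
import json

def extract_location(raw: str) -> str:
    """Sketch of AzureLocationExtractionFER's fallback chain."""
    record = json.loads(raw)
    props = record.get("properties", {})
    location = record.get("location", "")
    # replace(toLowerCase(resourceLocation), " ", "")
    resource_location = props.get("resourceLocation", "").lower().replace(" ", "")
    service_region = props.get("region", "")
    if resource_location:   # properties.resourceLocation wins over location
        location = resource_location
    if service_region:      # properties.region wins over both
        location = service_region
    return location if location else "global"

print(extract_location('{"location": "eastus", "properties": {"resourceLocation": "West US 2"}}'))
# westus2
```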
+
+#### Resource ID extraction FER
+
+ ```sql
+ Rule Name: AzureResourceIdExtractionFER
+ Applied at: Ingest Time
+ Scope (Specific Data): tenant_name=*
+ ```
+
+ ```sql title="Parse Expression"
+ json "resourceId", "ResourceId" as resourceId1, resourceId2 nodrop
+ | if (isBlank(resourceId1), resourceId2, resourceId1) as resourceId
+ | toUpperCase(resourceId) as resourceId
+ | parse regex field=resourceId "/SUBSCRIPTIONS/(?<subscription_id>[^/]+)" nodrop
+ | parse field=resourceId "/RESOURCEGROUPS/*/" as resource_group nodrop
+ | parse regex field=resourceId "/PROVIDERS/(?<provider_name>[^/]+)" nodrop
+ | parse regex field=resourceId "/PROVIDERS/[^/]+(?:/LOCATIONS/[^/]+)?/(?<resource_type>[^/]+)/(?<resource_name>.+)" nodrop
+ | parse regex field=resource_name "(?<parent_resource_name>[^/]+)(?:/PROVIDERS/[^/]+)?/(?<service_type>[^/]+)/?(?<service_name>.+)" nodrop
+ | if (isBlank(parent_resource_name), resource_name, parent_resource_name) as resource_name
+ | fields subscription_id, location, provider_name, resource_group, resource_type, resource_name, service_type, service_name
+ ```
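A Python sketch of the same extraction over a hypothetical resource ID may help confirm what each regex captures (the FER's service_type/service_name split on the resource name is omitted for brevity):

```python
import re

# Hypothetical resource ID; the FER upper-cases the real one first.
resource_id = ("/SUBSCRIPTIONS/12345/RESOURCEGROUPS/MYRG/PROVIDERS/"
               "MICROSOFT.DBFORPOSTGRESQL/FLEXIBLESERVERS/MYSERVER")

subscription_id = re.search(r"/SUBSCRIPTIONS/([^/]+)", resource_id).group(1)
resource_group  = re.search(r"/RESOURCEGROUPS/([^/]+)/", resource_id).group(1)
provider_name   = re.search(r"/PROVIDERS/([^/]+)", resource_id).group(1)
resource_type, resource_name = re.search(
    r"/PROVIDERS/[^/]+(?:/LOCATIONS/[^/]+)?/([^/]+)/(.+)", resource_id).groups()

print(subscription_id, resource_group, provider_name, resource_type, resource_name)
# 12345 MYRG MICROSOFT.DBFORPOSTGRESQL FLEXIBLESERVERS MYSERVER
```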
+
+### Configure metric rules
+
+Create the following metrics rules by following the instructions in [Create a metrics rule](/docs/metrics/metric-rules-editor/#create-a-metrics-rule).
+
+#### Azure observability metadata extraction flexible postgresql server level
+
+```sql
+Rule Name: AzureObservabilityMetadataExtractionAzureDatabaseForPostgreSQLLevel
+```
+
+```sql title="Metric match expression"
+resourceId=/SUBSCRIPTIONS/*/RESOURCEGROUPS/*/PROVIDERS/MICROSOFT.DBFORPOSTGRESQL/FLEXIBLESERVERS/* tenant_name=*
+```
+
+| Fields extracted | Metric rule |
+|:------------------|:-------------------------|
+| subscription_id | $resourceId._1 |
+| resource_group | $resourceId._2 |
+| provider_name | MICROSOFT.DBFORPOSTGRESQL|
+| resource_type | FLEXIBLESERVERS |
+| resource_name | $resourceId._3 |
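To see how the three `*` wildcards in the match expression bind positionally to `$resourceId._1` through `$resourceId._3`, here is an equivalent regex over a hypothetical metric resourceId:

```python
import re

# Each `*` in the metric match expression corresponds to one positional capture.
pattern = (r"/SUBSCRIPTIONS/([^/]+)/RESOURCEGROUPS/([^/]+)"
           r"/PROVIDERS/MICROSOFT\.DBFORPOSTGRESQL/FLEXIBLESERVERS/([^/]+)")
sample = ("/SUBSCRIPTIONS/ABC123/RESOURCEGROUPS/PROD-RG"
          "/PROVIDERS/MICROSOFT.DBFORPOSTGRESQL/FLEXIBLESERVERS/PG-FLEX-01")
subscription_id, resource_group, resource_name = re.match(pattern, sample).groups()
print(subscription_id, resource_group, resource_name)
# ABC123 PROD-RG PG-FLEX-01
```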
+
+
### Configure metrics collection
In this section, you will configure a pipeline for shipping metrics from Azure Monitor to an Event Hub, on to an Azure Function, and finally to an HTTP Source on a hosted collector in Sumo Logic.
-1. Enable custom metrics such as [autovacuum](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-monitoring#how-to-enable-autovacuum-metrics), [PgBouncer metrics](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-monitoring#how-to-enable-pgbouncer-metrics), [enhanced metrics](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-monitoring#enhanced-metrics), if required.
+1. Create a hosted collector and tag the `tenant_name` field.
2. [Configure an HTTP Source](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-1-configure-an-http-source).
-3. [Configure and deploy the ARM Template](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-2-configure-azure-resources-using-arm-template).
-4. [Export metrics to Event Hub](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-3-export-metrics-for-a-particular-resource-to-event-hub). Perform below steps for each Azure Database for PostgreSQL server that you want to monitor.
- * Choose `Stream to an event hub` as destination.
- * Select `AllMetrics`.
- * Use the Event hub namespace created by the ARM template in Step 2 above. You can create a new Event hub or use the one created by ARM template. You can use the default policy `RootManageSharedAccessKey` as the policy name.
+3. [Configure and deploy the ARM Template](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-2-configure-azure-resources-using-arm-template).
+4. [Export metrics to Event Hub](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-3-export-metrics-for-a-particular-resource-to-event-hub). Perform the steps below for each Flexible PostgreSQL Server resource that you want to monitor.
+ 1. Choose `Stream to an event hub` as destination.
+ 1. Select `AllMetrics`.
+ 1. Use the Event hub namespace created by the ARM template in Step 2 above. You can create a new Event hub or use the one created by ARM template. You can use the default policy `RootManageSharedAccessKey` as the policy name.
+5. Tag the location field in the source with the right location value.
### Configure logs collection
+#### Diagnostic logs
+
In this section, you will configure a pipeline for shipping diagnostic logs from Azure Monitor to an Event Hub.
-1. To enable audit logs perform below steps:
- * [Install the pgAudit extension](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-audit#installing-pgaudit).
- * [Configure audit logging](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-audit#pgaudit-settings).
-2. To set up the Azure Event Hubs source in Sumo Logic, refer to [Azure Event Hubs Source for Logs](/docs/send-data/collect-from-other-data-sources/azure-monitoring/ms-azure-event-hubs-source/).
-3. To create the Diagnostic settings in Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/howto-configure-and-access-logs#configure-diagnostic-settings). Perform below steps for each Azure Database for PostgreSQL server that you want to monitor.
- * Choose `Stream to an event hub` as the destination.
- * Select `allLogs`.
- * Use the Event hub namespace and Event hub name configured in previous step in destination details section. You can use the default policy `RootManageSharedAccessKey` as the policy name.
+1. To set up the Azure Event Hubs source in Sumo Logic, refer to the [Azure Event Hubs Source for Logs](/docs/send-data/collect-from-other-data-sources/azure-monitoring/ms-azure-event-hubs-source/).
+2. To create the diagnostic settings in the Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/howto-configure-and-access-logs#configure-diagnostic-settings). Perform the steps below for each Azure Database for PostgreSQL flexible server that you want to monitor.
+ 1. Choose **Stream to an event hub** as the destination.
+ 1. Select `allLogs`.
+ 1. Use the Event Hub namespace and Event Hub name configured in the previous step in the destination details section. You can use the default policy `RootManageSharedAccessKey` as the policy name.
+3. Set the following server parameters:
+ - `wal_level`. Set to **logical**.
+ - `cron.log_statement`. Set to **ON**.
+ - `log_statement_stats`. Set to **ON**.
+ - `pgaudit.log_statement_once`. Set to **ON**.
+ - `log_statement`. Select **ALL**.
+ - `log_lock_waits`. Set to **ON**.
+ - `log_recovery_conflict_waits`. Set to **ON**.
+4. Tag the location field in the source with the right location value.
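The server parameters in step 3 can also be set with the Azure CLI's `az postgres flexible-server parameter set` command. This is a hedged sketch with placeholder resource names; each command is printed for review rather than executed:

```shell
# Sketch only: resource group and server names are placeholders.
RG="my-resource-group"
SERVER="my-flexible-server"

# One name=value pair per parameter listed in step 3 above.
PARAMS="wal_level=logical cron.log_statement=on log_statement_stats=on \
pgaudit.log_statement_once=on log_statement=all log_lock_waits=on \
log_recovery_conflict_waits=on"

for kv in $PARAMS; do
  name="${kv%%=*}"
  value="${kv#*=}"
  # Printed rather than executed; drop the echo to apply for real.
  echo az postgres flexible-server parameter set \
    --resource-group "$RG" --server-name "$SERVER" \
    --name "$name" --value "$value"
done
```

Note that some parameters (for example `wal_level`) are static and require a server restart to take effect.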
+
+#### Activity Logs
+
+To collect activity logs, follow the instructions [here](/docs/integrations/microsoft-azure/audit). Skip this step if you are already collecting activity logs for a subscription.
+
+:::note
+Since this source contains logs from multiple regions, ensure that you do not tag this source with the location tag.
+:::
+
+## Installing the Azure Flexible Database for PostgreSQL app
+
+Now that you have set up data collection, install the Azure Database for PostgreSQL Sumo Logic app to use the pre-configured dashboards that provide visibility into your environment for real-time analysis of overall usage.
+
+import AppInstallNoDataSourceV2 from '../../reuse/apps/app-install-index-apps-v2.md';
+
+<AppInstallNoDataSourceV2/>
+
+## Viewing the Flexible Database for PostgreSQL dashboards
+
+import ViewDashboards from '../../reuse/apps/view-dashboards.md';
+
+<ViewDashboards/>
+
+### Disk
+
+The **Azure Database for PostgreSQL - Disk** dashboard provides insights on Number of Temporary Files Created, Number of Temp Files Created, Total Bytes Written to Temp Files (Bytes), Blocks Hit Count, Blocks Read Count, Number of I/O Operations, Number of Outstanding I/O Operations, Disk I/Os Consumed/min (%), and Disk Bandwidth Consumed/min (%).
+
+
+
+### Administrative Operations
+
+The **Azure Database for PostgreSQL - Administrative Operations** dashboard provides insights on Top 10 operations that caused the most errors, Distribution by Operation Type (Read, Write, and Delete), Distribution by Operations, Recent Write Operations, Recent Delete Operations, Users/Applications by Operation type, and Distribution by Status.
+
+
+
+### Autovacuum
+
+The **Azure Database for PostgreSQL - Autovacuum** dashboard provides insights on Estimated Dead Rows User Tables, Estimated Live Rows User Tables, Estimated Modifications User Tables, Analyze Counter User Tables, Vacuum Counter User Tables, User Tables Vacuumed vs AutoVacuumed, and User Tables Analyzed vs AutoAnalyzed.
+
+
+
+### Connections
+
+The **Azure Database for PostgreSQL - Connections** dashboard provides insights on Active Connections, Failed Connections, Succeeded Connections, Max Connections, and Active vs Succeeded vs Failed Connections.
+
+
+
+### Error Logs
+
+The **Azure Database for PostgreSQL - Error Logs** dashboard provides insights on Log by Sql Errcode, Log by Severity, Database Shut Down Events, Log by Backend Type, Database System Up Events, Top Error Statements, and Top Fatal Errors.
+
+
+
+### Overview
+
+The **Azure Database for PostgreSQL - Overview** dashboard provides insights on Requests by Location, Is DB Alive, Number of Backends Connected to Database, and Number of Deadlocks Detected in Database.
+
+
+
+### Performance
+
+The **Azure Database for PostgreSQL - Performance** dashboard provides insights on Max CPU (%), Max Memory (%), CPU Credits Consumed, CPU Credits Remaining, Read Throughput, Read IOPS, Write Throughput, and Write IOPS.
+
+
+
+### Policy and Recommendations
+
+The **Azure Database for PostgreSQL - Policy and Recommendations** dashboard provides insights on Total Success Policy Events, Total Failed Policy Events, Failed Policy Events, Total Recommendation Events, and Recent Recommendation Events.
+
+
+
+### Health
+
+The **Azure Database for PostgreSQL - Health** dashboard provides insights on recent alerts, resource health incidents, recent resource health status by resource name, trend by event type, downtime by causes, and the trend of unavailable, degraded, and available status.
+
+
+
+### Replication
+
+The **Azure Database for PostgreSQL - Replication** dashboard provides insights on Average Replication Lag, Physical Replication Lag, and Logical Replication Lag.
+
+
+
+### Schema Overview
+
+The **Azure Database for PostgreSQL - Schema Overview** dashboard provides insights on Indexes Scanned By Schema, Rows Inserted By Schema, Rows Updated By Schema, Rows Deleted By Schema, Dead Rows By Schema, Live Rows By Schema, Sequential Scan By Schema, and Tables Vacuumed By Schema.
+
+
+
+### Sessions
+
+The **Azure Database for PostgreSQL - Sessions** dashboard provides insights on Longest Transaction Time (Sec), Oldest Backend Time (Sec), Longest Query Time (Sec), Oldest Backend Xmin (Sec), Oldest Backend Xmin Age, Application Name with Most Sessions, and Session duration distribution.
+
+
+
+### Storage Overview
+
+The **Azure Database for PostgreSQL - Storage Overview** dashboard provides insights on Storage Used (Bytes), Storage Used (%), Storage Used by Transaction Logs(Bytes), Max Backup Storage Used (Bytes), Database Size (Bytes), Storage Free (Bytes), Egress (Bytes), and Ingress (Bytes).
+
+
+
+### Transactions
+
+The **Azure Database for PostgreSQL - Transactions** dashboard provides insights on Transactions Per Second, Total Transactions, Transactions Commit, Transactions Rollback, Maximum Used TransactionIDs, Delete Transactions, Insert Transactions, Fetched Transactions, Returned Transactions, and Update Transactions.
+
+
## Troubleshooting
### HTTP Logs and Metrics Source used by Azure Functions
To troubleshoot metrics collection, follow the instructions in [Collect Metrics from Azure Monitor > Troubleshooting metrics collection](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#troubleshooting-metrics-collection).
+
+## Upgrade/Downgrade the Azure Flexible Database for PostgreSQL app (Optional)
+
+import AppUpdate from '../../reuse/apps/app-update.md';
+
+<AppUpdate/>
+
+## Uninstalling the Azure Flexible Database for PostgreSQL app (Optional)
+
+import AppUninstall from '../../reuse/apps/app-uninstall.md';
+
+<AppUninstall/>
diff --git a/docs/integrations/microsoft-azure/azure-functions.md b/docs/integrations/microsoft-azure/azure-functions.md
index 9617dc24d8..5cf763706c 100644
--- a/docs/integrations/microsoft-azure/azure-functions.md
+++ b/docs/integrations/microsoft-azure/azure-functions.md
@@ -126,11 +126,12 @@ resourceId=/SUBSCRIPTIONS/*/RESOURCEGROUPS/*/PROVIDERS/*/SITES/* tenant_name=*
In this section, you will configure a pipeline for shipping metrics from Azure Monitor to an Event Hub, on to an Azure Function, and finally to an HTTP Source on a hosted collector in Sumo Logic.
1. [Configure an HTTP Source](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-1-configure-an-http-source).
-1. [Configure and deploy the ARM Template](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-2-configure-azure-resources-using-arm-template).
-1. [Export metrics to Event Hub](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-3-export-metrics-for-a-particular-resource-to-event-hub). Perform below steps for each Azure Functions that you want to monitor.
+2. [Configure and deploy the ARM Template](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-2-configure-azure-resources-using-arm-template).
+3. [Export metrics to Event Hub](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-3-export-metrics-for-a-particular-resource-to-event-hub). Perform the steps below for each Azure Functions app that you want to monitor.
1. Choose `Stream to an event hub` as destination.
1. Select `AllMetrics`.
1. Use the Event Hub namespace created by the ARM template in Step 2 above. You can create a new Event Hub or use the one created by ARM template. You can use the default policy `RootManageSharedAccessKey` as the policy name.
+4. Tag the location field in the source with the right location value.
### Configure logs collection
@@ -139,11 +140,11 @@ In this section, you will configure a pipeline for shipping metrics from Azure M
In this section, you will configure a pipeline for shipping diagnostic logs from [Azure Monitor](https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-get-started) to an Event Hub.
1. To set up the Azure Event Hubs source in Sumo Logic, refer to the [Azure Event Hubs Source for Logs](/docs/send-data/collect-from-other-data-sources/azure-monitoring/ms-azure-event-hubs-source/).
-1. To create the **Diagnostic setting** in the Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform the below steps for each Azure Functions that you want to monitor.
+2. To create the **Diagnostic setting** in the Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform the steps below for each Azure Functions app that you want to monitor.
1. Choose `Stream to an event hub` as the destination.
1. Select `AllMetrics`.
- 1. Use the Event Hub namespace and Event Hub name configured in previous step in destination details section. You can use the default policy `RootManageSharedAccessKey` as the policy name.
-1. Tag the location field in the source with right location value.
+   1. Use the Event Hub namespace and Event Hub name configured in the previous step in the destination details section. You can use the default policy `RootManageSharedAccessKey` as the policy name.
+3. Tag the location field in the source with the right location value.
#### Activity logs (optional)
diff --git a/docs/integrations/microsoft-azure/azure-load-balancer.md b/docs/integrations/microsoft-azure/azure-load-balancer.md
index 96824ca3f9..da25c8fdc6 100644
--- a/docs/integrations/microsoft-azure/azure-load-balancer.md
+++ b/docs/integrations/microsoft-azure/azure-load-balancer.md
@@ -118,11 +118,12 @@ In this section, you will configure a pipeline for shipping metrics from Azure M
1. Create hosted collector and tag tenant_name field.
2. [Configure an HTTP Source](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-1-configure-an-http-source).
-2. [Configure and deploy the ARM Template](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-2-configure-azure-resources-using-arm-template).
-3. [Export metrics to Event Hub](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-3-export-metrics-for-a-particular-resource-to-event-hub). Perform below steps for each Load Balancer that you want to monitor.
+3. [Configure and deploy the ARM Template](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-2-configure-azure-resources-using-arm-template).
+4. [Export metrics to Event Hub](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-3-export-metrics-for-a-particular-resource-to-event-hub). Perform the steps below for each Load Balancer that you want to monitor.
* Choose `Stream to an event hub` as destination.
* Select `AllMetrics`.
* Use the Event hub namespace created by the ARM template in Step 2 above. You can create a new Event hub or use the one created by ARM template. You can use the default policy `RootManageSharedAccessKey` as the policy name.
+5. Tag the location field in the source with the right location value.
:::note
SNAT related metrics will appear only when a outbound rule is configured.
@@ -135,11 +136,12 @@ SNAT related metrics will appear only when a outbound rule is configured.
In this section, you will configure a pipeline for shipping diagnostic logs from Azure Monitor to an Event Hub.
1. To set up the Azure Event Hubs source in Sumo Logic, refer to the [Azure Event Hubs Source for Logs](/docs/send-data/collect-from-other-data-sources/azure-monitoring/ms-azure-event-hubs-source/).
-1. To create the diagnostic settings in Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform the steps below for each Azure load balancer account that you want to monitor.
+2. To create the diagnostic settings in the Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform the steps below for each Azure load balancer that you want to monitor.
1. Choose **Stream to an event hub** as the destination.
1. Select `allLogs`.
1. Use the Event Hub namespace and Event Hub name configured in the previous step in the destination details section. You can use the default policy `RootManageSharedAccessKey` as the policy name.
-1. Tag the location field in the source with right location value.
+
+3. Tag the location field in the source with the right location value.
#### Activity Logs
@@ -167,7 +169,7 @@ import ViewDashboards from '../../reuse/apps/view-dashboards.md';
The **Azure Load Balancer - Overview** dashboard provides a collective information on Health Probe Status, Average Data Path Availability, Transmission Details, Connection Details, and SNAT Ports Utilization(%).
-
+
### Administrative Operations
@@ -179,7 +181,7 @@ Use this dashboard to:
- View top 10 operations that caused the most errors.
- View recent read, write, and delete operations.
-
+
### Health
@@ -190,7 +192,7 @@ Use this dashboard to:
- Identify failed requests and operations.
- Detect when all backend instances in a pool are not responding to the configured health probes.
-
+
### Network
@@ -201,7 +203,7 @@ Use this dashboard to:
- Detect when there is less data path availability than expected due to platform issues.
- Monitor data transmission (packets and bytes) through your load balancers.
-
+
### Policy
@@ -211,7 +213,7 @@ Use this dashboard to:
- Monitor policy events with warnings and errors.
- View recent failed policy events.
-
+
## Troubleshooting
diff --git a/docs/integrations/microsoft-azure/azure-storage.md b/docs/integrations/microsoft-azure/azure-storage.md
index 63271eba3b..d3975f7e77 100644
--- a/docs/integrations/microsoft-azure/azure-storage.md
+++ b/docs/integrations/microsoft-azure/azure-storage.md
@@ -172,6 +172,8 @@ In this section, you will configure a pipeline for shipping metrics from Azure M
* Choose `Stream to an event hub` as destination.
* Select `Transaction`.
* Use the Event hub namespace created by the ARM template in Step 2 above. You can create a new Event hub or use the one created by ARM template. You can use the default policy `RootManageSharedAccessKey` as the policy name.
+4. Tag the location field in the source with the right location value.
+
### Configure logs collection
@@ -185,7 +187,7 @@ In this section, you will configure a pipeline for shipping diagnostic logs from
* Select `allLogs`.
* Use the Event hub namespace and Event hub name configured in previous step in destination details section. You can use the default policy `RootManageSharedAccessKey` as the policy name.
1. Tag the location field in the source with right location value.
-
+
#### Activity Logs
diff --git a/docs/integrations/microsoft-azure/sql.md b/docs/integrations/microsoft-azure/sql.md
index ac8b7a7b01..2ad005ced2 100644
--- a/docs/integrations/microsoft-azure/sql.md
+++ b/docs/integrations/microsoft-azure/sql.md
@@ -182,16 +182,17 @@ Create a Field Extraction Rule (FER) by following the instructions [here](/docs/
In this section, you will configure a pipeline to send metrics from Azure Monitor to an Event Hub, then to an Azure Function, and finally to an HTTP Source on a hosted collector in Sumo Logic.
-1. Create hosted collector and tag `tenant_name` field.
-1. [Configure an HTTP Source](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-1-configure-an-http-source).
-1. [Configure and deploy the ARM Template](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-2-configure-azure-resources-using-arm-template).
-1. [Export metrics to Event Hub](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-3-export-metrics-for-a-particular-resource-to-event-hub). Perform below steps for each Azure SQL database that you want to monitor.
+1. Create a hosted collector and tag the `tenant_name` field.
+2. [Configure an HTTP Source](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-1-configure-an-http-source).
+3. [Configure and deploy the ARM Template](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-2-configure-azure-resources-using-arm-template).
+4. [Export metrics to Event Hub](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-3-export-metrics-for-a-particular-resource-to-event-hub). Perform the steps below for each Azure SQL database that you want to monitor.
* Choose `Stream to an event hub` as destination.
* Select all the metric types under `Metrics` section.
* Use the Event hub namespace created by the ARM template in Step 2 above. You can create a new Event hub or use the one created by ARM template. You can use the default policy `RootManageSharedAccessKey` as the policy name.
-

+5. Tag the location field in the source with the right location value.
+
### Configure logs collection
@@ -200,14 +201,14 @@ In this section, you will configure a pipeline to send metrics from Azure Monito
In this section, you will configure a pipeline for shipping diagnostic logs from [Azure Monitor](https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-get-started) to an Event Hub.
1. To set up the Azure Event Hubs source in Sumo Logic, refer to the [Azure Event Hubs Source for Logs](/docs/send-data/collect-from-other-data-sources/azure-monitoring/ms-azure-event-hubs-source/).
-1. To create the Diagnostic settings in Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform below steps for each Azure SQL database that you want to monitor.
+2. To create the Diagnostic settings in the Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform the steps below for each Azure SQL database that you want to monitor.
* Choose `Stream to an event hub` as the destination.
* Select all the log types except `SQL Security Audit Event`.
* Use the Event hub namespace and Event hub name configured in previous step in destination details section. You can use the default policy `RootManageSharedAccessKey` as the policy name.
-
+
-1. Tag the location field in the source with right location value.
+3. Tag the location field in the source with the right location value.
#### Enable SQL Security Audit logs
In this section, you will configure a pipeline for shipping diagnostic logs from [Azure Monitor](https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-get-started) to an Event Hub.
diff --git a/docs/integrations/microsoft-azure/web-apps.md b/docs/integrations/microsoft-azure/web-apps.md
index 73b33b0b04..fce0fc4a95 100644
--- a/docs/integrations/microsoft-azure/web-apps.md
+++ b/docs/integrations/microsoft-azure/web-apps.md
@@ -147,11 +147,13 @@ resourceId=/SUBSCRIPTIONS/*/RESOURCEGROUPS/*/PROVIDERS/*/SITES/* tenant_name=*
In this section, you will configure a pipeline for shipping metrics from Azure Monitor to an Event Hub, on to an Azure Function, and finally to an HTTP Source on a hosted collector in Sumo Logic.
1. [Configure an HTTP Source](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-1-configure-an-http-source).
-1. [Configure and deploy the ARM Template](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-2-configure-azure-resources-using-arm-template).
-1. [Export metrics to Event Hub](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-3-export-metrics-for-a-particular-resource-to-event-hub). Perform below steps for each Azure WebApps that you want to monitor.
+2. [Configure and deploy the ARM Template](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-2-configure-azure-resources-using-arm-template).
+3. [Export metrics to Event Hub](/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-metrics-azure-monitor/#step-3-export-metrics-for-a-particular-resource-to-event-hub). Perform the steps below for each Azure web app that you want to monitor.
1. Choose `Stream to an event hub` as destination.
1. Select `AllMetrics`.
1. Use the Event Hub namespace created by the ARM template in Step 2 above. You can create a new Event Hub or use the one created by ARM template. You can use the default policy `RootManageSharedAccessKey` as the policy name.
+4. Tag the location field in the source with the right location value.
+
### Configure logs collection
@@ -160,11 +162,12 @@ In this section, you will configure a pipeline for shipping metrics from Azure M
In this section, you will configure a pipeline for shipping diagnostic logs from [Azure Monitor](https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-get-started) to an Event Hub.
1. To set up the Azure Event Hubs source in Sumo Logic, refer to the [Azure Event Hubs Source for Logs](/docs/send-data/collect-from-other-data-sources/azure-monitoring/ms-azure-event-hubs-source/).
-1. To create the **Diagnostic setting** in the Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform the below steps for each Azure WebApps that you want to monitor.
+2. To create the **Diagnostic setting** in the Azure portal, refer to the [Azure documentation](https://learn.microsoft.com/en-gb/azure/data-factory/monitor-configure-diagnostics). Perform the steps below for each Azure web app that you want to monitor.
1. Choose `Stream to an event hub` as the destination.
1. Select `HTTP logs`, `App Service Console Logs`, `App Service Application Logs`, `Access Audit Logs`, `IPSecurity Audit logs`, `App Service Platform logs`, `Report Antivirus Audit Logs`, `Site Content Change Audit Logs`.
1. Use the Event Hub namespace and Event Hub name configured in previous step in destination details section. You can use the default policy `RootManageSharedAccessKey` as the policy name.
-1. Tag the location field in the source with right location value.
+
+3. Tag the location field in the source with the right location value.
#### Activity logs (optional)
diff --git a/static/img/send-data/azureflexible-postgresqlserver-logs.png b/static/img/send-data/azureflexible-postgresqlserver-logs.png
new file mode 100644
index 0000000000..f73b9b2b1e
Binary files /dev/null and b/static/img/send-data/azureflexible-postgresqlserver-logs.png differ
diff --git a/static/img/send-data/azureflexible-postgresqlserver-metrics.png b/static/img/send-data/azureflexible-postgresqlserver-metrics.png
new file mode 100644
index 0000000000..c2459f6019
Binary files /dev/null and b/static/img/send-data/azureflexible-postgresqlserver-metrics.png differ