
Commit d5132c9

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into yelevin/auxiliary-logs

2 parents: 5cf7c23 + f3d6255

32 files changed: +122 additions, −104 deletions

articles/aks/azure-csi-files-storage-provision.md (+5 −3)

````diff
@@ -5,7 +5,7 @@ description: Learn how to create a static or dynamic persistent volume with Azur
 ms.topic: article
 ms.custom: devx-track-azurecli
 ms.subservice: aks-storage
-ms.date: 07/09/2024
+ms.date: 07/20/2024
 author: tamram
 ms.author: tamram
@@ -108,6 +108,7 @@ For more information on Kubernetes storage classes for Azure Files, see [Kuberne
   - mfsymlinks
   - cache=strict
   - actimeo=30
+  - nobrl # disable sending byte-range lock requests to the server; use for applications that have problems with POSIX locks
 parameters:
   skuName: Premium_LRS
 ```
@@ -240,6 +241,7 @@ mountOptions:
   - mfsymlinks
   - cache=strict
   - actimeo=30
+  - nobrl # disable sending byte-range lock requests to the server; use for applications that have problems with POSIX locks
 parameters:
   skuName: Premium_LRS
 ```
@@ -368,7 +370,7 @@ Kubernetes needs credentials to access the file share created in the previous st
   - mfsymlinks
   - cache=strict
   - nosharesock
-  - nobrl
+  - nobrl # disable sending byte-range lock requests to the server; use for applications that have problems with POSIX locks
 ```

 2. Create the persistent volume using the [`kubectl create`][kubectl-create] command.
@@ -470,7 +472,7 @@ spec:
     volumeAttributes:
       secretName: azure-secret # required
       shareName: aksshare # required
-    mountOptions: 'dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock' # optional
+    mountOptions: 'dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock,nobrl' # optional
 ```

 2. Create the pod using the [`kubectl apply`][kubectl-apply] command.
````
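The recurring change in this file is the addition of the `nobrl` mount option. As a minimal sketch of the resulting dynamic-provisioning storage class (the metadata name is illustrative, not taken from this commit):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-azurefile-premium   # illustrative name
provisioner: file.csi.azure.com
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - mfsymlinks
  - cache=strict
  - actimeo=30
  - nobrl # disable sending byte-range lock requests to the server
parameters:
  skuName: Premium_LRS
```

`nobrl` is a CIFS mount option, so it applies to every volume provisioned from this class; applications that rely on server-side byte-range locking should keep the default instead.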

articles/aks/csi-migrate-in-tree-volumes.md (+2 −2)

````diff
@@ -2,7 +2,7 @@
 title: Migrate from in-tree storage class to CSI drivers on Azure Kubernetes Service (AKS)
 description: Learn how to migrate from in-tree persistent volume to the Container Storage Interface (CSI) driver in an Azure Kubernetes Service (AKS) cluster.
 ms.topic: article
-ms.date: 01/11/2024
+ms.date: 07/20/2024
 author: mgoedtel
 ms.subservice: aks-storage
 ---
@@ -444,7 +444,7 @@ Migration from in-tree to CSI is supported by creating a static volume:
   - mfsymlinks
   - cache=strict
   - nosharesock
-  - nobrl
+  - nobrl # disable sending byte-range lock requests to the server; use for applications that have problems with POSIX locks
 ```

 5. Create a file named *azurefile-mount-pvc.yaml* with a *PersistentVolumeClaim* that uses the *PersistentVolume*, using the following code.
````
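For the static-volume path this hunk touches, a minimal *PersistentVolume* sketch carrying the full mount-option list might look like the following; the volume handle, share name, and secret name are illustrative assumptions, not values from this commit:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile-pv            # illustrative name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azurefile-csi
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - mfsymlinks
    - cache=strict
    - nosharesock
    - nobrl # disable sending byte-range lock requests to the server
  csi:
    driver: file.csi.azure.com
    volumeHandle: unique-volume-id   # must be unique per cluster; illustrative
    volumeAttributes:
      shareName: aksshare            # illustrative
    nodeStageSecretRef:
      name: azure-secret
      namespace: default
```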

articles/azure-monitor/agents/azure-monitor-agent-custom-text-log-migration.md (+16 −11)

````diff
@@ -5,24 +5,29 @@ ms.topic: conceptual
 ms.date: 05/09/2023
 ---

-# Migrate from MMA custom text log to AMA DCR based custom text logs
-This article describes the steps to migrate a [MMA Custom text log](data-sources-custom-logs.md) table so you can use it as a destination for a new [AMA custom text logs](data-collection-log-text.md) DCR. When you follow the steps, you won't lose any data. If you're creating a new AMA custom text log table, then this article doesn't pertain to you.
+# Migrate from MMA custom text table to AMA DCR based custom text table
+This article describes the steps to migrate an [MMA custom text log](data-sources-custom-logs.md) table so you can use it as a destination for a new [AMA custom text logs](data-collection-log-text.md) DCR. If you're creating a new AMA custom text table, this article doesn't pertain to you.

-> Note: Once logs are migrated, MMA will not be able to write to the destination table. This is an issue for the migration of production systems that we are actively working on.
->
-
-## Background
-MMA custom text logs must be configured to support new features in order for AMA custom text log DCRs to write to it. The following actions are taken:
-- The table is reconfigured to enable all DCR-based custom logs features.
-- All MMA custom fields stop updating in the table. AMA can write data to any column in the table.
-- The MMA Custom text log can write to noncustom fields, but it will not be able to create new columns. The portal table management UI can be used to change the schema after migration.
+> [!Warning]
+> Your MMA agents won't be able to write to existing custom tables after migration. If your AMA agent writes to an existing custom table, it is implicitly migrated.

-## Migration procedure
+## Background
+You must configure MMA custom text logs to support the new DCR features that allow AMA agents to write to them. The following actions are taken:
+- Your table is reconfigured to enable all DCR-based custom logs features.
+- Your AMA agents can write data to any column in the table.
+- Your MMA custom text log loses the ability to write to the custom log.
+To continue writing custom data from both MMA and AMA, each agent must have its own custom table. Your Log Analytics queries that process this data must join the two tables until the migration is complete, at which point you can remove the join.

+## Migration
 You should follow the steps only if the following criteria are true:
 - You created the original table using the Custom Log Wizard.
 - You're going to preserve the existing data in the table.
-- You're going to write new data using and [AMA custom text log DCR](data-collection-log-text.md) and possibly configure an [ingestion time transformation](azure-monitor-agent-transformation.md).
+- You don't need MMA agents to send data to the existing table.
+- You're going to exclusively write new data using an [AMA custom text log DCR](data-collection-log-text.md) and possibly configure an [ingestion-time transformation](azure-monitor-agent-transformation.md).

+## Procedure
 1. Configure your data collection rule (DCR) following the procedures at [collect text logs with Azure Monitor Agent](data-collection-log-text.md).
 2. Issue the following API call against your existing custom logs table to enable ingestion from a data collection rule and manage your table from the portal UI. This call is idempotent; future calls have no effect. Migration is one-way; you can't migrate the table back to MMA.
````

articles/azure-monitor/agents/data-collection-log-text.md (+2 −3)

````diff
@@ -62,9 +62,8 @@ The incoming stream of data includes the columns in the following table.
 ## Custom table
 Before you can collect log data from a text file, you must create a custom table in your Log Analytics workspace to receive the data. The table schema must match the data you are collecting, or you must add a transformation to ensure that the output schema matches the table.

->
-> Warning: You shouldn’t use an existing custom log table used by MMA agents. Your MMA agents won't be able to write to the table once the first AMA agent writes to the table. You should create a new table for AMA to use to prevent MMA data loss.
->
+> [!Warning]
+> You shouldn’t use an existing custom log table used by MMA agents. Your MMA agents won't be able to write to the table once the first AMA agent writes to the table. You should create a new table for AMA to use to prevent MMA data loss.

 For example, you can use the following PowerShell script to create a custom table with `RawData` and `FilePath`. You wouldn't need a transformation for this table because the schema matches the default schema of the incoming stream.
````
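The PowerShell script itself isn't part of this hunk. As a sketch under assumptions (the table name is illustrative, and Log Analytics custom tables take the `_CL` suffix), the table-creation request body such a script would send looks roughly like:

```json
{
  "properties": {
    "schema": {
      "name": "MyTextLogs_CL",
      "columns": [
        { "name": "TimeGenerated", "type": "datetime" },
        { "name": "RawData", "type": "string" },
        { "name": "FilePath", "type": "string" }
      ]
    }
  }
}
```

Because `RawData` and `FilePath` match the default schema of the incoming stream, no ingestion-time transformation is needed, which is exactly the point the surrounding paragraph makes.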

articles/azure-monitor/essentials/activity-log.md (+0 −2)

````diff
@@ -155,8 +155,6 @@ You can also access activity log events by using the following methods:
 - Use the [Get-AzLog](/powershell/module/az.monitor/get-azlog) cmdlet to retrieve the activity log from PowerShell. See [Azure Monitor PowerShell samples](../powershell-samples.md#retrieve-activity-log).
 - Use [az monitor activity-log](/cli/azure/monitor/activity-log) to retrieve the activity log from the CLI. See [Azure Monitor CLI samples](../cli-samples.md#view-activity-log).
 - Use the [Azure Monitor REST API](/rest/api/monitor/) to retrieve the activity log from a REST client.
-
-
 ## Legacy collection methods

 > [!NOTE]
````
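For instance, a minimal CLI call of the kind the bullets in this hunk refer to might look like the following; the resource group name is illustrative:

```azurecli
az monitor activity-log list --resource-group myResourceGroup --offset 1d --max-events 10
```

This returns the last day's activity log entries scoped to one resource group; drop `--resource-group` to query the whole subscription.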

articles/azure-monitor/essentials/data-collection-rule-create-edit.md (+3 −1)

````diff
@@ -35,7 +35,9 @@ The following table lists methods to create data collection scenarios using the

 ## Create a DCR

-The Azure portal provides a data collection rule wizard for collecting data from virtual machines and for collecting Prometheus metrics from containers.
+Azure provides a centralized, cloud-based data collection configuration plane for virtual machines, virtual machine scale sets, on-premises machines, and Prometheus metrics from containers.
+
+This article describes how to create a DCR from scratch. Other solutions, such as Microsoft Sentinel, VM insights, and Application Insights, provide their own DCR creation experiences and create DCRs as part of their own workflows. Sometimes the DCRs created by these different solutions can seem to conflict. There are three tables to which Windows events can be sent: Sentinel security audit events go to the SecurityEvent table, WEF connector events go to the WindowsEvent table, and if you create Windows event collection from scratch, the results go to the Event table.

 To create a data collection rule using the Azure CLI, PowerShell, API, or ARM templates, create a JSON file, starting with one of the [sample DCRs](./data-collection-rule-samples.md). Use information in [Structure of a data collection rule in Azure Monitor](./data-collection-rule-structure.md) to modify the JSON file for your particular environment and requirements.
````

articles/business-continuity-center/tutorial-view-protectable-resources.md (+2 −2)

````diff
@@ -2,7 +2,7 @@
 title: Tutorial - View protectable resources
 description: In this tutorial, learn how to view your resources that are currently not protected by any solution using Azure Business Continuity center.
 ms.topic: tutorial
-ms.date: 03/29/2024
+ms.date: 07/22/2024
 ms.service: azure-business-continuity-center
 ms.custom:
   - ignite-2023
@@ -18,7 +18,7 @@ This tutorial shows you how to view your resources that are currently not protec

 Before you start this tutorial:

-- Review supported regions for ABC Center.
+- Review [supported regions for ABC Center](business-continuity-center-support-matrix.md#supported-regions).
 - Ensure you have the required resource permissions to view them in the ABC center.

 ## View protectable resources
````

articles/defender-for-cloud/faq-vulnerability-assessments.yml (+12 −5)

````diff
@@ -5,7 +5,6 @@ metadata:
 services: defender-for-cloud
 ms.author: dacurwin
 author: dcurwin
-manager: raynew
 ms.topic: faq
 ms.date: 06/20/2023
 title: Common questions about vulnerability assessment
@@ -14,6 +13,18 @@ summary: |
 sections:
   - name: Ignored
     questions:
+
+      - question: |
+          What is the Auto-Provisioning feature for BYOL, and can it work on multiple solutions?
+        answer: |
+          The Defender for Cloud BYOL integration allows only one solution to have auto-provisioning enabled per subscription. This feature scans all unhealthy machines in the subscription (those without any VA solution installed) and automatically remediates them by installing the selected VA solution. Auto-provisioning will use the single selected BYOL solution for remediation. If no solution is selected, or if multiple solutions have auto-provisioning enabled, the system will not perform auto-remediation, as it can't implicitly decide which solution to prioritize.
+
+      - question: |
+          Why do I have to specify a resource group when configuring a Bring Your Own License (BYOL) solution?
+        answer: |
+          When you set up your solution, you must choose a resource group to attach it to. The solution isn't an Azure resource, so it won't be included in the list of the resource group’s resources. Nevertheless, it's attached to that resource group. If you later delete the resource group, the BYOL solution is unavailable.
+
       - question: |
           Are there any additional charges for the Qualys license?
         answer: |
@@ -99,7 +110,3 @@ sections:
           There are multiple Qualys platforms across various geographic locations. The SOC CIDR and URLs differ depending on the host platform of your Qualys subscription. [Identify your Qualys host platform](https://www.qualys.com/platform-identification/).

-      - question: |
-          Why do I have to specify a resource group when configuring a Bring Your Own License (BYOL) solution?
-        answer: |
-          When you set up your solution, you must choose a resource group to attach it to. The solution isn't an Azure resource, so it won't be included in the list of the resource group’s resources. Nevertheless, it's attached to that resource group. If you later delete the resource group, the BYOL solution is unavailable.
````

articles/defender-for-cloud/release-notes-recommendations-alerts.md (+1 −1)

````diff
@@ -48,7 +48,7 @@ New and updated recommendations and alerts are added to the table in date order.
 | ----------- | ------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
 | June 28 | Recommendation | GA | [Azure DevOps repositories should require minimum two-reviewer approval for code pushes](recommendations-reference-devops.md#preview-azure-devops-repositories-should-require-minimum-two-reviewer-approval-for-code-pushes) |
 | June 28 | Recommendation | GA | [Azure DevOps repositories should not allow requestors to approve their own Pull Requests](recommendations-reference-devops.md#preview-azure-devops-repositories-should-not-allow-requestors-to-approve-their-own-pull-requests) |
-| June 28 | Recommendation | GA | [GitHub organizations should not make action secrets accessible to all repositories](recommendations-reference-devops.md#github-organizations-should-not-make-action-secrets-accessible-to-all repositories) |
+| June 28 | Recommendation | GA | [GitHub organizations should not make action secrets accessible to all repositories](recommendations-reference-devops.md#github-organizations-should-not-make-action-secrets-accessible-to-all-repositories) |
 | June 27 | Alert | Deprecation | `Security incident detected suspicious source IP activity`<br><br/> Severity: Medium/High |
 | June 27 | Alert | Deprecation | `Security incident detected on multiple resources`<br><br/> Severity: Medium/High |
 | June 27 | Alert | Deprecation | `Security incident detected compromised machine`<br><br/> Severity: Medium/High |
````

articles/defender-for-cloud/secrets-scanning.md (+14 −0)

````diff
@@ -23,6 +23,20 @@ Defender for Cloud provides secrets scanning for virtual machines, and for cloud
 - **Cloud deployments**: Agentless secrets scanning across multicloud infrastructure-as-code deployment resources.
 - **Azure DevOps**: [Scanning to discover exposed secrets in Azure DevOps](defender-for-devops-introduction.md).

+## Prerequisites
+
+Required roles and permissions:
+
+- Security Reader
+- Security Admin
+- Reader
+- Contributor
+- Owner
+
 ## Deploying secrets scanning

 Secrets scanning is provided as a feature in Defender for Cloud plans:
````
