Commit 210dff9

Merge pull request #291649 from MicrosoftDocs/main
Publish to live, Monday 4 AM PST, 12/9
2 parents 3118284 + 5e624f5 commit 210dff9

File tree

6 files changed: +51 −17 lines

articles/automation/change-tracking/overview-monitoring-agent.md

Lines changed: 15 additions & 1 deletion
@@ -3,7 +3,7 @@ title: Azure Automation Change Tracking and Inventory overview using Azure Monit
 description: This article describes the Change Tracking and Inventory feature using Azure monitoring agent, which helps you identify software and Microsoft service changes in your environment.
 services: automation
 ms.subservice: change-inventory-management
-ms.date: 11/15/2024
+ms.date: 12/09/2024
 ms.topic: overview
 ms.service: azure-automation
 ---
@@ -21,6 +21,20 @@ This article explains on the latest version of change tracking support using Azu
 > - [FIM with Change Tracking and Inventory using AMA](https://learn.microsoft.com/azure/defender-for-cloud/migrate-file-integrity-monitoring#migrate-from-fim-over-ama).
 > - [FIM with Change Tracking and Inventory using MMA](https://learn.microsoft.com/azure/defender-for-cloud/migrate-file-integrity-monitoring#migrate-from-fim-over-mma).

+## What is Change Tracking & Inventory?
+
+The Azure Change Tracking & Inventory service enhances auditing and governance for in-guest operations by monitoring changes and providing detailed inventory logs for servers across Azure, on-premises, and other cloud environments.
+
+1. **Change Tracking**
+
+   a. Monitors changes, including modifications to files, registry keys, software installations, and Windows services or Linux daemons.<br/>
+   b. Provides detailed logs of what changes were made, when, and by whom, enabling you to quickly detect configuration drift or unauthorized changes.
+
+1. **Inventory**
+
+   a. Collects and maintains an up-to-date list of installed software, operating system details, and other server configurations in the linked Log Analytics workspace.<br/>
+   b. Helps create an overview of system assets, which is useful for compliance, audits, and proactive maintenance.
+
 ## Support matrix

 |**Component**| **Applies to**|
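The change records described in the added section land in the linked Log Analytics workspace, so they can be queried with KQL. A minimal sketch, assuming the documented Change Tracking `ConfigurationChange` table and column names (verify these in your own workspace):

```kusto
// Software changes recorded by Change Tracking in the last 7 days, newest first
ConfigurationChange
| where TimeGenerated > ago(7d)
| where ConfigChangeType == "Software"
| project TimeGenerated, Computer, SoftwareName, ChangeCategory
| order by TimeGenerated desc
```

The same table also holds file, registry, daemon, and Windows service changes, selectable via `ConfigChangeType`.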

articles/backup/backup-azure-immutable-vault-concept.md

Lines changed: 4 additions & 1 deletion
@@ -4,7 +4,7 @@ description: This article explains about the concept of Immutable vault for Azur
 ms.topic: overview
 ms.service: azure-backup
 ms.custom: references_regions, engagement-fy24, ignite-2024
-ms.date: 11/20/2024
+ms.date: 12/09/2024
 author: AbhishekMallick-MS
 ms.author: v-abhmallick
 ---
@@ -27,6 +27,9 @@ Immutable vault can help you protect your backup data by blocking any operations
 - Immutable vault applies to all the data in the vault. Therefore, all instances that are protected in the vault have immutability applied to them.
 - Immutability doesn't apply to operational backups, such as operational backup of blobs, files, and disks.

+>[!Note]
+>Ensure that the `Microsoft.RecoveryServices` resource provider is registered in your subscription; otherwise, zone-redundant and vault property options such as **Immutability settings** aren't accessible.
+
 ## How does immutability work?

 While Azure Backup stores data in isolation from production workloads, it allows performing management operations to help you manage your backups, including those operations that allow you to delete recovery points. However, in certain scenarios, you may want to make the backup data immutable by preventing any such operations that, if used by malicious actors, could lead to the loss of backups. The Immutable vault setting on your vault enables you to block such operations to ensure that your backup data is protected, even if any malicious actors try to delete them to affect the recoverability of data.

articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md

Lines changed: 6 additions & 6 deletions
@@ -6,17 +6,17 @@ author: KarlErickson
 ms.topic: tutorial
 ms.author: edburns
 ms.service: azure-container-apps
-ms.date: 10/10/2024
+ms.date: 12/09/2024
 ms.custom: devx-track-azurecli, devx-track-extended-java, devx-track-java, devx-track-javaee, devx-track-javaee-quarkus, passwordless-java, service-connector, devx-track-javaee-quarkus-aca
 ---

 # Tutorial: Connect to PostgreSQL Database from a Java Quarkus Container App without secrets using a managed identity

 [Azure Container Apps](overview.md) provides a [managed identity](managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure Database for PostgreSQL](/azure/postgresql/) and other Azure services. Managed identities in Container Apps make your app more secure by eliminating secrets from your app, such as credentials in the environment variables.

-This tutorial walks you through the process of building, configuring, deploying, and scaling Java container apps on Azure. At the end of this tutorial, you'll have a [Quarkus](https://quarkus.io) application storing data in a [PostgreSQL](/azure/postgresql/) database with a managed identity running on [Container Apps](overview.md).
+This tutorial walks you through the process of building, configuring, deploying, and scaling Java container apps on Azure. At the end of this tutorial, you have a [Quarkus](https://quarkus.io) application storing data in a [PostgreSQL](/azure/postgresql/) database with a managed identity running on [Container Apps](overview.md).

-What you will learn:
+What you learn:

 > [!div class="checklist"]
 > * Configure a Quarkus app to authenticate using Microsoft Entra ID with a PostgreSQL Database.
@@ -88,7 +88,7 @@ cd quarkus-quickstarts/hibernate-orm-panache-quickstart

 1. Configure the Quarkus app properties.

-The Quarkus configuration is located in the *src/main/resources/application.properties* file. Open this file in your editor, and observe several default properties. The properties prefixed with `%prod` are only used when the application is built and deployed, for example when deployed to Azure App Service. When the application runs locally, `%prod` properties are ignored. Similarly, `%dev` properties are used in Quarkus' Live Coding / Dev mode, and `%test` properties are used during continuous testing.
+The Quarkus configuration is located in the *src/main/resources/application.properties* file. Open this file in your editor, and observe several default properties. The properties prefixed with `%prod` are only used when the application is built and deployed, for example when deployed to Azure App Service. When the application runs locally, `%prod` properties are ignored. Similarly, `%dev` properties are used in Quarkus' Live Coding / Dev mode, and `%test` properties are used during continuous testing.

 Delete the existing content in *application.properties* and replace with the following to configure the database for dev, test, and production modes:

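The tutorial's actual *application.properties* listing is elided from this diff. As a hedged illustration only (standard Quarkus configuration keys; the server and database names are placeholders, not values from the tutorial), profile-scoped datasource settings typically look like this:

```properties
# %prod properties apply only to the packaged, deployed application
%prod.quarkus.datasource.db-kind=postgresql
%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://<server>.postgres.database.azure.com:5432/<database>

# In dev/test mode the %prod keys above are ignored; Quarkus Dev Services
# can supply a disposable PostgreSQL instance automatically instead.
%dev.quarkus.hibernate-orm.database.generation=drop-and-create
%test.quarkus.hibernate-orm.database.generation=drop-and-create
```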
@@ -211,7 +211,7 @@ cd quarkus-quickstarts/hibernate-orm-panache-quickstart

 ## 5. Create and connect a PostgreSQL database with identity connectivity

-Next, create a PostgreSQL Database and configure your container app to connect to a PostgreSQL Database with a system-assigned managed identity. The Quarkus app will connect to this database and store its data when running, persisting the application state no matter where you run the application.
+Next, create a PostgreSQL Database and configure your container app to connect to a PostgreSQL Database with a system-assigned managed identity. The Quarkus app connects to this database and stores its data when running, persisting the application state no matter where you run the application.

 1. Create the database service.

@@ -236,7 +236,7 @@ Next, create a PostgreSQL database and configure your container app to connect t
 * *resource-group* &rarr; Use the same resource group name in which you created the web app - for example, `msdocs-quarkus-postgres-webapp-rg`.
 * *name* &rarr; The PostgreSQL database server name. This name must be **unique across all Azure** (the server endpoint becomes `https://<name>.postgres.database.azure.com`). Allowed characters are `A`-`Z`, `0`-`9`, and `-`. A good pattern is to use a combination of your company name and server identifier. (`msdocs-quarkus-postgres-webapp-db`)
 * *location* &rarr; Use the same location used for the web app. Change to a different location if it doesn't work.
-* *public-access* &rarr; `None` which sets the server in public access mode with no firewall rules. Rules will be created in a later step.
+* *public-access* &rarr; `None`, which sets the server in public access mode with no firewall rules. Rules are created in a later step.
 * *sku-name* &rarr; The name of the pricing tier and compute configuration - for example, `Standard_B1ms`. For more information, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
 * *tier* &rarr; The compute tier of the server. For more information, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
 * *active-directory-auth* &rarr; `Enabled` to enable Microsoft Entra authentication.

articles/data-factory/connector-google-bigquery-legacy.md

Lines changed: 16 additions & 6 deletions
@@ -7,7 +7,7 @@ author: jianleishen
 ms.subservice: data-movement
 ms.topic: conceptual
 ms.custom: synapse
-ms.date: 11/05/2024
+ms.date: 12/02/2024
 ---

 # Copy data from Google BigQuery using Azure Data Factory or Synapse Analytics (legacy)
@@ -35,12 +35,19 @@ The service provides a built-in driver to enable connectivity. Therefore, you do

 The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).

-The connector no longer supports P12 keyfiles. If you rely on service accounts, you are recommended to use JSON keyfiles instead. The P12CustomPwd property used for supporting the P12 keyfile was also deprecated. For more information, see this [article](https://cloud.google.com/sdk/docs/release-notes).
-
 >[!NOTE]
 >This Google BigQuery connector is built on top of the BigQuery APIs. Be aware that BigQuery limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis, refer to [Quotas & Limits - API requests](https://cloud.google.com/bigquery/quotas#api_requests). Make sure you do not trigger too many concurrent requests to the account.

+## Prerequisites
+
+To use this connector, you need the following minimum permissions in Google BigQuery:
+
+- bigquery.connections.*
+- bigquery.datasets.*
+- bigquery.jobs.*
+- bigquery.readsessions.*
+- bigquery.routines.*
+- bigquery.tables.*
+
 ## Get started

 [!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
@@ -128,10 +135,13 @@ Set "authenticationType" property to **ServiceAuthentication**, and specify the
128135
| Property | Description | Required |
129136
|:--- |:--- |:--- |
130137
| email | The service account email ID that is used for ServiceAuthentication. It can be used only on Self-hosted Integration Runtime. | No |
131-
| keyFilePath | The full path to the `.p12` or `.json` key file that is used to authenticate the service account email address. | Yes |
138+
| keyFilePath | The full path to the `.json` key file that is used to authenticate the service account email address. | Yes |
132139
| trustedCertPath | The full path of the .pem file that contains trusted CA certificates used to verify the server when you connect over TLS. This property can be set only when you use TLS on Self-hosted Integration Runtime. The default value is the cacerts.pem file installed with the integration runtime. | No |
133140
| useSystemTrustStore | Specifies whether to use a CA certificate from the system trust store or from a specified .pem file. The default value is **false**. | No |
134141

142+
> [!NOTE]
143+
> The connector no longer supports P12 key files. If you rely on service accounts, you are recommended to use JSON key files instead. The P12CustomPwd property used for supporting the P12 key file was also deprecated. For more information, see this [article](https://cloud.google.com/sdk/docs/release-notes).
144+
135145
**Example:**
136146

137147
```json
@@ -144,7 +154,7 @@ Set "authenticationType" property to **ServiceAuthentication**, and specify the
     "requestGoogleDriveScope" : true,
     "authenticationType" : "ServiceAuthentication",
     "email": "<email>",
-    "keyFilePath": "<.p12 or .json key path on the IR machine>"
+    "keyFilePath": "<.json key path on the IR machine>"
 },
 "connectVia": {
     "referenceName": "<name of Self-hosted Integration Runtime>",

articles/data-factory/connector-netezza.md

Lines changed: 9 additions & 2 deletions
@@ -6,7 +6,7 @@ author: jianleishen
 ms.subservice: data-movement
 ms.custom: synapse
 ms.topic: conceptual
-ms.date: 06/28/2024
+ms.date: 12/02/2024
 ms.author: jianleishen
 ---
 # Copy data from Netezza by using Azure Data Factory or Synapse Analytics
@@ -30,7 +30,11 @@ This Netezza connector is supported for the following capabilities:

 For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).

-Netezza connector supports parallel copying from source. See the [Parallel copy from Netezza](#parallel-copy-from-netezza) section for details.
+This Netezza connector supports:
+
+- Parallel copying from the source. See the [Parallel copy from Netezza](#parallel-copy-from-netezza) section for details.
+- Netezza Performance Server version 11.
+- The Windows versions listed in this [article](create-self-hosted-integration-runtime.md#prerequisites).

 The service provides a built-in driver to enable connectivity. You don't need to manually install any driver to use this connector.

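Parallel copy is configured on the copy activity source rather than the linked service. A hedged sketch of a Netezza source that partitions by data slice (property names per the ADF Netezza source settings; the value shown is one of the supported partition options, illustrative only):

```json
{
    "type": "NetezzaSource",
    "partitionOption": "DataSlice"
}
```

With `DataSlice` partitioning, the service issues one query per Netezza data slice so slices are copied concurrently.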
@@ -85,6 +89,9 @@ A typical connection string is `Server=<server>;Port=<port>;Database=<database>;
 |:--- |:--- |:--- |
 | SecurityLevel | The level of security that the driver uses for the connection to the data store. <br>Example: `SecurityLevel=preferredUnSecured`. Supported values are:<br/>- **Only unsecured** (**onlyUnSecured**): The driver doesn't use SSL.<br/>- **Preferred unsecured (preferredUnSecured) (default)**: If the server provides a choice, the driver doesn't use SSL. | No |

+> [!NOTE]
+> The connector doesn't support SSLv3 because it is [officially deprecated by Netezza](https://www.ibm.com/docs/en/netezza?topic=npssac-netezza-performance-server-client-encryption-security-1).
+
 **Example**

 ```json

articles/migrate/vmware/migrate-support-matrix-vmware-migration.md

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ The VMware vSphere hypervisor requirements are:
 - **VMware vCenter Server** - Version 5.5, 6.0, 6.5, 6.7, 7.0, 8.0.
 - **VMware vSphere ESXi host** - Version 5.5, 6.0, 6.5, 6.7, 7.0, 8.0.
 - **Multiple vCenter Servers** - A single appliance can connect to up to 10 vCenter Servers.
-- **vCenter Server permissions** - VMware account used to access the vCenter server from the Azure Migrate appliance needs below permissions to replicate virtual machines:
+- **vCenter Server permissions** - The VMware account used to access the vCenter server from the Azure Migrate appliance must have the following permissions assigned at all required levels: datacenter, cluster, host, VM, and datastore. Ensure permissions are applied at each level to avoid replication errors.

 **Privilege Name in the vSphere Client** | **The purpose for the privilege** | **Required On** | **Privilege Name in the API**
 --- | --- | --- | ---
