articles/automation/change-tracking/overview-monitoring-agent.md (+15 −1)
@@ -3,7 +3,7 @@ title: Azure Automation Change Tracking and Inventory overview using Azure Monit
  description: This article describes the Change Tracking and Inventory feature using Azure Monitoring Agent, which helps you identify software and Microsoft service changes in your environment.
  services: automation
  ms.subservice: change-inventory-management
- ms.date: 11/15/2024
+ ms.date: 12/09/2024
  ms.topic: overview
  ms.service: azure-automation
  ---
@@ -21,6 +21,20 @@ This article explains the latest version of change tracking support using Azure
  > - [FIM with Change Tracking and Inventory using AMA](https://learn.microsoft.com/azure/defender-for-cloud/migrate-file-integrity-monitoring#migrate-from-fim-over-ama).
  > - [FIM with Change Tracking and Inventory using MMA](https://learn.microsoft.com/azure/defender-for-cloud/migrate-file-integrity-monitoring#migrate-from-fim-over-mma).
+
+ ## What is Change Tracking & Inventory
+
+ The Azure Change Tracking & Inventory service enhances auditing and governance for in-guest operations by monitoring changes and providing detailed inventory logs for servers across Azure, on-premises, and other cloud environments.
+
+ 1. **Change Tracking**
+
+    a. Monitors changes, including modifications to files, registry keys, software installations, and Windows services or Linux daemons.<br/>
+    b. Provides detailed logs of what changes were made, when they occurred, and who made them, enabling you to quickly detect configuration drift or unauthorized changes.
+
+ 1. **Inventory**
+
+    a. Collects and maintains an up-to-date list of installed software, operating system details, and other server configurations in the linked Log Analytics workspace.<br/>
+    b. Helps create an overview of system assets, which is useful for compliance, audits, and proactive maintenance.
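Because the records land in the linked Log Analytics workspace, they can be queried from the command line. The following is a hedged sketch only: the `ConfigurationChange` table and column names are assumptions based on the Change Tracking schema, and the command requires the Azure CLI `log-analytics` extension and an authenticated session.

```azurecli
# Illustrative sketch: list recent software changes recorded by
# Change Tracking in the linked Log Analytics workspace.
# <workspace-guid> is a placeholder for your workspace ID.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "ConfigurationChange
    | where ConfigChangeType == 'Software'
    | project Computer, SoftwareName, ChangeCategory, TimeGenerated
    | top 20 by TimeGenerated desc"
```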
@@ -27,6 +27,9 @@ Immutable vault can help you protect your backup data by blocking any operations
  - Immutable vault applies to all the data in the vault. Therefore, all instances that are protected in the vault have immutability applied to them.
  - Immutability doesn't apply to operational backups, such as operational backup of blobs, files, and disks.
+
+ > [!NOTE]
+ > Ensure that the `Microsoft.RecoveryServices` resource provider is registered in your subscription; otherwise, zone-redundant and vault property options such as "Immutability settings" aren't accessible.
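If the provider isn't registered yet, a one-time registration can be done with the standard Azure CLI provider commands (an authenticated session and subscription context are assumed):

```azurecli
# Register the Recovery Services resource provider for the current subscription.
az provider register --namespace Microsoft.RecoveryServices

# Check progress; registrationState reports "Registered" when complete.
az provider show --namespace Microsoft.RecoveryServices --query registrationState
```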
## How does immutability work?
While Azure Backup stores data in isolation from production workloads, it allows performing management operations to help you manage your backups, including those operations that allow you to delete recovery points. However, in certain scenarios, you may want to make the backup data immutable by preventing any such operations that, if used by malicious actors, could lead to the loss of backups. The Immutable vault setting on your vault enables you to block such operations to ensure that your backup data is protected, even if any malicious actors try to delete them to affect the recoverability of data.
# Tutorial: Connect to PostgreSQL Database from a Java Quarkus Container App without secrets using a managed identity
[Azure Container Apps](overview.md) provides a [managed identity](managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure Database for PostgreSQL](/azure/postgresql/) and other Azure services. Managed identities in Container Apps make your app more secure by eliminating secrets from your app, such as credentials in the environment variables.

- This tutorial walks you through the process of building, configuring, deploying, and scaling Java container apps on Azure. At the end of this tutorial, you'll have a [Quarkus](https://quarkus.io) application storing data in a [PostgreSQL](/azure/postgresql/) database with a managed identity running on [Container Apps](overview.md).
+ This tutorial walks you through the process of building, configuring, deploying, and scaling Java container apps on Azure. At the end of this tutorial, you have a [Quarkus](https://quarkus.io) application storing data in a [PostgreSQL](/azure/postgresql/) database with a managed identity running on [Container Apps](overview.md).

- What you will learn:
+ What you learn:

> [!div class="checklist"]
> * Configure a Quarkus app to authenticate using Microsoft Entra ID with a PostgreSQL Database.
@@ -88,7 +88,7 @@ cd quarkus-quickstarts/hibernate-orm-panache-quickstart
1. Configure the Quarkus app properties.

    The Quarkus configuration is located in the *src/main/resources/application.properties* file. Open this file in your editor, and observe several default properties. The properties prefixed with `%prod` are only used when the application is built and deployed, for example when deployed to Azure App Service. When the application runs locally, `%prod` properties are ignored. Similarly, `%dev` properties are used in Quarkus' Live Coding / Dev mode, and `%test` properties are used during continuous testing.
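As an illustrative sketch of how these profile prefixes behave - not the tutorial's actual configuration - the datasource and Dev Services keys below are standard Quarkus property names, but the values are placeholders:

```properties
# Unprefixed properties apply to every profile.
quarkus.hibernate-orm.database.generation=drop-and-create

# %prod applies only to the packaged, deployed application.
%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://<server>:5432/<database>

# %dev applies to Live Coding / Dev mode; %test to continuous testing.
%dev.quarkus.datasource.devservices.enabled=true
%test.quarkus.datasource.devservices.enabled=true
```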
Delete the existing content in *application.properties* and replace with the following to configure the database for dev, test, and production modes:
@@ -211,7 +211,7 @@ cd quarkus-quickstarts/hibernate-orm-panache-quickstart
## 5. Create and connect a PostgreSQL database with identity connectivity

- Next, create a PostgreSQL Database and configure your container app to connect to a PostgreSQL Database with a system-assigned managed identity. The Quarkus app will connect to this database and store its data when running, persisting the application state no matter where you run the application.
+ Next, create a PostgreSQL Database and configure your container app to connect to a PostgreSQL Database with a system-assigned managed identity. The Quarkus app connects to this database and stores its data when running, persisting the application state no matter where you run the application.

1. Create the database service.
@@ -236,7 +236,7 @@ Next, create a PostgreSQL Database and configure your container app to connect t
  *resource-group* → Use the same resource group name in which you created the web app - for example, `msdocs-quarkus-postgres-webapp-rg`.
  *name* → The PostgreSQL database server name. This name must be **unique across all Azure** (the server endpoint becomes `https://<name>.postgres.database.azure.com`). Allowed characters are `A`-`Z`, `0`-`9`, and `-`. A good pattern is to use a combination of your company name and server identifier. (`msdocs-quarkus-postgres-webapp-db`)
  *location* → Use the same location used for the web app. Change to a different location if it doesn't work.
- *public-access* → `None`, which sets the server in public access mode with no firewall rules. Rules will be created in a later step.
+ *public-access* → `None`, which sets the server in public access mode with no firewall rules. Rules are created in a later step.
  *sku-name* → The name of the pricing tier and compute configuration - for example, `Standard_B1ms`. For more information, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
  *tier* → The compute tier of the server. For more information, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
  *active-directory-auth* → `Enabled` to enable Microsoft Entra authentication.
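Putting the parameters above together, the create command might look like the following sketch (values are placeholders; flag availability can vary by Azure CLI version):

```azurecli
# Illustrative sketch: create a PostgreSQL flexible server with
# Microsoft Entra authentication enabled and no firewall rules yet.
az postgres flexible-server create \
  --resource-group msdocs-quarkus-postgres-webapp-rg \
  --name msdocs-quarkus-postgres-webapp-db \
  --location eastus \
  --public-access None \
  --sku-name Standard_B1ms \
  --tier Burstable \
  --active-directory-auth Enabled
```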
articles/data-factory/connector-google-bigquery-legacy.md (+16 −6)
@@ -7,7 +7,7 @@ author: jianleishen
  ms.subservice: data-movement
  ms.topic: conceptual
  ms.custom: synapse
- ms.date: 11/05/2024
+ ms.date: 12/02/2024
  ---

# Copy data from Google BigQuery using Azure Data Factory or Synapse Analytics (legacy)
@@ -35,12 +35,19 @@ The service provides a built-in driver to enable connectivity. Therefore, you do
The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).

- The connector no longer supports P12 keyfiles. If you rely on service accounts, you are recommended to use JSON keyfiles instead. The P12CustomPwd property used for supporting the P12 keyfile was also deprecated. For more information, see this [article](https://cloud.google.com/sdk/docs/release-notes).
-
  > [!NOTE]
  > This Google BigQuery connector is built on top of the BigQuery APIs. Be aware that BigQuery limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis; refer to [Quotas & Limits - API requests](https://cloud.google.com/bigquery/quotas#api_requests). Make sure you don't trigger too many concurrent requests to the account.

+ ## Prerequisites
+
+ To use this connector, you need the following minimum permissions in Google BigQuery:
@@ -128,10 +135,13 @@ Set "authenticationType" property to **ServiceAuthentication**, and specify the
  | Property | Description | Required |
  |:--- |:--- |:--- |
  | email | The service account email ID that is used for ServiceAuthentication. It can be used only on Self-hosted Integration Runtime. | No |
- | keyFilePath | The full path to the `.p12` or `.json` key file that is used to authenticate the service account email address. | Yes |
+ | keyFilePath | The full path to the `.json` key file that is used to authenticate the service account email address. | Yes |
  | trustedCertPath | The full path of the .pem file that contains trusted CA certificates used to verify the server when you connect over TLS. This property can be set only when you use TLS on Self-hosted Integration Runtime. The default value is the cacerts.pem file installed with the integration runtime. | No |
  | useSystemTrustStore | Specifies whether to use a CA certificate from the system trust store or from a specified .pem file. The default value is **false**. | No |

+ > [!NOTE]
+ > The connector no longer supports P12 key files. If you rely on service accounts, we recommend that you use JSON key files instead. The P12CustomPwd property used for supporting the P12 key file is also deprecated. For more information, see this [article](https://cloud.google.com/sdk/docs/release-notes).
**Example:**
@@ -144,7 +154,7 @@ Set "authenticationType" property to **ServiceAuthentication**, and specify the

```json
        "requestGoogleDriveScope" : true,
        "authenticationType" : "ServiceAuthentication",
        "email": "<email>",
-       "keyFilePath": "<.p12 or .json key path on the IR machine>"
+       "keyFilePath": "<.json key path on the IR machine>"
    },
    "connectVia": {
        "referenceName": "<name of Self-hosted Integration Runtime>",
```
articles/data-factory/connector-netezza.md (+9 −2)
@@ -6,7 +6,7 @@ author: jianleishen
  ms.subservice: data-movement
  ms.custom: synapse
  ms.topic: conceptual
- ms.date: 06/28/2024
+ ms.date: 12/02/2024
  ms.author: jianleishen
  ---

# Copy data from Netezza by using Azure Data Factory or Synapse Analytics
@@ -30,7 +30,11 @@ This Netezza connector is supported for the following capabilities:
For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).

- Netezza connector supports parallel copying from source. See the [Parallel copy from Netezza](#parallel-copy-from-netezza) section for details.
+ This Netezza connector supports:
+
+ - Parallel copying from source. See the [Parallel copy from Netezza](#parallel-copy-from-netezza) section for details.
+ - Netezza Performance Server version 11.
+ - The Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).

The service provides a built-in driver to enable connectivity. You don't need to manually install any driver to use this connector.
@@ -85,6 +89,9 @@ A typical connection string is `Server=<server>;Port=<port>;Database=<database>;
  |:--- |:--- |:--- |
  | SecurityLevel | The level of security that the driver uses for the connection to the data store. <br>Example: `SecurityLevel=preferredUnSecured`. Supported values are:<br/>- **Only unsecured** (**onlyUnSecured**): The driver doesn't use SSL.<br/>- **Preferred unsecured (preferredUnSecured) (default)**: If the server provides a choice, the driver doesn't use SSL. | No |

+ > [!NOTE]
+ > The connector doesn't support SSLv3 because it's [officially deprecated by Netezza](https://www.ibm.com/docs/en/netezza?topic=npssac-netezza-performance-server-client-encryption-security-1).
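For orientation, a linked service payload carrying this connection-string property might look like the following sketch. The shape mirrors the service-authentication example in the BigQuery article above rather than this page, and all names and values are placeholders:

```json
{
    "name": "NetezzaLinkedService",
    "properties": {
        "type": "Netezza",
        "typeProperties": {
            "connectionString": "Server=<server>;Port=<port>;Database=<database>;UID=<user name>;SecurityLevel=preferredUnSecured;PWD=<password>"
        },
        "connectVia": {
            "referenceName": "<name of Self-hosted Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```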
  - **Multiple vCenter Servers** - A single appliance can connect to up to 10 vCenter Servers.
- - **vCenter Server permissions** - VMware account used to access the vCenter server from the Azure Migrate appliance needs below permissions to replicate virtual machines:
+ - **vCenter Server permissions** - The VMware account used to access the vCenter server from the Azure Migrate appliance must have the following permissions assigned at all required levels - datacenter, cluster, host, VM, and datastore. Ensure permissions are applied at each level to avoid replication errors.

**Privilege Name in the vSphere Client** | **The purpose for the privilege** | **Required On** | **Privilege Name in the API**