Commit 0328b95

Merge pull request #264909 from MicrosoftDocs/main
Merge main to live, 4 AM
2 parents 52f7333 + b5d1c0e commit 0328b95

46 files changed: +254 -66 lines changed


articles/azure-monitor/alerts/alerts-overview.md

Lines changed: 1 addition & 3 deletions

```diff
@@ -77,9 +77,7 @@ The alert condition for stateful alerts is `fired`, until it is considered resol
 
 For stateful alerts, while the alert itself is deleted after 30 days, the alert condition is stored until the alert is resolved, to prevent firing another alert, and so that notifications can be sent when the alert is resolved.
 
-Stateful log alerts have these limitations:
-- they can trigger up to 300 alerts per evaluation.
-- you can have a maximum of 6000 alerts with the `fired` alert condition.
+Stateful log alerts have limitations - details [here](https://learn.microsoft.com/azure/azure-monitor/service-limits#alerts).
 
 This table describes when a stateful alert is considered resolved:
 
```

articles/data-factory/connector-microsoft-fabric-lakehouse.md

Lines changed: 3 additions & 0 deletions

````diff
@@ -524,6 +524,8 @@ Microsoft Fabric Lakehouse connector supports the following file formats. Refer
 - [JSON format](format-json.md)
 - [ORC format](format-orc.md)
 - [Parquet format](format-parquet.md)
+
+To use the Fabric Lakehouse file-based connector with an inline dataset type, choose the inline dataset type that matches your data format: DelimitedText, Avro, JSON, ORC, or Parquet.
 
 ### Microsoft Fabric Lakehouse Table in mapping data flow
 
@@ -579,6 +581,7 @@ sink(allowSchemaDrift: true,
 	skipDuplicateMapOutputs: true) ~> CustomerTable
 
 ```
+For the Fabric Lakehouse table-based connector with an inline dataset type, use Delta as the dataset type. This allows you to read and write data from Fabric Lakehouse tables.
 
 ## Related content
 
````
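As a rough sketch of the table-based pattern described in the added lines above, a mapping data flow source reading a Lakehouse table through an inline Delta dataset could look like the following data flow script fragment. The stream name `LakehouseTableSource` is a made-up example, and the exact script properties depend on your linked service and source options:

```
source(allowSchemaDrift: true,
	validateSchema: false,
	format: 'delta') ~> LakehouseTableSource
```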
articles/defender-for-cloud/concept-data-security-posture-prepare.md

Lines changed: 2 additions & 1 deletion

```diff
@@ -5,7 +5,7 @@ author: dcurwin
 ms.author: dacurwin
 ms.service: defender-for-cloud
 ms.topic: conceptual
-ms.date: 01/14/2024
+ms.date: 01/28/2024
 ms.custom: references_regions
 ---
 
@@ -22,6 +22,7 @@ Sensitive data discovery is available in the Defender CSPM, Defender for Storage
 - Existing plan status shows as “Partial” rather than “Full” if one or more extensions aren't turned on.
 - The feature is turned on at the subscription level.
 - If sensitive data discovery is turned on, but Defender CSPM isn't enabled, only storage resources will be scanned.
+- If a subscription has Defender CSPM enabled and you scan the same resources with Purview in parallel, Purview's scan result is ignored and Microsoft Defender for Cloud's scanning results are displayed for the supported resource type.
 
 ## What's supported
 
```

articles/defender-for-cloud/concept-data-security-posture.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -5,7 +5,7 @@ author: dcurwin
 ms.author: dacurwin
 ms.service: defender-for-cloud
 ms.topic: conceptual
-ms.date: 10/26/2023
+ms.date: 01/28/2024
 ---
 # About data-aware security posture
 
```

articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -202,7 +202,7 @@ Your transparent gateway is now configured and ready to start forwarding telemet
 
 ## Provision a downstream device
 
-IoT Central relies on the Device Provisioning Service (DPS) to provision devices in IoT Central. Currently, IoT Edge can't use DPS provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device manually. To complete these steps, you need an environment with Python 3.6 (or higher) installed and internet connectivity. The [Azure Cloud Shell](https://shell.azure.com/) has Python 3.7 pre-installed:
+IoT Central relies on the Device Provisioning Service (DPS) to provision devices in IoT Central. Currently, IoT Edge can't use DPS to provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device manually. To complete these steps, you need an environment with Python installed and internet connectivity. Check the [Azure IoT Python SDK](https://github.com/Azure/azure-iot-sdk-python/blob/main/README.md) for current Python version requirements. The [Azure Cloud Shell](https://shell.azure.com/) has Python pre-installed:
 
 1. Run the following command to install the `azure.iot.device` module:
 
```
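If downstream devices like `thermostat1` are registered through a DPS symmetric-key group enrollment, each device's key is derived from the group key by signing the registration ID with HMAC-SHA256. That documented DPS pattern can be sketched with Python's standard library alone; the group key below is a made-up example, not a real credential:

```python
import base64
import hashlib
import hmac

def derive_device_key(group_key_b64: str, registration_id: str) -> str:
    """Derive a per-device symmetric key from a DPS enrollment-group key
    by signing the registration ID with HMAC-SHA256 (documented DPS pattern)."""
    key_bytes = base64.b64decode(group_key_b64)
    signed = hmac.new(key_bytes, registration_id.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(signed).decode("ascii")

# Made-up example group key; real keys come from your DPS enrollment group.
group_key = base64.b64encode(b"example-group-enrollment-key-32b").decode("ascii")
device_key = derive_device_key(group_key, "thermostat1")
print(device_key)  # prints a deterministic 44-character base64 string
```

The derived key is what you would supply as the device's symmetric key when provisioning it manually.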
articles/machine-learning/concept-responsible-ai.md

Lines changed: 5 additions & 5 deletions

```diff
@@ -9,7 +9,7 @@ ms.topic: conceptual
 ms.author: mesameki
 author: mesameki
 ms.reviewer: lagayhar
-ms.date: 11/09/2022
+ms.date: 01/31/2024
 ms.custom: responsible-ai, event-tier1-build-2022, build-2023, build-2023-dataai
 #Customer intent: As a data scientist, I want to learn what Responsible AI is and how I can use it in Azure Machine Learning.
 ---
@@ -20,7 +20,7 @@ ms.custom: responsible-ai, event-tier1-build-2022, build-2023, build-2023-dataai
 
 Responsible Artificial Intelligence (Responsible AI) is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way. AI systems are the product of many decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, Responsible AI can help proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability, and transparency.
 
-Microsoft has developed a [Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf). It's a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For Microsoft, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in products and services that people use every day.
+Microsoft developed a [Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf). It's a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For Microsoft, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in products and services that people use every day.
 
 This article demonstrates how Azure Machine Learning supports tools for enabling developers and data scientists to implement and operationalize the six principles.
 
@@ -49,9 +49,9 @@ When AI systems help inform decisions that have tremendous impacts on people's l
 
 A crucial part of transparency is *interpretability*: the useful explanation of the behavior of AI systems and their components. Improving interpretability requires stakeholders to comprehend how and why AI systems function the way they do. The stakeholders can then identify potential performance issues, fairness issues, exclusionary practices, or unintended outcomes.
 
-**Transparency in Azure Machine Learning**: The [model interpretability](how-to-machine-learning-interpretability.md) and [counterfactual what-if](./concept-counterfactual-analysis.md) components of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) enable data scientists and developers to generate human-understandable descriptions of the predictions of a model.
+**Transparency in Azure Machine Learning**: The [model interpretability](how-to-machine-learning-interpretability.md) and [counterfactual what-if](./concept-counterfactual-analysis.md) components of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) enable data scientists and developers to generate human-understandable descriptions of the predictions of a model.
 
-The model interpretability component provides multiple views into a model's behavior:
+The model interpretability component provides multiple views into a model's behavior:
 
 - *Global explanations*. For example, what features affect the overall behavior of a loan allocation model?
 - *Local explanations*. For example, why was a customer's loan application approved or rejected?
@@ -76,7 +76,7 @@ As AI becomes more prevalent, protecting privacy and securing personal and busin
 - Scan for vulnerabilities.
 - Apply and audit configuration policies.
 
-Microsoft has also created two open-source packages that can enable further implementation of privacy and security principles:
+Microsoft also created two open-source packages that can enable further implementation of privacy and security principles:
 
 - [SmartNoise](https://github.com/opendifferentialprivacy/smartnoise-core): Differential privacy is a set of systems and practices that help keep the data of individuals safe and private. In machine learning solutions, differential privacy might be required for regulatory compliance. SmartNoise is an open-source project (co-developed by Microsoft) that contains components for building differentially private systems that are global.
 
```
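To make the differential-privacy idea behind SmartNoise concrete, here is a toy Laplace-mechanism sketch in standard-library Python (this is an illustration of the concept, not SmartNoise's actual API): noise calibrated to a query's sensitivity and a privacy parameter epsilon is added to the true answer before it is released.

```python
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng: random.Random) -> float:
    """Return true_value plus Laplace(0, sensitivity/epsilon) noise.

    Toy illustration of differential privacy; production systems such as
    SmartNoise also track privacy budgets and handle floating-point issues.
    """
    scale = sensitivity / epsilon
    # The difference of two iid Exp(1) draws is Laplace(0, 1); scale it by b.
    noise = scale * (rng.expovariate(1.0) - rng.expovariate(1.0))
    return true_value + noise

# A count query has sensitivity 1: one person changes the count by at most 1.
rng = random.Random(0)  # fixed seed so the sketch is reproducible
private_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5, rng=rng)
print(private_count)  # the true count 42, perturbed by calibrated noise
```

Smaller epsilon means larger noise and therefore stronger privacy for the individuals in the data.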
articles/machine-learning/how-to-authenticate-batch-endpoint.md

Lines changed: 5 additions & 1 deletion

````diff
@@ -297,6 +297,10 @@ To successfully invoke a batch endpoint you need the following explicit actions
     "Microsoft.MachineLearningServices/workspaces/listStorageAccountKeys/action",
     "Microsoft.MachineLearningServices/workspaces/batchEndpoints/read",
     "Microsoft.MachineLearningServices/workspaces/batchEndpoints/deployments/read",
+    "Microsoft.MachineLearningServices/workspaces/batchEndpoints/write",
+    "Microsoft.MachineLearningServices/workspaces/batchEndpoints/deployments/write",
+    "Microsoft.MachineLearningServices/workspaces/batchEndpoints/deployments/jobs/write",
+    "Microsoft.MachineLearningServices/workspaces/batchEndpoints/jobs/write",
     "Microsoft.MachineLearningServices/workspaces/computes/read",
     "Microsoft.MachineLearningServices/workspaces/computes/listKeys/action",
     "Microsoft.MachineLearningServices/workspaces/metadata/secrets/read",
@@ -314,7 +318,7 @@ To successfully invoke a batch endpoint you need the following explicit actions
     "Microsoft.MachineLearningServices/workspaces/endpoints/pipelines/write",
     "Microsoft.MachineLearningServices/workspaces/environments/read",
     "Microsoft.MachineLearningServices/workspaces/environments/write",
-    "Microsoft.MachineLearningServices/workspaces/environments/build/action"
+    "Microsoft.MachineLearningServices/workspaces/environments/build/action",
     "Microsoft.MachineLearningServices/workspaces/environments/readSecrets/action"
 ]
 ```
````
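One way to grant a set of actions like the one above is a custom Azure role. The following is a sketch of a role-definition JSON showing only the batch-endpoint actions added in this commit; the role name, file name, and `<subscription-id>` scope are placeholders, and a real role would include the full action list from the article. It could be registered with `az role definition create --role-definition @batch-invoker-role.json`:

```json
{
  "Name": "Batch Endpoint Invoker (example)",
  "Description": "Example subset of actions for invoking batch endpoints.",
  "Actions": [
    "Microsoft.MachineLearningServices/workspaces/batchEndpoints/read",
    "Microsoft.MachineLearningServices/workspaces/batchEndpoints/write",
    "Microsoft.MachineLearningServices/workspaces/batchEndpoints/deployments/read",
    "Microsoft.MachineLearningServices/workspaces/batchEndpoints/deployments/write",
    "Microsoft.MachineLearningServices/workspaces/batchEndpoints/deployments/jobs/write",
    "Microsoft.MachineLearningServices/workspaces/batchEndpoints/jobs/write"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}
```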
