articles/azure-monitor/alerts/alerts-overview.md (1 addition, 3 deletions)
@@ -77,9 +77,7 @@ The alert condition for stateful alerts is `fired`, until it is considered resol
For stateful alerts, while the alert itself is deleted after 30 days, the alert condition is stored until the alert is resolved, to prevent firing another alert, and so that notifications can be sent when the alert is resolved.
-Stateful log alerts have these limitations:
-- they can trigger up to 300 alerts per evaluation.
-- you can have a maximum of 6000 alerts with the `fired` alert condition.
+Stateful log alerts have limitations - details [here](https://learn.microsoft.com/azure/azure-monitor/service-limits#alerts).
This table describes when a stateful alert is considered resolved:
articles/data-factory/connector-microsoft-fabric-lakehouse.md (3 additions, 0 deletions)
@@ -524,6 +524,8 @@ Microsoft Fabric Lakehouse connector supports the following file formats. Refer
- [JSON format](format-json.md)
- [ORC format](format-orc.md)
- [Parquet format](format-parquet.md)
+
+To use the Fabric Lakehouse file-based connector with an inline dataset, choose the inline dataset type that matches your data format: DelimitedText, Avro, JSON, ORC, or Parquet.
### Microsoft Fabric Lakehouse Table in mapping data flow
@@ -579,6 +581,7 @@ sink(allowSchemaDrift: true,
skipDuplicateMapOutputs: true) ~> CustomerTable
```
+For the Fabric Lakehouse table-based connector with an inline dataset, use Delta as the dataset type. This lets you read and write data in Fabric Lakehouse tables.
articles/defender-for-cloud/concept-data-security-posture-prepare.md (2 additions, 1 deletion)
@@ -5,7 +5,7 @@ author: dcurwin
ms.author: dacurwin
ms.service: defender-for-cloud
ms.topic: conceptual
-ms.date: 01/14/2024
+ms.date: 01/28/2024
ms.custom: references_regions
---
@@ -22,6 +22,7 @@ Sensitive data discovery is available in the Defender CSPM, Defender for Storage
- Existing plan status shows as “Partial” rather than “Full” if one or more extensions aren't turned on.
- The feature is turned on at the subscription level.
- If sensitive data discovery is turned on, but Defender CSPM isn't enabled, only storage resources will be scanned.
+- If a subscription is enabled with Defender CSPM and you scanned the same resources with Purview in parallel, Purview's scan result is ignored and Microsoft Defender for Cloud's scanning results are displayed for the supported resource type.
articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md (1 addition, 1 deletion)
@@ -202,7 +202,7 @@ Your transparent gateway is now configured and ready to start forwarding telemet
## Provision a downstream device
-IoT Central relies on the Device Provisioning Service (DPS) to provision devices in IoT Central. Currently, IoT Edge can't use DPS provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device manually. To complete these steps, you need an environment with Python 3.6 (or higher) installed and internet connectivity. The [Azure Cloud Shell](https://shell.azure.com/) has Python 3.7 pre-installed:
+IoT Central relies on the Device Provisioning Service (DPS) to provision devices in IoT Central. Currently, IoT Edge can't use DPS to provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device manually. To complete these steps, you need an environment with Python installed and internet connectivity. Check the [Azure IoT Python SDK](https://github.com/Azure/azure-iot-sdk-python/blob/main/README.md) for current Python version requirements. The [Azure Cloud Shell](https://shell.azure.com/) has Python pre-installed:
1. Run the following command to install the `azure.iot.device` module:
Responsible Artificial Intelligence (Responsible AI) is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way. AI systems are the product of many decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, Responsible AI can help proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability, and transparency.
-Microsoft has developed a [Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf). It's a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For Microsoft, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in products and services that people use every day.
+Microsoft developed a [Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf). It's a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For Microsoft, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in products and services that people use every day.
This article demonstrates how Azure Machine Learning supports tools for enabling developers and data scientists to implement and operationalize the six principles.
@@ -49,9 +49,9 @@ When AI systems help inform decisions that have tremendous impacts on people's l
A crucial part of transparency is *interpretability*: the useful explanation of the behavior of AI systems and their components. Improving interpretability requires stakeholders to comprehend how and why AI systems function the way they do. The stakeholders can then identify potential performance issues, fairness issues, exclusionary practices, or unintended outcomes.
-**Transparency in Azure Machine Learning**: The [model interpretability](how-to-machine-learning-interpretability.md) and [counterfactual what-if](./concept-counterfactual-analysis.md) components of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) enable data scientists and developers to generate human-understandable descriptions of the predictions of a model.
+**Transparency in Azure Machine Learning**: The [model interpretability](how-to-machine-learning-interpretability.md) and [counterfactual what-if](./concept-counterfactual-analysis.md) components of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) enable data scientists and developers to generate human-understandable descriptions of the predictions of a model.
-The model interpretability component provides multiple views into a model's behavior:
+The model interpretability component provides multiple views into a model's behavior:
- *Global explanations*. For example, what features affect the overall behavior of a loan allocation model?
- *Local explanations*. For example, why was a customer's loan application approved or rejected?
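To make the global/local distinction concrete, permutation feature importance is one common technique behind global explanations: shuffle one feature at a time and measure how much the model's quality drops. This is an illustrative sketch with a toy loan model and a hand-rolled accuracy metric, not the Responsible AI dashboard's actual implementation:

```python
import random

def permutation_importance(model, rows, labels, n_features, metric):
    """Global explanation sketch: drop in the metric when each feature is shuffled."""
    baseline = metric(model, rows, labels)
    importances = []
    for f in range(n_features):
        # Copy the data, then shuffle only column f to break its relationship
        # with the labels while keeping its marginal distribution.
        shuffled = [list(r) for r in rows]
        column = [r[f] for r in shuffled]
        random.shuffle(column)
        for row, value in zip(shuffled, column):
            row[f] = value
        importances.append(baseline - metric(model, shuffled, labels))
    return importances

# Toy "loan model": approve (1) when income (feature 0) exceeds a threshold.
def toy_model(row):
    return 1 if row[0] > 50 else 0

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

random.seed(0)
rows = [[income, age] for income in (20, 40, 60, 80) for age in (25, 45)]
labels = [toy_model(r) for r in rows]  # age (feature 1) is irrelevant by construction
scores = permutation_importance(toy_model, rows, labels, 2, accuracy)
print(scores)  # shuffling age never changes predictions, so scores[1] == 0.0
```

A local explanation, by contrast, would attribute a single prediction (one applicant's approval) to that row's feature values rather than averaging over the whole dataset.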
@@ -76,7 +76,7 @@ As AI becomes more prevalent, protecting privacy and securing personal and busin
- Scan for vulnerabilities.
- Apply and audit configuration policies.
-Microsoft has also created two open-source packages that can enable further implementation of privacy and security principles:
+Microsoft also created two open-source packages that can enable further implementation of privacy and security principles:
- [SmartNoise](https://github.com/opendifferentialprivacy/smartnoise-core): Differential privacy is a set of systems and practices that help keep the data of individuals safe and private. In machine learning solutions, differential privacy might be required for regulatory compliance. SmartNoise is an open-source project (co-developed by Microsoft) that contains components for building differentially private systems that are global.
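The core mechanism behind many differentially private systems is adding noise calibrated to a query's *sensitivity* (how much one person's record can change the answer) and the privacy budget epsilon. A minimal hand-rolled sketch of the Laplace mechanism, illustrative only and not SmartNoise's API:

```python
import math
import random

def laplace_scale(sensitivity: float, epsilon: float) -> float:
    """Noise scale b = sensitivity / epsilon: smaller epsilon => stronger privacy => more noise."""
    return sensitivity / epsilon

def laplace_noise(b: float) -> float:
    """Sample Laplace(0, b) via an inverse-CDF transform of a uniform draw."""
    u = random.random() - 0.5
    return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    # A counting query changes by at most 1 when one person's record is
    # added or removed, so its sensitivity is 1.
    return true_count + laplace_noise(laplace_scale(1.0, epsilon))

random.seed(7)
print(laplace_scale(1.0, 0.5))        # 2.0: halving epsilon doubles the noise scale
print(private_count(100, epsilon=1.0))  # true count plus calibrated random noise
```

Production systems such as SmartNoise additionally track the cumulative privacy budget across queries and handle floating-point subtleties that a sketch like this ignores.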