`articles/active-directory/conditional-access/howto-conditional-access-policy-block-legacy.md` (+1 −1)

```diff
@@ -21,7 +21,7 @@ Due to the increased risk associated with legacy authentication protocols, Micro
 
 ## Create a Conditional Access policy
 
-The following steps will help create a Conditional Access policy to require those assigned administrative roles to perform multi-factor authentication.
+The following steps will help create a Conditional Access policy to block legacy authentication requests.
 
 1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
 1. Browse to **Azure Active Directory** > **Conditional Access**.
```
`articles/machine-learning/service/how-to-create-register-datasets.md` (+13 −2)

```diff
@@ -43,7 +43,7 @@ To create and work with datasets, you need:
 
 Datasets are categorized into two types based on how users consume them in training.
 
-* [TabularDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a pandas DataFrame. A `TabularDataset` object can be created from csv, tsv, parquet files, SQL query results etc. For a complete list, please visit our [documentation](https://aka.ms/tabulardataset-api-reference).
+* [TabularDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a pandas or Spark DataFrame. A `TabularDataset` object can be created from csv, tsv, parquet files, SQL query results, etc. For a complete list, please visit our [documentation](https://aka.ms/tabulardataset-api-reference).
 
 * [FileDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.file_dataset.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public urls. This provides you with the ability to download or mount the files to your compute. The files can be of any format, which enables a wider range of machine learning scenarios including deep learning.
```

````diff
 Use the [`from_sql_query()`](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py#from-sql-query-query--validate-true--set-column-types-none-) method on the `TabularDatasetFactory` class to read from Azure SQL Database.
+
+```Python
+from azureml.core import Dataset, Datastore
+
+# create tabular dataset from a SQL database in datastore
+sql_datastore = Datastore.get(workspace, 'mssql')
+sql_ds = Dataset.Tabular.from_sql_query((sql_datastore, 'SELECT * FROM my_table'))
+```
 
 Use the [`with_timestamp_columns()`](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py#with-timestamp-columns-fine-grain-timestamp--coarse-grain-timestamp-none--validate-false-) method on the `TabularDataset` class to enable easy and efficient filtering by time. More examples and details can be found [here](http://aka.ms/azureml-tsd-notebook).
 
 Registered datasets are accessible locally and remotely on compute clusters like the Azure Machine Learning compute. To access your registered Dataset across experiments, use the following code to get your workspace and registered dataset by name. The [`get_by_name()`](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#get-by-name-workspace--name--version--latest--) method on the `Dataset` class by default returns the latest version of the dataset registered with the workspace.
````
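The `get_by_name()` and `with_timestamp_columns()` calls described in that file can be sketched together. This is a minimal sketch against the `azureml-core` SDK; the local `config.json`, the `'weather-ds'` dataset name, and the `'datetime'` column are placeholder assumptions, not values from the docs article.

```Python
from datetime import datetime

from azureml.core import Dataset, Workspace

# Load the workspace from a local config.json (assumed to exist).
workspace = Workspace.from_config()

# get_by_name() returns the latest registered version by default;
# 'weather-ds' is a hypothetical dataset name.
weather_ds = Dataset.get_by_name(workspace, name='weather-ds')

# Mark a fine-grain timestamp column to enable time filtering;
# 'datetime' is a hypothetical column in the dataset.
ts_ds = weather_ds.with_timestamp_columns(fine_grain_timestamp='datetime')

# With a timestamp column set, the dataset supports time filters,
# e.g. keeping only rows after January 1, 2019.
recent_ds = ts_ds.time_after(datetime(2019, 1, 1))
```

A pinned version can be requested instead with `Dataset.get_by_name(workspace, name='weather-ds', version=2)` when an experiment must not silently pick up newer data.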
`includes/azure-monitor-limits-alerts.md` (+2 −2)

```diff
@@ -13,8 +13,8 @@ ms.custom: "include file"
 | Resource | Default limit | Maximum limit |
 | --- | --- | --- |
 | Metric alerts (classic) |100 active alert rules per subscription. | Call support. |
-| Metric alerts |1000 active alert rules per subscription (in public clouds) and 100 active alert rules per subscription in Azure China 21Vianet and Azure Government. | Call support. |
+| Metric alerts |1000 active alert rules per subscription in Azure public, Azure China 21Vianet, and Azure Government clouds. | Call support. |
 | Activity log alerts | 100 active alert rules per subscription. | Same as default. |
 | Log alerts | 512 | Call support. |
 | Action groups |2,000 action groups per subscription. | Call support. |
-| Autoscale settings |100 per region per subscription. | Same as default. |
+| Autoscale settings |100 per region per subscription. | Same as default. |
```