Commit 625233e

Merge branch 'master' of https://github.com/MicrosoftDocs/azure-docs-pr into rolyon-rbac-assignedto

2 parents 4e602d7 + fb629b9

8 files changed: +53 −17 lines changed

articles/active-directory/conditional-access/terms-of-use.md

Lines changed: 3 additions & 1 deletion
```diff
@@ -12,7 +12,7 @@ ms.tgt_pltfrm: na
 ms.devlang: na
 ms.topic: conceptual
 ms.subservice: compliance
-ms.date: 05/15/2019
+ms.date: 05/23/2019
 ms.author: rolyon
 
 ms.collection: M365-identity-device-management
@@ -38,6 +38,8 @@ Azure AD Terms of use has the following capabilities:
 - Require employees or guests to accept your Terms of use before getting access.
 - Require employees or guests to accept your Terms of use on every device before getting access.
 - Require employees or guests to accept your Terms of use on a recurring schedule.
+- Require employees or guests to accept your Terms of use prior to registering security information in Azure Multi-Factor Authentication (MFA).
+- Require employees to accept your Terms of use prior to registering security information in Azure AD self-service password reset (SSPR).
 - Present general Terms of use for all users in your organization.
 - Present specific Terms of use based on a user attributes (ex. doctors vs nurses or domestic vs international employees, by using [dynamic groups](../users-groups-roles/groups-dynamic-membership.md)).
 - Present specific Terms of use when accessing high business impact applications, like Salesforce.
```

articles/active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -89,7 +89,7 @@ You can define app roles to target `users`, `applications`, or both. When availa
 "allowedMemberTypes": [
     "Application"
 ],
-"displayName": "Consumer Apps",
+"displayName": "ConsumerApps",
 "id": "47fbb575-859a-4941-89c9-0f7a6c30beac",
 "isEnabled": true,
 "description": "Consumer apps have access to the consumer data.",
```

articles/azure-netapp-files/azure-netapp-files-register.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -55,12 +55,12 @@ To use the service, you must register the Azure Resource Provider for Azure NetA
 
 The command output appears as follows:
 
-    "id": "/subscriptions/<SubID>/providers/Microsoft.Features/providers/Microsoft.NetApp/features/publicPreviewADC",
-    "name": "Microsoft.NetApp/publicPreviewADC"
+    "id": "/subscriptions/<SubID>/providers/Microsoft.Features/providers/Microsoft.NetApp/features/ANFGA",
+    "name": "Microsoft.NetApp/ANFGA"
 
 `<SubID>` is your subscription ID.
 
-If you do not see the feature name `Microsoft.NetApp/publicPreviewADC`, you do not have access to the service. Stop at this step. Follow instructions in [Submit a waitlist request for accessing the service](#waitlist) to request service access before continuing.
+If you do not see the feature name `Microsoft.NetApp/ANFGA`, you do not have access to the service. Stop at this step. Follow instructions in [Submit a waitlist request for accessing the service](#waitlist) to request service access before continuing.
 
 4. In the Azure Cloud Shell console, enter the following command to register the Azure Resource Provider:
```
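
Editor's note: the feature gate this page now checks is `Microsoft.NetApp/ANFGA`. Alongside the Cloud Shell step, here is a minimal sketch of verifying the flag programmatically; it assumes the `azure-mgmt-resource` and `azure-common` packages and an existing Azure CLI sign-in, and the exact client wiring may differ by SDK version:

```python
from azure.common.credentials import get_azure_cli_credentials
from azure.mgmt.resource import FeatureClient

# Reuse the Azure CLI sign-in; returns (credentials, subscription_id).
credentials, subscription_id = get_azure_cli_credentials()
client = FeatureClient(credentials, subscription_id)

# Look up the Azure NetApp Files feature flag named in this change.
feature = client.features.get(
    resource_provider_namespace='Microsoft.NetApp',
    feature_name='ANFGA',
)
print(feature.name, feature.properties.state)

# If the flag is missing or not 'Registered', follow the waitlist
# instructions in the article before continuing.
```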

articles/hdinsight/spark/apache-spark-overview.md

Lines changed: 7 additions & 5 deletions
```diff
@@ -7,12 +7,12 @@ ms.reviewer: jasonh
 ms.service: hdinsight
 ms.custom: hdinsightactive,mvc
 ms.topic: overview
-ms.date: 01/28/2019
+ms.date: 05/28/2019
 ms.author: hrasheed
 
 #customer intent: As a developer new to Apache Spark and Apache Spark in Azure HDInsight, I want to have a basic understanding of Microsoft's implementation of Apache Spark in Azure HDInsight so I can decide if I want to use it rather than build my own cluster.
-
 ---
+
 # What is Apache Spark in Azure HDInsight
 
 Apache Spark is a parallel processing framework that supports in-memory processing to boost the performance of big-data analytic applications. Apache Spark in Azure HDInsight is the Microsoft implementation of Apache Spark in the cloud. HDInsight makes it easier to create and configure a Spark cluster in Azure. Spark clusters in HDInsight are compatible with Azure Storage and Azure Data Lake Storage. So you can use HDInsight Spark clusters to process your data stored in Azure. For the components and the versioning information, see [Apache Hadoop components and versions in Azure HDInsight](../hdinsight-component-versioning.md).
@@ -40,7 +40,7 @@ Spark clusters in HDInsight offer a fully managed Spark service. Benefits of cre
 | Caching on SSDs |You can choose to cache data either in memory or in SSDs attached to the cluster nodes. Caching in memory provides the best query performance but could be expensive. Caching in SSDs provides a great option for improving query performance without the need to create a cluster of a size that is required to fit the entire dataset in memory. |
 | Integration with BI Tools |Spark clusters in HDInsight provide connectors for BI tools such as [Power BI](https://www.powerbi.com/) for data analytics. |
 | Pre-loaded Anaconda libraries |Spark clusters in HDInsight come with Anaconda libraries pre-installed. [Anaconda](https://docs.continuum.io/anaconda/) provides close to 200 libraries for machine learning, data analysis, visualization, etc. |
-| Scalability | HDInsight allow you to change the number of cluster nodes. Also, Spark clusters can be dropped with no loss of data since all the data is stored in Azure Storage or Data Lake Storage. |
+| Scalability | HDInsight allows you to change the number of cluster nodes. Also, Spark clusters can be dropped with no loss of data since all the data is stored in Azure Storage or Data Lake Storage. |
 | SLA |Spark clusters in HDInsight come with 24/7 support and an SLA of 99.9% up-time. |
 
 Apache Spark clusters in HDInsight include the following components that are available on the clusters by default.
@@ -76,12 +76,14 @@ Spark clusters in HDInsight enable the following key scenarios:
 Apache Spark in HDInsight stores data in Azure Storage or Azure Data Lake Storage. Business experts and key decision makers can analyze and build reports over that data and use Microsoft Power BI to build interactive reports from the analyzed data. Analysts can start from unstructured/semi structured data in cluster storage, define a schema for the data using notebooks, and then build data models using Microsoft Power BI. Spark clusters in HDInsight also support a number of third-party BI tools such as Tableau making it easier for data analysts, business experts, and key decision makers.
 
 [Tutorial: Visualize Spark data using Power BI](apache-spark-use-bi-tools.md)
+
 - Spark Machine Learning
 
 Apache Spark comes with [MLlib](https://spark.apache.org/mllib/), a machine learning library built on top of Spark that you can use from a Spark cluster in HDInsight. Spark cluster in HDInsight also includes Anaconda, a Python distribution with a variety of packages for machine learning. Couple this with a built-in support for Jupyter and Zeppelin notebooks, and you have an environment for creating machine learning applications.
 
 [Tutorial: Predict building temperatures using HVAC data](apache-spark-ipython-notebook-machine-learning.md)
-[Tutorial: Predict food inspection results](apache-spark-machine-learning-mllib-ipython.md)
+[Tutorial: Predict food inspection results](apache-spark-machine-learning-mllib-ipython.md)
+
 - Spark streaming and real-time data analysis
 
 Spark clusters in HDInsight offer a rich support for building real-time analytics solutions. While Spark already has connectors to ingest data from many sources like Kafka, Flume, Twitter, ZeroMQ, or TCP sockets, Spark in HDInsight adds first-class support for ingesting data from Azure Event Hubs. Event Hubs is the most widely used queuing service on Azure. Having an out-of-the-box support for Event Hubs makes Spark clusters in HDInsight an ideal platform for building real-time analytics pipeline.
@@ -90,7 +92,7 @@ Spark clusters in HDInsight enable the following key scenarios:
 
 You can use the following articles to learn more about Apache Spark in HDInsight:
 
-- [QuickStart: Create an Apache Spark cluster in HDInsight and run interactive query using Jupyter](./apache-spark-jupyter-spark-sql-use-portal.md)
+- [Quickstart: Create an Apache Spark cluster in HDInsight and run interactive query using Jupyter](./apache-spark-jupyter-spark-sql-use-portal.md)
 - [Tutorial: Run an Apache Spark job using Jupyter](./apache-spark-load-data-run-query.md)
 - [Tutorial: Analyze data using BI tools](./apache-spark-use-bi-tools.md)
 - [Tutorial: Machine learning using Apache Spark](./apache-spark-ipython-notebook-machine-learning.md)
```
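
Editor's note: the overview's central point is that a Spark cluster in HDInsight queries data that lives in Azure Storage or Data Lake Storage. A minimal PySpark sketch of that pattern, with a hypothetical storage account, container, and file path:

```python
from pyspark.sql import SparkSession

# On an HDInsight cluster (e.g. in a Jupyter notebook) a session usually
# already exists; getOrCreate() picks it up instead of starting a new one.
spark = SparkSession.builder.appName("hdinsight-overview-sample").getOrCreate()

# Hypothetical names; wasbs:// addresses a container in Azure Blob Storage.
df = spark.read.csv(
    "wasbs://mycontainer@mystorageaccount.blob.core.windows.net/data/sample.csv",
    header=True,
    inferSchema=True,
)

# Run an interactive query over the data, as you would from Jupyter.
df.createOrReplaceTempView("sample")
spark.sql("SELECT COUNT(*) AS row_count FROM sample").show()
```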

articles/machine-learning/service/azure-machine-learning-release-notes.md

Lines changed: 15 additions & 0 deletions
```diff
@@ -20,6 +20,21 @@ In this article, learn about the Azure Machine Learning service releases. For a
 
 See [the list of known issues](resource-known-issues.md) to learn about known bugs and workarounds.
 
+## 2019-05-28
+
+### Azure Machine Learning Data Prep SDK v1.1.4
+
++ **New features**
+  + You can now use the following expression language functions to extract and parse datetime values into new columns.
+    + `RegEx.extract_record()` extracts datetime elements into a new column.
+    + `create_datetime()` creates datetime objects from separate datetime elements.
+  + When calling `get_profile()`, you can now see that quantile columns are labeled as (est.) to clearly indicate that the values are approximations.
+  + You can now use `**` globbing when reading from Azure Blob Storage.
+    + e.g. `dprep.read_csv(path='https://yourblob.blob.core.windows.net/yourcontainer/**/data/*.csv')`
+
++ **Bug fixes**
+  + Fixed a bug related to reading a Parquet file from a remote source (Azure Blob).
+
 ## 2019-05-14
 
 ### Azure Machine Learning SDK for Python v1.0.39
```
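
Editor's note: putting the v1.1.4 items together, a short sketch that uses the `**` glob and the profile labeling described above; the blob URL is the placeholder from the release note, and the `azureml-dataprep` package is assumed installed:

```python
import azureml.dataprep as dprep

# New in v1.1.4: ** globbing matches any folder depth in Blob Storage.
dataflow = dprep.read_csv(
    path='https://yourblob.blob.core.windows.net/yourcontainer/**/data/*.csv'
)

# Quantile columns in the profile are now labeled "(est.)" to flag
# that the values are approximations.
profile = dataflow.get_profile()
print(profile)
```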

articles/media-services/latest/streaming-locators-concept.md

Lines changed: 19 additions & 1 deletion
```diff
@@ -35,7 +35,7 @@ You can also specify the start and end time on your Streaming Locator, which wil
 * Properties of **Streaming Locators** that are of the Datetime type are always in UTC format.
 * You should design a limited set of policies for your Media Service account and reuse them for your Streaming Locators whenever the same options are needed. For more information, see [Quotas and limitations](limits-quotas-constraints.md).
 
-## Streaming Locator creation
+## Create Streaming Locators
 
 ### Not encrypted
 
@@ -80,6 +80,24 @@ See [Filters: associate with Streaming Locators](filters-concept.md#associating-
 
 See [Filtering, ordering, paging of Media Services entities](entities-overview.md).
 
+## List Streaming Locators by Asset name
+
+To get Streaming Locators based on the associated Asset name, use the following operations:
+
+|Language|API|
+|---|---|
+|REST|[liststreaminglocators](https://docs.microsoft.com/rest/api/media/assets/liststreaminglocators)|
+|CLI|[az ams asset list-streaming-locators](https://docs.microsoft.com/cli/azure/ams/asset?view=azure-cli-latest#az-ams-asset-list-streaming-locators)|
+|.NET|[ListStreamingLocators](https://docs.microsoft.com/dotnet/api/microsoft.azure.management.media.assetsoperationsextensions.liststreaminglocators?view=azure-dotnet#Microsoft_Azure_Management_Media_AssetsOperationsExtensions_ListStreamingLocators_Microsoft_Azure_Management_Media_IAssetsOperations_System_String_System_String_System_String_)|
+|Java|[AssetStreamingLocator](https://docs.microsoft.com/java/api/com.microsoft.azure.management.mediaservices.v2018_07_01.assetstreaminglocator?view=azure-java-stable)|
+|Node.js|[listStreamingLocators](https://docs.microsoft.com/javascript/api/azure-arm-mediaservices/assets?view=azure-node-latest#liststreaminglocators-string--string--string--object-)|
+
+## Also see
+
+* [Assets](assets-concept.md)
+* [Streaming Policies](streaming-policy-concept.md)
+* [Content Key Policies](content-key-policy-concept.md)
+
 ## Next steps
 
 [Tutorial: Upload, encode, and stream videos using .NET](stream-files-tutorial-with-api.md)
```
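
Editor's note: the new table covers REST, CLI, .NET, Java, and Node.js; the Python management SDK exposes the same operation. A minimal sketch, assuming the `azure-mgmt-media` package and placeholder service-principal and resource names:

```python
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.media import AzureMediaServices

# Placeholder credentials; substitute your own AAD app and tenant.
credentials = ServicePrincipalCredentials(
    client_id='<client-id>', secret='<client-secret>', tenant='<tenant-id>'
)
client = AzureMediaServices(credentials, '<subscription-id>')

# List the Streaming Locators associated with an Asset, by Asset name.
response = client.assets.list_streaming_locators(
    '<resource-group>', '<media-account-name>', '<asset-name>'
)
for locator in response.streaming_locators:
    print(locator.name, locator.streaming_locator_id)
```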

articles/sql-database/sql-database-managed-instance-connect-app.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -51,7 +51,7 @@ There are two options how to connect on-premises to Azure VNet:
 - Site-to-Site VPN connection ([Azure portal](../vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-portal.md), [PowerShell](../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md), [Azure CLI](../vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-cli.md))
 - [ExpressRoute](../expressroute/expressroute-introduction.md) connection
 
-If you've established on-premises to Azure connection successfully and you can't establish connection to Managed Instance, check if your firewall has open outbound connection on SQL port 1433 as well as 11000-12000 range of ports for redirection.
+If you've established the on-premises to Azure connection successfully and you can't establish a connection to Managed Instance, check whether your firewall allows outbound connections on SQL port 1433 as well as the 11000-11999 port range for redirection.
 
 ## Connect an application on the developers box
 
@@ -91,7 +91,7 @@ This scenario is illustrated in the following diagram:
 
 For troubleshooting connectivity issues, review the following:
 
-- If you are unable to connect to Managed Instance from an Azure virtual machine within the same VNet but different subnet, check if you have a Network Security Group set on VM subnet that might be blocking access.Additionally note that you need to open outbound connection on SQL port 1433 as well as ports in range 11000-12000 since those are needed for connecting via redirection inside the Azure boundary.
+- If you are unable to connect to Managed Instance from an Azure virtual machine within the same VNet but a different subnet, check whether a Network Security Group on the VM subnet might be blocking access. Additionally, note that you need to open outbound connections on SQL port 1433 as well as ports in the range 11000-11999, since those are needed for connecting via redirection inside the Azure boundary.
 - Ensure that BGP Propagation is set to **Enabled** for the route table associated with the VNet.
 - If using P2S VPN, check the configuration in the Azure portal to see if you see **Ingress/Egress** numbers. Non-zero numbers indicate that Azure is routing traffic to/from on-premises.
```
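
Editor's note: both fixes above narrow the redirection range to 11000-11999. A quick way to sanity-check outbound reachability from a VM or the on-premises side is a plain TCP probe; a sketch with a hypothetical Managed Instance host name:

```python
import socket

# Hypothetical FQDN; substitute your Managed Instance host name.
HOST = '<mi-name>.<dns-zone>.database.windows.net'

# Port 1433 plus a few samples from the 11000-11999 redirection range.
for port in (1433, 11000, 11500, 11999):
    try:
        with socket.create_connection((HOST, port), timeout=5):
            print(f'port {port}: open')
    except socket.timeout:
        # A timeout usually means a firewall or NSG is silently dropping traffic.
        print(f'port {port}: timed out (likely filtered)')
    except OSError as err:
        # An immediate refusal means the network path is open but nothing
        # answered on this particular port, which can be normal mid-range.
        print(f'port {port}: {err}')
```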

articles/stream-analytics/sql-reference-data.md

Lines changed: 3 additions & 4 deletions
```diff
@@ -169,6 +169,9 @@ When using the delta query, [temporal tables in Azure SQL Database](../sql-datab
 
 Note that Stream Analytics runtime may periodically run the snapshot query in addition to the delta query to store checkpoints.
 
+## Test your query
+It is important to verify that your query returns the dataset that the Stream Analytics job expects to use as reference data. To test your query, go to **Input** under the Job Topology section in the portal, and then select **Sample Data** on your SQL Database reference input. After the sample becomes available, download the file and check whether the returned data is as expected. To optimize your development and test iterations, it is recommended to use the [Stream Analytics tools for Visual Studio](https://docs.microsoft.com/azure/stream-analytics/stream-analytics-tools-for-visual-studio-install). You can also use any other tool of your preference to first ensure the query returns the right results from your Azure SQL Database, and then use it in your Stream Analytics job.
+
 ## FAQs
 
 **Will I incur additional cost by using SQL reference data input in Azure Stream Analytics?**
@@ -188,10 +191,6 @@ The combination of both of these metrics can be used to infer if the job is quer
 
 Azure Stream Analytics will work with any type of Azure SQL Database. However, it is important to understand that the refresh rate set for your reference data input could impact your query load. To use the delta query option, it is recommended to use temporal tables in Azure SQL Database.
 
-**Can I sample input from SQL Database reference data input?**
-
-This feature is not available.
-
 **Why does Azure Stream Analytics store snapshots in Azure Storage account?**
 
 Stream Analytics guarantees exactly once event processing and at least once delivery of events. In cases where transient issues impact your job, a small amount of replay is necessary to restore state. To enable replay, it is required to have these snapshots stored in an Azure Storage account. For more information on checkpoint replay, see [Checkpoint and replay concepts in Azure Stream Analytics jobs](stream-analytics-concepts-checkpoint-replay.md).
```
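
Editor's note: beyond sampling in the portal, the new section suggests verifying the reference query with any client tool. A minimal sketch doing that from Python with `pyodbc`; the server, database, credentials, and table name are placeholders:

```python
import pyodbc

# Placeholder connection details; substitute your own.
conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=yourserver.database.windows.net;'
    'DATABASE=yourdb;UID=youruser;PWD=yourpassword'
)

# Run the same snapshot query the Stream Analytics job will use as
# reference data and inspect a few rows before wiring it into the job.
cursor = conn.cursor()
cursor.execute('SELECT DeviceId, Threshold FROM dbo.DeviceThresholds')
for row in cursor.fetchmany(10):
    print(row)
conn.close()
```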
