articles/active-directory/conditional-access/terms-of-use.md (+3 −1)
@@ -12,7 +12,7 @@ ms.tgt_pltfrm: na
 ms.devlang: na
 ms.topic: conceptual
 ms.subservice: compliance
-ms.date: 05/15/2019
+ms.date: 05/23/2019
 ms.author: rolyon

 ms.collection: M365-identity-device-management
@@ -38,6 +38,8 @@ Azure AD Terms of use has the following capabilities:
 - Require employees or guests to accept your Terms of use before getting access.
 - Require employees or guests to accept your Terms of use on every device before getting access.
 - Require employees or guests to accept your Terms of use on a recurring schedule.
+- Require employees or guests to accept your Terms of use prior to registering security information in Azure Multi-Factor Authentication (MFA).
+- Require employees to accept your Terms of use prior to registering security information in Azure AD self-service password reset (SSPR).
 - Present general Terms of use for all users in your organization.
 - Present specific Terms of use based on user attributes (for example, doctors vs. nurses, or domestic vs. international employees, by using [dynamic groups](../users-groups-roles/groups-dynamic-membership.md)).
 - Present specific Terms of use when accessing high business impact applications, like Salesforce.
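The capabilities above are configured in the Azure portal, but Terms of use agreements can also be managed programmatically via the Microsoft Graph terms-of-use (`agreements`) API. Below is a minimal sketch of the JSON payload such a call might carry; the field names are assumptions based on the Graph `agreement` resource and should be verified against the current Graph reference before use:

```python
import json

# Hypothetical payload for creating a Terms of use agreement via Microsoft
# Graph (POST .../termsOfUse/agreements). Field names are assumptions; check
# the Graph "agreement" resource reference before relying on them.
agreement = {
    "displayName": "Contoso Terms of Use",
    # Maps to "accept your Terms of use on every device" above.
    "isPerDeviceAcceptanceRequired": True,
    # Maps to "accept your Terms of use on a recurring schedule" above
    # (ISO 8601 duration: re-accept every 90 days).
    "userReacceptRequiredFrequency": "P90D",
}

body = json.dumps(agreement)
```

The payload would be sent with an authenticated Graph request; building it separately, as here, lets you validate the shape before any call is made.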
-If you do not see the feature name `Microsoft.NetApp/publicPreviewADC`, you do not have access to the service. Stop at this step. Follow instructions in [Submit a waitlist request for accessing the service](#waitlist) to request service access before continuing.
+If you do not see the feature name `Microsoft.NetApp/ANFGA`, you do not have access to the service. Stop at this step. Follow instructions in [Submit a waitlist request for accessing the service](#waitlist) to request service access before continuing.

 4. In the Azure Cloud Shell console, enter the following command to register the Azure Resource Provider:
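The registration command itself was not captured in this diff view. As a sketch, the feature check and Resource Provider registration are typically done with Azure CLI commands along these lines (assumes an authenticated Cloud Shell session; the `ANFGA` feature name comes from the text above):

```shell
namespace="Microsoft.NetApp"
feature="ANFGA"

# Only attempt the cloud calls when a logged-in Azure CLI is available.
if az account show >/dev/null 2>&1; then
    # Check whether the feature is visible to this subscription.
    az feature show --namespace "$namespace" --name "$feature" --query properties.state
    # Register the Azure Resource Provider.
    az provider register --namespace "$namespace"
fi
```

Registration can take a few minutes; `az provider show --namespace Microsoft.NetApp` reports the current state.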
articles/hdinsight/spark/apache-spark-overview.md (+7 −5)
@@ -7,12 +7,12 @@ ms.reviewer: jasonh
 ms.service: hdinsight
 ms.custom: hdinsightactive,mvc
 ms.topic: overview
-ms.date: 01/28/2019
+ms.date: 05/28/2019
 ms.author: hrasheed

 #customer intent: As a developer new to Apache Spark and Apache Spark in Azure HDInsight, I want to have a basic understanding of Microsoft's implementation of Apache Spark in Azure HDInsight so I can decide if I want to use it rather than build my own cluster.
-
 ---
+

 # What is Apache Spark in Azure HDInsight

 Apache Spark is a parallel processing framework that supports in-memory processing to boost the performance of big-data analytic applications. Apache Spark in Azure HDInsight is the Microsoft implementation of Apache Spark in the cloud. HDInsight makes it easier to create and configure a Spark cluster in Azure. Spark clusters in HDInsight are compatible with Azure Storage and Azure Data Lake Storage. So you can use HDInsight Spark clusters to process your data stored in Azure. For the components and the versioning information, see [Apache Hadoop components and versions in Azure HDInsight](../hdinsight-component-versioning.md).
@@ -40,7 +40,7 @@ Spark clusters in HDInsight offer a fully managed Spark service. Benefits of cre
 | Caching on SSDs |You can choose to cache data either in memory or in SSDs attached to the cluster nodes. Caching in memory provides the best query performance but could be expensive. Caching in SSDs provides a great option for improving query performance without the need to create a cluster of a size that is required to fit the entire dataset in memory. |
 | Integration with BI Tools |Spark clusters in HDInsight provide connectors for BI tools such as [Power BI](https://www.powerbi.com/) for data analytics. |
 | Pre-loaded Anaconda libraries |Spark clusters in HDInsight come with Anaconda libraries pre-installed. [Anaconda](https://docs.continuum.io/anaconda/) provides close to 200 libraries for machine learning, data analysis, visualization, etc. |
-| Scalability | HDInsight allow you to change the number of cluster nodes. Also, Spark clusters can be dropped with no loss of data since all the data is stored in Azure Storage or Data Lake Storage. |
+| Scalability | HDInsight allows you to change the number of cluster nodes. Also, Spark clusters can be dropped with no loss of data since all the data is stored in Azure Storage or Data Lake Storage. |
 | SLA |Spark clusters in HDInsight come with 24/7 support and an SLA of 99.9% up-time. |

 Apache Spark clusters in HDInsight include the following components that are available on the clusters by default.
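The Scalability row in the table above corresponds to a cluster resize operation. With Azure CLI this looks roughly like the following (a sketch with placeholder names; the exact parameter name has varied across CLI versions, so check `az hdinsight resize --help`):

```shell
cluster="my-spark-cluster"          # placeholder cluster name
resource_group="my-resource-group"  # placeholder resource group
workers=10

# Only attempt the cloud call when a logged-in Azure CLI is available.
if az account show >/dev/null 2>&1; then
    # Scale the cluster to the desired number of worker nodes.
    # (Parameter name assumed; older CLI versions used --target-instance-count.)
    az hdinsight resize --name "$cluster" --resource-group "$resource_group" \
        --workernode-count "$workers"
fi
```

Because the data lives in Azure Storage or Data Lake Storage rather than on the nodes, resizing (or even deleting and recreating) the cluster does not lose data.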
@@ -76,12 +76,14 @@ Spark clusters in HDInsight enable the following key scenarios:
 Apache Spark in HDInsight stores data in Azure Storage or Azure Data Lake Storage. Business experts and key decision makers can analyze and build reports over that data and use Microsoft Power BI to build interactive reports from the analyzed data. Analysts can start from unstructured/semi-structured data in cluster storage, define a schema for the data using notebooks, and then build data models using Microsoft Power BI. Spark clusters in HDInsight also support a number of third-party BI tools such as Tableau, making it easier for data analysts, business experts, and key decision makers.

 [Tutorial: Visualize Spark data using Power BI](apache-spark-use-bi-tools.md)
+
 - Spark Machine Learning

 Apache Spark comes with [MLlib](https://spark.apache.org/mllib/), a machine learning library built on top of Spark that you can use from a Spark cluster in HDInsight. Spark clusters in HDInsight also include Anaconda, a Python distribution with a variety of packages for machine learning. Couple this with built-in support for Jupyter and Zeppelin notebooks, and you have an environment for creating machine learning applications.

 [Tutorial: Predict building temperatures using HVAC data](apache-spark-ipython-notebook-machine-learning.md)

 Spark clusters in HDInsight offer rich support for building real-time analytics solutions. While Spark already has connectors to ingest data from many sources like Kafka, Flume, Twitter, ZeroMQ, or TCP sockets, Spark in HDInsight adds first-class support for ingesting data from Azure Event Hubs. Event Hubs is the most widely used queuing service on Azure. Having out-of-the-box support for Event Hubs makes Spark clusters in HDInsight an ideal platform for building real-time analytics pipelines.
@@ -90,7 +92,7 @@ Spark clusters in HDInsight enable the following key scenarios:
 You can use the following articles to learn more about Apache Spark in HDInsight:

--[QuickStart: Create an Apache Spark cluster in HDInsight and run interactive query using Jupyter](./apache-spark-jupyter-spark-sql-use-portal.md)
+-[Quickstart: Create an Apache Spark cluster in HDInsight and run interactive query using Jupyter](./apache-spark-jupyter-spark-sql-use-portal.md)
 -[Tutorial: Run an Apache Spark job using Jupyter](./apache-spark-load-data-run-query.md)
 -[Tutorial: Analyze data using BI tools](./apache-spark-use-bi-tools.md)
 -[Tutorial: Machine learning using Apache Spark](./apache-spark-ipython-notebook-machine-learning.md)
articles/media-services/latest/streaming-locators-concept.md (+19 −1)
@@ -35,7 +35,7 @@ You can also specify the start and end time on your Streaming Locator, which wil
 * Properties of **Streaming Locators** that are of the Datetime type are always in UTC format.
 * You should design a limited set of policies for your Media Service account and reuse them for your Streaming Locators whenever the same options are needed. For more information, see [Quotas and limitations](limits-quotas-constraints.md).

-## Streaming Locator creation
+## Create Streaming Locators

 ### Not encrypted
@@ -80,6 +80,24 @@ See [Filters: associate with Streaming Locators](filters-concept.md#associating-
 See [Filtering, ordering, paging of Media Services entities](entities-overview.md).

+## List Streaming Locators by Asset name
+
+To get Streaming Locators based on the associated Asset name, use the following operations:
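In the v3 REST API, listing the Streaming Locators for an Asset is an action on the Asset resource. The sketch below only composes the management-plane request URL; the subscription, resource group, account, and asset names are placeholders, and the `listStreamingLocators` action and `2018-07-01` API version should be verified against the current Media Services REST reference:

```python
# Build the management-plane URL for the Assets "listStreamingLocators"
# action (Azure Media Services v3). All identifiers are placeholders.
def list_streaming_locators_url(subscription_id, resource_group, account,
                                asset, api_version="2018-07-01"):
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Media"
        f"/mediaServices/{account}"
        f"/assets/{asset}"
        f"/listStreamingLocators?api-version={api_version}"
    )

url = list_streaming_locators_url("00000000-0000-0000-0000-000000000000",
                                  "myResourceGroup", "myMediaAccount", "myAsset")
```

The request itself would be an authenticated POST to this URL; the management SDKs wrap the same operation.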
-If you've established on-premises to Azure connection successfully and you can't establish connection to Managed Instance, check if your firewall has open outbound connection on SQL port 1433 as well as 11000-12000 range of ports for redirection.
+If you've established the on-premises to Azure connection successfully and you can't establish a connection to Managed Instance, check whether your firewall has an open outbound connection on SQL port 1433, as well as the 11000-11999 range of ports for redirection.

 ## Connect an application on the developers box
@@ -91,7 +91,7 @@ This scenario is illustrated in the following diagram:
 For troubleshooting connectivity issues, review the following:

-- If you are unable to connect to Managed Instance from an Azure virtual machine within the same VNet but different subnet, check if you have a Network Security Group set on VM subnet that might be blocking access.Additionally note that you need to open outbound connection on SQL port 1433 as well as ports in range 11000-12000 since those are needed for connecting via redirection inside the Azure boundary.
+- If you are unable to connect to Managed Instance from an Azure virtual machine within the same VNet but a different subnet, check whether you have a Network Security Group set on the VM subnet that might be blocking access. Additionally, note that you need to open an outbound connection on SQL port 1433, as well as ports in the range 11000-11999, since those are needed for connecting via redirection inside the Azure boundary.
 - Ensure that BGP Propagation is set to **Enabled** for the route table associated with the VNet.
 - If using P2S VPN, check the configuration in the Azure portal to see if you see **Ingress/Egress** numbers. Non-zero numbers indicate that Azure is routing traffic to/from on-premises.
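The 1433 plus 11000-11999 requirement in the items above can be spot-checked from the client side with a short script. This is a sketch: the host name is a placeholder, and a failed connection here may also mean the firewall silently drops packets rather than rejecting them:

```python
import socket

# Ports Managed Instance needs outbound from the client: 1433 for the
# initial gateway connection, 11000-11999 for redirect connections
# inside the Azure boundary.
REDIRECT_PORTS = range(11000, 12000)  # 11000-11999 inclusive
REQUIRED_PORTS = [1433, *REDIRECT_PORTS]

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (placeholder host name):
# reachable = port_open("myinstance.public.dns.zone", 1433)
```

Checking all 1,001 ports one by one is slow over TCP timeouts; in practice you would sample a few redirect ports or verify the NSG/firewall rule itself.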
articles/stream-analytics/sql-reference-data.md (+3 −4)
@@ -169,6 +169,9 @@ When using the delta query, [temporal tables in Azure SQL Database](../sql-datab
 Note that the Stream Analytics runtime may periodically run the snapshot query in addition to the delta query to store checkpoints.

+## Test your query
+It is important to verify that your query returns the expected dataset that the Stream Analytics job will use as reference data. To test your query, go to Input under the Job Topology section in the portal. You can then select Sample Data on your SQL Database reference input. After the sample becomes available, you can download the file and check whether the data being returned is as expected. If you want to optimize your development and test iterations, it is recommended to use the [Stream Analytics tools for Visual Studio](https://docs.microsoft.com/azure/stream-analytics/stream-analytics-tools-for-visual-studio-install). You can also use any other tool of your preference to first ensure the query returns the right results from your Azure SQL Database, and then use it in your Stream Analytics job.
+
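The same "run the query first, then wire it into the job" advice can be followed with any SQL tooling. Below is a stand-in sketch using Python's built-in sqlite3 rather than Azure SQL (the table schema is invented for illustration) to show the idea of asserting on the shape of a reference dataset before the job depends on it:

```python
import sqlite3

# Stand-in for the Azure SQL reference table (schema invented for
# illustration; sqlite3 is used here only so the check is self-contained).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE devices (device_id TEXT PRIMARY KEY, region TEXT)")
conn.executemany("INSERT INTO devices VALUES (?, ?)",
                 [("dev-1", "westus"), ("dev-2", "eastus")])

# The snapshot query the Stream Analytics reference input would run.
reference_query = "SELECT device_id, region FROM devices"
rows = conn.execute(reference_query).fetchall()

# Verify the dataset looks the way the job expects before deploying.
assert all(len(row) == 2 for row in rows), "expected two columns per row"
```

Against the real database, the identical query can be run with any client before being pasted into the reference data input definition.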
 ## FAQs

 **Will I incur additional cost by using SQL reference data input in Azure Stream Analytics?**
@@ -188,10 +191,6 @@ The combination of both of these metrics can be used to infer if the job is quer
 Azure Stream Analytics will work with any type of Azure SQL Database. However, it is important to understand that the refresh rate set for your reference data input could impact your query load. To use the delta query option, it is recommended to use temporal tables in Azure SQL Database.

-**Can I sample input from SQL Database reference data input?**
-
-This feature is not available.
-
 **Why does Azure Stream Analytics store snapshots in an Azure Storage account?**

 Stream Analytics guarantees exactly-once event processing and at-least-once delivery of events. In cases where transient issues impact your job, a small amount of replay is necessary to restore state. To enable replay, these snapshots must be stored in an Azure Storage account. For more information on checkpoint replay, see [Checkpoint and replay concepts in Azure Stream Analytics jobs](stream-analytics-concepts-checkpoint-replay.md).