# Tutorial: Deploy a PHP, MySQL, and Redis app to Azure App Service
This tutorial shows how to create a secure PHP app in Azure App Service that's connected to a MySQL database (using Azure Database for MySQL Flexible Server). You'll also deploy an Azure Cache for Redis to enable the caching code in your application. Azure App Service is a highly scalable, self-patching, web-hosting service that can easily deploy apps on Windows or Linux. When you're finished, you'll have a Laravel app running on Azure App Service on Linux.
:::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-browse-app-2.png" alt-text="Screenshot of the Azure app example titled Task List showing new tasks added.":::
@@ -85,7 +85,7 @@ Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
- **Virtual network** → Integrated with the App Service app and isolates back-end network traffic.
- **Private endpoints** → Access endpoints for the database server and the Redis cache in the virtual network.
- **Network interfaces** → Represents private IP addresses, one for each of the private endpoints.
- **Azure Database for MySQL Flexible Server** → Accessible only from behind its private endpoint. A database and a user are created for you on the server.
- **Azure Cache for Redis** → Accessible only from behind its private endpoint.
- **Private DNS zones** → Enable DNS resolution of the database server and the Redis cache in the virtual network.
articles/event-hubs/event-hubs-scalability.md
@@ -22,7 +22,7 @@ The throughput capacity of event hubs is controlled by **throughput units**. Thr
Beyond the capacity of the purchased throughput units, ingress is throttled and Event Hubs throws an [EventHubsException](/dotnet/api/azure.messaging.eventhubs.eventhubsexception) (with a Reason value of ServiceBusy). Egress doesn't produce throttling exceptions, but is still limited to the capacity of the purchased throughput units. If you receive publishing rate exceptions or are expecting to see higher egress, be sure to check how many throughput units you have purchased for the namespace. You can manage throughput units on the **Scale** page of the namespace in the [Azure portal](https://portal.azure.com). You can also manage throughput units programmatically using the [Event Hubs APIs](./event-hubs-samples.md).
Throughput units are prepurchased and are billed per hour. Once purchased, throughput units are billed for a minimum of one hour. Up to 40 throughput units can be purchased for an Event Hubs namespace and are shared across all event hubs in that namespace. The total ingress and egress capacity of these throughput units is also shared among all partitions and consumers within each event hub, meaning multiple consumers reading from the same partition must share the available bandwidth.
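
To make the throttling behavior concrete, here's a minimal Scala sketch that publishes a batch with the Azure Event Hubs Java client (`azure-messaging-eventhubs`) and backs off before retrying when a send fails. The connection string, event hub name, attempt count, and backoff are illustrative assumptions, and the client also applies its own retry policy for transient failures; in the .NET client, throttling surfaces as the `EventHubsException` with a `ServiceBusy` reason mentioned above.

```scala
import com.azure.messaging.eventhubs.{EventData, EventHubClientBuilder}

object ThrottleAwarePublisher {
  def main(args: Array[String]): Unit = {
    // Placeholders: supply your own namespace connection string and event hub name.
    val connectionString = sys.env("EVENTHUBS_CONNECTION_STRING")
    val eventHubName     = sys.env("EVENTHUB_NAME")

    val producer = new EventHubClientBuilder()
      .connectionString(connectionString, eventHubName)
      .buildProducerClient()

    val batch = producer.createBatch()
    (1 to 100).foreach(i => batch.tryAdd(new EventData(s"event-$i")))

    // If ingress exceeds the purchased throughput units, a send can fail with a
    // transient (server-busy) error. This loop simply waits and retries a few times;
    // tune the attempt count and backoff for your workload.
    var attempts = 0
    var sent = false
    while (!sent && attempts < 5) {
      try {
        producer.send(batch)
        sent = true
      } catch {
        case _: Exception =>
          attempts += 1
          Thread.sleep(1000L * attempts) // linear backoff between retries
      }
    }
    producer.close()
    if (!sent) sys.error("Batch could not be sent after 5 attempts")
  }
}
```

If sends are throttled regularly, increasing the namespace's throughput units (or enabling Auto-inflate, described below) is a more durable fix than retrying.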
The **Auto-inflate** feature of Event Hubs automatically scales up by increasing the number of throughput units to meet usage needs. Increasing throughput units prevents throttling scenarios, in which:
articles/event-hubs/includes/event-hubs-partition-count.md
@@ -10,6 +10,6 @@ ms.custom: "include file"
---
A [partition](../event-hubs-features.md#partitions) is a data organization mechanism that enables parallel publishing and consumption. While it supports parallel processing and scaling, total capacity remains limited by the namespace’s scaling allocation. We recommend that you balance scaling units (throughput units for the standard tier, processing units for the premium tier, or capacity units for the dedicated tier) and partitions to achieve optimal scale. In general, we recommend a maximum throughput of 1 MB/s per partition. Therefore, a rule of thumb for calculating the number of partitions would be to divide the maximum expected throughput by 1 MB/s. For example, if your use case requires 20 MB/s, we recommend that you choose at least 20 partitions to achieve the optimal throughput.
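
The rule of thumb is simple arithmetic; a small Scala sketch (the function and parameter names are illustrative) makes it explicit:

```scala
object PartitionPlanning {
  /** Rule-of-thumb partition count: expected peak throughput (MB/s) divided by
    * the ~1 MB/s recommended per partition, rounded up, with a floor of 1. */
  def recommendedPartitions(expectedMBPerSecond: Double, mbPerPartition: Double = 1.0): Int =
    math.max(1, math.ceil(expectedMBPerSecond / mbPerPartition).toInt)

  def main(args: Array[String]): Unit =
    println(recommendedPartitions(20.0)) // the 20 MB/s example above -> 20 partitions
}
```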
However, if you have a model in which your application has an affinity to a particular partition, increasing the number of partitions isn't beneficial. For more information, see [availability and consistency](../event-hubs-availability-and-consistency.md).
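
For example, a producer that pins related events to one partition with a partition key gains ordering for that key, but the stream is then bounded by that single partition's throughput, so adding partitions doesn't help it. Here's a minimal Scala sketch using the Azure Event Hubs Java client (the connection settings and partition key are illustrative assumptions):

```scala
import com.azure.messaging.eventhubs.{EventData, EventHubClientBuilder}
import com.azure.messaging.eventhubs.models.CreateBatchOptions

object PartitionAffinityExample {
  def main(args: Array[String]): Unit = {
    val producer = new EventHubClientBuilder()
      .connectionString(sys.env("EVENTHUBS_CONNECTION_STRING"), sys.env("EVENTHUB_NAME"))
      .buildProducerClient()

    // Events that share a partition key are routed to the same partition and keep
    // their relative order, but that one partition's throughput then caps this stream.
    val options = new CreateBatchOptions().setPartitionKey("device-42")
    val batch   = producer.createBatch(options)
    Seq("reading-1", "reading-2", "reading-3").foreach(m => batch.tryAdd(new EventData(m)))
    producer.send(batch)
    producer.close()
  }
}
```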
articles/hdinsight/hdinsight-hadoop-windows-tools.md
@@ -46,15 +46,15 @@ Examples of tasks you can do with the .NET SDK in Visual Studio:
* [Run Apache Hive queries using the .NET SDK](hadoop/apache-hadoop-use-hive-dotnet-sdk.md).
* [Use C# user-defined functions with Apache Hive and Apache Pig streaming on Apache Hadoop](hadoop/apache-hadoop-hive-pig-udf-dotnet-csharp.md).
## IntelliJ IDEA and Eclipse IDE for Spark clusters
Both [IntelliJ IDEA](https://www.jetbrains.com/idea/download) and the [Eclipse IDE](https://www.eclipse.org/downloads/) can be used to:
* Develop and submit a Scala Spark application on an HDInsight Spark cluster.
* Access Spark cluster resources.
* Develop and run a Scala Spark application locally (a minimal example follows this list).
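
For orientation, here's the shape of a minimal Scala Spark application of the kind these IDEs create and submit (a sketch only; the object name, app name, and input path are illustrative placeholders, not the tutorials' actual sample):

```scala
import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("WordCount").getOrCreate()
    import spark.implicits._

    // On an HDInsight cluster this would typically point at the cluster's default
    // storage; the path below is a placeholder.
    val counts = spark.read.textFile("/example/data/sample.txt")
      .flatMap(_.split("\\s+"))
      .groupBy("value")
      .count()

    counts.show(20) // print 20 of the word counts
    spark.stop()
  }
}
```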
These articles show how:
* IntelliJ IDEA: [Create Apache Spark applications using the Azure Toolkit for IntelliJ plug-in and the Scala SDK.](spark/apache-spark-intellij-tool-plugin.md)
* Eclipse IDE or Scala IDE for Eclipse: [Create Apache Spark applications and the Azure Toolkit for Eclipse](spark/apache-spark-eclipse-tool-plugin.md)
articles/hdinsight/spark/apache-spark-intellij-tool-debug-remotely-through-ssh.md
@@ -40,7 +40,7 @@ This article provides step-by-step guidance on how to use HDInsight Tools in [Az
* **Maven** for Scala project-creation wizard support.
* **SBT** for managing the dependencies and building for the Scala project.
:::image type="content" source="./media/apache-spark-intellij-tool-debug-remotely-through-ssh/hdinsight-create-projectfor-debug-remotely.png" alt-text="IntelliJ Create New Project Spark." border="true":::
1. Select **Next**.
@@ -53,7 +53,7 @@ This article provides step-by-step guidance on how to use HDInsight Tools in [Az
|Project SDK|If blank, select **New...** and navigate to your JDK.|
|Spark Version|The creation wizard integrates the proper version for Spark SDK and Scala SDK. If the Spark cluster version is earlier than 2.0, select **Spark 1.x**. Otherwise, select **Spark 2.x**. This example uses **Spark 2.3.0 (Scala 2.11.8)**.|
:::image type="content" source="./media/apache-spark-intellij-tool-debug-remotely-through-ssh/hdinsight-new-project.png" alt-text="IntelliJ New Project select Spark version." border="true":::
1. Select **Finish**. It may take a few minutes before the project becomes available. Watch the bottom right-hand corner for progress.
@@ -65,11 +65,11 @@ This article provides step-by-step guidance on how to use HDInsight Tools in [Az
1. Once the local run has completed, you can see the output file saved under **data** > **__default__** in your project explorer.
:::image type="content" source="./media/apache-spark-intellij-tool-debug-remotely-through-ssh/spark-local-run-result.png" alt-text="IntelliJ Project local run result." border="true":::
1. The tools set the default local run configuration automatically when you perform a local run or local debug. Open the **[Spark on HDInsight] XXX** configuration in the upper-right corner; you can see that **[Spark on HDInsight] XXX** is already created under **Apache Spark on HDInsight**. Switch to the **Locally Run** tab.
:::image type="content" source="./media/apache-spark-intellij-tool-debug-remotely-through-ssh/local-run-configuration.png" alt-text="IntelliJ Run debug configurations local run." border="true":::
- [Environment variables](#prerequisites): If you already set the system environment variable **HADOOP_HOME** to **C:\WinUtils**, it's detected automatically and doesn't need to be added manually.
- [WinUtils.exe Location](#prerequisites): If you haven't set the system environment variable, you can find the location by selecting its button.
@@ -89,35 +89,35 @@ This article provides step-by-step guidance on how to use HDInsight Tools in [Az
1. In the **Run/Debug Configurations** dialog box, select the plus sign (**+**). Then select the **Apache Spark on HDInsight** option.
:::image type="content" source="./media/apache-spark-intellij-tool-debug-remotely-through-ssh/hdinsight-add-new-Configuration.png" alt-text="IntelliJ Add new configuration." border="true":::
1. Switch to the **Remotely Run in Cluster** tab. Enter information for **Name**, **Spark cluster**, and **Main class name**. Then select **Advanced configuration (Remote Debugging)**. The tools support debugging with **Executors**. The default value of **numExecutors** is 5; it's best not to set it higher than 3.
:::image type="content" source="./media/apache-spark-intellij-tool-debug-remotely-through-ssh/hdinsight-run-debug-configurations.png" alt-text="IntelliJ Run debug configurations." border="true":::
1. In the **Advanced Configuration (Remote Debugging)** section, select **Enable Spark remote debug**. Enter the SSH username, and then enter a password or use a private key file. This setting is required only if you want to perform remote debugging; you can skip it if you just want to use remote run.
1. The configuration is now saved with the name you provided. To view the configuration details, select the configuration name. To make changes, select **Edit Configurations**.
1. After you complete the configuration settings, you can run the project against the remote cluster or perform remote debugging.
1. Set breakpoints, and then select the **Remote debug** icon. The difference from remote submission is that the SSH username and password need to be configured.
1. When the program execution reaches the breakpoint, you see a **Driver** tab and two **Executor** tabs in the **Debugger** pane. Select the **Resume Program** icon to continue running the code, which then reaches the next breakpoint. You need to switch to the correct **Executor** tab to find the target executor to debug. You can view the execution logs on the corresponding **Console** tab.
1. To dynamically update the variable value by using the IntelliJ debugging capability, select **Debug** again. The **Variables** pane appears again.
1. Right-click the target on the **Debug** tab, and then select **Set Value**. Next, enter a new value for the variable. Then select **Enter** to save the value.
1. Select the **Resume Program** icon to continue running the program. This time, no exception is caught, and the project runs successfully.
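
To make the **Set Value** step concrete, here's a tiny hypothetical Scala snippet (not the tutorial's actual sample) showing the kind of code where changing a variable at a breakpoint and resuming lets the run finish without an exception:

```scala
object DebugDemo {
  def main(args: Array[String]): Unit = {
    // Suppose a breakpoint is set on the division below. With the initial value of 0,
    // resuming would throw java.lang.ArithmeticException. Using Set Value on `divisor`
    // (for example, changing it to 2) before resuming lets the program complete.
    var divisor: Int = 0
    val result = 10 / divisor
    println(s"Result: $result")
  }
}
```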