`articles/app-service/app-service-configure-premium-tier.md` (7 additions, 0 deletions)
@@ -26,9 +26,16 @@ The Premium V3 tier is available for both native and custom containers, includin
Premium V3, as well as specific Premium V3 SKUs, is available in some Azure regions, and availability in additional regions is being added continually. To see whether a specific Premium V3 offering is available in your region, run the following Azure CLI command in the [Azure Cloud Shell](../cloud-shell/overview.md), substituting _P1v3_ with the desired SKU:
**Windows** SKU availability
```azurecli-interactive
az appservice list-locations --sku P1V3
```
**Linux** SKU availability
```azurecli-interactive
az appservice list-locations --linux-workers-enabled --sku P1V3
```
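For scripted checks, the JSON that `az appservice list-locations` prints can be filtered programmatically. A minimal sketch, assuming the CLI's default JSON output is a list of objects with a `name` field; the function and the sample data are illustrative, not part of the CLI:

```python
import json

def sku_available(listing_json: str, region: str) -> bool:
    """Return True if `region` appears in the captured output of
    `az appservice list-locations --sku P1V3` (a JSON list of
    objects with a "name" field)."""
    locations = json.loads(listing_json)
    return any(loc.get("name", "").lower() == region.lower() for loc in locations)

# Hypothetical captured output for illustration:
sample = '[{"name": "East US"}, {"name": "West Europe"}]'
print(sku_available(sample, "west europe"))  # True
print(sku_available(sample, "UAE North"))    # False
```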
`articles/app-service/configure-custom-container.md` (3 additions, 3 deletions)
@@ -21,7 +21,7 @@ This guide provides key concepts and instructions for containerization of Window
::: zone pivot="container-linux"
-This guide provides key concepts and instructions for containerization of Linux apps in App Service. If you're new to Azure App Service, follow the [custom container quickstart](quickstart-custom-container.md) and [tutorial](tutorial-custom-container.md) first. For sidecar containers (preview), see [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md).
+This guide provides key concepts and instructions for containerization of Linux apps in App Service. If you're new to Azure App Service, follow the [custom container quickstart](quickstart-custom-container.md) and [tutorial](tutorial-custom-container.md) first. For sidecar containers, see [Tutorial: Configure a sidecar container for custom container in Azure App Service](tutorial-custom-container-sidecar.md).
::: zone-end
@@ -478,7 +478,7 @@ Further troubleshooting information is available at the Azure App Service blog:
## Configure multi-container apps
> [!NOTE]
-> Sidecar containers (preview) will succeed multi-container apps in App Service. To get started, see [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md).
+> Sidecar containers will succeed multi-container apps in App Service. To get started, see [Tutorial: Configure a sidecar container for custom container in Azure App Service](tutorial-custom-container-sidecar.md).
- [Use persistent storage in Docker Compose](#use-persistent-storage-in-docker-compose)
- [Preview limitations](#preview-limitations)
@@ -562,7 +562,7 @@ The following lists show supported and unsupported Docker Compose configuration
::: zone pivot="container-linux"
> [!div class="nextstepaction"]
-> [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md)
+> [Tutorial: Configure a sidecar container for custom container in Azure App Service](tutorial-custom-container-sidecar.md)
`articles/app-service/reference-app-settings.md` (6 additions, 3 deletions)
@@ -26,7 +26,8 @@ The following environment variables are related to the app environment in genera
|`WEBSITE_PLATFORM_VERSION`| Read-only. App Service platform version. ||
|`HOME`| Read-only. Path to the home directory (for example, `D:\home` for Windows). ||
|`SERVER_PORT`| Read-only. The port the app should listen to. ||
-|`WEBSITE_WARMUP_PATH`| A relative path to ping to warm up the app, beginning with a slash. The default is `/`, which pings the root path. The specific path can be pinged by an unauthenticated client, such as Azure Traffic Manager, even if [App Service authentication](overview-authentication-authorization.md) is set to reject unauthenticated clients. (NOTE: This app setting doesn't change the path used by AlwaysOn.) ||
+|`WEBSITE_WARMUP_PATH`| A relative path to ping to warm up the app, beginning with a slash. The default is `/robots933456.txt`. Whenever the platform starts a container, the orchestrator makes repeated requests against this endpoint, and any response is taken as an indication that the container is ready. Once the platform considers the container ready, it starts forwarding organic traffic to the newly started container. Unless `WEBSITE_WARMUP_STATUSES` is configured, any response from the container at this endpoint, even an error code such as 404 or 502, counts as ready. This app setting doesn't change the path used by AlwaysOn. ||
+|`WEBSITE_WARMUP_STATUSES`| A comma-delimited list of HTTP status codes that are considered successful when the platform makes warmup pings against a newly started container. Used in conjunction with `WEBSITE_WARMUP_PATH`. By default, any status code indicates that the container is ready for organic traffic; this app setting can be used to require a specific response before organic traffic is routed to the container. Example: `200,202`. If pings against the app's configured warmup path receive a 200 or 202 response, organic traffic is routed to the container. If a status code outside the list is received (such as 502), the platform keeps pinging until either a listed code is received or the container startup timeout is reached (see `WEBSITES_CONTAINER_START_TIME_LIMIT`). If the container never responds with a listed status code, the platform eventually fails the startup attempt and retries, which results in 503 errors. ||
|`WEBSITE_COMPUTE_MODE`| Read-only. Specifies whether app runs on dedicated (`Dedicated`) or shared (`Shared`) VM/s. ||
|`WEBSITE_SKU`| Read-only. SKU of the app. Possible values are `Free`, `Shared`, `Basic`, and `Standard`. ||
|`SITE_BITNESS`| Read-only. Shows whether the app is 32-bit (`x86`) or 64-bit (`AMD64`). ||
@@ -49,6 +50,9 @@ The following environment variables are related to the app environment in genera
|`WEBSITE_SCM_SEPARATE_STATUS`| Read-only. Shows whether the Kudu app is running in a separate process (`1`) or not (`0`). ||
|`WEBSITE_DNS_ATTEMPTS`| Number of times to attempt name resolution. ||
|`WEBSITE_DNS_TIMEOUT`| Number of seconds to wait for name resolution. ||
+|`WEBSITES_CONTAINER_START_TIME_LIMIT`| The amount of time (in seconds) that the platform waits for a container to become ready on startup. This setting applies to both code-based and container-based apps on App Service for Linux. The default value is `230`. When a container starts, repeated pings are made against it to gauge its readiness to serve organic traffic (see `WEBSITE_WARMUP_PATH` and `WEBSITE_WARMUP_STATUSES`). The pings continue until either a successful response is received or the start time limit is reached. If the container isn't deemed ready within the configured timeout, the platform fails the startup attempt and retries, which results in 503 errors. For App Service for Windows Containers, the default start time limit is 10 minutes, and you can change it by specifying a timespan such as `00:05:00` (5 minutes). ||
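The interaction between `WEBSITE_WARMUP_PATH`, `WEBSITE_WARMUP_STATUSES`, and the start time limit can be pictured as a polling loop. The following is an illustrative sketch of the documented behavior, not platform code, and every name in it is hypothetical:

```python
import time

def wait_until_ready(ping, allowed_statuses=None, time_limit=230, interval=1.0):
    """Poll the warmup path until the container looks ready or time runs out.

    ping            -- callable returning an HTTP status code (may raise
                       ConnectionError while the container is starting).
    allowed_statuses -- like WEBSITE_WARMUP_STATUSES; None means any
                       response at all counts as ready.
    time_limit      -- like WEBSITES_CONTAINER_START_TIME_LIMIT, in seconds.
    """
    deadline = time.monotonic() + time_limit
    while time.monotonic() < deadline:
        try:
            status = ping()
        except ConnectionError:
            time.sleep(interval)  # container not reachable yet; keep pinging
            continue
        if allowed_statuses is None or status in allowed_statuses:
            return True           # considered ready; organic traffic is routed
        time.sleep(interval)      # response received, but not an allowed status
    return False                  # startup attempt fails; platform retries (503s)

# With no allow-list, even a 502 response counts as "ready":
print(wait_until_ready(lambda: 502, time_limit=2))  # True
# With WEBSITE_WARMUP_STATUSES=200,202, a 502 never satisfies the check:
print(wait_until_ready(lambda: 502, {200, 202}, time_limit=1, interval=0.3))  # False
```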
<!--
WEBSITE_PROACTIVE_STACKTRACING_ENABLED
WEBSITE_CLOUD_NAME
@@ -336,8 +340,7 @@ For more information on custom containers, see [Run a custom container in Azure]
| Setting name| Description | Example |
338
342
|-|-|-|
-|`WEBSITES_ENABLE_APP_SERVICE_STORAGE`| For Linux custom containers: set to `true` to enable the `/home` directory to be shared across scaled instances. The default is `false` for Linux custom containers.<br/><br/>For Windows containers: set to `true` to enable the `c:\home` directory to be shared across scaled instances. The default is `true` for Windows containers.||
-|`WEBSITES_CONTAINER_START_TIME_LIMIT`| Amount of time in seconds to wait for the container to complete start-up before restarting the container. Default is `230`. You can increase it up to the maximum of `1800`. ||
+|`WEBSITES_ENABLE_APP_SERVICE_STORAGE`| For Linux containers, if this app setting is not specified, the `/home` directory is shared across scaled instances by default. You can set it to `false` to disable sharing.<br/><br/>For Windows containers: set to `true` to enable the `c:\home` directory to be shared across scaled instances. The default is `true` for Windows containers.||
|`WEBSITES_CONTAINER_STOP_TIME_LIMIT`| Amount of time in seconds to wait for the container to terminate gracefully. Default is `5`. You can increase it to a maximum of `120`. ||
|`DOCKER_REGISTRY_SERVER_URL`| URL of the registry server, when running a custom container in App Service. For security, this variable isn't passed on to the container. |`https://<server-name>.azurecr.io`|
|`DOCKER_REGISTRY_SERVER_USERNAME`| Username to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable isn't passed on to the container. ||
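As a concrete illustration, settings from this table can be applied with `az webapp config appsettings set`; `<group-name>` and `<app-name>` are placeholders, and the chosen values are examples only, not recommendations:

```azurecli-interactive
# Share /home across scaled-out Linux container instances and allow
# up to 300 seconds for the container to become ready on startup.
az webapp config appsettings set \
    --resource-group <group-name> \
    --name <app-name> \
    --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true \
               WEBSITES_CONTAINER_START_TIME_LIMIT=300
```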
`articles/energy-data-services/how-to-integrate-elastic-logs-with-azure-monitor.md` (1 addition, 1 deletion)
@@ -1,6 +1,6 @@
---
title: Integrate elastic logs with Azure Monitor - Microsoft Azure Data Manager for Energy
-description: This is a how-to article on how to start collecting ElasticSearch logs in Azure Monitor, archiving them to a storage account, and querying them in Log Analytics workspace.
+description: This is a how-to article on how to start collecting Elasticsearch logs in Azure Monitor, archiving them to a storage account, and querying them in Log Analytics workspace.
[Network Security Group (NSG) flow logs](nsg-flow-logs-overview.md) provide information that can be used to understand ingress and egress IP traffic on network interfaces. These flow logs show outbound and inbound flows on a per NSG rule basis, the NIC the flow applies to, 5-tuple information about the flow (Source/Destination IP, Source/Destination Port, Protocol), and whether the traffic was allowed or denied.
-You can have many NSGs in your network with flow logging enabled. This amount of logging data makes it cumbersome to parse and gain insights from your logs. This article provides a solution to centrally manage these NSG flow logs using Grafana, an open source graphing tool, ElasticSearch, a distributed search and analytics engine, and Logstash, which is an open source server-side data processing pipeline.
+You can have many NSGs in your network with flow logging enabled. This amount of logging data makes it cumbersome to parse and gain insights from your logs. This article provides a solution to centrally manage these NSG flow logs using Grafana, an open source graphing tool, Elasticsearch, a distributed search and analytics engine, and Logstash, an open source server-side data processing pipeline.
## Scenario
-NSG flow logs are enabled using Network Watcher and are stored in Azure blob storage. A Logstash plugin is used to connect and process flow logs from blob storage and send them to ElasticSearch. Once the flow logs are stored in ElasticSearch, they can be analyzed and visualized into customized dashboards in Grafana.
+NSG flow logs are enabled using Network Watcher and are stored in Azure blob storage. A Logstash plugin is used to connect and process flow logs from blob storage and send them to Elasticsearch. Once the flow logs are stored in Elasticsearch, they can be analyzed and visualized in customized dashboards in Grafana.
@@ -32,7 +32,7 @@ For this scenario, you must have Network Security Group Flow Logging enabled on
### Setup considerations
-In this example Grafana, ElasticSearch, and Logstash are configured on an Ubuntu LTS Server deployed in Azure. This minimal setup is used for running all three components - they are all running on the same VM. This setup should only be used for testing and non-critical workloads. Logstash, Elasticsearch, and Grafana can all be architected to scale independently across many instances. For more information, see the documentation for each of these components.
+In this example, Grafana, Elasticsearch, and Logstash are configured on an Ubuntu LTS server deployed in Azure. This minimal setup runs all three components on the same VM and should only be used for testing and non-critical workloads. Logstash, Elasticsearch, and Grafana can all be architected to scale independently across many instances. For more information, see the documentation for each of these components.
36
36
37
37
### Install Logstash
@@ -47,7 +47,7 @@ The following instructions are used to install Logstash in Ubuntu. For instructi
sudo dpkg -i logstash-5.2.0.deb
```
-2. Configure Logstash to parse the flow logs and send them to ElasticSearch. Create a Logstash.conf file using:
+2. Configure Logstash to parse the flow logs and send them to Elasticsearch. Create a logstash.conf file using:
```bash
sudo touch /etc/logstash/conf.d/logstash.conf
```
@@ -137,7 +137,7 @@ The input section designates the input source of the logs that Logstash will pro
The filter section then flattens each flow log file so that each individual flow tuple and its associated properties becomes a separate Logstash event.
-Finally, the output section forwards each Logstash event to the ElasticSearch server. Feel free to modify the Logstash config file to suit your specific needs.
+Finally, the output section forwards each Logstash event to the Elasticsearch server. Feel free to modify the Logstash config file to suit your specific needs.
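Putting the three sections together, a minimal `logstash.conf` sketch might look like the following. The option names come from the `azureblob` input plugin and standard Logstash plugins, but the account, key, and index values are placeholders, and the actual filter used in this article is considerably longer than this:

```
input {
  azureblob {
    # Placeholder credentials; the container name is where NSG flow logs land.
    storage_account_name => "<storage-account>"
    storage_access_key   => "<access-key>"
    container            => "insights-logs-networksecuritygroupflowevent"
  }
}
filter {
  # Flatten each flow log record so every individual flow tuple
  # becomes its own Logstash event (simplified here).
  json { source => "message" }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "nsg-flow-logs"
  }
}
```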
### Install the Logstash input plugin for Azure Blob storage
For more information about this plugin, see [Logstash input plugin for Azure Storage Blobs](https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-input-azureblob).
-### Install ElasticSearch
+### Install Elasticsearch
-You can use the following script to install ElasticSearch. For information about installing ElasticSearch, see [Elastic Stack](https://www.elastic.co/guide/en/elastic-stack/current/index.html).
+You can use the following script to install Elasticsearch. For information about installing Elasticsearch, see [Elastic Stack](https://www.elastic.co/guide/en/elastic-stack/current/index.html).
@@ -177,21 +177,21 @@ sudo service grafana-server start
For additional installation information, see [Installing on Debian / Ubuntu](https://docs.grafana.org/installation/debian/).
-#### Add the ElasticSearch server as a data source
+#### Add the Elasticsearch server as a data source
-Next, you need to add the ElasticSearch index containing flow logs as a data source. You can add a data source by selecting **Add data source** and completing the form with the relevant information. A sample of this configuration can be found in the following screenshot:
+Next, you need to add the Elasticsearch index containing flow logs as a data source. You can add a data source by selecting **Add data source** and completing the form with the relevant information. A sample of this configuration can be found in the following screenshot:

#### Create a dashboard
-Now that you have successfully configured Grafana to read from the ElasticSearch index containing NSG flow logs, you can create and personalize dashboards. To create a new dashboard, select**Create your first dashboard**. The following sample graph configuration shows flows segmented by NSG rule:
+Now that you have successfully configured Grafana to read from the Elasticsearch index containing NSG flow logs, you can create and personalize dashboards. To create a new dashboard, select **Create your first dashboard**. The following sample graph configuration shows flows segmented by NSG rule:
-By integrating Network Watcher with ElasticSearch and Grafana, you now have a convenient and centralized way to manage and visualize NSG flow logs as well as other data. Grafana has a number of other powerful graphing features that can also be used to further manage flow logs and better understand your network traffic. Now that you have a Grafana instance set up and connected to Azure, feel free to continue to explore the other functionality that it offers.
+By integrating Network Watcher with Elasticsearch and Grafana, you now have a convenient and centralized way to manage and visualize NSG flow logs as well as other data. Grafana has a number of other powerful graphing features that can also be used to further manage flow logs and better understand your network traffic. Now that you have a Grafana instance set up and connected to Azure, feel free to continue to explore the other functionality that it offers.