articles/aks/configure-azure-cni-dynamic-ip-allocation.md — 0 additions & 1 deletion

@@ -28,7 +28,6 @@ This article shows you how to use Azure CNI networking for dynamic allocation of
 * Review the [prerequisites](/configure-azure-cni.md#prerequisites) for configuring basic Azure CNI networking in AKS, as the same prerequisites apply to this article.
 * Review the [deployment parameters](/configure-azure-cni.md#deployment-parameters) for configuring basic Azure CNI networking in AKS, as the same parameters apply.
-
 * Only Linux node clusters and node pools are supported.
articles/azure-functions/functions-run-local.md — 1 addition & 1 deletion

@@ -252,7 +252,7 @@ There are no additional considerations for PowerShell.
 # [TypeScript](#tab/ts)
-To use a `--worker-runtime` value of `node`, specify the `--language` as `javascript`.
+To use a `--worker-runtime` value of `node`, specify the `--language` as `typescript`.
 See the [TypeScript section in the JavaScript developer reference](functions-reference-node.md#typescript) for `func init` behaviors specific to TypeScript.
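As a hedged illustration of how these two flags combine, the following hypothetical commands scaffold a TypeScript project with Azure Functions Core Tools (the project and function names are invented for the example; assumes Core Tools is installed locally):

```shell
# Hypothetical sketch: create a TypeScript Functions project.
# --worker-runtime node selects the Node.js worker process;
# --language typescript makes `func init` emit TypeScript templates
# instead of JavaScript ones.
func init MyTsFunctionApp --worker-runtime node --language typescript
cd MyTsFunctionApp

# Add an HTTP-triggered function to the project.
func new --name HttpExample --template "HTTP trigger"
```

This is a sketch of the flag combination discussed in the diff above, not a full walkthrough of the article.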
 az group create --name $myResourceGroup --location eastus
 ```

 ## Get custom location information
@@ -75,7 +76,8 @@ Now that you have the custom location ID, you can query for the connected enviro
 A connected environment is largely the same as a standard Container Apps environment, but network restrictions are controlled by the underlying Arc-enabled Kubernetes cluster.

 ```azure-interactive
-myConnectedEnvironment = az containerapp connected-env list --custom-location customLocationId -o tsv --query '[].id'
+myContainerApp="my-container-app"
+myConnectedEnvironment=$(az containerapp connected-env list --custom-location $customLocationId -o tsv --query '[].id')
 ```

 ## Create an app
@@ -84,16 +86,15 @@ The following example creates a Node.js app.
-az containerapp browse --resource-group myResourceGroup \
-    --name myContainerApp
+az containerapp browse --resource-group $myResourceGroup --name $myContainerApp
 ```

 ## Get diagnostic logs using Log Analytics
@@ -112,7 +113,7 @@ let StartTime = ago(72h);
 let EndTime = now();
 ContainerAppsConsoleLogs_CL
 | where TimeGenerated between (StartTime .. EndTime)
-| where AppName_s =~ "myContainerApp"
+| where AppName_s =~ "my-container-app"
 ```

 The application logs for all the apps hosted in your Kubernetes cluster are logged to the Log Analytics workspace in the custom log table named `ContainerAppsConsoleLogs_CL`.

 LOG_ANALYTICS_KEY_ENC=$(printf %s $LOG_ANALYTICS_KEY | base64 -w0) # Needed for the next step
@@ -241,13 +239,13 @@ A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
 ```azurepowershell
 $LOG_ANALYTICS_WORKSPACE_ID=$(az monitor log-analytics workspace show `
     --resource-group $GROUP_NAME `
-    --workspace-name $wORKSPACE_NAME `
+    --workspace-name $WORKSPACE_NAME `
     --query customerId `
     --output tsv)
 $LOG_ANALYTICS_WORKSPACE_ID_ENC=[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($LOG_ANALYTICS_WORKSPACE_ID)) # Needed for the next step
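The base64 encoding used here for the workspace ID (and for the shared key in the bash variant) can be sketched in plain bash; the workspace ID below is a placeholder value, not a real one:

```shell
# Minimal sketch of base64-encoding a Log Analytics workspace ID for use
# as a protected extension setting. The GUID is a placeholder, not a real
# workspace ID.
LOG_ANALYTICS_WORKSPACE_ID="00000000-0000-0000-0000-000000000000"

# printf avoids the trailing newline that echo would append; -w0 (GNU
# coreutils) disables wrapping so the result is a single-line string.
LOG_ANALYTICS_WORKSPACE_ID_ENC=$(printf %s "$LOG_ANALYTICS_WORKSPACE_ID" | base64 -w0)

echo "$LOG_ANALYTICS_WORKSPACE_ID_ENC"
```

The same pattern applies to any value that must be passed base64-encoded, such as the shared key shown earlier.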
@@ -335,7 +333,7 @@ A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
 | - | - |
 | `Microsoft.CustomLocation.ServiceAccount` | The service account created for the custom location. It's recommended that it's set to the value `default`. |
 | `appsNamespace` | The namespace used to create the app definitions and revisions. It **must** match that of the extension release namespace. |
-| `CLUSTER_NAME` | The name of the Container Apps extension Kubernetes environment that will be created against this extension. |
+| `clusterName` | The name of the Container Apps extension Kubernetes environment that will be created against this extension. |
 | `logProcessor.appLogs.destination` | Optional. Destination for application logs. Accepts `log-analytics` or `none`; choosing `none` disables platform logs. |
 | `logProcessor.appLogs.logAnalyticsConfig.customerId` | Required only when `logProcessor.appLogs.destination` is set to `log-analytics`. The base64-encoded Log Analytics workspace ID. This parameter should be configured as a protected setting. |
 | `logProcessor.appLogs.logAnalyticsConfig.sharedKey` | Required only when `logProcessor.appLogs.destination` is set to `log-analytics`. The base64-encoded Log Analytics workspace shared key. This parameter should be configured as a protected setting. |
@@ -474,7 +472,8 @@ Before you can start creating apps in the custom location, you need an [Azure Co
 az containerapp connected-env create \
     --resource-group $GROUP_NAME \
     --name $CONNECTED_ENVIRONMENT_NAME \
-    --custom-location $CUSTOM_LOCATION_ID
+    --custom-location $CUSTOM_LOCATION_ID \
+    --location $LOCATION
 ```

 # [PowerShell](#tab/azure-powershell)

@@ -483,7 +482,8 @@ Before you can start creating apps in the custom location, you need an [Azure Co
articles/cosmos-db/nosql/how-to-delete-by-partition-key.md — 1 addition & 1 deletion

@@ -22,7 +22,7 @@ This article explains how to use the Azure Cosmos DB SDKs to delete all items by
 ## Feature overview

-The delete by partition key feature is an asynchronous, background operation that allows you to delete all documents with the same logical partition key value, using the Comsos SDK.
+The delete by partition key feature is an asynchronous, background operation that allows you to delete all documents with the same logical partition key value, using the Cosmos SDK.

 Because the number of documents to be deleted may be large, the operation runs in the background. Though the physical deletion operation runs in the background, the effects will be available immediately, as the documents to be deleted won't appear in the results of queries or read operations.
articles/openshift/howto-create-a-storageclass.md — 20 additions & 0 deletions

@@ -86,6 +86,15 @@ apiVersion: storage.k8s.io/v1
 metadata:
   name: azure-file
 provisioner: kubernetes.io/azure-file
+mountOptions:
+  - dir_mode=0777
+  - file_mode=0777
+  - uid=0
+  - gid=0
+  - mfsymlinks
+  - cache=strict
+  - actimeo=30
+  - noperm
 parameters:
   location: $LOCATION
   secretNamespace: kube-system
@@ -99,6 +108,17 @@ EOF
 oc create -f azure-storageclass-azure-file.yaml
 ```

+Mount options for Azure Files generally depend on the workload you are deploying and the requirements of the application. Specifically for Azure Files, there are additional parameters that you should consider using.
+
+Mandatory parameters:
+- "mfsymlinks" to map symlinks to a form the client can use
+- "noperm" to disable permission checks on the client side
+
+Recommended parameters:
+- "nosharesock" to disable reusing sockets if the client is already connected via an existing mount point
+- "actimeo=30" (or higher) to increase the time the CIFS client caches file and directory attributes
+- "nobrl" to disable sending byte range lock requests to the server, for applications which have problems with POSIX locks
+
 ## Change the default StorageClass (optional)

 The default StorageClass on ARO is called managed-premium and uses the azure-disk provisioner. Change this by issuing patch commands against the StorageClass manifests.
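A minimal sketch of such patch commands, assuming a logged-in `oc` session against the cluster and that the `azure-file` StorageClass created above should become the new default (the annotation name comes from upstream Kubernetes):

```shell
# Illustrative sketch only; requires an ARO cluster and oc login.
# Remove the default marker from managed-premium...
oc patch storageclass managed-premium -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'

# ...and mark the azure-file StorageClass as the default.
oc patch storageclass azure-file -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```

Only one StorageClass should carry the `is-default-class: "true"` annotation at a time, which is why the marker is removed from `managed-premium` first.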
articles/openshift/tutorial-create-cluster.md — 0 additions & 8 deletions

@@ -70,14 +70,6 @@ You will also need sufficient Azure Active Directory permissions (either a membe
 az provider register -n Microsoft.Authorization --wait
 ```

-1. Azure Red Hat Openshift is now available as a public preview in Azure government. If you are looking to deploy there, please follow these instructions:
-
-   > [!IMPORTANT]
-   > ARO preview features are available on a self-service, opt-in basis. Preview features are provided "as is" and "as available," and they are excluded from the service-level agreements and limited warranty. Preview features are partially covered by customer support on a best-effort basis. As such, these features are not meant for production use.
-
-   ```azurecli-interactive
-   az feature register --namespace Microsoft.RedHatOpenShift --name preview
articles/synapse-analytics/sql-data-warehouse/load-data-from-azure-blob-storage-using-copy.md — 1 addition & 1 deletion

@@ -78,7 +78,7 @@ The first step toward loading data is to login as LoaderRC20.
 3. Select **Connect**.

-4. When your connection is ready, you will see two server connections in Object Explorer. One connection as ServerAdmin and one connection as MedRCLogin.
+4. When your connection is ready, you will see two server connections in Object Explorer. One connection as ServerAdmin and one connection as LoaderRC20.

0 commit comments