Commit 058b1d7
Syntax blocks azurecli.
1 parent: e878e72

12 files changed (+39 −39 lines)
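The convention this commit appears to enforce is one tag per fence purpose: `azurecli` for `az` commands, `console` for classic (Node.js) CLI and other shell sessions, `output` for printed results. A sketch of the convention, using snippets taken from the changed files:

````markdown
<!-- `az` commands: tag the fence azurecli -->
```azurecli
az cloud list --output table
```

<!-- classic CLI and shell sessions: tag the fence console -->
```console
azure login --username [email protected] --environment AzureGermanCloud
```

<!-- printed results: tag the fence output -->
```output
salesdata scripts templates
```
````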

articles/germany/germany-get-started-connect-with-cli.md

Lines changed: 6 additions & 6 deletions
````diff
@@ -33,23 +33,23 @@ There are multiple ways to [install the Azure CLI](https://docs.microsoft.com/cl

 To connect to Azure Germany, set the cloud:

-```
+```azurecli
 az cloud set --name AzureGermanCloud
 ```

 After the cloud is set, you can log in:

-```
+```azurecli
 az login --username [email protected]
 ```

 To confirm that the cloud is correctly set to AzureGermanCloud, run either of the following commands and then verify that the `isActive` flag is set to `true` for the AzureGermanCloud item:

-```
+```azurecli
 az cloud list
 ```

-```
+```azurecli
 az cloud list --output table
 ```

@@ -74,13 +74,13 @@ sudo npm install -g azure-cli

 After Azure CLI is installed, log in to Azure Germany:

-```
+```console
 azure login --username [email protected] --environment AzureGermanCloud
 ```

 After you're logged in, you can run Azure CLI commands as you normally would:

-```
+```console
 azure webapp list my-resource-group
 ```

````

articles/hdinsight/hdinsight-administer-use-dotnet-sdk.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -171,7 +171,7 @@ The impact of changing the number of data nodes for each type of cluster support
 Here is an example how to use the CLI command to rebalance the Storm topology:


-```cli
+```console
 ## Reconfigure the topology "mytopology" to use 5 worker processes,
 ## the spout "blue-spout" to use 3 executors, and
 ## the bolt "yellow-bolt" to use 10 executors
````

articles/hdinsight/hdinsight-custom-ambari-db.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -51,7 +51,7 @@ Edit the parameters in the `azuredeploy.parameters.json` to specify information

 You can begin the deployment using the Azure CLI. Replace `<RESOURCEGROUPNAME>` with the resource group where you want to deploy your cluster.

-```azure-cli
+```azurecli
 az group deployment create --name HDInsightAmbariDBDeployment \
 --resource-group <RESOURCEGROUPNAME> \
 --template-file azuredeploy.json \
````

articles/hdinsight/hdinsight-hadoop-create-linux-clusters-curl-rest.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -218,15 +218,15 @@ Follow the steps documented in [Get started with Azure CLI](https://docs.microso

 1. From a command line, use the following command to list your Azure subscriptions.

-```bash
+```azurecli
 az account list --query '[].{Subscription_ID:id,Tenant_ID:tenantId,Name:name}' --output table
 ```

 In the list, select the subscription that you want to use and note the **Subscription_ID** and __Tenant_ID__ columns. Save these values.

 2. Use the following command to create an application in Azure Active Directory.

-```bash
+```azurecli
 az ad app create --display-name "exampleapp" --homepage "https://www.contoso.org" --identifier-uris "https://www.contoso.org/example" --password <Your password> --query 'appId'
 ```

@@ -239,15 +239,15 @@ Follow the steps documented in [Get started with Azure CLI](https://docs.microso

 3. Use the following command to create a service principal using the **App ID**.

-```bash
+```azurecli
 az ad sp create --id <App ID> --query 'objectId'
 ```

 The value returned from this command is the __Object ID__. Save this value.

 4. Assign the **Owner** role to the service principal using the **Object ID** value. Use the **subscription ID** you obtained earlier.

-```bash
+```azurecli
 az role assignment create --assignee <Object ID> --role Owner --scope /subscriptions/<Subscription ID>/
 ```

````

articles/hdinsight/hdinsight-sales-insights-etl.md

Lines changed: 11 additions & 11 deletions
````diff
@@ -35,28 +35,28 @@ Download [Power BI Desktop](https://www.microsoft.com/download/details.aspx?id=4
 1. List your subscriptions by typing the command `az account list --output table`. Note the ID of the subscription that you will use for this project.
 1. Set the subscription you will use for this project and set the subscriptionID variable which will be used later.

-```cli
+```azurecli
 subscriptionID="<SUBSCRIPTION ID>"
 az account set --subscription $subscriptionID
 ```

 1. Create a new resource group for the project and set the resourceGroup variable which will be used later.

-```cli
+```azurecli
 resourceGroup="<RESOURCE GROUP NAME>"
 az group create --name $resourceGroup --location westus
 ```

 1. Download the data and scripts for this tutorial from the [HDInsight sales insights ETL repository](https://github.com/Azure-Samples/hdinsight-sales-insights-etl) by entering the following commands in Cloud Shell:

-```cli
+```console
 git clone https://github.com/Azure-Samples/hdinsight-sales-insights-etl.git
 cd hdinsight-sales-insights-etl
 ```

 1. Enter `ls` at the shell prompt to verify that the following files and directories have been created:

-```
+```output
 salesdata scripts templates
 ```

@@ -78,7 +78,7 @@ The `resources.sh` script contains the following commands. It is not required fo

 * `az group deployment create` - This command uses an Azure Resource Manager template (`resourcestemplate.json`) to create the specified resources with the desired configuration.

-```cli
+```azurecli
 az group deployment create --name ResourcesDeployment \
 --resource-group $resourceGroup \
 --template-file resourcestemplate.json \
@@ -87,7 +87,7 @@ The `resources.sh` script contains the following commands. It is not required fo

 * `az storage blob upload-batch` - This command uploads the sales data .csv files into the newly created Blob storage account by using this command:

-```cli
+```azurecli
 az storage blob upload-batch -d rawdata \
 --account-name <BLOB STORAGE NAME> -s ./ --pattern *.csv
 ```
@@ -111,7 +111,7 @@ The default password for SSH access to the clusters is `Thisisapassword1`. If yo

 > [!Note]
 > After you know the names of the storage accounts, you can get the account keys by using the following command at the Azure Cloud Shell prompt:
-> ```cli
+> ```azurecli
 > az storage account keys list \
 > --account-name <STORAGE NAME> \
 > --resource-group $rg \
@@ -174,19 +174,19 @@ For other ways to transform data by using HDInsight, see [this article on using

 1. Copy the `query.hql` file to the LLAP cluster by using SCP:

-```
+```console
 scp scripts/query.hql sshuser@<clustername>-ssh.azurehdinsight.net:/home/sshuser/
 ```

 2. Use SSH to access the LLAP cluster by using the following command, and then enter your password. If you haven't altered the `resourcesparameters.json` file, the password is `Thisisapassword1`.

-```
+```console
 ssh sshuser@<clustername>-ssh.azurehdinsight.net
 ```

 3. Use the following command to run the script:

-```
+```console
 beeline -u 'jdbc:hive2://localhost:10001/;transportMode=http' -f query.hql
 ```

@@ -211,7 +211,7 @@ After the data is loaded, you can experiment with the dashboard that you want to

 If you're not going to continue to use this application, delete all resources by using the following command so that you aren't charged for them.

-```azurecli-interactive
+```azurecli-interactive
 az group delete -n $resourceGroup
 ```

````

articles/hdinsight/hdinsight-scaling-best-practices.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -83,7 +83,7 @@ The impact of changing the number of data nodes varies for each type of cluster

 Here is an example CLI command to rebalance the Storm topology:

-```cli
+```console
 ## Reconfigure the topology "mytopology" to use 5 worker processes,
 ## the spout "blue-spout" to use 3 executors, and
 ## the bolt "yellow-bolt" to use 10 executors
@@ -137,11 +137,11 @@ When HDFS detects that the expected number of block copies aren't available, HDF

 ### Example errors when safe mode is turned on

-```
+```output
 org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /tmp/hive/hive/819c215c-6d87-4311-97c8-4f0b9d2adcf0. Name node is in safe mode.
 ```

-```
+```output
 org.apache.http.conn.HttpHostConnectException: Connect to active-headnode-name.servername.internal.cloudapp.net:10001 [active-headnode-name.servername. internal.cloudapp.net/1.1.1.1] failed: Connection refused
 ```

````

articles/hdinsight/hdinsight-storage-sharedaccesssignature-permissions.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -87,7 +87,7 @@ Save the SAS token that is produced at the end of each method. The token will lo

 Replace `RESOURCEGROUP`, `STORAGEACCOUNT`, and `STORAGECONTAINER` with the appropriate values for your existing storage container. Change directory to `hdinsight-dotnet-python-azure-storage-shared-access-signature-master` or revise the `-File` parameter to contain the absolute path for `Set-AzStorageblobcontent`. Enter the following PowerShell command:

-```PowerShell
+```powershell
 $resourceGroupName = "RESOURCEGROUP"
 $storageAccountName = "STORAGEACCOUNT"
 $containerName = "STORAGECONTAINER"
@@ -170,7 +170,7 @@ The use of variables in this section is based on a Windows environment. Slight v

 2. Set the retrieved primary key to a variable for later use. Replace `PRIMARYKEY` with the retrieved value in the prior step, and then enter the command below:

-```azurecli
+```console
 #set variable for primary key
 set AZURE_STORAGE_KEY=PRIMARYKEY
 ```
````

articles/hdinsight/kafka/apache-kafka-azure-container-services.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -119,7 +119,7 @@ Use the following steps to configure Kafka to advertise IP addresses instead of

 5. To configure Kafka to advertise IP addresses, add the following text to the bottom of the __kafka-env-template__ field:

-```
+```bash
 # Configure Kafka to advertise IP addresses instead of FQDN
 IP_ADDRESS=$(hostname -i)
 echo advertised.listeners=$IP_ADDRESS
@@ -171,7 +171,7 @@ At this point, Kafka and Azure Kubernetes Service are in communication through t

 5. Log in to your Azure Container Registry (ACR) and find the loginServer name:

-```bash
+```azurecli
 az acr login --name <acrName>
 az acr list --resource-group myResourceGroup --query "[].{acrLoginServer:loginServer}" --output table
 ```
````

articles/hdinsight/kafka/apache-kafka-connector-iot-hub.md

Lines changed: 6 additions & 6 deletions
````diff
@@ -160,7 +160,7 @@ To retrieve IoT hub information used by the connector, use the following steps:

 * __From the [Azure CLI](https://docs.microsoft.com/cli/azure/get-started-with-azure-cli)__, use the following command:

-```azure-cli
+```azurecli
 az iot hub show --name myhubname --query "{EventHubCompatibleName:properties.eventHubEndpoints.events.path,EventHubCompatibleEndpoint:properties.eventHubEndpoints.events.endpoint,Partitions:properties.eventHubEndpoints.events.partitionCount}"
 ```

@@ -184,15 +184,15 @@ To retrieve IoT hub information used by the connector, use the following steps:

 1. To get the primary key value, use the following command:

-```azure-cli
+```azurecli
 az iot hub policy show --hub-name myhubname --name service --query "primaryKey"
 ```

 Replace `myhubname` with the name of your IoT hub. The response is the primary key to the `service` policy for this hub.

 2. To get the connection string for the `service` policy, use the following command:

-```azure-cli
+```azurecli
 az iot hub show-connection-string --name myhubname --policy-name service --query "connectionString"
 ```

@@ -272,7 +272,7 @@ For more information on configuring the connector sink, see [https://github.com/

 Once the connector starts, send messages to IoT hub from your device(s). As the connector reads messages from the IoT hub and stores them in the Kafka topic, it logs information to the console:

-```text
+```output
 [2017-08-29 20:15:46,112] INFO Polling for data - Obtained 5 SourceRecords from IotHub (com.microsoft.azure.iot.kafka.connect.IotHubSourceTask:39)
 [2017-08-29 20:15:54,106] INFO Finished WorkerSourceTask{id=AzureIotHubConnector-0} commitOffsets successfully in 4 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:356)
 ```
@@ -292,7 +292,7 @@ From an SSH connection to the edge node, use the following command to start the

 As the connector runs, information similar to the following text is displayed:

-```text
+```output
 [2017-08-30 17:49:16,150] INFO Started tasks to send 1 messages to devices. (com.microsoft.azure.iot.kafka.connect.sink.
 IotHubSinkTask:47)
 [2017-08-30 17:49:16,150] INFO WorkerSinkTask{id=AzureIotHubSinkConnector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSinkTask:262)
@@ -342,7 +342,7 @@ To send messages through the connector, use the following steps:

 If you're using the simulated Raspberry Pi device, and it's running, the following message is logged by the device:

-```text
+```output
 Receive message: Turn On
 ```

````

articles/healthcare-apis/fhir-oss-cli-quickstart.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -39,7 +39,7 @@ az group deployment create -g $servicename --template-uri https://raw.githubuser

 Obtain a capability statement from the FHIR server with:

-```azurecli-interactive
+```console
 metadataurl="https://${servicename}.azurewebsites.net/metadata"
 curl --url $metadataurl
 ```
````
