Commit 970be8e

Merge pull request #118963 from JoeyC-Dev/patch-1
Fix grammar and format in deploy-confidential-containers-default-policy.md
2 parents 767064c + 90076a0 commit 970be8e

File tree: 1 file changed, +16 -16 lines changed

articles/aks/deploy-confidential-containers-default-policy.md

Lines changed: 16 additions & 16 deletions
@@ -182,7 +182,7 @@ To configure the workload identity, perform the following steps described in the
 
 The following steps configure end-to-end encryption for Kafka messages using encryption keys managed by [Azure Managed Hardware Security Modules][azure-managed-hsm] (mHSM). The key is only released when the Kafka consumer runs within a Confidential Container with an Azure attestation secret provisioning container injected in to the pod.
 
-This configuration is basedon the following four components:
+This configuration is based on the following four components:
 
 * Kafka Cluster: A simple Kafka cluster deployed in the Kafka namespace on the cluster.
 * Kafka Producer: A Kafka producer running as a vanilla Kubernetes pod that sends encrypted user-configured messages using a public key to a Kafka topic.
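
For reference, the "released only inside a Confidential Container" behavior described in this hunk comes from an exportable key that carries a secure-key-release policy in the managed HSM. The following is a minimal sketch only, with placeholder names and a placeholder policy file, not the article's commands; the article's linked `setup-key.sh` script prepares the key for the workload in a later step.

```bash
# Sketch only; $MHSM_NAME, $KEY_NAME, and release-policy.json are placeholders.
# An exportable RSA-HSM key with a release policy is what allows the key to be
# released only to an attested confidential environment.
az keyvault key create \
  --hsm-name "$MHSM_NAME" \
  --name "$KEY_NAME" \
  --kty RSA-HSM \
  --exportable true \
  --policy ./release-policy.json
```
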
@@ -196,7 +196,7 @@ For this preview release, we recommend for test and evaluation purposes to eithe
 >The managed identity is the value you assigned to the `USER_ASSIGNED_IDENTITY_NAME` variable.
 
 >[!NOTE]
->To add role assignments, you must have `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Key Vault Data Access Administrator][key-vault-data-access-admin-rbac], [User Access Administrator][user-access-admin-rbac],or [Owner][owner-rbac].
+>To add role assignments, you must have `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Key Vault Data Access Administrator][key-vault-data-access-admin-rbac], [User Access Administrator][user-access-admin-rbac], or [Owner][owner-rbac].
 
 Run the following command to set the scope:
 
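
A minimal sketch of what setting the scope and granting the managed identity access could look like, assuming the Azure CLI, a vault name in `$KEYVAULT_NAME`, a resource group in `$RESOURCE_GROUP`, and the "Key Vault Crypto Service Release User" role; the exact command and role the article uses may differ.

```bash
# Sketch with assumed variable names; substitute your own resource names.
AKV_SCOPE=$(az keyvault show --name "$KEYVAULT_NAME" --query id --output tsv)

# Grant the user-assigned managed identity permission to release the key at that scope.
az role assignment create \
  --role "Key Vault Crypto Service Release User" \
  --assignee-object-id "$(az identity show --name "$USER_ASSIGNED_IDENTITY_NAME" \
      --resource-group "$RESOURCE_GROUP" --query principalId --output tsv)" \
  --assignee-principal-type ServicePrincipal \
  --scope "$AKV_SCOPE"
```
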
@@ -313,19 +313,19 @@ For this preview release, we recommend for test and evaluation purposes to eithe
 targetPort: kafka-consumer
 ```
 
-1. Create a Kafka namespace by running the following command:
+1. Create a kafka namespace by running the following command:
 
 ```bash
 kubectl create namespace kafka
 ```
 
-1. Install the Kafka cluster in the Kafka namespace by running the following command::
+1. Install the Kafka cluster in the kafka namespace by running the following command:
 
 ```bash
 kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
 ```
 
-1. Run the following command to apply the `Kafka` cluster CR file.
+1. Run the following command to apply the `kafka` cluster CR file.
 
 ```bash
 kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka
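
Once the Kafka cluster CR from the step above is applied, a handy follow-up check (not part of this hunk) is to wait for the cluster to become ready; `my-cluster` is the cluster name defined in the referenced `kafka-persistent-single.yaml` example.

```bash
# Wait for the Strimzi-managed Kafka cluster to report Ready, then list its pods.
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
kubectl get pods -n kafka
```
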
@@ -338,7 +338,7 @@ For this preview release, we recommend for test and evaluation purposes to eithe
 
 ```
 
-1. Prepare the RSA Encryption/Decryption key by [https://github.com/microsoft/confidential-container-demos/blob/main/kafka/setup-key.sh] the Bash script for the workload from GitHub. Save the file as `setup-key.sh`.
+1. Prepare the RSA Encryption/Decryption key by the [bash script](https://github.com/microsoft/confidential-container-demos/raw/main/kafka/setup-key.sh) for the workload from GitHub. Save the file as `setup-key.sh`.
 
 1. Set the `MAA_ENDPOINT` environmental variable to match the value for the `SkrClientMAAEndpoint` from the `consumer.yaml` manifest file by running the following command.
 
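
The two steps in this hunk might be wired together roughly as follows; the shared attestation endpoint shown is only an example value, and the real value must match `SkrClientMAAEndpoint` in `consumer.yaml`.

```bash
# Download the key setup script referenced above and make it executable.
curl -fsSL -o setup-key.sh \
  https://github.com/microsoft/confidential-container-demos/raw/main/kafka/setup-key.sh
chmod +x setup-key.sh

# Example value only: use the SkrClientMAAEndpoint value from consumer.yaml.
export MAA_ENDPOINT="sharedeus2.eus2.attest.azure.net"
```
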
@@ -377,19 +377,19 @@ For this preview release, we recommend for test and evaluation purposes to eithe
 kubectl get svc consumer -n kafka
 ```
 
-Copy and paste the external IP address of the consumer service into your browser and observe the decrypted message.
+1. Copy and paste the external IP address of the consumer service into your browser and observe the decrypted message.
 
-The following resemblers the output of the command:
-
-```output
-Welcome to Confidential Containers on AKS!
-Encrypted Kafka Message:
-Msg 1: Azure Confidential Computing
-```
+The following resembles the output of the command:
+
+```output
+Welcome to Confidential Containers on AKS!
+Encrypted Kafka Message:
+Msg 1: Azure Confidential Computing
+```
 
-You should also attempt to run the consumer as a regular Kubernetes pod by removing the `skr container` and `kata-cc runtime class` spec. Since you aren't running the consumer with kata-cc runtime class, you no longer need the policy.
+1. You should also attempt to run the consumer as a regular Kubernetes pod by removing the `skr container` and `kata-cc runtime class` spec. Since you aren't running the consumer with kata-cc runtime class, you no longer need the policy.
 
-Remove the entire policy and observe the messages again in the browser after redeploying the workload. Messages appear as base64-encoded ciphertext because the private encryption key can't be retrieved. The key can't be retrieved because the consumer is no longer running in a confidential environment, and the `skr container` is missing, preventing decryption of messages.
+1. Remove the entire policy and observe the messages again in the browser after redeploying the workload. Messages appear as base64-encoded ciphertext because the private encryption key can't be retrieved. The key can't be retrieved because the consumer is no longer running in a confidential environment, and the `skr container` is missing, preventing decryption of messages.
 
 ## Cleanup
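
As an alternative to the browser check in this hunk, the same verification can be scripted; the jsonpath below assumes the consumer is exposed through a LoadBalancer service that reports an IP address rather than a hostname.

```bash
# Read the consumer service's external IP and fetch the page from the CLI.
EXTERNAL_IP=$(kubectl get svc consumer -n kafka \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://${EXTERNAL_IP}"
```
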
