
Commit c391ecd

Merge pull request #263925 from MicrosoftDocs/main
1/23 11:00 AM IST Publish
2 parents 594326d + 9e756b0 commit c391ecd

354 files changed: +1836 -1268 lines changed


articles/ai-services/policy-reference.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 ---
 title: Built-in policy definitions for Azure AI services
 description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 01/02/2024
+ms.date: 01/22/2024
 author: nitinme
 ms.author: nitinme
 ms.service: azure-ai-services

articles/ai-studio/how-to/index-add.md

Lines changed: 4 additions & 5 deletions
@@ -7,7 +7,7 @@ ms.service: azure-ai-studio
 ms.custom:
 - ignite-2023
 ms.topic: how-to
-ms.date: 11/15/2023
+ms.date: 01/15/2024
 ms.reviewer: eur
 ms.author: eur
 author: eric-urban
@@ -117,13 +117,12 @@ If the Azure AI resource the project uses was created through Azure portal:

 1. Open your AI Studio project
 1. In Flows, create a new Flow or open an existing flow
-1. On the top menu of the flow designer, select More tools, and then select Vector Index Lookup
+1. On the top menu of the flow designer, select **More tools**, and then select ***Index Lookup***

     :::image type="content" source="../media/index-retrieve/vector-index-lookup.png" alt-text="Screenshot of Vector index Lookup from More Tools." lightbox="../media/index-retrieve/vector-index-lookup.png":::

-1. Provide a name for your step and select **Add**.
-1. The Vector Index Lookup tool is added to the canvas. If you don't see the tool immediately, scroll to the bottom of the canvas
-1. Enter the path to your vector index, along with the query that you want to perform against the index.
+1. Provide a name for your Index Lookup Tool and select **Add**.
+1. Select the **mlindex_content** value box, and select your index. After completing this step, enter the queries and **query_types** to be performed against the index.

     :::image type="content" source="../media/index-retrieve/configure-index-lookup.png" alt-text="Screenshot of Configure Vector index Lookup." lightbox="../media/index-retrieve/configure-index-lookup.png":::

articles/aks/app-routing-nginx-prometheus.md

Lines changed: 1 addition & 1 deletion
@@ -210,5 +210,5 @@ Then upload the desired dashboard file and click on **Load**.
 [external-dns]: https://github.com/kubernetes-incubator/external-dns
 [kubectl]: https://kubernetes.io/docs/reference/kubectl/
 [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[grafana-nginx-dashboard]: https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/request-handling-performance.json
+[grafana-nginx-dashboard]: https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
 [grafana-nginx-request-performance-dashboard]: https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/request-handling-performance.json
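
As an editorial aside (not part of the commit): the retargeted `[grafana-nginx-dashboard]` link points at raw dashboard JSON, so a minimal sketch of fetching it for the manual upload via Grafana's **Load** dialog mentioned in the hunk context; the local filename is an arbitrary choice.

```bash
# Download the NGINX ingress dashboard JSON referenced by [grafana-nginx-dashboard];
# the output filename is arbitrary and only used for the manual import into Grafana.
curl -fsSLo nginx-dashboard.json \
  https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
```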

articles/aks/deploy-confidential-containers-default-policy.md

Lines changed: 86 additions & 65 deletions
@@ -175,8 +175,8 @@ To configure the workload identity, perform the following steps described in the
 * Create Kubernetes service account
 * Establish federated identity credential

->[!IMPORTANT]
->For the step to **Export environment variables**, set the value for the variable `SERVICE_ACCOUNT_NAMESPACE` to `kafka`.
+> [!IMPORTANT]
+> You need to set the *environment variables* from the section **Export environmental variables** in the [Deploy and configure workload identity][deploy-and-configure-workload-identity] article to continue completing this tutorial. Remember to set the variable `SERVICE_ACCOUNT_NAMESPACE` to `kafka`, and execute the command `kubectl create namespace kafka` before configuring workload identity.

 ## Deploy a trusted application with kata-cc and attestation container
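
As a hedged aside on the revised note above, a minimal sketch of the export and namespace creation it asks for; `SERVICE_ACCOUNT_NAME` and its value are placeholders, not values defined in this commit.

```bash
# The note requires SERVICE_ACCOUNT_NAMESPACE=kafka and the namespace to exist
# before workload identity is configured; the service account name is a placeholder.
export SERVICE_ACCOUNT_NAMESPACE="kafka"
export SERVICE_ACCOUNT_NAME="workload-identity-sa"   # placeholder name

kubectl create namespace kafka
```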
@@ -192,11 +192,13 @@ For this preview release, we recommend for test and evaluation purposes to either

 1. Grant the managed identity you created earlier, and your account, access to the key vault. [Assign][assign-key-vault-access-cli] both identities the **Key Vault Crypto Officer** and **Key Vault Crypto User** Azure RBAC roles.

-    >[!NOTE]
-    >The managed identity is the value you assign to the `USER_ASSIGNED_IDENTITY_NAME` variable.
-
-    >[!NOTE]
-    >To add role assignments, you must have `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Key Vault Data Access Administrator][key-vault-data-access-admin-rbac], [User Access Administrator][user-access-admin-rbac], or [Owner][owner-rbac].
+    > [!NOTE]
+    >
+    > - The managed identity is the value you assign to the `USER_ASSIGNED_IDENTITY_NAME` variable.
+    >
+    > - To add role assignments, you must have `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Key Vault Data Access Administrator][key-vault-data-access-admin-rbac], [User Access Administrator][user-access-admin-rbac], or [Owner][owner-rbac].
+    >
+    > - You must use the Key Vault Premium SKU to support HSM-protected keys.

     Run the following command to set the scope:
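
For context on the consolidated note above, a hedged sketch of assigning both named roles to the managed identity, reusing the `az role assignment create` pattern and the `$AKV_SCOPE` variable that appear in the next hunk; values are the same placeholders the article uses.

```bash
# Assign both Azure RBAC roles called out in the note to the managed identity.
az role assignment create --role "Key Vault Crypto Officer" --assignee "${USER_ASSIGNED_IDENTITY_NAME}" --scope $AKV_SCOPE
az role assignment create --role "Key Vault Crypto User" --assignee "${USER_ASSIGNED_IDENTITY_NAME}" --scope $AKV_SCOPE
```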
@@ -216,37 +218,36 @@ For this preview release, we recommend for test and evaluation purposes to either
     az role assignment create --role "Key Vault Crypto User" --assignee "${USER_ASSIGNED_IDENTITY_NAME}" --scope $AKV_SCOPE
     ``````

-1. Copy the following YAML manifest and save it as `producer.yaml`.
+1. Install the Kafka cluster in the kafka namespace by running the following command:

-    ```yml
-    apiVersion: v1
-    kind: Pod
-    metadata:
-      name: kafka-producer
-      namespace: kafka
-    spec:
-      containers:
-        - image: "mcr.microsoft.com/acc/samples/kafka/producer:1.0"
-          name: kafka-producer
-          command:
-            - /produce
-          env:
-            - name: TOPIC
-              value: kafka-demo-topic
-            - name: MSG
-              value: "Azure Confidential Computing"
-            - name: PUBKEY
-              value: |-
-                -----BEGIN PUBLIC KEY-----
-                MIIBojAN***AE=
-                -----END PUBLIC KEY-----
-          resources:
-            limits:
-              memory: 1Gi
-              cpu: 200m
+    ```bash
+    kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
     ```

-    Copy the following YAML manifest and save it as `consumer.yaml`. Update the value for the pod environmental variable `SkrClientAKVEndpoint` to match the URL of your Azure Key Vault, excluding the protocol value `https://`. The current value placeholder value is `myKeyVault.vault.azure.net`.
+1. Run the following command to apply the `kafka` cluster CR file.
+
+    ```bash
+    kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka
+    ```
+
+1. Prepare the RSA Encryption/Decryption key using the [bash script](https://github.com/microsoft/confidential-container-demos/raw/main/kafka/setup-key.sh) for the workload from GitHub. Save the file as `setup-key.sh`.
+
+1. Set the `MAA_ENDPOINT` environment variable with the FQDN of Attest URI by running the following command.
+
+    ```bash
+    export MAA_ENDPOINT="$(az attestation show --name "myattestationprovider" --resource-group "MyResourceGroup" --query 'attestUri' -o tsv | cut -c 9-)"
+    ```
+
+    Check that the FQDN of the Attest URI is in the correct format (the MAA_ENDPOINT should not include the prefix "https://"):
+
+    ```bash
+    echo $MAA_ENDPOINT
+    ```
+
+    > [!NOTE]
+    > To set up Microsoft Azure Attestation, see [Quickstart: Set up Azure Attestation with Azure CLI][attestation-quickstart-azure-cli].
+
+1. Copy the following YAML manifest and save it as `consumer.yaml`.

     ```yml
     apiVersion: v1
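
One of the steps added in the hunk above says to save the key-setup script from GitHub as `setup-key.sh`. As a hedged aside, a minimal sketch of fetching it with the URL given in that step:

```bash
# Download the key-setup script referenced in the step above; -L follows the
# redirect from github.com/.../raw/... to raw.githubusercontent.com.
curl -fsSLo setup-key.sh https://github.com/microsoft/confidential-container-demos/raw/main/kafka/setup-key.sh
chmod +x setup-key.sh
```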
@@ -313,54 +314,72 @@ For this preview release, we recommend for test and evaluation purposes to either
           targetPort: kafka-consumer
     ```

-1. Create a kafka namespace by running the following command:
-
-    ```bash
-    kubectl create namespace kafka
-    ```
-
-1. Install the Kafka cluster in the kafka namespace by running the following command:
-
-    ```bash
-    kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
-    ```
-
-1. Run the following command to apply the `kafka` cluster CR file.
-
-    ```bash
-    kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka
-    ```
+    > [!NOTE]
+    > Update the value for the pod environment variable `SkrClientAKVEndpoint` to match the URL of your Azure Key Vault, excluding the protocol value `https://`. The current placeholder value is `myKeyVault.vault.azure.net`.
+    > Update the value for the pod environment variable `SkrClientMAAEndpoint` with the value of `MAA_ENDPOINT`. You can find the value of `MAA_ENDPOINT` by running the command `echo $MAA_ENDPOINT` or the command `az attestation show --name "myattestationprovider" --resource-group "MyResourceGroup" --query 'attestUri' -o tsv | cut -c 9-`.

 1. Generate the security policy for the Kafka consumer YAML manifest and obtain the hash of the security policy stored in the `WORKLOAD_MEASUREMENT` variable by running the following command:

     ```bash
-    export WORKLOAD_MEASUREMENT=$(az confcom katapolicygen -y consumer.yaml --print-policy | base64 --decode | sha256sum | cut -d' ' -f1)
+    export WORKLOAD_MEASUREMENT=$(az confcom katapolicygen -y consumer.yaml --print-policy | base64 -d | sha256sum | cut -d' ' -f1)
     ```
-
-1. Prepare the RSA Encryption/Decryption key using the [bash script](https://github.com/microsoft/confidential-container-demos/raw/main/kafka/setup-key.sh) for the workload from GitHub. Save the file as `setup-key.sh`.
-
-1. Set the `MAA_ENDPOINT` environmental variable to match the value for the `SkrClientMAAEndpoint` from the `consumer.yaml` manifest file by running the following command.
-
-    ```bash
-    export MAA_ENDPOINT="<SkrClientMMAEndpoint value>"
-    ```
-
 1. To generate an RSA asymmetric key pair (public and private keys), run the `setup-key.sh` script using the following command. The `<Azure Key Vault URL>` value should be `<your-unique-keyvault-name>.vault.azure.net`

     ```bash
+    export MANAGED_IDENTITY=${USER_ASSIGNED_CLIENT_ID}
     bash setup-key.sh "kafka-encryption-demo" <Azure Key Vault URL>
     ```
+    > [!NOTE]
+    >
+    > - The environment variable `MANAGED_IDENTITY` is required by the bash script `setup-key.sh`.
+    >
+    > - The public key will be saved as `kafka-encryption-demo-pub.pem` after executing the bash script.
+
+    > [!IMPORTANT]
+    > If you receive the error `ForbiddenByRbac`, you might need to wait up to 24 hours as the backend services for managed identities maintain a cache per resource URI for up to 24 hours. See also: [Troubleshoot Azure RBAC][symptom-role-assignment-changes-are-not-being-detected].

-    Once the public key is downloaded, replace the `PUBKEY` environmental variable in the `producer.yaml` manifest with the public key. Paste the contents between the `-----BEGIN PUBLIC KEY-----` and `-----END PUBLIC KEY-----` strings.

 1. To verify the keys have been successfully uploaded to the key vault, run the following commands:

     ```azurecli-interactive
     az account set --subscription <Subscription ID>
-    az keyvault key list --vault-name <Name of vault> -o table
+    az keyvault key list --vault-name <KeyVault Name> -o table
+    ```
+
+1. Copy the following YAML manifest and save it as `producer.yaml`.
+
+    ```yml
+    apiVersion: v1
+    kind: Pod
+    metadata:
+      name: kafka-producer
+      namespace: kafka
+    spec:
+      containers:
+        - image: "mcr.microsoft.com/acc/samples/kafka/producer:1.0"
+          name: kafka-producer
+          command:
+            - /produce
+          env:
+            - name: TOPIC
+              value: kafka-demo-topic
+            - name: MSG
+              value: "Azure Confidential Computing"
+            - name: PUBKEY
+              value: |-
+                -----BEGIN PUBLIC KEY-----
+                MIIBojAN***AE=
+                -----END PUBLIC KEY-----
+          resources:
+            limits:
+              memory: 1Gi
+              cpu: 200m
     ```

+    > [!NOTE]
+    > Update the value that begins with `-----BEGIN PUBLIC KEY-----` and ends with `-----END PUBLIC KEY-----` with the content from `kafka-encryption-demo-pub.pem`, which was created in the previous step.
+
 1. Deploy the `consumer` and `producer` YAML manifests using the files you saved earlier.

     ```bash
@@ -447,3 +466,5 @@ kubectl delete pod pod-name
 [user-access-admin-rbac]: ../role-based-access-control/built-in-roles.md#user-access-administrator
 [owner-rbac]: ../role-based-access-control/built-in-roles.md#owner
 [az-attestation-show]: /cli/azure/attestation#az-attestation-show
+[attestation-quickstart-azure-cli]: ../attestation/quickstart-azure-cli.md
+[symptom-role-assignment-changes-are-not-being-detected]: ../role-based-access-control/troubleshooting.md#symptom---role-assignment-changes-are-not-being-detected
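
The step "Deploy the `consumer` and `producer` YAML manifests using the files you saved earlier" in the diff above is truncated before its command block. As a hedged sketch only, assuming a plain `kubectl apply` of the two saved files (the article's exact commands may differ):

```bash
# Apply the two manifests saved earlier and check that the pods start in the kafka namespace.
kubectl apply -f consumer.yaml
kubectl apply -f producer.yaml
kubectl get pods -n kafka
```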
