
Commit 2151765: Misc updates

1 parent 41ff1e7 commit 2151765

File tree: 3 files changed, +16 -2 lines


articles/iot-operations/deploy-iot-ops/howto-prepare-cluster.md

Lines changed: 1 addition & 1 deletion
@@ -84,7 +84,7 @@ This section provides steps to create clusters in validated environments on Linu
 
 To prepare a K3s Kubernetes cluster on Ubuntu:
 
-1. Install K3s following the instructions in the [K3s quick-start guide](https://docs.k3s.io/quick-start).
+1. Create a single-node or multi-node K3s cluster. For examples, see the [K3s quick-start guide](https://docs.k3s.io/quick-start) or [K3s related projects](https://docs.k3s.io/related-projects).
 
 1. Check to see that kubectl was installed as part of K3s. If not, follow the instructions to [Install kubectl on Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/).
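The revised step leaves the cluster topology to the reader; the follow-up step's kubectl check can be sketched in plain POSIX shell. This is a hedged illustration: the commented install one-liner is the command from the linked K3s quick-start, and the check itself only reports whether kubectl is on the PATH.

```shell
#!/bin/sh
# The K3s quick-start one-liner (from docs.k3s.io), commented out here
# because it installs system software:
#   curl -sfL https://get.k3s.io | sh -
#
# K3s normally provides kubectl; verify it before proceeding, and fall
# back to the standalone kubectl install instructions if it is missing.
if command -v kubectl >/dev/null 2>&1; then
  msg="kubectl: present"
else
  msg="kubectl: missing - install per kubernetes.io instructions"
fi
echo "$msg"
```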

articles/iot-operations/get-started-end-to-end-sample/quickstart-deploy.md

Lines changed: 12 additions & 0 deletions
@@ -139,6 +139,18 @@ To connect your cluster to Azure Arc:
 >[!TIP]
 >The value of `$CLUSTER_NAME` is automatically set to the name of your codespace. Replace the environment variable if you want to use a different name.
+
+1. Get the `objectId` of the Microsoft Entra ID application that the Azure Arc service uses in your tenant and save it as an environment variable. Run the following command exactly as written, without changing the GUID value.
+
+   ```azurecli
+   export OBJECT_ID=$(az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query id -o tsv)
+   ```
+
+1. Use the [az connectedk8s enable-features](/cli/azure/connectedk8s#az-connectedk8s-enable-features) command to enable custom location support on your cluster. This command uses the `objectId` of the Microsoft Entra ID application that the Azure Arc service uses. Run this command on the machine where you deployed the Kubernetes cluster:
+
+   ```azurecli
+   az connectedk8s enable-features -n <CLUSTER_NAME> -g <RESOURCE_GROUP> --custom-locations-oid $OBJECT_ID --features cluster-connect custom-locations
+   ```
+
 ## Create storage account and schema registry
 
 Schema registry is a synchronized repository that stores message definitions both in the cloud and at the edge. Azure IoT Operations requires a schema registry on your cluster. Schema registry requires an Azure storage account for the schema information stored in the cloud.
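The two added steps wire the Arc application's `objectId` into the feature-enablement call. As a hedged dry-run sketch, the sequence composes but does not execute the final command: `my-cluster` and `my-rg` are hypothetical placeholder values, the live `az ad sp show` lookup is left commented out because it needs an authenticated Azure CLI session, and the GUID is the fixed Arc application ID quoted in the diff.

```shell
#!/bin/sh
# Dry-run sketch of the two added quickstart steps.
CLUSTER_NAME="${CLUSTER_NAME:-my-cluster}"    # hypothetical placeholder
RESOURCE_GROUP="${RESOURCE_GROUP:-my-rg}"     # hypothetical placeholder
ARC_APP_ID="bc313c14-388c-4e7d-a58e-70017303ee3b"  # fixed GUID from the docs; do not change

# Step 1 (live version, requires az login):
#   OBJECT_ID=$(az ad sp show --id "$ARC_APP_ID" --query id -o tsv)
OBJECT_ID="${OBJECT_ID:-00000000-0000-0000-0000-000000000000}"  # stand-in value

# Step 2: compose the enable-features call with that objectId.
CMD="az connectedk8s enable-features -n $CLUSTER_NAME -g $RESOURCE_GROUP --custom-locations-oid $OBJECT_ID --features cluster-connect custom-locations"
echo "$CMD"
```

Echoing the composed command makes the sketch safe to run anywhere; in a real session you would drop the `echo` and let the `az` commands execute.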

articles/iot-operations/troubleshoot/known-issues.md

Lines changed: 3 additions & 1 deletion
@@ -103,4 +103,6 @@ kubectl delete pod aio-opc-opc.tcp-1-f95d76c54-w9v9c -n azure-iot-operations
 - Deserializing and validating messages using a schema is not supported yet. Specifying a schema in the source configuration only allows the operations experience portal to display the list of data points, but the data points are not validated against the schema.
 
 <!-- TODO: double check -->
-- Creating a X.509 secret in the operations experience portal results in a secret with incorrectly encoded data. To work around this issue, create the [multi-line secrets through Azure Key Vault](/azure/key-vault/secrets/multiline-secrets), then select it from the list of secrets in the operations experience portal.
+- Creating an X.509 secret in the operations experience portal results in a secret with incorrectly encoded data. To work around this issue, create the [multi-line secret through Azure Key Vault](/azure/key-vault/secrets/multiline-secrets), then select it from the list of secrets in the operations experience portal.
+
+- Errors occur when two Azure IoT Operations instances connect to the same Event Grid MQTT namespace because of a client ID conflict. The client IDs for connecting to Event Grid are derived from dataflow resource names. If you deploy dataflows by using Bicep templates, the dataflow resource names are likely the same, so the client IDs are the same and the connections fail. The workaround is to add some randomness to your dataflow names in the Bicep template.
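The workaround in the last bullet, adding randomness to dataflow names, is typically done in Bicep itself with the built-in `uniqueString()` function. The shell sketch below illustrates the same idea outside Bicep; it is a hedged example in which `INSTANCE_SEED` and the `my-dataflow-` prefix are hypothetical, and the only point is that each deployment derives a distinct suffix, and therefore a distinct Event Grid client ID.

```shell
#!/bin/sh
# Derive a short, deterministic per-deployment suffix so dataflow names
# (and the MQTT client IDs derived from them) differ between instances.
INSTANCE_SEED="${INSTANCE_SEED:-rg-factory-east/instance-01}"  # hypothetical seed
SUFFIX=$(printf '%s' "$INSTANCE_SEED" | sha256sum | cut -c1-8)
DATAFLOW_NAME="my-dataflow-$SUFFIX"   # hypothetical name prefix
echo "$DATAFLOW_NAME"
```

Because the suffix is a hash of the seed rather than a random value, redeploying the same instance keeps the same name while two different instances get different ones.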
