Commit 0d89418

Author: Jill Grant
Apply suggestions from code review
Acrolinx fixes
1 parent 5c85bc8 commit 0d89418

File tree

4 files changed: +5 −5 lines changed


articles/iot-operations/connect-to-cloud/howto-configure-dataflow-endpoint.md

Lines changed: 1 addition & 1 deletion

@@ -97,7 +97,7 @@ spec:
 
 ---
 
-Similarly, you can create multiple dataflows that use the same MQTT endpoint for other endpoints and topics. For example, you can use the same MQTT endpoint for a dataflow that sends data to a Event Hub endpoint.
+Similarly, you can create multiple dataflows that use the same MQTT endpoint for other endpoints and topics. For example, you can use the same MQTT endpoint for a dataflow that sends data to an Event Hub endpoint.
 
 # [Portal](#tab/portal)
 
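The corrected sentence in this hunk describes reusing one MQTT endpoint as the source for several dataflows. As an illustration only (the resource names, topic filter, and the `event-hub-endpoint` reference are hypothetical, and the manifest shape is assumed from the Azure IoT Operations Dataflow custom resource rather than taken from this commit), a dataflow that pairs the shared MQTT endpoint with an Event Hubs destination might look like:

```yaml
# Hedged sketch: one of several dataflows that reuse the same MQTT endpoint
# as their source. Names and the Event Hubs endpoint are illustrative.
apiVersion: connectivity.iotoperations.azure.com/v1beta1
kind: Dataflow
metadata:
  name: mqtt-to-eventhub          # hypothetical name
spec:
  profileRef: default
  operations:
    - operationType: Source
      sourceSettings:
        endpointRef: default      # the shared MQTT endpoint
        dataSources:
          - thermostats/+/telemetry   # hypothetical topic filter
    - operationType: Destination
      destinationSettings:
        endpointRef: event-hub-endpoint   # hypothetical Event Hubs endpoint
        dataDestination: telemetry
```

A second dataflow could reference the same `endpointRef: default` source with a different topic filter and destination, which is the reuse pattern the sentence describes.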

articles/iot-operations/connect-to-cloud/howto-configure-dataflow-profile.md

Lines changed: 1 addition & 1 deletion

@@ -105,7 +105,7 @@ You can scale the dataflow profile to adjust the number of instances that run th
 
 Scaling can also improve the resiliency of the dataflows by providing redundancy in case of failures.
 
-To manually scale the dataflow profile, specify the maximum number of instances you want to run.For example, to set the instance count to 3:
+To manually scale the dataflow profile, specify the maximum number of instances you want to run. For example, to set the instance count to 3:
 
 # [Bicep](#tab/bicep)
 
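The fixed sentence leads into the article's Bicep example for setting the instance count to 3, which the diff doesn't show. A minimal sketch of the equivalent Kubernetes manifest, assuming the dataflow profile resource exposes an `instanceCount` field as the surrounding page suggests (the API version and resource name are assumptions, not taken from this commit):

```yaml
# Hedged sketch: manually scaling a dataflow profile to 3 instances.
# The CRD group/version and field name are assumed from the article's context.
apiVersion: connectivity.iotoperations.azure.com/v1beta1
kind: DataflowProfile
metadata:
  name: default
spec:
  instanceCount: 3   # maximum number of instances that run the dataflows
```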

articles/iot-operations/connect-to-cloud/howto-configure-kafka-endpoint.md

Lines changed: 2 additions & 2 deletions

@@ -698,8 +698,8 @@ The compression field enables compression for the messages sent to Kafka topics.
 | ----- | ----------- |
 | `None` | No compression or batching is applied. None is the default value if no compression is specified. |
 | `Gzip` | GZIP compression and batching are applied. GZIP is a general-purpose compression algorithm that offers a good balance between compression ratio and speed. Only [GZIP compression is supported in Azure Event Hubs premium and dedicated tiers](../../event-hubs/azure-event-hubs-kafka-overview.md#compression) currently. |
-| `Snappy` | Snappy compression and batching are applied. Snappy is a fast compression algorithm that offers moderate compression ratio and speed. This compressio mode isn't supported by Azure Event Hubs. |
-| `Lz4` | LZ4 compression and batching are applied. LZ4 is a fast compression algorithm that offers low compression ratio and high speed. This compressio mode isn't supported by Azure Event Hubs. |
+| `Snappy` | Snappy compression and batching are applied. Snappy is a fast compression algorithm that offers moderate compression ratio and speed. This compression mode isn't supported by Azure Event Hubs. |
+| `Lz4` | LZ4 compression and batching are applied. LZ4 is a fast compression algorithm that offers low compression ratio and high speed. This compression mode isn't supported by Azure Event Hubs. |
 
 To configure compression:
 
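The table this hunk edits documents the `compression` values for a Kafka dataflow endpoint. As an illustration only (the field placement under `kafkaSettings`, the API version, and the endpoint name and host are assumptions based on the table's context, not on this commit), enabling the one mode Event Hubs supports might look like:

```yaml
# Hedged sketch: GZIP compression on a Kafka-type dataflow endpoint.
# Endpoint name and host are illustrative; field placement is assumed.
apiVersion: connectivity.iotoperations.azure.com/v1beta1
kind: DataflowEndpoint
metadata:
  name: eventhub-endpoint        # hypothetical
spec:
  endpointType: Kafka
  kafkaSettings:
    host: example.servicebus.windows.net:9093   # hypothetical namespace
    compression: Gzip   # per the table, the only mode Event Hubs supports
```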

articles/iot-operations/connect-to-cloud/howto-create-dataflow.md

Lines changed: 1 addition & 1 deletion

@@ -791,7 +791,7 @@ Similar to data sources, data destination is a concept that is used to keep the
 | Azure Data Lake Storage | Container | The container in the storage account. Not the table. |
 | Microsoft Fabric OneLake | Table or Folder | Corresponds to the configured [path type for the endpoint](howto-configure-fabric-endpoint.md#onelake-path-type). |
 | Azure Data Explorer | Table | The table in the Azure Data Explorer database. |
-| Local Storage | Folder | The folder or directory name in the local storage persistent volume mount. When using [Azure Container Storage enabled by Azure Arc Cloud Ingest Edge Volumes](/azure/azure-arc/container-storage/cloud-ingest-edge-volume-configuration), this must match the must match the `spec.path` parameter for the subvolume you created. |
+| Local Storage | Folder | The folder or directory name in the local storage persistent volume mount. When using [Azure Container Storage enabled by Azure Arc Cloud Ingest Edge Volumes](/azure/azure-arc/container-storage/cloud-ingest-edge-volume-configuration), this must match the `spec.path` parameter for the subvolume you created. |
 
 To configure the data destination:
 
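The row this hunk fixes says the local storage destination folder must equal the `spec.path` of the Cloud Ingest Edge Volumes subvolume. As an illustration only (the `destinationSettings` shape, the endpoint name, and the `telemetry` value are hypothetical; only the `spec.path` matching requirement comes from the article), the destination half of a dataflow might look like:

```yaml
# Hedged sketch: a dataflow destination writing to local storage.
# dataDestination must equal the subvolume's spec.path; names are illustrative.
- operationType: Destination
  destinationSettings:
    endpointRef: local-storage-endpoint   # hypothetical LocalStorage endpoint
    dataDestination: telemetry            # must match the subvolume's spec.path
```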
