Commit 751b6a1

committed
Clarity
1 parent 3901f0d commit 751b6a1

4 files changed: +19 -18 lines changed


articles/iot-operations/connect-to-cloud/howto-configure-adlsv2-endpoint.md

Lines changed: 2 additions & 2 deletions
@@ -132,11 +132,11 @@ If you need to override the system-assigned managed identity audience, see the [
Follow the steps in the [access token](#access-token) section to get a SAS token for the storage account and store it in a Kubernetes secret.
- Then, create the *DataflowEndpoint* resource and specify the access token authentication method. Here, replace `<SAS_SECRET_NAME>` with name of the secret containing the SAS token as well as other placeholder values.
+ Then, create the *DataflowEndpoint* resource and specify the access token authentication method. Here, replace `<SAS_SECRET_NAME>` with the name of the secret containing the SAS token and other placeholder values.
# [Portal](#tab/portal)
- 1. In the Azure IoT Operations Preview portal, create a new dataflow or edit an existing dataflow by selecting the **Dataflows** tab on the left. If creating a new dataflow, select a source for the dataflow.
+ 1. In the Azure IoT Operations Preview portal, create a new dataflow or edit an existing dataflow by selecting the **Dataflows** tab. If creating a new dataflow, select a source for the dataflow.
1. In the editor, select the destination dataflow endpoint.
1. Choose the Azure Data Lake Storage Gen2 endpoint that you created previously.
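The access-token flow in this hunk can be sketched as a Kubernetes manifest. This is a minimal, hypothetical sketch: the `apiVersion`, kind, and field names are assumptions based on the surrounding text, not the exact Azure IoT Operations schema.

```yaml
# Hypothetical sketch only: field names are assumed, not taken from the official schema.
apiVersion: connectivity.iotoperations.azure.com/v1beta1
kind: DataflowEndpoint
metadata:
  name: adls
  namespace: azure-iot-operations
spec:
  endpointType: DataLakeStorage
  dataLakeStorageSettings:
    host: https://<ACCOUNT>.blob.core.windows.net
    authentication:
      method: AccessToken
      accessTokenSettings:
        secretRef: <SAS_SECRET_NAME>  # Kubernetes secret holding the SAS token
```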

articles/iot-operations/connect-to-cloud/howto-configure-fabric-endpoint.md

Lines changed: 1 addition & 1 deletion
@@ -255,7 +255,7 @@ You can set advanced settings for the Fabric OneLake endpoint, such as the batch
### OneLake path type

- The `oneLakePathType` setting determines the type of path to use in the OneLake path. The default value is `Tables`, which is the recommended path type for the most common use cases. The `Tables` path type is a table in the OneLake lakehouse that is used to store the data. It can also be set as `Files`, which is a file in the OneLake lakehouse that is used to store the data. The `Files` path type is useful when you want to store the data in a file format that is not supported by the `Tables` path type.
+ The `oneLakePathType` setting determines the type of path to use in the OneLake path. The default value is `Tables`, which is the recommended path type for the most common use cases. The `Tables` path type is a table in the OneLake lakehouse that is used to store the data. It can also be set as `Files`, which is a file in the OneLake lakehouse that is used to store the data. The `Files` path type is useful when you want to store the data in a file format that isn't supported by the `Tables` path type.
# [Portal](#tab/portal)
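As a rough illustration of the setting discussed in this hunk, the `Files` path type might be selected like this (a fragment assumed from context, not the exact schema):

```yaml
# Hypothetical fragment: oneLakePathType is the setting named in the text above.
fabricOneLakeSettings:
  oneLakePathType: Files  # default is Tables; use Files for formats Tables doesn't support
```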

articles/iot-operations/connect-to-cloud/howto-configure-kafka-endpoint.md

Lines changed: 10 additions & 10 deletions
@@ -284,7 +284,7 @@ spec:
---
- To customize the endpoint settings, see the following sections for more information.
+ To customize the endpoint settings, use the following sections for more information.
### Available authentication methods
@@ -293,7 +293,7 @@ The following authentication methods are available for Kafka broker dataflow end
#### SASL
- To use SASL for authentication, specify the SASL authentication method and configure SASL type as well as a secret reference with the name of the secret that contains the SASL token.
+ To use SASL for authentication, specify the SASL authentication method and configure SASL type and a secret reference with the name of the secret that contains the SASL token.
# [Portal](#tab/portal)
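A minimal sketch of the SASL settings described above (field names and values assumed from context, not the exact schema):

```yaml
# Hypothetical fragment: SASL type plus a secret reference to the SASL token.
kafkaSettings:
  authentication:
    method: Sasl
    saslSettings:
      saslType: Plain                # assumed values: Plain, ScramSha256, ScramSha512
      secretRef: <SASL_SECRET_NAME>  # secret containing the SASL token
```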
@@ -738,8 +738,8 @@ Aside from compression, you can also configure batching for messages before send
| ----- | ----------- | -------- |
| `mode` | Enable batching or not. If not set, the default value is Enabled because Kafka doesn't have a notion of *unbatched* messaging. If set to Disabled, the batching is minimized to create a batch with a single message each time. | No |
| `latencyMs` | The maximum time interval in milliseconds that messages can be buffered before being sent. If this interval is reached, then all buffered messages are sent as a batch, regardless of how many or how large they are. If not set, the default value is 5. | No |
- | `maxMessages` | The maximum number of messages that can be buffered before being sent. If this number is reached, then all buffered messages are sent as a batch, regardless of how large they are or how long they are buffered. If not set, the default value is 100000. | No |
- | `maxBytes` | The maximum size in bytes that can be buffered before being sent. If this size is reached, then all buffered messages are sent as a batch, regardless of how many they are or how long they are buffered. The default value is 1000000 (1 MB). | No |
+ | `maxMessages` | The maximum number of messages that can be buffered before being sent. If this number is reached, then all buffered messages are sent as a batch, regardless of how large they are or how long they're buffered. If not set, the default value is 100000. | No |
+ | `maxBytes` | The maximum size in bytes that can be buffered before being sent. If this size is reached, then all buffered messages are sent as a batch, regardless of how many there are or how long they're buffered. The default value is 1000000 (1 MB). | No |
For example, if you set latencyMs to 1000, maxMessages to 100, and maxBytes to 1024, messages are sent either when there are 100 messages in the buffer, or when there are 1,024 bytes in the buffer, or when 1,000 milliseconds elapse since the last send, whichever comes first.
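The worked example above maps onto a batching fragment like this (a sketch; the nesting is assumed from context):

```yaml
# Hypothetical fragment matching the example: whichever limit is hit first triggers a send.
batching:
  mode: Enabled
  latencyMs: 1000   # flush 1,000 ms after the last send
  maxMessages: 100  # or when 100 messages are buffered
  maxBytes: 1024    # or when 1,024 bytes are buffered
```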

@@ -829,20 +829,20 @@ This setting takes effect only if the endpoint is used as a destination (that is
| Value | Description |
| ----- | ----------- |
- | `None` | The dataflow doesn't wait for any acknowledgments from the Kafka broker. This is the fastest but least durable option. |
- | `All` | The dataflow waits for the message to be written to the leader partition and all follower partitions. This is the slowest but most durable option. This is also the default option|
+ | `None` | The dataflow doesn't wait for any acknowledgments from the Kafka broker. This setting is the fastest but least durable option. |
+ | `All` | The dataflow waits for the message to be written to the leader partition and all follower partitions. This setting is the slowest but most durable option. It's also the default. |
| `One` | The dataflow waits for the message to be written to the leader partition and at least one follower partition. |
| `Zero` | The dataflow waits for the message to be written to the leader partition but doesn't wait for any acknowledgments from the followers. This is faster than `One` but less durable. |

<!-- TODO: double check for accuracy -->
- For example, if you set the Kafka acknowledgement to `All`, the dataflow waits for the message to be written to the leader partition and all follower partitions before sending the next message.
+ For example, if you set the Kafka acknowledgment to `All`, the dataflow waits for the message to be written to the leader partition and all follower partitions before sending the next message.
To configure the Kafka acknowledgments:

# [Portal](#tab/portal)

- In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Kafka acknowledgement** field to specify the Kafka acknowledgement level.
+ In the operations experience dataflow endpoint settings page, select the **Advanced** tab, then use the **Kafka acknowledgment** field to specify the Kafka acknowledgment level.
# [Bicep](#tab/bicep)
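In the Kubernetes/Bicep form, the acknowledgment level described above might be expressed as follows (the property name is an assumption, not confirmed by the diff):

```yaml
# Hypothetical fragment: All is the default and most durable level per the table above.
kafkaSettings:
  kafkaAcks: All  # assumed values: None | All | One | Zero
```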

@@ -915,7 +915,7 @@ When a dataflow receives an MQTT message with the Message Expiry Interval specif
* Records the time the message was received.
* Before the message is emitted to the destination, the time the message has been queued is subtracted from the original expiry interval.
- * If the message has not yet expired (the operation above is > 0), then the message is emitted to the destination and contains the updated Message Expiry Time.
+ * If the message hasn't expired (the operation above is > 0), then the message is emitted to the destination and contains the updated Message Expiry Time.
* If the message has expired (the operation above is <= 0), then the message isn't emitted by the Target.

Examples:
@@ -1003,7 +1003,7 @@ Properties =
The dataflow endpoint discards packets that contain the `"float-value"` field.

- Not all event data properties including propertyEventData.correlationId are not forwarded. For more information, see [Event User Properties](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/interop#event-user-properties),
+ Not all event data properties, including propertyEventData.correlationId, are forwarded. For more information, see [Event User Properties](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/interop#event-user-properties).
### CloudEvents

articles/iot-operations/connect-to-cloud/howto-create-dataflow.md

Lines changed: 6 additions & 5 deletions
@@ -245,7 +245,7 @@ For more information about the default MQTT endpoint and creating an MQTT endpoi
### Use custom MQTT or Kafka dataflow endpoint as source

- If you created a custom MQTT or Kafka dataflow endpoint (for example, to use with Event Grid or Event Hubs), you can use it as the source for the dataflow. Remember that storage type endpoints, like Data Lake or Fabric OneLake, cannot be used as source.
+ If you created a custom MQTT or Kafka dataflow endpoint (for example, to use with Event Grid or Event Hubs), you can use it as the source for the dataflow. Remember that storage type endpoints, like Data Lake or Fabric OneLake, can't be used as source.
To configure, use Kubernetes YAML or Bicep. Replace placeholder values with your custom endpoint name and topics.
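A minimal sketch of a dataflow source referencing a custom Kafka endpoint (the names are placeholders and the structure is assumed from context):

```yaml
# Hypothetical fragment: a custom Kafka endpoint used as the dataflow source.
sourceSettings:
  endpointRef: my-event-hubs-kafka-endpoint  # placeholder custom endpoint name
  dataSources:
    - my-topic                               # placeholder topic
```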

@@ -511,7 +511,7 @@ You can load sample data into the DSS by using the [DSS set tool sample](https:/
In the operations experience, the *Enrich* stage is currently supported using the **Rename** and **New property** transforms.

1. In the operations experience, select a dataflow then **Add transform (optional)**.
- 1. Chose **Rename** or **New property** transforms then select **Add**.
+ 1. Choose **Rename** or **New property** transforms then select **Add**.
:::image type="content" source="media/howto-create-dataflow/dataflow-enrich.png" alt-text="Screenshot using operations experience to rename a datapoint and add a new property.":::

@@ -589,7 +589,8 @@ builtInTransformationSettings: {
}
]
}
- ``` -->
+ ```
+ -->

For more information about condition syntax, see [Enrich data by using dataflows](concept-dataflow-enrich.md) and [Convert data using dataflows](concept-dataflow-conversions.md).

@@ -775,7 +776,7 @@ destinationSettings:
### Configure data destination (topic, container, or table)

- Similar to data sources, data destination is a concept that is used to keep the dataflow endpoints reusable across multiple dataflows. Essentially, it represents the sub-directory in the dataflow endpoint configuration. For example, if the dataflow endpoint is a storage endpoint, the data destination is the table in the storage account. If the dataflow endpoint is a Kafka endpoint, the data destination is the Kafka topic.
+ Similar to data sources, data destination is a concept that is used to keep the dataflow endpoints reusable across multiple dataflows. Essentially, it represents the subdirectory in the dataflow endpoint configuration. For example, if the dataflow endpoint is a storage endpoint, the data destination is the table in the storage account. If the dataflow endpoint is a Kafka endpoint, the data destination is the Kafka topic.
| Endpoint type | Data destination meaning | Description |
| - | - | - |
@@ -790,7 +791,7 @@ To configure the data destination:
# [Portal](#tab/portal)

- When using the operations experience, the data destination field is automatically interpreted based on the endpoint type. For example, if the dataflow endpoint is a storage endpoint, the destination details page prompts you to enter the container name. If the dataflow endpoint is a MQTT endpoint, the destination details page prompts you to enter the topic, and so on.
+ When using the operations experience, the data destination field is automatically interpreted based on the endpoint type. For example, if the dataflow endpoint is a storage endpoint, the destination details page prompts you to enter the container name. If the dataflow endpoint is an MQTT endpoint, the destination details page prompts you to enter the topic, and so on.
:::image type="content" source="media/howto-create-dataflow/data-destination.png" alt-text="Screenshot showing the operations experience prompting the user to enter an MQTT topic given the endpoint type.":::
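The destination concept described above might look like this in a dataflow definition (a sketch; field names are assumed from context):

```yaml
# Hypothetical fragment: dataDestination is the container, table, or topic,
# depending on the endpoint type.
destinationSettings:
  endpointRef: my-storage-endpoint  # placeholder endpoint name
  dataDestination: my-container     # container for storage; topic for Kafka/MQTT
```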
