Commit 18cdfd8

Author: Pat Altimore (committed)
Edit pass
1 parent af8585e commit 18cdfd8

File tree

3 files changed: +28 -28 lines changed


articles/iot-operations/connect-to-cloud/howto-create-dataflow.md

Lines changed: 11 additions & 11 deletions
@@ -536,7 +536,7 @@ sourceSettings:
 ---
-If the instance count in the [data flow profile](howto-configure-dataflow-profile.md) is greater than one, shared subscription is automatically enabled for all data flows that use a message broker source. In this case, the `$shared` prefix is added and the shared subscription group name automatically generated. For example, if you have a data flow profile with an instance count of 3, and your data flow uses a message broker endpoint as source configured with topics `topic1` and `topic2`, they are automatically converted to shared subscriptions as `$shared/<GENERATED_GROUP_NAME>/topic1` and `$shared/<GENERATED_GROUP_NAME>/topic2`.
+If the instance count in the [data flow profile](howto-configure-dataflow-profile.md) is greater than one, shared subscription is automatically enabled for all data flows that use a message broker source. In this case, the `$shared` prefix is added and the shared subscription group name is automatically generated. For example, if you have a data flow profile with an instance count of 3, and your data flow uses a message broker endpoint as source configured with topics `topic1` and `topic2`, they're automatically converted to shared subscriptions as `$shared/<GENERATED_GROUP_NAME>/topic1` and `$shared/<GENERATED_GROUP_NAME>/topic2`.
 You can explicitly create a topic named `$shared/mygroup/topic` in your configuration. However, adding the `$shared` topic explicitly isn't recommended since the `$shared` prefix is automatically added when needed. Data flows can make optimizations with the group name if it isn't set. For example, `$share` isn't set and data flows only have to operate over the topic name.

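The shared-subscription conversion described above is a simple topic rewrite. The following is a minimal sketch of that behavior, assuming a hypothetical generated group name (`dataflow-abc123` is illustrative; the real group name is generated by the service):

```python
def to_shared_subscriptions(topics, instance_count, group_name="dataflow-abc123"):
    """Sketch: when the profile's instance count is greater than one, each
    topic is prefixed with $shared/<group>/ unless it already has the prefix.
    group_name is a placeholder; the actual name is auto-generated."""
    if instance_count <= 1:
        return list(topics)
    return [
        t if t.startswith("$shared/") else f"$shared/{group_name}/{t}"
        for t in topics
    ]

# Instance count of 3 with topics topic1 and topic2, as in the example above
print(to_shared_subscriptions(["topic1", "topic2"], 3))
# ['$shared/dataflow-abc123/topic1', '$shared/dataflow-abc123/topic2']
```

With an instance count of 1, the topics pass through unchanged, which matches the behavior of only enabling shared subscriptions when more than one instance runs.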
@@ -545,7 +545,7 @@ You can explicitly create a topic named `$shared/mygroup/topic` in your configur
 #### Kafka topics
-When the source is a Kafka (Event Hubs included) endpoint, specify the individual Kafka topics to subscribe to for incoming messages. Wildcards are not supported, so you must specify each topic statically.
+When the source is a Kafka (Event Hubs included) endpoint, specify the individual Kafka topics to subscribe to for incoming messages. Wildcards aren't supported, so you must specify each topic statically.
 > [!NOTE]
 > When using Event Hubs via the Kafka endpoint, each individual event hub within the namespace is the Kafka topic. For example, if you have an Event Hubs namespace with two event hubs, `thermostats` and `humidifiers`, you can specify each event hub as a Kafka topic.
@@ -631,7 +631,7 @@ In operations experience data flow **Source details**, select **Message broker**
 # [Bicep](#tab/bicep)
-Once you have used the [schema registry to store the schema](concept-schema-registry.md), you can reference it in the data flow configuration.
+Once you use the [schema registry to store the schema](concept-schema-registry.md), you can reference it in the data flow configuration.
 ```bicep
 sourceSettings: {
@@ -642,7 +642,7 @@ sourceSettings: {
 # [Kubernetes (preview)](#tab/kubernetes)
-Once you have used the [schema registry to store the schema](concept-schema-registry.md), you can reference it in the data flow configuration.
+Once you use the [schema registry to store the schema](concept-schema-registry.md), you can reference it in the data flow configuration.
 ```yaml
 sourceSettings:
@@ -656,20 +656,20 @@ To learn more, see [Understand message schemas](concept-schema-registry.md).
 ## Request disk persistence (preview)
-Request disk persistence allows data flows to maintain state across restarts. When you enable this feature, the graph can recover processing state if connected broker restarts. This feature is useful for stateful processing scenarios where losing intermediate data would be problematic. When you enable request disk persistence, the broker persists the MQTT data, like messages in the subscriber queue, to disk. This approach ensures that your data flow's data source doesn't experience data loss during power outages or broker restarts. The broker maintains optimal performance because persistence is configured per data flow, so only the data flows that need persistence use this feature.
+Request disk persistence lets data flows keep state across restarts. When you enable this feature, the graph recovers processing state if the connected broker restarts. This feature is useful for stateful processing scenarios where losing intermediate data is a problem. When you enable request disk persistence, the broker persists the MQTT data, like messages in the subscriber queue, to disk. This approach makes sure your data flow's data source doesn't lose data during power outages or broker restarts. The broker maintains optimal performance because persistence is configured per data flow, so only the data flows that need persistence use this feature.
-The data flow graph makes this persistence request during subscription using an MQTTv5 user property. This feature only works when:
+The data flow graph requests this persistence during subscription by using an MQTTv5 user property. This feature works only when:
-- The data flow uses the MQTT broker or asset as source
+- The data flow uses the MQTT broker or asset as the source
 - The MQTT broker has persistence enabled with dynamic persistence mode set to `Enabled` for the data type, like subscriber queues
-This configuration allows MQTT clients like data flows to request disk persistence for their subscriptions using MQTTv5 user properties. For detailed MQTT broker persistence configuration, see [Configure MQTT broker persistence](../manage-mqtt-broker/howto-broker-persistence.md).
+This configuration lets MQTT clients like data flows request disk persistence for their subscriptions by using MQTTv5 user properties. For details about MQTT broker persistence configuration, see [Configure MQTT broker persistence](../manage-mqtt-broker/howto-broker-persistence.md).
-The setting accepts `Enabled` or `Disabled`, with `Disabled` as the default.
+The setting accepts `Enabled` or `Disabled`. `Disabled` is the default.
 # [Operations experience](#tab/portal)
-When creating or editing a data flow, select **Edit**, then check **Yes** next to **Request data persistence**.
+When you create or edit a data flow, select **Edit**, and then select **Yes** next to **Request data persistence**.
 # [Azure CLI](#tab/cli)

@@ -1522,7 +1522,7 @@ For MQTT endpoints, you can use dynamic topic variables in the `dataDestination`
 For example, `processed/factory/${inputTopic.2}` routes messages from `factory/1/data` to `processed/factory/1`. Topic segments are 1-indexed, and leading/trailing slashes are ignored.
-If a topic variable cannot be resolved (for example, `${inputTopic.5}` when the input topic only has 3 segments), the message is dropped and a warning is logged. Wildcard characters (`#` and `+`) are not allowed in destination topics.
+If a topic variable cannot be resolved (for example, `${inputTopic.5}` when the input topic only has three segments), the message is dropped and a warning is logged. Wildcard characters (`#` and `+`) aren't allowed in destination topics.
 > [!NOTE]
 > The characters `$`, `{`, and `}` are valid in MQTT topic names, so a topic like `factory/$inputTopic.2` is acceptable but incorrect if you intended to use the dynamic topic variable.

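The dynamic topic variable rules in this hunk (1-indexed segments, ignored leading/trailing slashes, dropped messages on unresolved variables) can be sketched as follows. This is an illustrative reimplementation, not the product's actual code:

```python
import re

def resolve_destination(template, input_topic):
    """Sketch of ${inputTopic.N} resolution: segments are 1-indexed and
    leading/trailing slashes on the input topic are ignored. Returns None
    when a referenced segment doesn't exist (the message would be dropped
    and a warning logged)."""
    segments = input_topic.strip("/").split("/")
    unresolved = False

    def substitute(match):
        nonlocal unresolved
        idx = int(match.group(1))  # 1-indexed segment reference
        if idx < 1 or idx > len(segments):
            unresolved = True
            return ""
        return segments[idx - 1]

    result = re.sub(r"\$\{inputTopic\.(\d+)\}", substitute, template)
    return None if unresolved else result

print(resolve_destination("processed/factory/${inputTopic.2}", "factory/1/data"))
# processed/factory/1
print(resolve_destination("out/${inputTopic.5}", "factory/1/data"))
# None (message dropped: only three segments)
```

The second call mirrors the documented failure case: `${inputTopic.5}` against a three-segment topic can't be resolved, so the message is dropped rather than published to a partially substituted topic.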
articles/iot-operations/connect-to-cloud/howto-dataflow-graph-wasm.md

Lines changed: 5 additions & 5 deletions
@@ -210,11 +210,11 @@ For detailed instructions, see [Assign Azure roles using the Azure portal](/azur
 ## Example 1: Basic deployment with one WASM module
-This example converts temperature data from Fahrenheit to Celsius using a WASM module. The [temperature module source code](https://github.com/Azure-Samples/explore-iot-operations/tree/wasm/samples/wasm/operators/temperature) is available on GitHub. Use the precompiled version `graph-simple:1.0.0` that you pushed to your container registry.
+This example converts temperature data from Fahrenheit to Celsius by using a WASM module. The [temperature module source code](https://github.com/Azure-Samples/explore-iot-operations/tree/wasm/samples/wasm/operators/temperature) is available on GitHub. Use the precompiled version `graph-simple:1.0.0` that you pushed to your container registry.
 ### How it works
-The [graph definition](https://github.com/Azure-Samples/explore-iot-operations/blob/wasm/samples/wasm/graph-simple.yaml) creates a simple three-stage pipeline:
+The [graph definition](https://github.com/Azure-Samples/explore-iot-operations/blob/wasm/samples/wasm/graph-simple.yaml) creates a simple, three-stage pipeline:
 1. **Source**: Receives temperature data from MQTT
 2. **Map**: Processes data with the temperature WASM module
@@ -229,7 +229,7 @@ operations:
       module: "temperature:1.0.0"
 ```
-The [temperature module](https://github.com/Azure-Samples/explore-iot-operations/blob/wasm/samples/wasm/operators/temperature/src/lib.rs) converts Fahrenheit to Celsius using the standard formula `(F - 32) × 5/9 = C`:
+The [temperature module](https://github.com/Azure-Samples/explore-iot-operations/blob/wasm/samples/wasm/operators/temperature/src/lib.rs) converts Fahrenheit to Celsius by using the standard formula `(F - 32) × 5/9 = C`:
 ```rust
 if measurement.unit == MeasurementTemperatureUnit::Fahrenheit {
@@ -257,10 +257,10 @@ This configuration defines three nodes that implement the temperature conversion
 The data flow graph resource "wraps" the graph definition artifact and connects its abstract source/sink operations to concrete endpoints:
 - The graph definition's `source` operation connects to the data flow's source node (MQTT topic)
-- The graph definition's `sink` operation connects to the data flow's destination node (MQTT topic)
+- The graph definition's `sink` operation connects to the data flow's destination node (MQTT topic)
 - The graph definition's processing operations run within the graph processing node
-This separation allows the same graph definition to be deployed with different endpoints across environments while keeping the processing logic unchanged.
+This separation lets you deploy the same graph definition with different endpoints across environments while keeping the processing logic unchanged.
 # [Bicep](#tab/bicep)

articles/iot-operations/connect-to-cloud/howto-develop-wasm-modules.md

Lines changed: 12 additions & 12 deletions
@@ -18,7 +18,7 @@ ai-usage: ai-assisted
 >
 > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or not yet released into general availability.
-This article shows you how to develop custom WebAssembly (WASM) modules and graph definitions for Azure IoT Operations data flow graphs. You can create modules in Rust or Python to implement custom processing logic. You can also define graph configurations that specify how your modules connect into complete processing workflows.
+This article shows you how to develop custom WebAssembly (WASM) modules and graph definitions for Azure IoT Operations data flow graphs. Create modules in Rust or Python to implement custom processing logic. Define graph configurations that specify how your modules connect into complete processing workflows.
 ## Overview

@@ -42,9 +42,9 @@ Data flow graphs build on the [Timely dataflow](https://docs.rs/timely/latest/ti
 ### Why timely dataflow?
-Traditional stream processing systems face challenges with several issues. Out-of-order data can cause events to arrive later than expected. Partial results make it difficult to know when computations are complete. Coordination issues arise when synchronizing distributed processing.
+Traditional stream processing systems have several challenges. Out-of-order data means events can arrive later than expected. Partial results make it hard to know when computations finish. Coordination issues happen when synchronizing distributed processing.
-Timely dataflow solves problems through:
+Timely dataflow solves these problems through:
 #### Timestamps and progress tracking

@@ -89,10 +89,10 @@ Operators are the fundamental processing units based on [Timely dataflow operato
 Modules are the implementation of operator logic as WASM code. A single module can implement multiple operator types. For example, a temperature module might provide:
-- A map operator for unit conversion
-- A filter operator for threshold checking
-- A branch operator for routing decisions
-- An accumulate operator for statistical aggregation
+- A map operator for unit conversion.
+- A filter operator for threshold checking.
+- A branch operator for routing decisions.
+- An accumulate operator for statistical aggregation.

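The four operator types listed in this hunk can be illustrated with plain functions. These are hypothetical shapes only: real modules implement WIT-generated interfaces, and all names and message fields below are made up for illustration:

```python
# Hypothetical sketches of the operator kinds a temperature module might
# provide. Messages are modeled as plain dicts with made-up fields.

def map_f_to_c(msg):
    """Map: one message in, one transformed message out (unit conversion)."""
    return {**msg, "temp": (msg["temp"] - 32) * 5 / 9, "unit": "C"}

def filter_above(msg, threshold=30.0):
    """Filter: keep the message only if it passes a threshold check."""
    return msg["temp"] > threshold

def branch_route(msg):
    """Branch: a boolean routing decision sends the message down one of two edges."""
    return msg["unit"] == "C"

def accumulate_mean(state, msg):
    """Accumulate: fold messages into running statistics (count, sum)."""
    count, total = state
    return (count + 1, total + msg["temp"])

reading = {"temp": 212.0, "unit": "F"}
print(map_f_to_c(reading))
# {'temp': 100.0, 'unit': 'C'}
```

The point of the sketch is the signatures: map transforms, filter and branch return booleans, and accumulate carries state between messages, which is why only accumulate takes a state argument.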
### The relationship

@@ -206,7 +206,7 @@ source ~/.bashrc
 # [Python](#tab/python)
-Python development uses componentize-py with WebAssembly Interface Types (WIT) for code generation. No other environment configuration is required beyond installing the prerequisites.
+Python development uses componentize-py with WebAssembly Interface Types (WIT) for code generation. You don't need any other environment configuration beyond installing the prerequisites.
 ---

@@ -382,7 +382,7 @@ fn my_branch(input: DataModel, timestamp: HybridLogicalClock) -> bool {
 #### Host APIs
-Access distributed services through the SDK:
+Use the SDK to work with distributed services:
 State store for persistent data:

@@ -422,7 +422,7 @@ metrics::record_to_histogram("processing_duration", duration_ms, Some(labels))?;
 # [Python](#tab/python)
-Python WASM development doesn't use a traditional SDK. Instead, you work with generated bindings from WebAssembly Interface Types (WIT). These bindings provide:
+Python WASM development doesn't use a traditional SDK. Instead, you use generated bindings from WebAssembly Interface Types (WIT). These bindings give you:
 Typed interfaces for operators:
 ```python
@@ -560,7 +560,7 @@ The Python examples demonstrate working implementations that show the complete s
 ## Graph definitions and WASM integration
-Graph definitions are central to WASM development as they define how your modules connect into processing workflows. Understanding the relationship between graph definitions and data flow graphs is essential for effective development.
+Graph definitions are central to WASM development because they define how your modules connect to processing workflows. Understanding the relationship between graph definitions and data flow graphs helps you develop effectively.
 ### Graph definition structure

@@ -613,7 +613,7 @@ For working examples, see:
 ### How graph definitions become data flows
-The relationship between graph definitions and Azure IoT Operations data flow graphs works as follows:
+Here's how graph definitions and Azure IoT Operations data flow graphs relate:
 - **Graph definition artifact**: Your YAML file defines the internal processing logic with source/sink operations as abstract endpoints
 - **WASM modules**: Referenced modules implement the actual processing operators
