
Commit 38dd92b

Fix end of file issues
1 parent 89da21d commit 38dd92b

9 files changed: +76 −6 lines changed

articles/iot-operations/connect-to-cloud/howto-configure-adlsv2-endpoint.md

Lines changed: 14 additions & 1 deletion
@@ -355,4 +355,17 @@ dataLakeStorageSettings: {
 }
 ```
 
----
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+dataLakeStorageSettings:
+  batching:
+    latencySeconds: 100
+    maxMessages: 1000
+```
+
+---
+
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)
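
For orientation, here is a minimal annotated sketch of the `batching` block added in this hunk. The flush-on-whichever-threshold-is-reached-first behavior described in the comments is an assumption rather than something stated in this diff, and the values are illustrative.

```yaml
# Minimal sketch of the batching block from the hunk above (values illustrative).
# Assumption: a batch is written when either threshold is reached first.
batching:
  latencySeconds: 100   # flush buffered messages at least every 100 seconds
  maxMessages: 1000     # or as soon as 1000 messages have been buffered
```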

articles/iot-operations/connect-to-cloud/howto-configure-adx-endpoint.md

Lines changed: 14 additions & 1 deletion
@@ -247,4 +247,17 @@ dataExplorerSettings: {
 }
 ```
 
----
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+dataExplorerSettings:
+  batching:
+    latencySeconds: 100
+    maxMessages: 1000
+```
+
+---
+
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)

articles/iot-operations/connect-to-cloud/howto-configure-dataflow-endpoint.md

Lines changed: 9 additions & 2 deletions
@@ -17,8 +17,6 @@ ms.date: 09/17/2024
 
 To get started with dataflows, first create dataflow endpoints. A dataflow endpoint is the connection point for the dataflow. You can use an endpoint as a source or destination for the dataflow. Some endpoint types can be used as both sources and destinations, while others are for destinations only. A dataflow needs at least one source endpoint and one destination endpoint.
 
-## Get started
-
 To get started, use the following table to choose the endpoint type to configure:
 
 | Endpoint type | Description | Can be used as a source | Can be used as a destination |
@@ -96,3 +94,12 @@ spec:
 ---
 
 Similar to the MQTT example, you can create multiple dataflows that use the same Kafka endpoint for different topics, or the same Data Lake endpoint for different tables.
+
+## Next steps
+
+- Create a dataflow endpoint:
+  - [MQTT or Event Grid](howto-configure-mqtt-endpoint.md)
+  - [Kafka or Event Hubs](howto-configure-kafka-endpoint.md)
+  - [Data Lake](howto-configure-adlsv2-endpoint.md)
+  - [Microsoft Fabric OneLake](howto-configure-fabric-endpoint.md)
+  - [Local storage](howto-configure-local-storage-endpoint.md)
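
The context line above notes that several dataflows can reuse one endpoint for different topics or tables. A rough Kubernetes sketch of that pattern follows; the apiVersion, the field names (`operationType`, `sourceSettings`, `destinationSettings`, `endpointRef`, `dataDestination`), and the resource names are assumptions made for illustration, not taken from this commit.

```yaml
# Hypothetical sketch: two dataflows sharing one Data Lake endpoint ("adls"),
# each writing to a different table. Field names and apiVersion are assumptions.
apiVersion: connectivity.iotoperations.azure.com/v1beta1
kind: Dataflow
metadata:
  name: thermostats-to-adls
spec:
  operations:
    - operationType: Source
      sourceSettings:
        endpointRef: mq                       # shared MQTT source endpoint (assumed name)
        dataSources: ["thermostats/+/telemetry"]
    - operationType: Destination
      destinationSettings:
        endpointRef: adls                     # shared Data Lake endpoint (assumed name)
        dataDestination: thermostatTelemetry  # table for this dataflow
---
apiVersion: connectivity.iotoperations.azure.com/v1beta1
kind: Dataflow
metadata:
  name: valves-to-adls
spec:
  operations:
    - operationType: Source
      sourceSettings:
        endpointRef: mq
        dataSources: ["valves/+/telemetry"]
    - operationType: Destination
      destinationSettings:
        endpointRef: adls                     # same endpoint, different table
        dataDestination: valveTelemetry
```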

articles/iot-operations/connect-to-cloud/howto-configure-dataflow-profile.md

Lines changed: 4 additions & 0 deletions
@@ -64,3 +64,7 @@ spec:
   diagnostics:
     # ...
 ```
+
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)

articles/iot-operations/connect-to-cloud/howto-configure-fabric-endpoint.md

Lines changed: 14 additions & 1 deletion
@@ -263,4 +263,17 @@ fabricOneLakeSettings: {
 }
 ```
 
----
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+fabricOneLakeSettings:
+  batching:
+    latencySeconds: 100
+    maxMessages: 1000
+```
+
+---
+
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)

articles/iot-operations/connect-to-cloud/howto-configure-kafka-endpoint.md

Lines changed: 4 additions & 0 deletions
@@ -1063,3 +1063,7 @@ CloudEvent properties are passed through for messages that contain the required
 | `time` | No | `ce-time` | Generated as RFC 3339 in the target client |
 | `datacontenttype` | No | `ce-datacontenttype` | Changed to the output data content type after the optional transform stage |
 | `dataschema` | No | `ce-dataschema` | Schema defined in the schema registry |
+
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)
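
The context rows above list optional CloudEvent attributes that map to `ce-*` Kafka headers. For orientation, here is an illustrative set of headers on a single Kafka record; the required attributes (`specversion`, `id`, `source`, `type`) come from the CloudEvents specification rather than from the rows shown in this hunk, and every value is made up.

```yaml
# Illustrative Kafka record headers for one CloudEvent (all values are examples).
ce-specversion: "1.0"
ce-id: "42"
ce-source: "factory/line1/thermostat"
ce-type: "example.telemetry.reading"
ce-time: "2024-09-17T12:00:00Z"          # RFC 3339, per the table above
ce-datacontenttype: "application/json"   # matches the output data content type
ce-dataschema: "example-schema"          # schema reference in the schema registry
```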

articles/iot-operations/connect-to-cloud/howto-configure-local-storage-endpoint.md

Lines changed: 5 additions & 1 deletion
@@ -95,4 +95,8 @@ The PersistentVolumeClaim (PVC) must be in the same namespace as the *DataflowEn
 
 ## Supported serialization formats
 
-The only supported serialization format is Parquet.
+The only supported serialization format is Parquet.
+
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)
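
The hunk header above repeats the requirement that the PersistentVolumeClaim live in the same namespace as the DataflowEndpoint. A standard Kubernetes PVC sketch follows; the claim name, namespace, and size are illustrative assumptions, not values from this commit.

```yaml
# Hypothetical PVC for the local storage endpoint; name, namespace, and size are
# illustrative. It must be created in the same namespace as the DataflowEndpoint.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-dataflow-pvc
  namespace: azure-iot-operations   # assumption: the endpoint's namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```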

articles/iot-operations/connect-to-cloud/howto-configure-mqtt-endpoint.md

Lines changed: 4 additions & 0 deletions
@@ -981,3 +981,7 @@ CloudEvent properties are passed through for messages that contain the required
 | `time` | No | Generated as RFC 3339 in the target client |
 | `datacontenttype` | No | Changed to the output data content type after the optional transform stage |
 | `dataschema` | No | Schema defined in the schema registry |
+
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)

articles/iot-operations/connect-to-cloud/howto-create-dataflow.md

Lines changed: 8 additions & 0 deletions
@@ -937,3 +937,11 @@ kubectl get dataflow my-dataflow -o yaml > my-dataflow.yaml
 ```
 
 ---
+
+## Next steps
+
+- [Map data by using dataflows](concept-dataflow-mapping.md)
+- [Convert data by using dataflows](concept-dataflow-conversions.md)
+- [Enrich data by using dataflows](concept-dataflow-enrich.md)
+- [Understand message schemas](concept-schema-registry.md)
+- [Manage dataflow profiles](howto-configure-dataflow-profile.md)
