Commit e068567

Remove comment
1 parent 9492535 commit e068567

File tree

1 file changed: +0 −37 lines changed

articles/iot-operations/connect-to-cloud/howto-create-dataflow.md

Lines changed: 0 additions & 37 deletions
````diff
@@ -774,43 +774,6 @@ builtInTransformationSettings:
 
 To learn more, see [Map data by using dataflows](concept-dataflow-mapping.md) and [Convert data by using dataflows](concept-dataflow-conversions.md).
 
-<!-- TODO: DOE content for this -->
-
-<!-- #### Passthrough operation
-
-Using map, you can apply a passthrough operation that takes all the input fields and maps them to the output field, essentially passing through all fields.
-
-# [Portal](#tab/portal)
-
-TBD
-
-# [Bicep](#tab/bicep)
-
-```bicep
-builtInTransformationSettings: {
-  map: [
-    {
-      inputs: [ '*' ]
-      output: '*'
-    }
-  ]
-}
-```
-
-# [Kubernetes (preview)](#tab/kubernetes)
-
-```yaml
-builtInTransformationSettings:
-  map:
-    - inputs:
-        - '*'
-      output: '*'
-```
-
----
-
--->
-
 ### Serialize data according to a schema
 
 If you want to serialize the data before sending it to the destination, you need to specify a schema and serialization format. Otherwise, the data is serialized in JSON with the types inferred. Storage endpoints like Microsoft Fabric or Azure Data Lake require a schema to ensure data consistency. Supported serialization formats are Parquet and Delta.
````
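The serialization paragraph kept in the context lines above pairs with settings on the same `builtInTransformationSettings` object shown in the hunk header. A minimal sketch in the Kubernetes YAML form this article uses elsewhere; the `schemaRef` value is a hypothetical placeholder, and the property names are assumptions to verify against the current article and dataflow API schema:

```yaml
builtInTransformationSettings:
  # Reference to a schema registered in the schema registry (placeholder value);
  # storage endpoints such as Microsoft Fabric or Azure Data Lake require one.
  schemaRef: aio-sr://exampleNamespace/exampleSchema:1.0.0
  # Supported formats per the article: Parquet or Delta. If omitted,
  # data is serialized as JSON with inferred types.
  serializationFormat: Parquet
```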

0 commit comments