This runs a transform job against all the files under ``s3://mybucket/path/to/my/csv/data``, transforming the input
data in order with each model container in the pipeline. For each input file that was successfully transformed, one output file in ``s3://my-output-bucket/path/to/my/output/data/``
will be created with the same name, appended with '.out'.
This transform job will split CSV files by newline separators, which is especially useful if the input files are large. The transform job will
assemble the outputs with line separators when writing each input file's corresponding output file.
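
The behavior described above roughly corresponds to a transform call like the following. This is a minimal sketch, assuming ``pipeline_model`` is an existing ``sagemaker.pipeline.PipelineModel``; the instance count and type are illustrative placeholders, not part of the example above:

.. code:: python

    # Create a transformer from the pipeline model. ``assemble_with='Line'``
    # joins the transformed records with newline separators when writing
    # each input file's corresponding output file.
    transformer = pipeline_model.transformer(
        instance_count=1,
        instance_type='ml.m4.xlarge',
        assemble_with='Line',
        output_path='s3://my-output-bucket/path/to/my/output/data/',
    )

    # ``split_type='Line'`` splits each input CSV file on newline separators
    # before sending records through the pipeline.
    transformer.transform(
        data='s3://mybucket/path/to/my/csv/data',
        content_type='text/csv',
        split_type='Line',
    )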
Each payload entering the first model container will be up to six megabytes, and up to eight inference requests will be sent at the
same time to the first model container. Since each payload will consist of a mini-batch of multiple CSV records, the model
containers will transform each mini-batch of records.
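
The payload size, concurrency, and mini-batching described above map onto arguments passed when the transformer is created. A hedged sketch, again assuming the same illustrative ``pipeline_model``:

.. code:: python

    transformer = pipeline_model.transformer(
        instance_count=1,
        instance_type='ml.m4.xlarge',
        strategy='MultiRecord',        # pack a mini-batch of CSV records per request
        max_payload=6,                 # cap each payload at six megabytes
        max_concurrent_transforms=8,   # up to eight requests in flight at once
        assemble_with='Line',
        output_path='s3://my-output-bucket/path/to/my/output/data/',
    )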
For comprehensive examples of how to use Inference Pipelines, please refer to the following notebooks: