Commit a635995

Clean up

1 parent 8904c50 commit a635995

File tree

3 files changed, +25 −4 lines changed

sdks/python/apache_beam/yaml/examples/testing/examples_test.py

Lines changed: 23 additions & 2 deletions
```diff
@@ -120,7 +120,20 @@ def test_kafka_read(
     auto_offset_reset_config,
     consumer_config):
   """
-  ...
+  This PTransform simulates the behavior of the ReadFromKafka transform
+  with the RAW format by simply using some fixed sample text data and
+  encoding it to raw bytes.
+
+  Args:
+    pcoll: The input PCollection.
+    format:
+    topic:
+    bootstrap_servers:
+    auto_offset_reset_config:
+    consumer_config:
+
+  Returns:
+    A PCollection containing the sample text data in bytes.
   """

   return (
```
```diff
@@ -131,7 +144,15 @@ def test_kafka_read(
 @beam.ptransform.ptransform_fn
 def test_run_inference(pcoll, inference_tag, model_handler):
   """
-  ...
+  This PTransform simulates the behavior of the RunInference transform.
+
+  Args:
+    pcoll: The input PCollection.
+    inference_tag: The tag to use for the returned inference.
+    model_handler: A configuration for the respective ML model handler.
+
+  Returns:
+    A PCollection containing the enriched data.
   """

   from apache_beam.ml.inference.base import PredictionResult
```

sdks/python/apache_beam/yaml/examples/transforms/ml/inference/README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -61,7 +61,7 @@ create/write to a table. See [here](
 https://cloud.google.com/bigquery/docs/datasets) for how to create
 BigQuery datasets.

-Then pipeline first reads the YouTube comments .csv dataset from
+The pipeline first reads the YouTube comments .csv dataset from
 GCS bucket and performs some clean-up before writing it to a Kafka
 topic. The pipeline then reads from that Kafka topic and applies
 various transformation logic before `RunInference` transform performs
```

sdks/python/apache_beam/yaml/examples/transforms/ml/inference/streaming_sentiment_analysis.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -16,7 +16,7 @@
 # limitations under the License.
 #

-# Then pipeline first reads the YouTube comments .csv dataset from GCS bucket
+# The pipeline first reads the YouTube comments .csv dataset from GCS bucket
 # and performs necessary clean-up before writing it to a Kafka topic.
 # The pipeline then reads from that Kafka topic and applies various transformation
 # logic before RunInference transform performs remote inference with the Vertex AI
```
