@@ -272,7 +272,8 @@ inference-time behavior of your SavedModels.
 Providing Python scripts for pre/post-processing
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-You can add your customized Python code to process your input and output data:
+You can add your customized Python code to process your input and output data.
+This customized Python code must be named ``inference.py`` and specified through the ``entry_point`` parameter:

 .. code::

@@ -285,8 +286,9 @@ You can add your customized Python code to process your input and output data:
 How to implement the pre- and/or post-processing handler(s)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Your entry point file should implement either a pair of ``input_handler``
-and ``output_handler`` functions or a single ``handler`` function.
+Your entry point file must be named ``inference.py`` and should implement
+either a pair of ``input_handler`` and ``output_handler`` functions or
+a single ``handler`` function.
 Note that if the ``handler`` function is implemented, ``input_handler``
 and ``output_handler`` are ignored.

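For illustration, a minimal ``inference.py`` using the paired handlers might look like the sketch below. The ``(data, context)`` signatures and the ``context`` attribute names (``request_content_type``, ``accept_header``) follow the sagemaker-tensorflow-serving-container documentation; the JSON wrapping shown is just one possible pre-processing step, not the required one.

```python
import json


def input_handler(data, context):
    """Pre-process the raw request body into the JSON payload that
    TensorFlow Serving's REST API expects.

    Note: the ``context`` attribute names used here are taken from the
    sagemaker-tensorflow-serving-container docs; adapt as needed.
    """
    if context.request_content_type == 'application/json':
        instances = json.loads(data.read().decode('utf-8'))
        # Wrap the raw instances in the TF Serving predict request format.
        return json.dumps({'instances': instances})
    raise ValueError(
        'Unsupported content type: {}'.format(context.request_content_type))


def output_handler(response, context):
    """Post-process the TensorFlow Serving response before it is
    returned to the client, as a (body, content_type) pair."""
    return response.content, context.accept_header
```

A request handled this way passes through ``input_handler`` on the way in and ``output_handler`` on the way out; raising from ``input_handler`` rejects unsupported content types early.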
@@ -453,6 +455,7 @@ processing. There are 2 ways to do this:
                model_data='s3://mybucket/model.tar.gz',
                role='MySageMakerRole')

+For more information, see: https://github.com/aws/sagemaker-tensorflow-serving-container#prepost-processing

 Deploying more than one model to your Endpoint
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
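To round out the handler discussion earlier in the diff: the single ``handler`` function replaces the ``input_handler``/``output_handler`` pair and, per the sagemaker-tensorflow-serving-container documentation, must call TensorFlow Serving itself via ``context.rest_uri``. The sketch below is an assumption-laden illustration: ``urllib`` stands in for whatever HTTP client you prefer, and the JSON wrapping is only one possible transformation.

```python
import json
import urllib.request


def handler(data, context):
    """Pre-process the request, call TensorFlow Serving, and
    post-process the response in a single function.

    ``context.rest_uri`` and ``context.accept_header`` are attribute
    names taken from the sagemaker-tensorflow-serving-container docs.
    """
    instances = json.loads(data.read().decode('utf-8'))
    request_body = json.dumps({'instances': instances}).encode('utf-8')
    # Invoke the TF Serving REST API directly; when ``handler`` is
    # defined, no other handler runs, so this call is our job.
    response = urllib.request.urlopen(context.rest_uri, data=request_body)
    # Return a (response_body, content_type) pair to the container.
    return response.read(), context.accept_header
```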