@@ -272,7 +272,8 @@ inference-time behavior of your SavedModels.
Providing Python scripts for pre/post-processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- You can add your customized Python code to process your input and output data:
+ You can add your customized Python code to process your input and output data.
+ This customized Python code must be named ``inference.py`` and specified through the ``entry_point`` parameter:
.. code::
@@ -285,8 +286,9 @@ You can add your customized Python code to process your input and output data:
How to implement the pre- and/or post-processing handler(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Your entry point file should implement either a pair of ``input_handler``
- and ``output_handler`` functions or a single ``handler`` function.
+ Your entry point file must be named ``inference.py`` and should implement
+ either a pair of ``input_handler`` and ``output_handler`` functions or
+ a single ``handler`` function.
Note that if the ``handler`` function is implemented, ``input_handler``
and ``output_handler`` are ignored.
@@ -453,6 +455,7 @@ processing. There are 2 ways to do this:
model_data='s3://mybucket/model.tar.gz',
role='MySageMakerRole')
+ For more information, see: https://github.com/aws/sagemaker-tensorflow-serving-container#prepost-processing
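Passing ``entry_point`` when constructing the model can be sketched as below. This assumes the ``sagemaker.tensorflow.serving.Model`` class from the SageMaker Python SDK; the bucket, role, and instance type are placeholders, and ``inference.py`` is expected to sit next to the deployment script.

```python
# Sketch only: assumes the SageMaker Python SDK is installed and AWS
# credentials are configured; names below are placeholders.
from sagemaker.tensorflow.serving import Model

model = Model(
    entry_point='inference.py',  # contains the pre/post-processing handlers
    model_data='s3://mybucket/model.tar.gz',
    role='MySageMakerRole')

# Deploying creates an endpoint that runs inference.py around TF Serving.
predictor = model.deploy(initial_instance_count=1,
                         instance_type='ml.c5.xlarge')
```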
Deploying more than one model to your Endpoint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~