@@ -32,13 +32,15 @@ Table of Contents
4. `TensorFlow SageMaker Estimators <#tensorflow-sagemaker-estimators>`__
5. `Chainer SageMaker Estimators <#chainer-sagemaker-estimators>`__
6. `PyTorch SageMaker Estimators <#pytorch-sagemaker-estimators>`__
- 7. `AWS SageMaker Estimators <#aws-sagemaker-estimators>`__
- 8. `BYO Docker Containers with SageMaker Estimators <#byo-docker-containers-with-sagemaker-estimators>`__
- 9. `SageMaker Automatic Model Tuning <#sagemaker-automatic-model-tuning>`__
- 10. `SageMaker Batch Transform <#sagemaker-batch-transform>`__
- 11. `Secure Training and Inference with VPC <#secure-training-and-inference-with-vpc>`__
- 12. `BYO Model <#byo-model>`__
- 13. `SageMaker Workflow <#sagemaker-workflow>`__
+ 7. `SageMaker SparkML Serving <#sagemaker-sparkml-serving>`__
+ 8. `AWS SageMaker Estimators <#aws-sagemaker-estimators>`__
+ 9. `BYO Docker Containers with SageMaker Estimators <#byo-docker-containers-with-sagemaker-estimators>`__
+ 10. `SageMaker Automatic Model Tuning <#sagemaker-automatic-model-tuning>`__
+ 11. `SageMaker Batch Transform <#sagemaker-batch-transform>`__
+ 12. `Secure Training and Inference with VPC <#secure-training-and-inference-with-vpc>`__
+ 13. `BYO Model <#byo-model>`__
+ 14. `Inference Pipelines <#inference-pipelines>`__
+ 15. `SageMaker Workflow <#sagemaker-workflow>`__

Installing the SageMaker Python SDK
@@ -374,7 +376,7 @@ For more information, see `TensorFlow SageMaker Estimators and Models`_.
Chainer SageMaker Estimators
- ------------------------------
+ ----------------------------

By using Chainer SageMaker ``Estimators``, you can train and host Chainer models on Amazon SageMaker.
@@ -390,7 +392,7 @@ For more information about Chainer SageMaker ``Estimators``, see `Chainer SageM
PyTorch SageMaker Estimators
- ------------------------------
+ ----------------------------

With PyTorch SageMaker ``Estimators``, you can train and host PyTorch models on Amazon SageMaker.
@@ -408,6 +410,39 @@ For more information about PyTorch SageMaker ``Estimators``, see `PyTorch SageMa
.. _PyTorch SageMaker Estimators and Models: src/sagemaker/pytorch/README.rst

+ SageMaker SparkML Serving
+ -------------------------
+
+ With SageMaker SparkML Serving, you can perform predictions against a SparkML model in SageMaker.
+ To host a SparkML model in SageMaker, it must be serialized with the ``MLeap`` library.
+
+ For more information on MLeap, see https://github.com/combust/mleap.
+
+ Supported major version of Spark: 2.2 (MLeap version - 0.9.6)
+
+ Here is an example of how to create an instance of the ``SparkMLModel`` class and use the ``deploy()`` method to create an
+ endpoint that you can use to perform predictions against your trained SparkML model.
+
+ .. code:: python
+
+     sparkml_model = SparkMLModel(model_data='s3://path/to/model.tar.gz', env={'SAGEMAKER_SPARKML_SCHEMA': schema})
+     model_name = 'sparkml-model'
+     endpoint_name = 'sparkml-endpoint'
+     predictor = sparkml_model.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge', endpoint_name=endpoint_name)
+
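The ``schema`` variable is left undefined in the snippet above. As a rough sketch of what it might contain (the column names and types here are hypothetical, and the exact structure the container expects is documented in the SageMaker SparkML Serving Container repository), it is a JSON document describing the input columns and the output field, serialized into the environment variable:

```python
import json

# Hypothetical schema for a model with five input columns. The exact
# structure recognized by the serving container is defined in the
# sagemaker-sparkml-serving-container documentation.
schema = {
    "input": [
        {"name": "field_1", "type": "double"},
        {"name": "field_2", "type": "double"},
        {"name": "field_3", "type": "double"},
        {"name": "field_4", "type": "double"},
        {"name": "field_5", "type": "double"},
    ],
    "output": {"name": "prediction", "type": "double"},
}

# The environment variable carries the schema as a JSON string.
schema_json = json.dumps(schema)
```

The serialized string is what would be passed as the ``SAGEMAKER_SPARKML_SCHEMA`` value in the ``env`` dictionary.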
+ Once the model is deployed, we can invoke the endpoint with a ``CSV`` payload like this:
+
+ .. code:: python
+
+     payload = 'field_1,field_2,field_3,field_4,field_5'
+     predictor.predict(payload)
+
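As an illustrative sketch (the feature values and the response string below are made up, and the actual response shape depends on the schema and the requested ``Accept`` format), a CSV payload is just the comma-joined feature values, and a scalar CSV response can be parsed back directly:

```python
# Build a CSV payload from raw feature values (hypothetical values).
values = [1.0, 2.5, 0.0, 4.2, 3.3]
payload = ",".join(str(v) for v in values)

# Stand-in for the string a real endpoint might return for a single
# numeric prediction; parse it back into a float.
response = "0.87"
prediction = float(response.strip())
```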
+
+ For more information about the different ``content-type`` and ``Accept`` formats as well as the structure of the
+ ``schema`` that SageMaker SparkML Serving recognizes, please see `SageMaker SparkML Serving Container`_.
+
+ .. _SageMaker SparkML Serving Container: https://github.com/aws/sagemaker-sparkml-serving-container
+
AWS SageMaker Estimators
------------------------

Amazon SageMaker provides several built-in machine learning algorithms that you can use to solve a variety of problems.
@@ -709,11 +744,45 @@ This returns a predictor the same way an ``Estimator`` does when ``deploy()`` is
A full example is available in the `Amazon SageMaker examples repository <https://github.com/awslabs/amazon-sagemaker-examples/tree/master/advanced_functionality/mxnet_mnist_byom>`__.

+ Inference Pipelines
+ -------------------
+ You can create a Pipeline for realtime or batch inference comprising one or more model containers. This will help
+ you to deploy an ML pipeline behind a single endpoint, and you can have one API call perform pre-processing, model scoring
+ and post-processing on your data before returning it as the response.
+
+ For this, you have to create a ``PipelineModel``, which takes a list of ``Model`` objects. Calling ``deploy()`` on the
+ ``PipelineModel`` will provide you with an endpoint that can be invoked to perform the prediction on a data point against
+ the ML pipeline.
+
+ .. code:: python
+
+     xgb_image = get_image_uri(sess.boto_region_name, 'xgboost', repo_version="latest")
+     xgb_model = Model(model_data='s3://path/to/model.tar.gz', image=xgb_image)
+     sparkml_model = SparkMLModel(model_data='s3://path/to/model.tar.gz', env={'SAGEMAKER_SPARKML_SCHEMA': schema})
+
+     model_name = 'inference-pipeline-model'
+     endpoint_name = 'inference-pipeline-endpoint'
+     sm_model = PipelineModel(name=model_name, role=sagemaker_role, models=[sparkml_model, xgb_model])
+
+ This will define a ``PipelineModel`` consisting of a SparkML model and an XGBoost model stacked sequentially. For more
+ information about how to train an XGBoost model, please refer to the XGBoost notebook here_.
+
+ .. _here: https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html#xgboost-sample-notebooks
+
+ .. code:: python
+
+     sm_model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge', endpoint_name=endpoint_name)
+
+ This returns a predictor the same way an ``Estimator`` does when ``deploy()`` is called. Whenever you make an inference
+ request using this predictor, you should pass the data that the first container expects, and the predictor will return the
+ output from the last container.
+
+
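The chaining behavior described above can be sketched in plain Python (this is an illustration of the data flow only, not SDK code; the two stage functions are hypothetical stand-ins for the SparkML and XGBoost containers):

```python
# Each pipeline stage receives the previous stage's output, so the
# caller sends what the first container expects and gets back the
# last container's output.
def preprocess(csv_row):
    # Stand-in for the SparkML container: CSV text -> feature list.
    return [float(v) for v in csv_row.split(",")]

def score(features):
    # Stand-in for the XGBoost container: features -> prediction.
    return sum(features) / len(features)

def pipeline_invoke(payload, containers):
    data = payload
    for container in containers:
        data = container(data)
    return data

result = pipeline_invoke("1.0,2.0,3.0", [preprocess, score])  # -> 2.0
```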
SageMaker Workflow
------------------

You can use Apache Airflow to author, schedule and monitor SageMaker workflows.

For more information, see `SageMaker Workflow in Apache Airflow`_.

- .. _SageMaker Workflow in Apache Airflow: src/sagemaker/workflow/README.rst
+ .. _SageMaker Workflow in Apache Airflow: src/sagemaker/workflow/README.rst