Commit d289765: revamp sagemaker
Parent: 9fc4db9

1 file changed: src/content/docs/aws/services/sagemaker.md (25 additions, 26 deletions)
```diff
@@ -1,6 +1,5 @@
 ---
 title: "SageMaker"
-linkTitle: "SageMaker"
 description: Get started with SageMaker on LocalStack
 tags: ["Ultimate"]
 ---
```
```diff
@@ -11,13 +10,13 @@ Amazon SageMaker is a fully managed service provided by Amazon Web Services (AWS
 It streamlines the machine learning development process, reduces the time and effort required to build and deploy models, and offers the scalability and flexibility needed for large-scale machine learning projects in the AWS cloud.
 
 LocalStack provides a local version of the SageMaker API, which allows running jobs to create machine learning models (e.g., using PyTorch) and to deploy them.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_sagemaker" >}}), which provides information on the extent of Sagemaker's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Sagemaker's integration with LocalStack.
 
-{{< callout >}}
+:::note
 LocalStack supports custom-built models in SageMaker.
 You can push your Docker image to LocalStack's Elastic Container Registry (ECR) and use it in SageMaker.
 LocalStack will use the local ECR image to create a SageMaker model.
-{{< /callout >}}
+:::
 
 ## Getting started
 
```
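As background for the ECR note in the hunk above, the flow it describes can be sketched with the `awslocal` CLI; the repository, image, model, and role names below are hypothetical, and the repository URI will vary per setup:

```bash
# Create a repository in the local ECR and capture its URI (names are illustrative).
REPO_URI=$(awslocal ecr create-repository --repository-name my-model \
  --query 'repository.repositoryUri' --output text)

# Tag a locally built image and push it into the local registry.
docker tag my-model:latest "$REPO_URI:latest"
docker push "$REPO_URI:latest"

# Create a SageMaker model that points at the local ECR image;
# 000000000000 is LocalStack's default account ID.
awslocal sagemaker create-model --model-name my-model \
  --primary-container Image="$REPO_URI:latest" \
  --execution-role-arn arn:aws:iam::000000000000:role/sagemaker-role
```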
```diff
@@ -29,46 +28,46 @@ We will demonstrate an application illustrating running a machine learning job u
 - Creates a SageMaker Endpoint for accessing the model
 - Invokes the endpoint directly on the container via Boto3
 
-{{< callout >}}
+:::note
 SageMaker is a fairly comprehensive API for now.
 Currently a subset of the functionality is provided locally, but new features are being added on a regular basis.
-{{< /callout >}}
+:::
 
 ### Download the sample application
 
 You can download the sample application from [GitHub](https://github.com/localstack/localstack-pro-samples/tree/master/sagemaker-inference) or by running the following commands:
 
-{{< command >}}
-$ mkdir localstack-samples && cd localstack-samples
-$ git init
-$ git remote add origin -f git@github.com:localstack/localstack-pro-samples.git
-$ git config core.sparseCheckout true
-$ echo sagemaker-inference >> .git/info/sparse-checkout
-$ git pull origin master
-{{< /command >}}
+```bash
+mkdir localstack-samples && cd localstack-samples
+git init
+git remote add origin -f git@github.com:localstack/localstack-pro-samples.git
+git config core.sparseCheckout true
+echo sagemaker-inference >> .git/info/sparse-checkout
+git pull origin master
+```
 
 ### Set up the environment
 
 After downloading the sample application, you can set up your Docker Client to pull the AWS Deep Learning images by running the following command:
 
-{{< command >}}
-$ aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com
-{{< /command >}}
+```bash
+aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com
+```
 
 Since the images are quite large (several gigabytes), it's a good idea to pull the images using Docker in advance.
 
-{{< command >}}
-$ docker pull 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:1.5.0-cpu-py3
-{{< /command >}}
+```bash
+docker pull 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:1.5.0-cpu-py3
+```
 
 ### Run the sample application
 
 Start your LocalStack container using your preferred method.
 Run the sample application by executing the following command:
 
-{{< command >}}
-$ python3 main.,py
-{{< /command >}}
+```bash
+python3 main.py
+```
 
 You should see the following output:
 
```
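The endpoint invocation that `main.py` performs through Boto3 has a CLI equivalent; a minimal sketch, assuming a hypothetical endpoint named `my-endpoint` that accepts JSON (the exact payload depends on the model):

```bash
# Invoke the locally deployed endpoint via the SageMaker runtime API.
# AWS CLI v2 needs --cli-binary-format to pass a raw (non-base64) JSON body.
awslocal sagemaker-runtime invoke-endpoint \
  --endpoint-name my-endpoint \
  --content-type application/json \
  --cli-binary-format raw-in-base64-out \
  --body '{"inputs": [1.0, 2.0, 3.0]}' \
  output.json

# The model's response is written to the output file.
cat output.json
```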
```diff
@@ -92,19 +91,19 @@ You can also invoke a serverless endpoint, by navigating to `main.py` and uncomm
 
 ## Resource Browser
 
-The LocalStack Web Application provides a [Resource Browser]({{< ref "resource-browser" >}}) for managing Lambda resources.
+The LocalStack Web Application provides a [Resource Browser](/aws/capabilities/web-app/resource-browser) for managing Sagemaker resources.
 You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Sagemaker** under the **Compute** section.
 
 The Resource Browser displays Models, Endpoint Configurations and Endpoint.
 You can click on individual resources to view their details.
 
-<img src="sagemaker-resource-browser.png" alt="Sagemaker Resource Browser" title="Lambda Resource Browser" width="900" />
+![Sagemaker Resource Browser](/images/aws/sagemaker-resource-browser.png)
 
 The Resource Browser allows you to perform the following actions:
 
 - **Create and Remove Models**: You can remove existing model and create a new model with the required configuration
 
-<img src="sagemaker-create-model.png" alt="Sagemaker Resource Browser" title="Lambda Resource Browser" width="900" />
+![Sagemaker Create Model](/images/aws/sagemaker-create-model.png)
 
 - **Endpoint Configurations & Endpoints**: You can create endpoints from the resource browser that hosts your deployed machine learning model.
 You can also create endpoint configuration that specifies the type and number of instances that will be used to serve your model on an endpoint.
```
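The create actions listed for the Resource Browser map onto plain SageMaker API calls, so the same setup can also be scripted; a rough sketch, reusing the hypothetical model name from earlier:

```bash
# An endpoint configuration specifies the instance type and count used to serve the model.
awslocal sagemaker create-endpoint-config \
  --endpoint-config-name my-config \
  --production-variants VariantName=v1,ModelName=my-model,InitialInstanceCount=1,InstanceType=ml.m5.large

# The endpoint hosts the model under that configuration.
awslocal sagemaker create-endpoint \
  --endpoint-name my-endpoint \
  --endpoint-config-name my-config
```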
