Commit 63f1a9f

Merge pull request #78598 from jpe316/patch-27
Update how-to-deploy-and-where.md
2 parents 4658b66 + 96d02ac commit 63f1a9f

File tree

1 file changed: +21 −0 lines changed


articles/machine-learning/service/how-to-deploy-and-where.md

Lines changed: 21 additions & 0 deletions
@@ -105,6 +105,16 @@ The script contains two functions that load and run the model:

* `run(input_data)`: This function uses the model to predict a value based on the input data. Inputs to and outputs from `run` typically use JSON for serialization and deserialization. You can also work with raw binary data. You can transform the data before sending it to the model, or before returning it to the client.

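The JSON round trip described above can be sketched as follows. This is a minimal, hypothetical `run()`, not the article's own code; the doubling logic is a stand-in for a real model's `predict()` call, which would use the model loaded in `init()`:

```python
import json

def run(raw_data):
    """Minimal sketch of a run() function: JSON in, JSON out.

    A real scoring script would call the model loaded in init();
    here, doubling each value stands in for model.predict().
    """
    data = json.loads(raw_data)["data"]          # deserialize the request body
    predictions = [2 * x for x in data]          # stand-in for model.predict(data)
    return json.dumps({"result": predictions})   # serialize the response
```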
#### What is `get_model_path`?

When you register a model, you provide a model name that's used to manage the model in the registry. You pass this name to the `get_model_path` API, which returns the path of the model file or files on the local file system. If you register a folder or a collection of files, the API returns the path to the directory that contains those files.
The name that you give a model when you register it corresponds to where the model's files are placed, whether you're running locally or during service deployment.
The following example returns the path to a single file called `sklearn_mnist_model.pkl`, which was registered with the name `sklearn_mnist`:

```python
model_path = Model.get_model_path('sklearn_mnist')
```
#### (Optional) Automatic Swagger schema generation

To automatically generate a schema for your web service, provide a sample of the input and/or output in the constructor for one of the defined type objects. The type and sample are then used to automatically create the schema, and Azure Machine Learning service creates an [OpenAPI](https://swagger.io/docs/specification/about/) (Swagger) specification for the web service during deployment.
@@ -262,6 +272,17 @@ The following table provides an example of creating a deployment configuration f

The following sections demonstrate how to create the deployment configuration, and then use it to deploy the web service.

### Optional: Profile your model
Before you deploy your model as a service, you may want to profile it to determine its optimal CPU and memory requirements. You can profile a model by using either the SDK or the CLI.
For more information, see the [Model.profile SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#profile-workspace--profile-name--models--inference-config--input-data-).
Model profiling results are emitted as a `Run` object. For specifics on the model profile schema, see the [ModelProfile documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.profile.modelprofile?view=azure-ml-py).
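As a rough sketch of the SDK route, profiling might look like the following. This is an assumption-laden outline, not code from the article: the workspace, registered model, inference configuration, and sample input are presumed to come from earlier steps; the profile name is hypothetical; and the wait/details calls are the generic `Run`-style ones, which may differ by SDK version:

```python
def profile_model(workspace, registered_model, inference_config, sample_input_json):
    """Hypothetical helper that profiles a registered model to estimate
    its CPU and memory requirements. Assumes azureml-core is installed and
    that workspace, model, and inference config were created in earlier steps."""
    from azureml.core.model import Model  # deferred so this sketch imports cleanly

    profile = Model.profile(workspace,
                            'sklearn-mnist-profile',  # hypothetical profile name
                            [registered_model],
                            inference_config,
                            input_data=sample_input_json)
    # Profiling results are emitted as a Run object, so Run-style calls apply.
    profile.wait_for_completion(show_output=True)
    return profile.get_details()
```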
## Deploy to target

### <a id="local"></a> Local deployment
