articles/machine-learning/service/how-to-deploy-and-where.md
The script contains two functions that load and run the model:
* `run(input_data)`: This function uses the model to predict a value based on the input data. Inputs and outputs to `run` typically use JSON for serialization and deserialization, but you can also work with raw binary data. You can transform the data before sending it to the model, or before returning it to the client.
#### What is get_model_path?
When you register a model, you provide a model name used for managing the model in the registry. You use this name with the `get_model_path` API, which returns the path of the model file (or files) on the local file system. If you register a folder or a collection of files, the API returns the path of the directory that contains those files.
When you register a model, you give it a name that corresponds to where the model is placed, either locally or during service deployment.
The following example returns the path to a single file called `sklearn_mnist_model.pkl`, which was registered under the name `sklearn_mnist`:
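A sketch of that lookup with the Azure Machine Learning SDK (it assumes the SDK is installed and a model named `sklearn_mnist` is already registered or available locally):

```python
from azureml.core.model import Model

# Resolve the registered name to a local path. For a single-file model this
# is the path to the file itself; for a folder registration it is the folder.
model_path = Model.get_model_path('sklearn_mnist')
```

In a scoring script, `init()` typically calls this and then loads the file at the returned path.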
To automatically generate a schema for your web service, provide a sample of the input and/or output in the constructor for one of the defined type objects, and the type and sample are used to automatically create the schema. Azure Machine Learning service then creates an [OpenAPI](https://swagger.io/docs/specification/about/) (Swagger) specification for the web service during deployment.
The following table provides an example of creating a deployment configuration f
The following sections demonstrate how to create the deployment configuration, and then use it to deploy the web service.
### Optional: Profile your model
Prior to deploying your model as a service, you may want to profile it to determine optimal CPU and memory requirements. You can do this via the SDK or the CLI.
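Via the SDK, a profiling run can be sketched as follows (the workspace `config.json`, the registered model name, and the `inference_config` and `test_data` objects are all assumed to exist already):

```python
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()              # assumes a local config.json
model = Model(ws, name='sklearn_mnist')   # a previously registered model

# 'inference_config' (an InferenceConfig) and 'test_data' (a sample payload
# to replay against the service) are assumed to be defined elsewhere.
profile = Model.profile(ws, 'sklearn-mnist-profile', [model],
                        inference_config, test_data)
profile.wait_for_profiling(True)
print(profile.get_results())  # recommended CPU and memory
```

The returned recommendations can then be plugged into the deployment configuration described above.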
For more information, see the SDK documentation.