In this example, the `metric` and `iteration` parameters aren't specified, so the iteration with the best primary metric is registered. The `model_id` value returned from the run is used instead of a model name.
The entry script receives data submitted to a deployed web service and passes it to the model. It then returns the model's response to the client. *The script is specific to your model*. The entry script must understand the data that the model expects and returns.
The two things you need to accomplish in your entry script are:

1. Loading your model (using a function called `init()`)
1. Running your model on input data (using a function called `run()`)
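The two functions above can be sketched as a minimal entry-script skeleton. This is an illustration, not the article's own script: the stand-in "model" (which just sums each input row) and the request shape `{"data": [...]}` are assumptions for the example; a real script would load your registered model in `init()` instead.

```python
import json

model = None

def init():
    """Called once when the service starts: load the model here."""
    global model
    # A real entry script would load the registered model here (for
    # example, deserializing it from the path where Azure ML makes it
    # available). This stand-in model sums each input row, purely so
    # the skeleton is runnable.
    model = lambda rows: [sum(row) for row in rows]

def run(raw_data):
    """Called once per request: parse the input, score it, return the result."""
    data = json.loads(raw_data)["data"]
    return {"result": model(data)}
```

The key design point is the split of responsibilities: `init()` runs once at service startup (so expensive model loading happens only once), while `run()` runs on every request and should stay lightweight.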
For your initial deployment, use a dummy entry script that prints the data it receives. Save this file as `echo_score.py` inside a directory called `source_dir`. This dummy script returns the data you send to it, so it doesn't use the model, but it's useful for testing that the scoring script is running.
For more information, see the documentation for [LocalWebservice](/python/api/azureml-core/azureml.core.webservice.local.localwebservice), [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-), and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice).