
Commit ddc4876

Merge pull request #32834 from datashinobi/yassinek/update_deploy_doc
update local-deploy script section
2 parents 3c8382a + 11fde81 commit ddc4876

File tree

1 file changed: 2 additions, 2 deletions


articles/machine-learning/service/how-to-troubleshoot-deployment.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -160,12 +160,12 @@ If you encounter problems deploying a model to ACI or AKS, try deploying it as a
 To deploy locally, modify your code to use `LocalWebservice.deploy_configuration()` to create a deployment configuration. Then use `Model.deploy()` to deploy the service. The following example deploys a model (contained in the `model` variable) as a local web service:

 ```python
-from azureml.core.model import InferenceConfig
+from azureml.core.model import InferenceConfig,Model
 from azureml.core.webservice import LocalWebservice

 # Create inference configuration. This creates a docker image that contains the model.
 inference_config = InferenceConfig(runtime= "python",
-                                   execution_script="score.py",
+                                   entry_script="score.py",
                                    conda_file="myenv.yml")

 # Create a local deployment, using port 8890 for the web service endpoint
```
````
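For context, the corrected snippet can be completed with the deployment calls the article text describes. The following is a minimal sketch only: it assumes an already-configured `ws` (an `azureml.core.Workspace`) and a registered `model`, neither of which appears in the diff, and it requires the Azure ML SDK v1 plus a local Docker daemon to actually run.

```python
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import LocalWebservice

# Assumed to exist: `ws` (Workspace) and `model` (registered Model).

# Create inference configuration. This creates a docker image that contains the model.
inference_config = InferenceConfig(runtime="python",
                                   entry_script="score.py",
                                   conda_file="myenv.yml")

# Create a local deployment, using port 8890 for the web service endpoint
deployment_config = LocalWebservice.deploy_configuration(port=8890)

# Deploy the model as a local web service and block until the container is up
service = Model.deploy(ws, "mymodel", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.state)
```

The renamed `entry_script` parameter (replacing `execution_script`) is the change this commit propagates into the documentation.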
