articles/machine-learning/how-to-mlflow-batch.md (1 addition, 1 deletion)
@@ -262,7 +262,7 @@ Azure Machine Learning supports no-code deployment for batch inference in [manag
 ### How work is distributed on workers
 
-Work is distributed at the file level, for both structured and unstructured data. As a consequence, only [file datasets](v1/how-to-create-register-datasets.md#filedataset) or [URI folders](reference-yaml-data.md) are supported for this feature. Each worker processes batches of `Mini batch size` files at a time. Further parallelism can be achieved if `Max concurrency per instance` is increased.
+Work is distributed at the file level, for both structured and unstructured data. As a consequence, only [file datasets (v1 API)](v1/how-to-create-register-datasets.md#filedataset) or [URI folders](reference-yaml-data.md) are supported for this feature. Each worker processes batches of `Mini batch size` files at a time. Further parallelism can be achieved if `Max concurrency per instance` is increased.
 
 > [!WARNING]
 > Nested folder structures are not explored during inference. If you are partitioning your data using folders, make sure to flatten the structure beforehand.
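To make the distribution model described in the changed paragraph concrete, here is a minimal, self-contained Python sketch. It is not Azure Machine Learning code; the file names, mini-batch size, and worker count are illustrative assumptions. It only shows how a flat list of input files is grouped into mini-batches of `Mini batch size` files, with `Max concurrency per instance` controlling how many mini-batches one node processes in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_mini_batches(files, mini_batch_size):
    """Yield successive groups of at most `mini_batch_size` files."""
    for start in range(0, len(files), mini_batch_size):
        yield files[start:start + mini_batch_size]

def score_mini_batch(mini_batch):
    # Placeholder for invoking the model on one mini-batch of files.
    return [f"scored:{path}" for path in mini_batch]

# A flat folder of files (nested folders are not explored, per the warning above).
files = [f"data/file_{i}.csv" for i in range(10)]
mini_batches = list(split_into_mini_batches(files, mini_batch_size=3))

# `Max concurrency per instance` corresponds to how many mini-batches a single
# compute instance works on at the same time (2 here, as an example).
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(score_mini_batch, mini_batches))

print(results)
```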
articles/machine-learning/how-to-private-endpoint-integration-synapse.md (0 additions, 3 deletions)
@@ -17,9 +17,6 @@ ms.date: 11/16/2022
 In this article, learn how to securely integrate with Azure Machine Learning from Azure Synapse. This integration enables you to use Azure Machine Learning from notebooks in your Azure Synapse workspace. Communication between the two workspaces is secured using an Azure Virtual Network.
 
-> [!TIP]
-> You can also perform integration in the opposite direction, using Azure Synapse spark pool from Azure Machine Learning. For more information, see [Link Azure Synapse and Azure Machine Learning](v1/how-to-link-synapse-ml-workspaces.md).
articles/machine-learning/how-to-troubleshoot-protobuf-descriptor-error.md (16 additions, 5 deletions)
@@ -10,14 +10,15 @@ ms.author: larryfr
 ms.reviewer: larryfr
 ms.topic: troubleshooting
 ms.date: 11/04/2022
+monikerRange: 'azureml-api-1 || azureml-api-2'
 ---
 
 # Troubleshoot `descriptors cannot not be created directly` error
 
 When using Azure Machine Learning, you may receive the following error:
 
 ```
-TypeError: Descriptors cannot not be created directly. If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.” It is followed by the proposition to install the appropriate version of protobuf library.
+TypeError: Descriptors cannot not be created directly. If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0." It is followed by the proposition to install the appropriate version of protobuf library.
 
 If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
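As a quick companion to workaround 1 above, here is a small Python check, assuming only that the `protobuf` package is importable; the `protobuf==3.20.3` pin mentioned in the comment is one illustrative choice of a 3.20.x release, not a requirement from the article. It reports whether the installed version already satisfies the 3.20.x-or-lower guidance:

```python
# A minimal sketch: inspect the installed protobuf version and flag it if it is
# newer than the 3.20.x series referenced in workaround 1.
import google.protobuf

major, minor = (int(part) for part in google.protobuf.__version__.split(".")[:2])

if (major, minor) > (3, 20):
    # Downgrade per workaround 1, for example: pip install "protobuf==3.20.3"
    print(f"protobuf {google.protobuf.__version__} is newer than 3.20.x; consider downgrading.")
else:
    print(f"protobuf {google.protobuf.__version__} already satisfies the workaround.")
```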