Commit 5835dc5

committed bookmarks

1 parent 4bfd06f commit 5835dc5

File tree: 1 file changed, +9 −9 lines


articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md

Lines changed: 9 additions & 9 deletions
@@ -276,17 +276,17 @@ Batch endpoints provide a durable API that consumers can use to create batch job
 
 Batch endpoints support two types of inputs:
 
-- [Data inputs](#data-inputs): Pointers to a specific storage location or Azure Machine Learning asset.
-- [Literal inputs](#literal-inputs): Literal values like numbers or strings that you want to pass to the job.
+- [Data inputs](#explore-data-inputs): Pointers to a specific storage location or Azure Machine Learning asset.
+- [Literal inputs](#explore-literal-inputs): Literal values like numbers or strings that you want to pass to the job.
 
 The number and type of inputs and outputs depend on the [type of batch deployment](concept-endpoints-batch.md#batch-deployments). Model deployments always require one data input and produce one data output. Literal inputs aren't supported. However, pipeline component deployments provide a more general construct to build endpoints and allow you to specify any number of inputs (data and literal) and outputs.
 
 The following table summarizes the inputs and outputs for batch deployments:
 
 | Deployment type | Number of inputs | Supported input types | Number of outputs | Supported output types |
 | --- | --- | --- | --- | --- |
-| [Model deployment](concept-endpoints-batch.md#model-deployment) | 1 | [Data inputs](#data-inputs) | 1 | [Data outputs](#data-outputs) |
-| [Pipeline component deployment](concept-endpoints-batch.md#pipeline-component-deployment) | [0..N] | [Data inputs](#data-inputs) and [literal inputs](#literal-inputs) | [0..N] | [Data outputs](#data-outputs) |
+| [Model deployment](concept-endpoints-batch.md#model-deployment) | 1 | [Data inputs](#explore-data-inputs) | 1 | [Data outputs](#explore-data-outputs) |
+| [Pipeline component deployment](concept-endpoints-batch.md#pipeline-component-deployment) | [0..N] | [Data inputs](#explore-data-inputs) and [literal inputs](#explore-literal-inputs) | [0..N] | [Data outputs](#explore-data-outputs) |
 
 > [!TIP]
 > Inputs and outputs are always named. The names serve as keys to identify the data and pass the actual value during invocation. Because model deployments always require one input and output, the name is ignored during invocation. You can assign the name that best describes your use case, such as "sales_estimation."
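To make the naming concrete, here is a minimal sketch (not part of the article's diff) of an invocation body with one named data input. It assumes the `InputData`/`JobInputType`/`Uri` property names used by the batch invocation REST examples; the input name `sales_estimation` and the URI are hypothetical placeholders.

```python
import json

# Hypothetical invocation body: the key under "InputData" is the input's
# name, which the deployment uses to match the value to its parameters.
# The URI below is a placeholder, not a real asset location.
body = {
    "properties": {
        "InputData": {
            "sales_estimation": {
                "JobInputType": "UriFolder",
                "Uri": "azureml://datastores/<datastore-name>/paths/<data-path>",
            }
        }
    }
}

print(json.dumps(body, indent=2))
```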
@@ -297,8 +297,8 @@ Data inputs refer to inputs that point to a location where data is placed. Becau
 
 Batch endpoints support reading files located in the following storage options:
 
-- [Azure Machine Learning Data Assets](#input-data-from-a-data-asset), including Folder (`uri_folder`) and File (`uri_file`).
-- [Azure Machine Learning Data Stores](#input-data-from-data-stores), including Azure Blob Storage, Azure Data Lake Storage Gen1, and Azure Data Lake Storage Gen2.
+- [Azure Machine Learning Data Assets](#use-input-data-from-data-asset), including Folder (`uri_folder`) and File (`uri_file`).
+- [Azure Machine Learning Data Stores](#use-input-data-from-data-stores), including Azure Blob Storage, Azure Data Lake Storage Gen1, and Azure Data Lake Storage Gen2.
 - [Azure Storage Accounts](#input-data-from-azure-storage-accounts), including Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, and Azure Blob Storage.
 - Local data folders/files (Azure Machine Learning CLI or Azure Machine Learning SDK for Python). However, that operation results in the local data being uploaded to the default Azure Machine Learning Data Store of the workspace you're working in.
 
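As a rough illustration of the path formats these storage options use (every angle-bracketed segment is a placeholder, not a value from this article):

```python
# Illustrative input path formats; each <...> segment is a placeholder.
blob_uri = "https://<account>.blob.core.windows.net/<container>/<path>"       # Azure Blob Storage
adls_gen2_uri = "abfss://<filesystem>@<account>.dfs.core.windows.net/<path>"  # Data Lake Storage Gen2
datastore_uri = "azureml://datastores/<datastore-name>/paths/<data-path>"     # registered data store

# Sanity check: each URI uses the scheme its storage type expects.
for uri, scheme in [(blob_uri, "https"), (adls_gen2_uri, "abfss"), (datastore_uri, "azureml")]:
    assert uri.startswith(scheme + "://")
    print(scheme, "->", uri)
```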

@@ -325,9 +325,9 @@ Data outputs refer to the location where the results of a batch job should be pl
 
 ## Create jobs with data inputs
 
-The following examples show how to create jobs, taking data inputs from [data assets](#input-data-from-a-data-asset), [data stores](#input-data-from-data-stores), and [Azure Storage Accounts](#input-data-from-azure-storage-accounts).
+The following examples show how to create jobs, taking data inputs from [data assets](#use-input-data-from-data-asset), [data stores](#use-input-data-from-data-stores), and [Azure Storage Accounts](#input-data-from-azure-storage-accounts).
 
-### Input data from data asset
+### Use input data from data asset
 
 Azure Machine Learning data assets (formerly known as datasets) are supported as inputs for jobs. Follow these steps to run a batch endpoint job by using data stored in a registered data asset in Azure Machine Learning.
 
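As a sketch of how a registered data asset is typically referenced in Azure Machine Learning CLI short form (the asset name and version here are hypothetical):

```python
# Build the short-form "azureml:<name>:<version>" reference the Azure ML CLI
# accepts for registered data assets. The name and version are placeholders.
asset_name = "heart-dataset"
asset_version = "1"
asset_ref = f"azureml:{asset_name}:{asset_version}"
print(asset_ref)  # -> azureml:heart-dataset:1
```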
@@ -491,7 +491,7 @@ Azure Machine Learning data assets (formerly known as datasets) are supported as
 Content-Type: application/json
 ```
 
-### Input data from data stores
+### Use input data from data stores
 
 You can directly reference data from Azure Machine Learning registered data stores with batch deployment jobs. In this example, you first upload some data to the default data store in the Azure Machine Learning workspace and then run a batch deployment on it. Follow these steps to run a batch endpoint job using data stored in a data store.
 
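A small sketch of the upload-then-reference flow the section describes, assuming the workspace's default data store keeps its default name `workspaceblobstore`; the relative path is a placeholder:

```python
# After uploading data to the default data store, reference it with an
# azureml:// datastore URI. "workspaceblobstore" is the default store name
# in a new workspace; the relative path below is a placeholder.
datastore_name = "workspaceblobstore"
relative_path = "<data-path>"  # path of the uploaded data inside the store
input_uri = f"azureml://datastores/{datastore_name}/paths/{relative_path}"
print(input_uri)
```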
