
Commit d8d3023

bookmark links
1 parent b0f42d3 commit d8d3023

2 files changed, +2 -6 lines changed

articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md

Lines changed: 1 addition & 5 deletions
@@ -8,7 +8,7 @@ ms.subservice: inferencing
 ms.topic: how-to
 author: msakande
 ms.author: mopeakande
-ms.date: 07/31/2024
+ms.date: 08/21/2024
 ms.reviewer: cacrest
 ms.custom:
 - devplatv2
@@ -271,8 +271,6 @@ Content-Type: application/json
 
 ---
 
-<a name="understanding-inputs-and-outputs"></a>
-
 ## Understand inputs and outputs
 
 Batch endpoints provide a durable API that consumers can use to create batch jobs. The same interface can be used to specify the inputs and outputs your deployment expects. Use inputs to pass any information your endpoint needs to perform the job.
@@ -296,8 +294,6 @@ The following table summarizes the inputs and outputs for batch deployments:
 > [!TIP]
 > Inputs and outputs are always named. The names serve as keys to identify the data and pass the actual value during invocation. Because model deployments always require one input and output, the name is ignored during invocation. You can assign the name that best describes your use case, such as "sales_estimation."
 
-<a name="data-inputs"></a>
-
 ### Explore data inputs
 
 Data inputs refer to inputs that point to a location where data is placed. Because batch endpoints usually consume large amounts of data, you can't pass the input data as part of the invocation request. Instead, you specify the location where the batch endpoint should go to look for the data. Input data is mounted and streamed on the target compute to improve performance.
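
The paragraphs above, from the updated article, describe data inputs as pointers to a storage location rather than data carried in the invocation request. As a rough sketch of that pattern (not part of this commit), the following uses the Azure Machine Learning Python SDK v2 to invoke a batch endpoint with one named data input; the subscription, workspace, endpoint name, input name, and datastore path are all placeholders.

```python
from azure.ai.ml import MLClient, Input
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

# Connect to the workspace (placeholder identifiers).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# A data input is a pointer to where the data lives; the data itself is not
# sent in the request. The endpoint mounts or streams it on the target compute.
input_data = Input(
    type=AssetTypes.URI_FOLDER,
    path="azureml://datastores/workspaceblobstore/paths/<DATA_FOLDER>",
)

# Invoke the batch endpoint with a single named input (the name is illustrative,
# as noted in the tip above).
job = ml_client.batch_endpoints.invoke(
    endpoint_name="<ENDPOINT_NAME>",
    inputs={"sales_estimation": input_data},
)
print(job.name)
```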

articles/machine-learning/how-to-use-batch-fabric.md

Lines changed: 1 addition & 1 deletion
@@ -166,7 +166,7 @@ In this section, you create a Fabric-to-batch inferencing pipeline in your exist
 
 In this section, you configure inputs and outputs from the batch endpoint. **Inputs** to batch endpoints supply data and parameters needed to run the process. The Azure Machine Learning batch pipeline in Fabric supports both [model deployments](how-to-use-batch-model-deployments.md) and [pipeline deployments](how-to-use-batch-pipeline-deployments.md). The number and type of inputs you provide depend on the deployment type. In this example, you use a model deployment that requires exactly one input and produces one output.
 
-For more information on batch endpoint inputs and outputs, see [Understanding inputs and outputs in Batch Endpoints](how-to-access-data-batch-endpoints-jobs.md#understanding-inputs-and-outputs).
+For more information on batch endpoint inputs and outputs, see [Understanding inputs and outputs in Batch Endpoints](how-to-access-data-batch-endpoints-jobs.md#understand-inputs-and-outputs).
 
 #### Configure the input section
 