
Commit 13962c3

edit
1 parent 4b61008 commit 13962c3

File tree

1 file changed (+8, -8 lines changed)


articles/machine-learning/how-to-use-batch-model-deployments.md

Lines changed: 8 additions & 8 deletions
@@ -643,19 +643,19 @@ Once you've identified the data store you want to use, configure the output as f
 
 ## Overwrite deployment configuration for each job
 
-When you invoke a batch endpoint, some settings can be overwritten to make best use of the compute resources and to improve performance. This is useful when you need different settings for different jobs without modifying the deployment permanently.
+When you invoke a batch endpoint, some settings can be overwritten to make best use of the compute resources and improve performance. This capability is useful when you need different settings for different jobs without permanently modifying the deployment.
 
 ### Which settings can be overridden?
 
-The following settings can be configured on a per-job basis:
+You can configure the following settings on a per-job basis:
 
 | Setting | When to use | Example scenario |
 |---------|-------------|-------------------|
-| __Instance count__ | When you have varying data volumes | Use more instances for larger datasets (e.g., 10 instances for 1M files vs 2 instances for 100K files) |
-| __Mini-batch size__ | When you need to balance throughput and memory | Smaller batches (10-50 files) for large images, larger batches (100-500 files) for small text files |
-| __Max retries__ | When data quality varies | Higher retries (5-10) for noisy data, lower retries (1-3) for clean data |
-| __Timeout__ | When processing time varies by data type | Longer timeout (300s) for complex models, shorter timeout (30s) for simple models |
-| __Error threshold__ | When you need different failure tolerance | Strict threshold (-1) for critical jobs, lenient threshold (10%) for experimental jobs |
+| __Instance count__ | When you have varying data volumes | Use more instances for larger datasets (10 instances for 1M files vs. 2 instances for 100K files) |
+| __Mini-batch size__ | When you need to balance throughput and memory usage | Smaller batches (10-50 files) for large images; larger batches (100-500 files) for small text files |
+| __Max retries__ | When data quality varies | Higher retries (5-10) for noisy data; lower retries (1-3) for clean data |
+| __Timeout__ | When processing time varies by data type | Longer timeout (300s) for complex models; shorter timeout (30s) for simple models |
+| __Error threshold__ | When you need different failure tolerance levels | Strict threshold (-1) for critical jobs; lenient threshold (10%) for experimental jobs |
 
 ### How to override settings
 
@@ -681,7 +681,7 @@ The following settings can be configured on a per-job basis:
 
 1. Select the option __Override deployment settings__.
 
-1. Configure the job parameters. Only the current job execution will be affected by this configuration.
+1. Configure the job parameters. Only the current job execution is affected by this configuration.
 
 1. Select __Next__.