Commit 2cb2b9c

edit pass: cognitive-services-face-articles-batch1
1 parent 21c5ebd

1 file changed: 3 additions, 3 deletions


articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-use-large-scale.md

Lines changed: 3 additions & 3 deletions
@@ -201,13 +201,13 @@ and [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/56
 
 To better utilize the large-scale feature, we recommend the following strategies.
 
-## Step 3.1: Customize time interval
+### Step 3.1: Customize time interval
 
 As is shown in `TrainLargeFaceList()`, there's a time interval in milliseconds to delay the infinite training status checking process. For LargeFaceList with more faces, using a larger interval reduces the call counts and cost. Customize the time interval according to the expected capacity of the LargeFaceList.
 
 The same strategy also applies to LargePersonGroup. For example, when you train a LargePersonGroup with 1 million persons, `timeIntervalInMilliseconds` might be 60,000, which is a 1-minute interval.
 
-## Step 3.2: Small-scale buffer
+### Step 3.2: Small-scale buffer
 
 Persons or faces in a LargePersonGroup or a LargeFaceList are searchable only after being trained. In a dynamic scenario, new persons or faces are constantly added and must be immediately searchable, yet training might take longer than desired.
 
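The polling loop that Step 3.1 tunes can be sketched in a few lines. This is an illustrative sketch, not the article's sample code: `get_training_status` is a hypothetical callable standing in for the Face API training-status request, and `time_interval_ms` plays the role of the `timeIntervalInMilliseconds` parameter mentioned in the diff.

```python
import time

def wait_for_training(get_training_status, time_interval_ms=1000):
    """Poll a training operation until it completes.

    `get_training_status` is a hypothetical callable that wraps the
    service's training-status request and returns "running",
    "succeeded", or "failed". A larger `time_interval_ms` means fewer
    status calls, which suits larger collections.
    """
    while True:
        status = get_training_status()
        if status in ("succeeded", "failed"):
            return status
        time.sleep(time_interval_ms / 1000.0)
```

For a LargePersonGroup with 1 million persons, the 1-minute interval the article suggests corresponds to `time_interval_ms=60000`.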
@@ -222,7 +222,7 @@ An example workflow:
 1. When the buffer collection size increases to a threshold or at a system idle time, create a new buffer collection. Trigger the Train operation on the master collection.
 1. Delete the old buffer collection after the Train operation finishes on the master collection.
 
-## Step 3.3: Standalone training
+### Step 3.3: Standalone training
 
 If a relatively long latency is acceptable, it isn't necessary to trigger the Train operation right after you add new data. Instead, the Train operation can be split from the main logic and triggered regularly. This strategy is suitable for dynamic scenarios with acceptable latency. It can be applied to static scenarios to further reduce the Train frequency.
 
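With the small-scale buffer strategy from Step 3.2, each query has to fan out to both the trained master collection and the quickly trained buffer collection, then merge the candidates. A minimal sketch of that merge step, assuming hypothetical `search_master` and `search_buffer` callables that each wrap an Identify call and return `(person_id, confidence)` pairs:

```python
def identify(face_id, search_master, search_buffer):
    """Query the master and buffer collections, then merge candidates.

    `search_master` and `search_buffer` are hypothetical callables
    standing in for Identify calls against the two collections; each
    returns a list of (person_id, confidence) candidates. The merge
    keeps the best confidence per person, sorted highest first.
    """
    candidates = search_master(face_id) + search_buffer(face_id)
    best = {}
    for person_id, confidence in candidates:
        if confidence > best.get(person_id, 0.0):
            best[person_id] = confidence
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)
```

Once the master collection's Train operation finishes and the old buffer collection is deleted (step 3 of the workflow), the same function works unchanged with the new buffer collection.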