Commit a23cc6f

edit for pub
1 parent 22d73b8 commit a23cc6f

2 files changed (+3 -3 lines changed)

articles/ai-services/.openpublishing.redirection.ai-services.json

Lines changed: 2 additions & 2 deletions
@@ -2,12 +2,12 @@
   "redirections": [
     {
       "source_path_from_root": "/articles/ai-services/computer-vision/how-to/install-sdk.md",
-      "redirect_url": "/articles/ai-services/computer-vision/sdk/install-sdk",
+      "redirect_url": "/azure/ai-services/computer-vision/sdk/install-sdk",
       "redirect_document_id": false
     },
     {
       "source_path_from_root": "/articles/ai-services/document-intelligence/concept-document-intelligence-studio.md",
-      "redirect_url": "/articles/ai-services/document-intelligence/studio-overview",
+      "redirect_url": "/azure/ai-services/document-intelligence/studio-overview",
       "redirect_document_id": false
     }
   ]
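
The change above only swaps the redirect_url values so the published pages resolve under /azure/ instead of /articles/. As a side note, the sketch below is a minimal illustration (not part of this commit) of how such a redirection entry maps a source file to its live URL; the lookup_redirect helper is hypothetical and not part of the Open Publishing toolchain, and the file name comes from the diff above.

# Illustrative only (not part of this commit): resolve a source path to its
# redirect target using the redirection JSON structure shown in the diff above.
import json

def lookup_redirect(redirection_file, source_path):
    """Return the redirect_url for a given source_path_from_root, if any."""
    with open(redirection_file, encoding="utf-8") as f:
        data = json.load(f)
    for entry in data.get("redirections", []):
        if entry.get("source_path_from_root") == source_path:
            return entry.get("redirect_url")
    return None

# Example: after this commit, the computer-vision entry resolves to the /azure/ path.
print(lookup_redirect(
    ".openpublishing.redirection.ai-services.json",
    "/articles/ai-services/computer-vision/how-to/install-sdk.md",
))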

articles/ai-services/document-intelligence/concept-model-overview.md

Lines changed: 1 addition & 1 deletion
@@ -80,7 +80,7 @@ The following table shows the available models for each current preview and stab
 
 ### Latency
 
-Latency is the amount of time it takes for an API server to handle and process an incoming request and deliver the outgoing response to the client. The time to analyze a document depends on the size (for example, number of pages) and associated content on each page. Document Intelligence is a multi tenant service where latency for similar documents is comparable but not always identical. Occasional variability in latency and performance is inherent in any microservice-based, stateless, asynchronous service that processes images and large documents at scale. Although we're continuously scaling up the hardware and capacity and scaling capabilities, you might still have latency issues at runtime.
+Latency is the amount of time it takes for an API server to handle and process an incoming request and deliver the outgoing response to the client. The time to analyze a document depends on the size (for example, number of pages) and associated content on each page. Document Intelligence is a multitenant service where latency for similar documents is comparable but not always identical. Occasional variability in latency and performance is inherent in any microservice-based, stateless, asynchronous service that processes images and large documents at scale. Although we're continuously scaling up the hardware and capacity and scaling capabilities, you might still have latency issues at runtime.
 
 |**Add-on Capability**| **Add-On/Free**|&bullet; [2024-02-29-preview](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2024-02-29-preview&preserve-view=true&branch=docintelligence&tabs=HTTP) <br>&bullet [2023-10-31-preview](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true|[`2023-07-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[`2022-08-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)|[v2.1 (GA)](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)|
 |----------------|-----------|---|--|---|---|
