articles/ai-services/speech-service/speech-container-faq.yml (1 addition, 0 deletions)
@@ -202,6 +202,7 @@ sections:
> Also, the first run of either container might take longer because models are being paged into memory.
- Real-time performance varies with concurrency. At a concurrency of 1, an NTTS container instance can achieve 10x real-time performance; at a concurrency of 5, performance drops to 3x or lower. We recommend sending fewer than 5 concurrent requests to a single container; start more containers to handle higher concurrency.
+- If you intend to use our TTS image in your online service, restart these pods weekly to keep the service stable. Here, 'restart' means terminating the existing pods and replacing them by launching new ones.
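The concurrency guidance above can be enforced client-side with a semaphore. This is a hypothetical sketch, not the Speech SDK: `synthesize` stands in for a real call to the TTS container endpoint, and the cap of 4 reflects the recommendation to send fewer than 5 concurrent requests per container.

```python
import asyncio

# Assumption: cap in-flight requests to one container at 4,
# per the guidance to send fewer than 5 concurrent requests.
MAX_CONCURRENT_PER_CONTAINER = 4

async def synthesize(text: str, semaphore: asyncio.Semaphore) -> str:
    # Stand-in for a real HTTP call to the TTS container endpoint.
    async with semaphore:
        await asyncio.sleep(0.01)  # simulate synthesis latency
        return f"audio:{text}"

async def synthesize_batch(texts: list[str]) -> list[str]:
    # One semaphore per container instance; for more throughput,
    # run additional containers, each with its own semaphore.
    semaphore = asyncio.Semaphore(MAX_CONCURRENT_PER_CONTAINER)
    return await asyncio.gather(*(synthesize(t, semaphore) for t in texts))

results = asyncio.run(synthesize_batch([f"utterance {i}" for i in range(10)]))
print(len(results))  # → 10
```

To scale beyond one container, partition requests across several endpoints, each throttled independently.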
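One way to automate the weekly restart, assuming the TTS containers run on Kubernetes as a Deployment (the name `tts-deployment` is hypothetical), is a rolling restart, which terminates existing pods and launches replacements:

```shell
# Hypothetical Deployment name; substitute your own.
# A rolling restart replaces pods gradually, avoiding downtime.
kubectl rollout restart deployment/tts-deployment

# To schedule it weekly, run the same command from cron on an admin host,
# e.g. every Sunday at 03:00:
# 0 3 * * 0  kubectl rollout restart deployment/tts-deployment
```

A Kubernetes CronJob with appropriate RBAC permissions could run the same command in-cluster.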
articles/machine-learning/prompt-flow/how-to-trace-local-sdk.md (3 additions, 0 deletions)
@@ -14,6 +14,9 @@ ms.reviewer: chenlujiao
# How to trace your application with prompt flow SDK | Azure Machine Learning
+
+> [!CAUTION]
+> **Deprecation notice:** The prompt flow tracing SDK has been deprecated in favor of [tracing with the Azure AI Foundry project library](../../ai-foundry/how-to/develop/trace-local-sdk.md). Setting the prompt flow config attribute `trace.destination` to send traces to Azure Machine Learning workspaces is no longer supported; use the Azure AI Inference SDK with the Azure AI Foundry project library to trace your application code.
Tracing is a powerful tool that offers developers an in-depth understanding of the execution process of their generative AI applications, such as agents, [AutoGen](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat), and retrieval augmented generation (RAG) use cases. It provides a detailed view of the execution flow, including the inputs and outputs of each node within the application. This information is critical when debugging complex applications or optimizing performance.
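Conceptually, a tracer records the name, inputs, and output of each node as it executes. The following is a minimal self-contained sketch of that idea (not the prompt flow or Azure AI Foundry API); `retrieve` and `generate` are hypothetical stand-ins for the steps of a RAG pipeline:

```python
import functools
from typing import Any, Callable

# Each entry captures one node execution: name, inputs, and output.
TRACE: list[dict[str, Any]] = []

def traced(func: Callable) -> Callable:
    """Record a node's inputs and output, as a tracing SDK would."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        TRACE.append({
            "node": func.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
        })
        return result
    return wrapper

@traced
def retrieve(query: str) -> list[str]:
    # Stand-in for the retrieval step of a RAG pipeline.
    return [f"doc about {query}"]

@traced
def generate(query: str, docs: list[str]) -> str:
    # Stand-in for the LLM generation step.
    return f"answer to '{query}' using {len(docs)} doc(s)"

docs = retrieve("tracing")
answer = generate("tracing", docs)
print([t["node"] for t in TRACE])  # → ['retrieve', 'generate']
```

A real tracing SDK additionally records timing, nesting, and errors, and exports spans to a backend for visualization rather than an in-memory list.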