- Explore new semantic conventions with multi-agent observability
- Integrate with popular agent frameworks
- View traces in the AI Foundry portal and Azure Monitor
- View agent threads in the Agents playground

Determining the reasoning behind your agent's executions is important for troubleshooting and debugging. However, it can be difficult for complex agents for many reasons:
```python
with tracer.start_as_current_span("agent_session[openai.agents]"):
    pass
```
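If you want to attach extra agent metadata to a session span like the one above, you can set attributes from the OpenTelemetry GenAI semantic conventions on it. The sketch below is illustrative only: the exact attribute names depend on the semantic-conventions version your instrumentation targets, and the agent name is hypothetical.

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("agent_session[openai.agents]") as span:
    # Illustrative GenAI semantic-convention attributes; confirm the exact
    # names against the semantic-conventions version you target.
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.operation.name", "invoke_agent")
    span.set_attribute("gen_ai.agent.name", "travel-planner")  # hypothetical agent name

    # Run the agent inside this block; spans created by instrumented
    # frameworks are nested under this session span automatically.
    pass
```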
## View traces in the Azure AI Foundry portal

In your project, go to **Tracing** to filter your traces as you see fit.

By selecting a trace, you can step through each span and observe how your application is responding, which helps you debug and pinpoint issues in your application.

## View traces in Azure Monitor

If you logged traces using the previous code snippet, you're all set to view your traces in Azure Monitor Application Insights. You can open Application Insights from **Manage data source** and use the **End-to-end transaction details view** to investigate further.

For more information on how to send Azure AI Inference traces to Azure Monitor and create an Azure Monitor resource, see the [Azure Monitor OpenTelemetry documentation](/azure/azure-monitor/app/opentelemetry-enable).
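As a minimal sketch of what that setup can look like with the `azure-monitor-opentelemetry` distro (the environment variable name and span name below are assumptions; use the connection string of your own Application Insights resource):

```python
# Assumes: pip install azure-monitor-opentelemetry
import os

from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

# Configure the OpenTelemetry SDK to export telemetry to Application Insights.
configure_azure_monitor(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]
)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("example-span"):
    # Spans created here are exported to Azure Monitor and appear in the
    # end-to-end transaction details view in Application Insights.
    pass
```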
## View thread results in the Azure AI Foundry Agents playground
After selecting **Thread logs**, review:

:::image type="content" source="../../agents/media/thread-trace.png" alt-text="A screenshot of a trace." lightbox="../../agents/media/thread-trace.png":::
> [!TIP]
> If you want to view thread results from a previous thread, select **My threads** in the **Agents** screen. Choose a thread, and then select **Try in playground**.
> :::image type="content" source="../../agents/media/thread-highlight.png" alt-text="A screenshot of the threads screen." lightbox="../../agents/media/thread-highlight.png":::
> You'll be able to see the **Thread logs** button at the top of the screen to view the thread results.

> [!NOTE]
> Observability features such as Risk and Safety Evaluation are billed based on consumption as listed on the [Azure pricing page](https://azure.microsoft.com/pricing/details/ai-foundry/).
## Related content
- [Python samples](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-inference/samples/sample_chat_completions_with_tracing.py) containing fully runnable Python code for tracing using synchronous and asynchronous clients.

`articles/ai-foundry/openai/concepts/fine-tuning-considerations.md`

Azure AI Foundry offers multiple types of fine-tuning techniques:
* **Reinforcement fine-tuning**: This is a model customization technique, beneficial for optimizing model behavior in highly complex or dynamic environments, enabling the model to learn and adapt through iterative feedback and decision-making. For example, financial services providers can optimize the model for faster, more accurate risk assessments or personalized investment advice. In healthcare and pharmaceuticals, o3-mini can be tailored to accelerate drug discovery, enabling more efficient data analysis, hypothesis generation, and identification of promising compounds. RFT is a great way to fine-tune when there's a very large or effectively unlimited number of ways to solve a problem. The grader rewards the model incrementally, which improves its reasoning.
* **Direct Preference Optimization (DPO)**: This is another new alignment technique for large language models, designed to adjust model weights based on human preferences. Unlike Reinforcement Learning from Human Feedback (RLHF), DPO doesn't require fitting a reward model and uses binary preferences for training. This method is computationally lighter and faster, making it equally effective at alignment while being more efficient. You provide the preferred and non-preferred responses in the training set and use the DPO technique, as sketched after this list.
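As a sketch of what a preference pair might look like when you assemble DPO training data, the record below is hypothetical: the field names and chat-style layout are illustrative, so check the current fine-tuning documentation for the exact schema your model version expects.

```python
import json

# One hypothetical preference example: a prompt plus a preferred and a
# non-preferred assistant response. Field names are illustrative only.
example = {
    "input": {
        "messages": [
            {"role": "user", "content": "Summarize our refund policy in one sentence."}
        ]
    },
    "preferred_output": [
        {"role": "assistant", "content": "Refunds are available within 30 days of purchase with a receipt."}
    ],
    "non_preferred_output": [
        {"role": "assistant", "content": "You can probably get a refund if you ask nicely."}
    ],
}

# DPO training data is typically uploaded as JSON Lines, one example per line.
with open("dpo_training.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```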
You can also stack techniques: first use SFT to create a customized model, optimized for your use case, then use preference fine-tuning to align its responses to your specific preferences. During the SFT step, you focus on data quality and representativeness of the tasks, while the DPO step adjusts responses with specific comparisons.
0 commit comments