
Commit 53e6b22

Merge pull request #7564 from MicrosoftDocs/main
Auto Publish – main to live - 2025-10-08 22:05 UTC
2 parents e0ca3ae + f80a777 commit 53e6b22

File tree

11 files changed: +32 -201 lines

articles/ai-foundry/how-to/develop/trace-agents-sdk.md

Lines changed: 17 additions & 18 deletions
```diff
@@ -18,11 +18,12 @@ ms.custom: references_regions
 
 In this article, you learn how to:
 
-- Trace key concepts
+- Understand key tracing concepts
 - Trace and observe AI agents in AI Foundry
-- Interpret spans (steps, tool calls, nested operations).
-- View agent threads in the Agents playground.
+- Explore new semantic conventions with multi-agent observability
+- Integrate with popular agent frameworks
 - View traces in the AI Foundry portal and Azure Monitor
+- View agent threads in the Agents playground
 
 Determining the reasoning behind your agent's executions is important for troubleshooting and debugging. However, it can be difficult for complex agents for many reasons:
 
```
````diff
@@ -679,6 +680,17 @@ with tracer.start_as_current_span("agent_session[openai.agents]"):
     pass
 ```
 
+## View traces in the Azure AI Foundry portal
+
+In your project, go to **Tracing** to filter your traces as you see fit.
+
+By selecting a trace, you can step through each span and identify issues while observing how your application is responding. This can help you debug and pinpoint issues in your application.
+
+## View traces in Azure Monitor
+
+If you logged traces using the previous code snippet, then you're all set to view your traces in Azure Monitor Application Insights. You can open Application Insights from **Manage data source** and use the **End-to-end transaction details view** to further investigate.
+
+For more information on how to send Azure AI Inference traces to Azure Monitor and create an Azure Monitor resource, see [Azure Monitor OpenTelemetry documentation](/azure/azure-monitor/app/opentelemetry-enable).
 
 ## View thread results in the Azure AI Foundry Agents playground
 
````
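For readers following the new **View traces in Azure Monitor** section added above, here's a minimal sketch of how spans can be exported to Application Insights with the `azure-monitor-opentelemetry` distro. The connection string, span name, and attribute are placeholders, not values from this commit:

```python
# pip install azure-monitor-opentelemetry
# Minimal sketch: route OpenTelemetry traces to Application Insights.
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

# One-time setup; afterwards, every span is exported automatically and
# appears in the End-to-end transaction details view.
configure_azure_monitor(
    connection_string="InstrumentationKey=<your-app-insights-key>",  # placeholder
)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("agent_session[openai.agents]") as span:
    span.set_attribute("demo.note", "placeholder work inside the session")
```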
```diff
@@ -705,26 +717,13 @@ After selecting **Thread logs**, review:
 :::image type="content" source="../../agents/media/thread-trace.png" alt-text="A screenshot of a trace." lightbox="../../agents/media/thread-trace.png":::
 
 > [!TIP]
-> If you want to view trace results from a previous thread, select **My threads** in the **Agents** screen. Choose a thread, and then select **Try in playground**.
+> If you want to view thread results from a previous thread, select **My threads** in the **Agents** screen. Choose a thread, and then select **Try in playground**.
 > :::image type="content" source="../../agents/media/thread-highlight.png" alt-text="A screenshot of the threads screen." lightbox="../../agents/media/thread-highlight.png":::
-> You'll be able to see the **Thread logs** button at the top of the screen to view the trace results.
-
+> You'll be able to see the **Thread logs** button at the top of the screen to view the thread results.
 
 > [!NOTE]
 > Observability features such as Risk and Safety Evaluation are billed based on consumption as listed in the [Azure pricing page](https://azure.microsoft.com/pricing/details/ai-foundry/).
 
-## View traces in the Azure AI Foundry portal
-
-In your project, go to **Tracing** to filter your traces as you see fit.
-
-By selecting a trace, you can step through each span and identify issues while observing how your application is responding. This can help you debug and pinpoint issues in your application.
-
-## View traces in Azure Monitor
-
-If you logged traces using the previous code snippet, then you're all set to view your traces in Azure Monitor Application Insights. You can open Application Insights from **Manage data source** and use the **End-to-end transaction details view** to further investigate.
-
-For more information on how to send Azure AI Inference traces to Azure Monitor and create Azure Monitor resource, see [Azure Monitor OpenTelemetry documentation](/azure/azure-monitor/app/opentelemetry-enable).
-
 ## Related content
 
 - [Python samples](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-inference/samples/sample_chat_completions_with_tracing.py) containing fully runnable Python code for tracing using synchronous and asynchronous clients.
```
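The article repeatedly refers to stepping through spans (steps, tool calls, nested operations). As a hedged illustration, independent of this commit, this is how plain OpenTelemetry span nesting produces that hierarchy; the span and attribute names below are invented for the example, not a documented schema:

```python
# Sketch only: nested spans produce the step/tool-call tree you browse in
# the Tracing view. All names here are illustrative.
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("agent_session[openai.agents]"):
    # Each agent step becomes a child span of the session.
    with tracer.start_as_current_span("agent_step.plan"):
        pass
    with tracer.start_as_current_span("agent_step.act") as step:
        step.set_attribute("tool.name", "get_weather")  # illustrative attribute
        # Tool calls nest under the step that issued them.
        with tracer.start_as_current_span("tool_call.get_weather"):
            pass
```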

articles/ai-foundry/openai/concepts/fine-tuning-considerations.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -62,7 +62,7 @@ Azure AI Foundry offers multiple types of fine-tuning techniques:
 
 * **Reinforcement fine-tuning**: This is a model customization technique, beneficial for optimizing model behavior in highly complex or dynamic environments, enabling the model to learn and adapt through iterative feedback and decision-making. For example, financial services providers can optimize the model for faster, more accurate risk assessments or personalized investment advice. In healthcare and pharmaceuticals, o3-mini can be tailored to accelerate drug discovery, enabling more efficient data analysis, hypothesis generation, and identification of promising compounds. RFT is a great way to fine-tune when there are infinite or high number of ways to solve a problem. The grader rewards the model incrementally and makes reasoning better.
 
-* **Direct Preference Optimization (DPO)**: This is another new alignment technique for large language models, designed to adjust model weights based on human preferences. Unlike Reinforcement Learning from Human Feedback (RLHF), DPO doesn't require fitting a reward model and uses binary preferences for training. This method is computationally lighter and faster, making it equally effective at alignment while being more efficient. You share thenon-preferred and preferred response to the training set and use the DPO technique.
+* **Direct Preference Optimization (DPO)**: This is another new alignment technique for large language models, designed to adjust model weights based on human preferences. Unlike Reinforcement Learning from Human Feedback (RLHF), DPO doesn't require fitting a reward model and uses binary preferences for training. This method is computationally lighter and faster, making it equally effective at alignment while being more efficient. You share the non-preferred and preferred response to the training set and use the DPO technique.
 
 You can also stack techniques: first using SFT to create a customized model – optimized for your use case – then using preference fine tuning to align the responses to your specific preferences. During the SFT step, you focus on data quality and representativeness of the tasks, while the DPO step adjusts responses with specific comparisons.
 
```
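To make the DPO bullet concrete, here's a hedged sketch of one preference pair serialized as a JSONL training record from Python. The field names follow the OpenAI-style DPO preference format; treat them as an assumption and verify against the current fine-tuning documentation:

```python
# Sketch: one DPO training record pairing a preferred and a non-preferred
# response to the same prompt. Field names are assumed (OpenAI-style DPO
# format); check the service docs before building a real dataset.
import json

record = {
    "input": {
        "messages": [
            {"role": "user", "content": "Summarize our refund policy in one sentence."}
        ]
    },
    # The response the model should move toward.
    "preferred_output": [
        {"role": "assistant",
         "content": "Refunds are issued within 14 days of purchase with a valid receipt."}
    ],
    # The response the model should move away from.
    "non_preferred_output": [
        {"role": "assistant",
         "content": "Refunds depend. Contact support and they might help."}
    ],
}

with open("dpo_train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```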
