- [Tool Calling](#tool-calling)
- [Prefix Completion](#prefix-completion)
- [Long Context (Qwen-Long)](#long-context-qwen-long)
- [Code Interpreter](#code-interpreter)
- [Multimodal](#multimodal) - Qwen-VL, QVQ, etc. Supports reasoning/visual understanding/OCR/audio understanding
- [Upload file for multimodal usage](#upload-file-for-multimodal-usage)
- [Image Recognition/Thinking](#image-recognitionthinking)

### Code Interpreter

**This capability is mutually exclusive with Function Call; the two cannot be enabled in the same request.**

Set the `EnableCodeInterpreter` parameter in `Parameters` to allow the model to write code and run it through the built-in code interpreter when a calculation is needed.

Example Request:

```csharp
var completion = client.GetTextCompletionStreamAsync(
new ModelRequest<TextGenerationInput, ITextGenerationParameters>()
{
Model = "qwen3-max-preview",
Input = new TextGenerationInput() { Messages = messages },
Parameters = new TextGenerationParameters()
{
ResultFormat = "message",
EnableThinking = true,
EnableCodeInterpreter = true,
IncrementalOutput = true
}
});
```

The code generated by the model is included in `chunk.Output.ToolInfo.CodeInterpreter`. The invocation can be considered part of the reasoning process.

Full example:

```csharp
var messages = new List<TextChatMessage>();
const string input = "What is 123 to the 21st power?";
Console.Write($"User > {input}");
messages.Add(TextChatMessage.User(input));
var completion = client.GetTextCompletionStreamAsync(
new ModelRequest<TextGenerationInput, ITextGenerationParameters>
{
Model = "qwen3-max-preview",
Input = new TextGenerationInput { Messages = messages },
Parameters = new TextGenerationParameters
{
ResultFormat = "message",
EnableThinking = true,
EnableCodeInterpreter = true,
IncrementalOutput = true
}
});
var reply = new StringBuilder();
var codeGenerated = false;
var reasoning = false;
TextGenerationTokenUsage? usage = null;
await foreach (var chunk in completion)
{
var choice = chunk.Output.Choices![0];
var tool = chunk.Output.ToolInfo?.FirstOrDefault();
if (codeGenerated == false && tool?.CodeInterpreter != null)
{
Console.WriteLine($"Code > {tool.CodeInterpreter.Code}");
codeGenerated = true;
}

if (string.IsNullOrEmpty(choice.Message.ReasoningContent) == false)
{
// reasoning
if (reasoning == false)
{
Console.WriteLine();
Console.Write("Reasoning > ");
reasoning = true;
}

Console.Write(choice.Message.ReasoningContent);
continue;
}

if (reasoning && string.IsNullOrEmpty(choice.Message.Content.Text) == false)
{
reasoning = false;
Console.WriteLine();
Console.Write("Assistant > ");
}

Console.Write(choice.Message.Content);
reply.Append(choice.Message.Content);
usage = chunk.Usage;
}

Console.WriteLine();
messages.Add(TextChatMessage.Assistant(reply.ToString()));
if (usage != null)
{
Console.WriteLine(
$"Usage: in({usage.InputTokens})/out({usage.OutputTokens})/reasoning({usage.OutputTokensDetails?.ReasoningTokens})/plugins({usage.Plugins?.CodeInterpreter?.Count})/total({usage.TotalTokens})");
}

/*
User > What is 123 to the 21st power?
Reasoning > The user asks for 123 to the 21st power. This is a large-number calculation, so I need to use the code interpreter to compute it.

I need to call the code_interpreter function with Python code that computes 123**21.
123**21
The user asked for 123 to the 21st power, and I computed the result with the code interpreter. The result is a very large number: 77269364466549865653073473388030061522211723

I should give this result directly, since this is an exact mathematical calculation that needs no further explanation or
Assistant > 123 to the 21st power is: 77269364466549865653073473388030061522211723
Usage: in(704)/out(234)/reasoning(142)/plugins(1)/total(938)
*/
```
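The transcript shows the interpreter was handed the Python expression `123**21`. As a quick local sanity check of that sample run (plain Python, not part of the SDK), the same one-liner reproduces the number shown above:

```python
# Python integers are arbitrary precision, so 123 ** 21 is computed exactly --
# the same expression the code interpreter ran in the sample transcript.
result = 123 ** 21

print(result)            # 77269364466549865653073473388030061522211723
print(len(str(result)))  # 44 (digit count)
```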

## Multimodal

Use `GetMultimodalGenerationAsync`/`GetMultimodalGenerationStreamAsync`. See the [official documentation](https://help.aliyun.com/zh/model-studio/multimodal) for details.
