1 change: 1 addition & 0 deletions apps/site/docs/en/model-provider.mdx
@@ -50,6 +50,7 @@ Some advanced configs are also supported. Usually you don't need to use them.
 | `MIDSCENE_PREFERRED_LANGUAGE` | Optional. The preferred language for the model response. The default is `Chinese` if the current timezone is GMT+8 and `English` otherwise. |
 | `MIDSCENE_REPLANNING_CYCLE_LIMIT` | Optional. The maximum number of replanning cycles, default is 10 |
 | `OPENAI_MAX_TOKENS` | Optional. Maximum tokens for model response, default is 2048 |
+| `MIDSCENE_MODEL_TEMPERATURE` | Optional. Temperature used for model responses. Defaults to 0.1 (0.0 when using UI-TARS). |

 ### Debug configs
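For illustration only (not part of this diff), setting the new variable from Node.js could look like the sketch below; `'0.7'` is an arbitrary example value, and the parsing and fallback behavior it relies on is implemented in `packages/core/src/ai-model/service-caller/index.ts` further down.

```ts
// Hypothetical usage sketch: supply the override before Midscene reads its
// environment config. The value stays a string and is parsed by the core
// package; an empty or non-numeric value falls back to the default
// (0.1, or 0.0 when vlMode is 'vlm-ui-tars').
process.env.MIDSCENE_MODEL_TEMPERATURE = '0.7';
```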
1 change: 1 addition & 0 deletions apps/site/docs/zh/model-provider.mdx
@@ -53,6 +53,7 @@ Midscene integrates the OpenAI SDK by default to call AI services. Using this SDK limits
 | `MIDSCENE_PREFERRED_LANGUAGE` | Optional. The language of model responses. Defaults to `Chinese` if the current timezone is GMT+8, otherwise `English`. |
 | `MIDSCENE_REPLANNING_CYCLE_LIMIT` | Optional. The maximum number of replanning cycles; the default is 10. |
 | `OPENAI_MAX_TOKENS` | Optional. The max_tokens value for model responses; the default is 2048. |
+| `MIDSCENE_MODEL_TEMPERATURE` | Optional. Controls the temperature of model responses; the default is 0.1 (0.0 when using UI-TARS). |

 ### Debug configs
13 changes: 12 additions & 1 deletion packages/core/src/ai-model/service-caller/index.ts
@@ -10,6 +10,7 @@ import {
   MIDSCENE_API_TYPE,
   MIDSCENE_LANGSMITH_DEBUG,
   OPENAI_MAX_TOKENS,
+  MIDSCENE_MODEL_TEMPERATURE,
   type TVlModeTypes,
   type UITarsModelVersion,
   globalConfigManager,
@@ -204,6 +205,8 @@ export async function callAI(
   const responseFormat = getResponseFormat(modelName, AIActionTypeValue);

   const maxTokens = globalConfigManager.getEnvConfigValue(OPENAI_MAX_TOKENS);
+  const temperatureConfig =
+    globalConfigManager.getEnvConfigValue(MIDSCENE_MODEL_TEMPERATURE);
   const debugCall = getDebug('ai:call');
   const debugProfileStats = getDebug('ai:profile:stats');
   const debugProfileDetail = getDebug('ai:profile:detail');
@@ -216,8 +219,16 @@
   let usage: OpenAI.CompletionUsage | undefined;
   let timeCost: number | undefined;

+  const defaultTemperature = vlMode === 'vlm-ui-tars' ? 0.0 : 0.1;
+  const parsedTemperature =
+    temperatureConfig && temperatureConfig.trim() !== ''
+      ? Number.parseFloat(temperatureConfig)
+      : undefined;
   const commonConfig = {
-    temperature: vlMode === 'vlm-ui-tars' ? 0.0 : 0.1,
+    temperature:
+      typeof parsedTemperature === 'number' && Number.isFinite(parsedTemperature)
+        ? parsedTemperature
+        : defaultTemperature,
     stream: !!isStreaming,
     max_tokens:
       typeof maxTokens === 'number'
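To make the fallback behavior added above concrete, here is a self-contained sketch of the same decision logic; `resolveTemperature` is an illustrative helper, not a function in this codebase.

```ts
// Mirrors the parsing added in callAI: an unset, empty, or unparseable
// MIDSCENE_MODEL_TEMPERATURE falls back to the vlMode-dependent default.
function resolveTemperature(
  raw: string | undefined,
  vlMode: string | undefined,
): number {
  const defaultTemperature = vlMode === 'vlm-ui-tars' ? 0.0 : 0.1;
  if (!raw || raw.trim() === '') return defaultTemperature;
  const parsed = Number.parseFloat(raw);
  return Number.isFinite(parsed) ? parsed : defaultTemperature;
}

resolveTemperature('0.7', undefined);         // 0.7 (explicit override)
resolveTemperature(undefined, 'vlm-ui-tars'); // 0.0 (UI-TARS default)
resolveTemperature('abc', undefined);         // 0.1 (unparseable, falls back)
```

One caveat worth noting: `Number.parseFloat` accepts strings with a numeric prefix (`'0.7abc'` parses to 0.7), so the guard only rejects values with no leading number.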
2 changes: 2 additions & 0 deletions packages/shared/src/env/types.ts
@@ -20,6 +20,7 @@ export const MIDSCENE_OPENAI_HTTP_PROXY = 'MIDSCENE_OPENAI_HTTP_PROXY';
 export const OPENAI_API_KEY = 'OPENAI_API_KEY';
 export const OPENAI_BASE_URL = 'OPENAI_BASE_URL';
 export const OPENAI_MAX_TOKENS = 'OPENAI_MAX_TOKENS';
+export const MIDSCENE_MODEL_TEMPERATURE = 'MIDSCENE_MODEL_TEMPERATURE';

 export const MIDSCENE_ADB_PATH = 'MIDSCENE_ADB_PATH';
 export const MIDSCENE_ADB_REMOTE_HOST = 'MIDSCENE_ADB_REMOTE_HOST';
@@ -204,6 +205,7 @@ export const NUMBER_ENV_KEYS = [

 export const STRING_ENV_KEYS = [
   OPENAI_MAX_TOKENS,
+  MIDSCENE_MODEL_TEMPERATURE,
   MIDSCENE_ADB_PATH,
   MIDSCENE_ADB_REMOTE_HOST,
   MIDSCENE_ADB_REMOTE_PORT,
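Registering the key under `STRING_ENV_KEYS` rather than `NUMBER_ENV_KEYS` suggests the raw string is handed back to the caller, which performs its own numeric validation. A hedged sketch of that read path, assuming `getEnvConfigValue` returns `string | undefined` for string-registered keys:

```ts
// Assumed contract (see service-caller/index.ts above): STRING_ENV_KEYS
// entries come back unparsed, so numeric validation happens at the call
// site and a malformed MIDSCENE_MODEL_TEMPERATURE degrades to the default
// rather than failing at config-load time.
const raw = globalConfigManager.getEnvConfigValue(MIDSCENE_MODEL_TEMPERATURE);
const temperature = raw ? Number.parseFloat(raw) : undefined;
```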