
Commit 5dbe1ec

fix: image generate failed

1 parent 740e088

File tree

5 files changed (+153, -22 lines):

  • README.md
  • README.zh.md
  • package.json
  • src/cli.ts
  • src/services/openaiService.ts

README.md

Lines changed: 64 additions & 0 deletions

@@ -27,6 +27,70 @@ npx mock-openai-api
 
 The server will start at `http://localhost:3000`.
 
+## ⚙️ CLI Options
+
+The mock server supports various command line options for customization:
+
+### Basic Usage
+
+```bash
+# Start with default settings
+npx mock-openai-api
+
+# Start on custom port
+npx mock-openai-api -p 8080
+
+# Start on custom host and port
+npx mock-openai-api -H localhost -p 8080
+
+# Enable verbose request logging
+npx mock-openai-api -v
+
+# Combine multiple options
+npx mock-openai-api -p 8080 -H 127.0.0.1 -v
+```
+
+### Available Options
+
+| Option              | Short | Description                       | Default   |
+| ------------------- | ----- | --------------------------------- | --------- |
+| `--port <number>`   | `-p`  | Server port                       | `3000`    |
+| `--host <address>`  | `-H`  | Server host address               | `0.0.0.0` |
+| `--verbose`         | `-v`  | Enable request logging to console | `false`   |
+| `--version`         |       | Show version number               |           |
+| `--help`            |       | Show help information             |           |
+
+### Examples
+
+```bash
+# Development setup with verbose logging
+npx mock-openai-api -v -p 3001
+
+# Production-like setup
+npx mock-openai-api -H 0.0.0.0 -p 80
+
+# Local testing setup
+npx mock-openai-api -H localhost -p 8080 -v
+
+# Check version
+npx mock-openai-api --version
+
+# Get help
+npx mock-openai-api --help
+```
+
+When the server starts, it will display the configuration being used:
+
+```
+🚀 Mock OpenAI API server started successfully!
+📍 Server address: http://0.0.0.0:3000
+⚙️ Configuration:
+   • Port: 3000
+   • Host: 0.0.0.0
+   • Verbose logging: DISABLED
+   • Version: 1.0.1
+```
+
 ### Basic Usage
 
 ```bash
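
The added CLI section documents startup flags only, so a quick way to confirm that a custom host or port actually took effect is to call the endpoints the startup banner advertises. Below is a minimal TypeScript smoke test; it assumes Node 18+ (built-in `fetch`), the default `http://localhost:3000` address, and the `/health` and `/v1/models` routes listed in the banner, and it types the `/v1/models` response loosely since its exact shape is not shown in this commit.

```typescript
// check-mock-server.ts: smoke test for a locally running mock-openai-api instance.
// Assumptions: Node 18+ (global fetch), default host/port, /health and /v1/models routes.
const BASE_URL = process.env.MOCK_BASE_URL ?? 'http://localhost:3000';

async function main(): Promise<void> {
  // Health check advertised in the startup banner.
  const health = await fetch(`${BASE_URL}/health`);
  console.log('health:', health.status);

  // Model list; the { data: [...] } envelope is assumed from the OpenAI API convention.
  const models = await fetch(`${BASE_URL}/v1/models`);
  const body = (await models.json()) as { data?: Array<{ id: string }> };
  console.log('models:', body.data?.map((m) => m.id) ?? body);
}

main().catch((err) => {
  console.error('Mock server not reachable:', err);
  process.exit(1);
});
```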

README.zh.md

Lines changed: 64 additions & 0 deletions

@@ -27,6 +27,70 @@ npx mock-openai-api
 
 服务器将在 `http://localhost:3000` 启动。
 
+## ⚙️ CLI 选项
+
+模拟服务器支持多种命令行选项进行自定义配置:
+
+### 基本用法
+
+```bash
+# 使用默认设置启动
+npx mock-openai-api
+
+# 指定自定义端口启动
+npx mock-openai-api -p 8080
+
+# 指定自定义主机和端口
+npx mock-openai-api -H localhost -p 8080
+
+# 启用详细请求日志
+npx mock-openai-api -v
+
+# 组合多个选项
+npx mock-openai-api -p 8080 -H 127.0.0.1 -v
+```
+
+### 可用选项
+
+| 选项                | 简写 | 描述                     | 默认值    |
+| ------------------- | ---- | ------------------------ | --------- |
+| `--port <number>`   | `-p` | 服务器端口               | `3000`    |
+| `--host <address>`  | `-H` | 服务器主机地址           | `0.0.0.0` |
+| `--verbose`         | `-v` | 启用请求日志输出到控制台 | `false`   |
+| `--version`         |      | 显示版本号               |           |
+| `--help`            |      | 显示帮助信息             |           |
+
+### 使用示例
+
+```bash
+# 开发环境设置,启用详细日志
+npx mock-openai-api -v -p 3001
+
+# 生产环境设置
+npx mock-openai-api -H 0.0.0.0 -p 80
+
+# 本地测试设置
+npx mock-openai-api -H localhost -p 8080 -v
+
+# 查看版本
+npx mock-openai-api --version
+
+# 获取帮助
+npx mock-openai-api --help
+```
+
+服务器启动时,会显示正在使用的配置:
+
+```
+🚀 Mock OpenAI API server started successfully!
+📍 Server address: http://0.0.0.0:3000
+⚙️ Configuration:
+   • Port: 3000
+   • Host: 0.0.0.0
+   • Verbose logging: DISABLED
+   • Version: 1.0.1
+```
+
 ### 基本使用
 
 ```bash
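
Because the server exposes an OpenAI-compatible surface, it can also be exercised through the official `openai` npm client instead of raw `curl`. The following is a sketch under stated assumptions: the routes are mounted under `/v1` (as the `/v1/models` entry in the banner suggests), the API key is ignored by the mock, and the `gpt-4-mock` model name is taken from the `curl` example printed at startup.

```typescript
// mock-client.ts: hypothetical usage of the official openai client against the mock server.
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:3000/v1', // default mock-openai-api address; /v1 prefix assumed
  apiKey: 'mock-key', // assumed to be ignored by the mock server
});

async function main(): Promise<void> {
  const completion = await client.chat.completions.create({
    model: 'gpt-4-mock', // model name from the startup banner's curl example
    messages: [{ role: 'user', content: 'Hello' }],
  });
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```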

package.json

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 {
   "name": "mock-openai-api",
-  "version": "1.0.0",
+  "version": "1.0.1",
   "description": "A mock OpenAI Compatible Provider API server",
   "keywords": [
     "openai",

src/cli.ts

Lines changed: 12 additions & 8 deletions

@@ -2,7 +2,7 @@
 
 import { Command } from 'commander';
 import app from './app';
-
+import { version } from '../package.json'
 // 扩展全局对象类型
 declare global {
   var verboseLogging: boolean;
@@ -13,7 +13,7 @@ const program = new Command();
 program
   .name('mock-openai-api')
   .description('Mock OpenAI Compatible Provider API server')
-  .version('1.0.0')
+  .version(version)
   .option('-p, --port <number>', 'Server port', '3000')
   .option('-H, --host <address>', 'Server host address', '0.0.0.0')
   .option('-v, --verbose', 'Enable request logging to console', false)
@@ -30,6 +30,11 @@ global.verboseLogging = options.verbose;
 app.listen(PORT, HOST, () => {
   console.log(`🚀 Mock OpenAI API server started successfully!`);
   console.log(`📍 Server address: http://${HOST}:${PORT}`);
+  console.log(`⚙️ Configuration:`);
+  console.log(`   • Port: ${PORT}`);
+  console.log(`   • Host: ${HOST}`);
+  console.log(`   • Verbose logging: ${options.verbose ? 'ENABLED' : 'DISABLED'}`);
+  console.log(`   • Version: ${version}`);
   console.log(`📖 API Documentation:`);
   console.log(`   • GET /health - Health check`);
   console.log(`   • GET /v1/models - Get model list`);
@@ -47,10 +52,9 @@ app.listen(PORT, HOST, () => {
   console.log(`    "model": "gpt-4-mock",`);
   console.log(`    "messages": [{"role": "user", "content": "Hello"}]`);
   console.log(`  }'`);
-
-  if (options.verbose) {
-    console.log(`\n📝 Request logging: ENABLED`);
-  } else {
-    console.log(`\n📝 Request logging: DISABLED (use -v to enable)`);
-  }
+  console.log(`\n💡 CLI Options:`);
+  console.log(`   • Use --help to see all available options`);
+  console.log(`   • Use -v or --verbose to enable request logging`);
+  console.log(`   • Use -p <port> to specify custom port`);
+  console.log(`   • Use -H <host> to specify custom host address`);
 });
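
Two notes on the `src/cli.ts` changes above. First, `import { version } from '../package.json'` compiles only when `resolveJsonModule` is enabled in `tsconfig.json`, so the project presumably already sets it. Second, the CLI stores the `--verbose` flag on `global.verboseLogging`, which the app imported from `./app` is expected to read; that middleware is not part of this diff, so the sketch below is only an assumption about how such a flag might be consumed, with the file name, middleware name, and Express usage all hypothetical.

```typescript
// requestLogger.ts: hypothetical middleware gated by the flag that src/cli.ts sets.
// Assumes the app exported from './app' is an Express instance and that the
// `declare global { var verboseLogging: boolean }` declaration from src/cli.ts is in scope.
import type { Request, Response, NextFunction } from 'express';

export function requestLogger(req: Request, _res: Response, next: NextFunction): void {
  if (global.verboseLogging) {
    console.log(`[${new Date().toISOString()}] ${req.method} ${req.originalUrl}`);
  }
  next();
}

// Wiring sketch (in app.ts): app.use(requestLogger);
```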

src/services/openaiService.ts

Lines changed: 12 additions & 13 deletions

@@ -19,7 +19,6 @@ import {
   randomChoice,
   formatErrorResponse
 } from '../utils/helpers';
-
 /**
  * Get model list
  */
@@ -59,7 +58,7 @@ export function createChatCompletion(request: ChatCompletionRequest): ChatComple
 
   // Select test case
   const testCase = selectTestCase(model, lastUserMessage.content || '');
-  
+
   const id = generateChatCompletionId();
   const timestamp = getCurrentTimestamp();
 
@@ -147,7 +146,7 @@ export function* createChatCompletionStream(request: ChatCompletionRequest): Gen
 
   // Select test case
   const testCase = selectTestCase(model, lastUserMessage.content || '');
-  
+
   const id = generateChatCompletionId();
   const timestamp = getCurrentTimestamp();
   const systemFingerprint = `fp_${Math.random().toString(36).substr(2, 10)}_prod0425fp8`;
@@ -158,7 +157,7 @@ export function* createChatCompletionStream(request: ChatCompletionRequest): Gen
   // Handle tool calls first (tool-calls model type)
   if (model.type === 'tool-calls' && testCase.toolCall) {
     // 第一阶段:发送tool call
-    
+
     // Send first chunk - role and empty content
     const firstChunk: ChatCompletionStreamChunk = {
       id,
@@ -269,7 +268,7 @@ export function* createChatCompletionStream(request: ChatCompletionRequest): Gen
 
   if (model.type === 'thinking') {
     // Thinking mode: output reasoning_content first, then content
-    
+
     // Send first chunk - role and empty reasoning_content
     const firstChunk: ChatCompletionStreamChunk = {
       id,
@@ -613,8 +612,8 @@ export function* createChatCompletionStream(request: ChatCompletionRequest): Gen
  * 这个函数处理tool call执行后的第二阶段流式响应
  */
 export function* createToolCallResponseStream(
-  request: ChatCompletionRequest, 
-  toolCallId: string, 
+  request: ChatCompletionRequest,
+  toolCallId: string,
   toolResult: string
 ): Generator<string, void, unknown> {
   // Validate model
@@ -639,7 +638,7 @@ export function* createToolCallResponseStream(
 
   // Select test case
   const testCase = selectTestCase(model, lastUserMessage.content || '');
-  
+
   const id = generateChatCompletionId();
   const timestamp = getCurrentTimestamp();
   const systemFingerprint = `fp_${Math.random().toString(36).substr(2, 10)}_prod0425fp8`;
@@ -756,11 +755,11 @@ export function generateImage(request: ImageGenerationRequest): ImageGenerationR
   const n = request.n || 1;
   const timestamp = getCurrentTimestamp();
   const size = request.size || '1024x1024';
-  
+
   // Choose different images based on model
   const model = request.model || 'gpt-4o-image';
   let imageUrls = mockImageUrls;
-  
+
   // If gpt-4o-image model is specified, use higher quality placeholder images
   if (model === 'gpt-4o-image') {
     imageUrls = [
@@ -774,14 +773,14 @@ export function generateImage(request: ImageGenerationRequest): ImageGenerationR
       `https://placehold.co/${size}/FFA07A/000000?text=GPT-4O+Image+8`
     ];
   }
-  
+
   const data = Array.from({ length: n }, () => {
     const imageUrl = randomChoice(imageUrls);
-    
+
     if (request.response_format === 'b64_json') {
       // Simulate base64 encoded image (in actual applications this would be real base64)
       return {
-        b64_json: 'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg=='
+        b64_json: 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAGQAAABkCAYAAABw4pVUAAAAAXNSR0IArs4c6QAAB6BJREFUeF7t3HmITW8YB/DnWsYfGtsITQ1/WUJSJFmKImtKiSgRTbIVprElyx/+nIhkyZJ9KyWyb8kMchMp2S+DCRkJWSbcX9/n173u3Dn3zLnnvOu95y01555z3/c9z+d9n+fcwY1Eo9F4UVERFRQUUNj0RaCuro5qa2spEovF4vihW7duVFhYqG9GeTzy169f6cmTJ4SNEXn79m0cEHghRFG/KhIYiD1+ZpDi4mI+CFHUgqTHvKam5h8IphKiqANxinUDkBBFDUimhe8IEqLIRXHLQhlBQhQ5KI2VBFeQEEUsSmMYGK1RkBBFDIoXDM8gIUowFK8YWYGEKP5QssHIGiREyQ4lWwxfICGKNxQ/GL5BQhR3FL8YgUBCFGeUIBiBQUKU+ihBMYSAhCj/o4jAEAYickLeSqZZV4nCEAqSrygiMYSD5BuKaAwpIPmCIgNDGkiuo8jCkAqSqygyMaSD5BqKbAwlILmCogJDGYjtKKowlILYiqISQzmIbSiqMbSA2IKiA0MbiOkoujC0gpiKohNDO4hpKLoxjAAxBcUEDGNAdKOYgmEUiC4UkzCMA1GNYhqGkSCqUEzEMBZENoqpGEaDyEIxGcN4ENEopmNYASIKxQYMa0CCotiCYRWIXxSbMKwDyRbFNgwrQbyi2IhhLUhjKLZiWA2SCcVmDOtB0lFwbPuX53j6f+q4UZNbYldgjrZ/vVQIYthKsx4ktWaEKUvz6nIq4GFR14TiFnibUaxMWV4C7uUaTWvJdVjrQLIJdDbXmoJjFYifAPt5j04ca0CCBDbIe1XjWAEiIqAi+lCBYzyIyECK7EsWjtEgMgIoo0+ROMaCyAyczL6D4hgJoiJgKsbwg2MciMpAqRzLK45RIDoCpGNMNxxjQHQGRufY6ThGgJgQEBPmYMRf4ZoSiMb+4YTXGhD0Oq07xCSMRCB1z0kbiO4bd1vJOuemBUTnDXtNKbrmqBxE1416hUi9TsdclYLouEE/EDpRlIHYiKGj0CsBsRlDNYp0kFzAUIkiFSSXMFShSAPJRQwVKFJAchlDNopwkHzAkIkiFCSfMGShCAPJRwwZKEJA8hlDNEpgkBDj3y9aRMQiEIiICQT9XZNp7w8aE98gQQc2LZAi5xMkNr5Aggwo8sZN7stvjLIG8TuQycGTNTc/scoKxM8Asm42U7+fPn2ib9++UefOnRtc8uzZM2rXrh3/SW/v37+nnz9/UpcuXbKa8q9fv+j58+fUs2dPxz5ra2vpx48fDf679vfv36m6upq6du1KTZs2Tb7XM4gNGLirOXPmMMjBgweTN4kbHzlyJH+pANrMmTNp165d1KRJE/r9+zdNmjSJTp48yef69OlDV69edURzkjp8+DCP+eXLl+Tp9D579+5NmzZtov79+1NhYSGtX7+eVq1axdfjGOP169ePjz2B2IBx5MgROnHiBB0/fpymTZtWD2TChAn07t07Po/VPGzYMNq2bRsHcuPGjbR69Wq6fPkytW/fnkaPHk19+/alo0ePuu6UW7du0d69e5PjpII49QmUlStX8mLB+Hv27KExY8bQkiVL6NKlS/TmzZtq3rx54yA2YCByy5Yto5cvX9K1a9doxIgRyUB9+PCBOnbsSBcvXuTX0SZPnsxA169fp169evEOWbduHZ/bunUrzZs3j5D6ZsyYwSls8+bNfG7u3LmcfhDMAwcO0OnTp+nRo0cUi8Xq7ZBMfWJMALx48YJu3rzJfT548IB35ZUrV2j48OHuILZgpC7lKVOmULNmzZIgVVVVNHjwYPr8+TO1bt2aL12zZg1t2LCBg45VeebMGV6taFitSG8INGrO+PHj6dixY4RaMX36dLp9+zYNGDAgOSR22tKlS5MgSFdufQK7pKSEkZGuUEtatmxJO3fupNmzZ2cGsREDUUoHQY5HCvv79y9FIhEOJFb4rFmz6NWrV7wDKisradCgQXzu6dOnXIDv3LnDOX/+/Pm0f/9+PoeUs3z58nqpLB0E9cqtz4kTJ1JpaSmNGzcuWeixg9Hv4sWLnUFsxXACSeyQjx8/UlFREQdzy5YttHv3bk4bLVq04IKOOoN2//59riHYPW3btqVEysNqRh8FBQWuIHV1da59Tp06ldPk2rVrk99c1KpVK05/QGpQ1G3GcAJJpITUVINa8OfPH9qxYwcNGTKEsGrLyso40Hg4KC8vp9evX/PxwoULuXgjLhUVFVwDUlv6DsE5tz4BEY1GGQB93rhxg8aOHcv1DzurHojtGE4geA05v1OnThzYc+fOcQo7dOgQYbWuWLGCdwvAUHSRxxHQ7du387WoLQjew4cPuVZgB6EIJ5oTiFufZ8+eZYDz589T9+7dGfzu3bvcP3ZKEgRb0vYv/0qAIK0k8j5ew83iCQbpB23BggXJJyc8riJAqCNoAwcOpAsXLvDnkx49etCoUaNo3759fDx06FAu3vfu3ePCjQY47KjUx95MfSLG8XicFi1axJ9L0Dp06ECnTp3iD4eoXdgUkVgsFscnStu//KteLkk7QIp6/PgxFRcXU5s2bRpcigKPD4p4AhLV3PpEjcJvB4COh41EdkKdi0Sj0Th+SC9YoiYW9uMtAnggwMb4D6b4LCpJtO3XAAAAAElFTkSuQmCC'
       };
     } else {
       return {
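
The substantive fix for the failing image generation sits in `generateImage`: the hard-coded `b64_json` payload, previously a tiny 1x1 PNG, is replaced with a larger placeholder image and now carries a `data:image/png;base64,` prefix. A client that pipes `b64_json` straight into a base64 decoder therefore needs to tolerate that prefix. The helper below is hypothetical and not part of this repository; the `/v1/images/generations` path and the response envelope are assumptions based on the OpenAI images API convention.

```typescript
// saveMockImage.ts: hypothetical helper for consuming the mock image response.
import { Buffer } from 'node:buffer';
import { writeFileSync } from 'node:fs';

export function saveMockImage(b64Json: string, outPath: string): void {
  // Tolerate both a bare base64 string and a data:image/...;base64, data URI.
  const base64 = b64Json.replace(/^data:image\/\w+;base64,/, '');
  writeFileSync(outPath, Buffer.from(base64, 'base64'));
}

// Example usage (endpoint path and envelope assumed, not confirmed by this diff):
// const res = await fetch('http://localhost:3000/v1/images/generations', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify({ model: 'gpt-4o-image', prompt: 'a cat', response_format: 'b64_json' }),
// });
// const { data } = await res.json();
// saveMockImage(data[0].b64_json, 'mock.png');
```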
