
Commit fb1eb85

[Release]: v0.1.1 (#59)
* feature: add templates for PR and issues (#58)
* feature: add _resolve_schema_refs (#57)
* feature: add _resolve_schema_refs
* feature: add anyOf/oneOf in json schema in _resolve_schema_refs
* Update coverage badge for release/v0.1.1
* feature: properties missing finally works
* chore: fix tests, bump version
* Update coverage badge for release/v0.1.1
* chore: bump to beta
* chore: delete print
* Bugfix/fix responses streaming (#60)
* bugfix: fix responses API streaming
* chore: fix tests
* chore: ruff format
* chore: ruff format
* Update coverage badge for release/v0.1.1
* chore: split request_mapper.py to better maintainability
* bugfix: fix tests
* Update coverage badge for release/v0.1.1
* bugfix: responses API streaming function call in openclaw
* chore: bump version

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
1 parent f316e00 commit fb1eb85
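The commit message above mentions `_resolve_schema_refs`, a helper for inlining `$ref` pointers (including inside `anyOf`/`oneOf` branches) in JSON-schema tool definitions. The actual gpt2giga implementation is not part of this diff; the following is a hypothetical minimal sketch of the idea, assuming local, non-recursive `#/...` references only:

```python
import json
from typing import Any


def resolve_schema_refs(schema: Any, root: Any = None) -> Any:
    """Recursively inline local ``$ref`` pointers in a JSON schema.

    Hypothetical sketch: handles only local ``#/...`` pointers and assumes
    the schema contains no self-referential (cyclic) definitions.
    """
    if root is None:
        root = schema
    if isinstance(schema, dict):
        if "$ref" in schema:
            # Follow a pointer like "#/$defs/Item" segment by segment.
            target = root
            for part in schema["$ref"].lstrip("#/").split("/"):
                target = target[part]
            return resolve_schema_refs(target, root)
        # anyOf/oneOf values are plain lists of sub-schemas, so the
        # generic dict/list recursion below covers them as well.
        return {k: resolve_schema_refs(v, root) for k, v in schema.items()}
    if isinstance(schema, list):
        return [resolve_schema_refs(v, root) for v in schema]
    return schema


schema = {
    "$defs": {"Item": {"type": "string"}},
    "anyOf": [{"$ref": "#/$defs/Item"}, {"type": "integer"}],
}
resolved = resolve_schema_refs(schema)
print(json.dumps(resolved["anyOf"]))
```

After resolution, every `$ref` branch is replaced by the referenced sub-schema, so downstream code never has to dereference pointers.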

File tree

16 files changed

+2345
-682
lines changed

Lines changed: 146 additions & 0 deletions
---
name: Bug Report
about: Report a bug to help us improve gpt2giga
title: "[BUG] "
labels: bug
assignees: ''
---

## Bug Description

<!-- A clear and concise description of the bug -->

## Environment

### gpt2giga Setup

- **gpt2giga version**: <!-- e.g., 0.5.0 -->
- **Installation method**:
  - [ ] pip (`pip install gpt2giga`)
  - [ ] uv (`uv tool install gpt2giga` / `uv add gpt2giga`)
  - [ ] Docker (`docker compose up`)
  - [ ] From source (`pip install git+...`)

- **Python version**: <!-- e.g., 3.10 -->
- **OS**: <!-- e.g., Ubuntu 22.04, macOS 14.0, Windows 11 -->

### GigaChat Configuration

- **GigaChat model**: <!-- e.g., GigaChat, GigaChat-2-Max -->
- **Auth settings**: <!-- e.g., OAuth (scope + creds), Basic (user + password) -->

## How to Reproduce

### Method Used

- [ ] OpenAI Python SDK
- [ ] curl
- [ ] Other: <!-- specify -->

### Request Payload

<!--
Provide the full request you're sending.
Remove any sensitive data (credentials, tokens, etc.)
-->

<details>
<summary>Request</summary>

**For OpenAI SDK:**

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8090", api_key="your-key")

# Your request here
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Your message"}
    ],
    # ... other parameters
)
```

**For curl:**

```bash
curl -X POST http://localhost:8090/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-key" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "user", "content": "Your message"}
    ]
  }'
```

</details>

### Steps to Reproduce

1. Start gpt2giga with: `...`
2. Send request: `...`
3. See error

## Expected Behavior

<!-- What you expected to happen -->

## Actual Behavior

<!-- What actually happened -->

## Error Output

<details>
<summary>Error message / Traceback</summary>

```
Paste your error or traceback here
```

</details>

## Logs

<!--
Set GPT2GIGA_LOG_LEVEL=DEBUG and provide relevant logs.
Remove any sensitive information!
-->

<details>
<summary>gpt2giga logs (DEBUG level)</summary>

```
Paste relevant logs here
```

</details>

## Configuration

<!-- Provide your .env file content (remove sensitive values!) -->

<details>
<summary>.env configuration</summary>

```dotenv
GPT2GIGA_HOST=localhost
GPT2GIGA_PORT=8090
GPT2GIGA_LOG_LEVEL=DEBUG
# ... other settings
```

</details>

## Additional Context

<!-- Add any other context about the problem here -->

## Possible Solution

<!-- Optional: If you have any ideas on how to fix this -->

.github/PULL_REQUEST_TEMPLATE.md

Lines changed: 133 additions & 0 deletions
## Description

<!-- Provide a clear and concise description of what this PR does -->

## Motivation

<!-- Why is this change needed? Link to related issues if applicable -->

Closes #<!-- issue number -->

## Type of Change

<!-- Mark the relevant option with an "x" -->

- [ ] Bug fix (non-breaking change that fixes an issue)
- [ ] New feature (non-breaking change that adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update
- [ ] Code refactoring (no functional changes)
- [ ] Performance improvement
- [ ] Test coverage improvement
- [ ] CI/CD or tooling change

## Changes Made

<!-- List the main changes made in this PR -->

-
-
-

## Testing

<!-- Describe the tests you ran and how to reproduce them -->

### Test Coverage

- [ ] Unit tests added/updated
- [ ] Integration tests added/updated (if applicable)
- [ ] All existing tests pass locally

### Manual Testing

<!-- Describe any manual testing performed -->

#### Method Used

- [ ] OpenAI Python SDK
- [ ] curl
- [ ] Docker
- [ ] Other: <!-- specify -->

<details>
<summary>Test commands / code</summary>

**Example with OpenAI SDK:**

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8090", api_key="your-key")

completion = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Test message"}
    ],
)
print(completion.choices[0].message.content)
```

**Example with curl:**

```bash
curl -X POST http://localhost:8090/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-key" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Test"}]
  }'
```

</details>

## Checklist

<!-- Mark completed items with an "x" -->

### Code Quality

- [ ] Code follows the project's style guidelines
- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] My changes generate no new linter warnings (`make lint`)
- [ ] Type checking passes (`make mypy`)
- [ ] All tests pass (`make test`)

### Documentation

- [ ] I have updated the documentation accordingly
- [ ] Docstrings follow Google style with imperative mood
- [ ] I have added examples for new features (if applicable)
- [ ] README.md updated (if applicable)

### Dependencies

- [ ] No new dependencies added
- [ ] If dependencies added, they are justified and minimal
- [ ] `uv.lock` updated (if dependencies changed)

### Compatibility

- [ ] Changes are compatible with Python 3.10-3.14
- [ ] No use of `type[...]` syntax (use `Type[...]` for Python 3.8)
- [ ] Async/sync variants both work correctly (if applicable)

### Commits

- [ ] Commit messages are clear and follow conventional commits style
- [ ] Commits are logically organized
- [ ] No debug code or commented-out code left in

## Additional Context

<!-- Add any other context, screenshots, or information about the PR here -->

## Pre-merge Actions

<!-- For maintainers -->

- [ ] Changelog updated (if applicable)
- [ ] Version bump considered (if applicable)

.pre-commit-config.yaml

Lines changed: 8 additions & 8 deletions
```diff
@@ -15,20 +15,20 @@ repos:
     hooks:
       - id: ruff
         name: ruff check
-        entry: poetry run ruff check
+        entry: uv run ruff check
         language: system
         types_or: [python, pyi]
         args: [--fix]
       - id: ruff-format
         name: ruff format
-        entry: poetry run ruff format
+        entry: uv run ruff format
         language: system
         types_or: [python, pyi]
         args: [--config, pyproject.toml]
 
-#      - id: black
-#        name: black check
-#        entry: poetry run black .
-#        language: system
-#        types_or: [ python, pyi ]
-#        args: [--check]
+      - id: black
+        name: black check
+        entry: uv run black .
+        language: system
+        types_or: [ python, pyi ]
+        args: [--check]
```

gpt2giga/protocol/__init__.py

Lines changed: 5 additions & 1 deletion
```diff
@@ -2,4 +2,8 @@
 from .request_mapper import RequestTransformer
 from .response_mapper import ResponseProcessor
 
-__all__ = ["AttachmentProcessor", "RequestTransformer", "ResponseProcessor"]
+__all__ = [
+    "AttachmentProcessor",
+    "RequestTransformer",
+    "ResponseProcessor",
+]
```

gpt2giga/protocol/content_utils.py

Lines changed: 64 additions & 0 deletions
```python
import ast
import json
from typing import Any


def ensure_json_object_str(value: Any) -> str:
    """
    Ensures the value is a valid JSON object string.

    GigaChat requires function/tool results to be valid JSON objects.
    The SDK `gigachat.models.Messages` expects `content: str`.

    OpenAI-compatible clients often send:
    - dict (ok)
    - JSON string (needs json.loads)
    - double JSON string (needs json.loads multiple times)
    - python-like string (single quotes) — try ast.literal_eval

    Args:
        value: Any value that needs to be converted to JSON object string

    Returns:
        A valid JSON object string
    """
    if value is None:
        return "{}"

    if isinstance(value, bytes):
        try:
            value = value.decode("utf-8", errors="ignore")
        except Exception:
            return json.dumps({"result": str(value)}, ensure_ascii=False)

    if isinstance(value, dict):
        return json.dumps(value, ensure_ascii=False)

    if isinstance(value, str):
        s: Any = value.strip()
        for _ in range(3):
            if not isinstance(s, str):
                break
            if s == "":
                return "{}"
            try:
                s = json.loads(s)
                continue
            except json.JSONDecodeError:
                break

        if isinstance(s, dict):
            return json.dumps(s, ensure_ascii=False)
        if isinstance(s, (list, int, float, bool)) or s is None:
            return json.dumps({"result": s}, ensure_ascii=False)

        if isinstance(s, str):
            try:
                lit = ast.literal_eval(s)
                if isinstance(lit, dict):
                    return json.dumps(lit, ensure_ascii=False)
                return json.dumps({"result": lit}, ensure_ascii=False)
            except Exception:
                return json.dumps({"result": s}, ensure_ascii=False)

    return json.dumps({"result": value}, ensure_ascii=False)
```
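The core of `ensure_json_object_str` is the bounded unwrap loop for singly or doubly JSON-encoded strings. A condensed standalone sketch of just that loop (the name `normalize_tool_result` is hypothetical; gpt2giga's actual entry point is `ensure_json_object_str`, which additionally handles `None`, bytes, and Python-literal strings):

```python
import json


def normalize_tool_result(value):
    """Coerce a tool result into a JSON-object string (simplified sketch)."""
    s = value
    # Unwrap up to three layers of JSON string encoding.
    for _ in range(3):
        if not isinstance(s, str):
            break
        try:
            s = json.loads(s)
        except json.JSONDecodeError:
            break
    if isinstance(s, dict):
        return json.dumps(s, ensure_ascii=False)
    # Non-dict results get wrapped so GigaChat always sees a JSON object.
    return json.dumps({"result": s}, ensure_ascii=False)


print(normalize_tool_result({"ok": True}))                       # already a dict
print(normalize_tool_result('{"ok": true}'))                     # JSON string
print(normalize_tool_result(json.dumps(json.dumps({"n": 1}))))   # double-encoded
print(normalize_tool_result("plain text"))                       # wrapped as {"result": ...}
```

All four inputs normalize to a JSON object string, which is exactly the invariant the GigaChat API relies on for function/tool results.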
