
Commit b49a251

Merge remote-tracking branch 'origin/develop' into xyc/embedding_reinforcement
# Conflicts:
#	frontend/app/[locale]/setup/models/components/model/EmbedderCheckModal.tsx
#	frontend/app/[locale]/setup/models/components/modelConfig.tsx
#	frontend/app/[locale]/setup/page.tsx
#	frontend/public/locales/en/common.json
#	frontend/public/locales/zh/common.json
2 parents 1b94a1d + 80a4f03, commit b49a251

File tree

84 files changed: +4881 -1305 lines changed
Lines changed: 283 additions & 0 deletions
---
description:
globs: test/*.py
alwaysApply: false
---
# Pytest Unit Test Rules

## Framework Requirements
- **MANDATORY: Use pytest exclusively** - do not use the unittest framework
- All tests must be written using pytest syntax and features
- Use pytest fixtures instead of unittest setUp/tearDown methods
- Use pytest assertions instead of unittest assert methods

## File Naming Conventions
- Test files must start with `test_`
- Test class names start with `Test`; test method names start with `test_`
- File names should reflect the module or functionality being tested, e.g., `test_user_service.py`

## File Structure Standards

### Directory Organization
```
test/
├── backend/                      # Backend service tests
│   ├── apps/                     # Application layer tests
│   │   ├── test_app_layer.py
│   │   ├── test_app_layer_contract.py
│   │   ├── test_app_layer_validation.py
│   │   └── test_app_layer_errors.py
│   ├── services/                 # Service layer tests
│   │   ├── test_user_service.py
│   │   └── test_auth_service.py
│   └── database/                 # Database layer tests
│       ├── test_models.py
│       └── test_crud.py
├── sdk/                          # SDK tests
│   ├── test_embedding_models.py
│   └── test_client.py
├── web_test/                     # Web application tests
├── assets/                       # Test assets and fixtures
├── pytest.ini                    # Pytest configuration
├── .coveragerc                   # Coverage configuration
└── requirements.txt              # Test dependencies
```

### File Splitting Guidelines
- **When a test file exceeds 500 lines or 50 test methods**, split it into multiple files
- Split by functionality or feature area, not by test type
- Create a dedicated subdirectory for the split test files:
```
test/backend/services/
├── test_auth_service.py                  # Single file for smaller services
├── test_user_service/                    # Directory for split user service tests
│   ├── __init__.py                       # Required for Python package
│   ├── test_user_service_core.py         # Core user operations
│   ├── test_user_service_auth.py         # User authentication
│   ├── test_user_service_permissions.py  # User permissions
│   └── test_user_service_validation.py   # Input validation
└── test_order_service/                   # Directory for split order service tests
    ├── __init__.py
    ├── test_order_service_core.py
    ├── test_order_service_payment.py
    └── test_order_service_shipping.py
```
- Use a consistent naming pattern: `test_<module>_<feature>.py`
- Each subdirectory must contain an `__init__.py` file
- Maintain logical grouping within the same directory
- Keep the original module name as the directory name for clarity

### Import Organization
```python
# 1. Standard library imports first
import sys
import os
import types
from typing import Any, Dict, List

# 2. Third-party library imports
import pytest
from pytest_mock import MockFixture  # Use pytest-mock instead of unittest.mock

# 3. Project internal imports (after mocking dependencies if needed)
from sdk.nexent.core.models.embedding_model import OpenAICompatibleEmbedding
```

### Test Class Organization
- Each test class corresponds to one class or module being tested
- Use pytest fixtures instead of `setUp` and `tearDown` methods
- Group test methods by functionality with descriptive method names

### Test Method Structure
- Each test method tests only one functionality point
- Use pytest assertions (`assert` statements)
- Test method names should describe the test scenario, e.g., `test_create_user_success`

## Test Content Standards

### Coverage Requirements
- Test normal flow and exception flow
- Test boundary conditions and error handling
- Use `@pytest.mark.parametrize` for parameterized testing

### Mocking Guidelines
- Use the `pytest-mock` plugin instead of `unittest.mock`
- Mock database operations, API calls, and other external services
- Use `side_effect` to simulate exception scenarios

### Assertion Standards
- Use pytest assertions with clear error messages
- Use `assert` statements with descriptive messages: `assert result.status == "success", f"Expected success, got {result.status}"`
- Tests should fail fast and provide a clear error location

## Code Examples

### Basic Test Structure (pytest-only)
```python
import pytest
from pytest_mock import MockFixture

# ---
# Fixtures
# ---

@pytest.fixture()
def sample_instance():
    """Return a sample instance with minimal viable attributes for tests."""
    return SampleClass(
        param1="value1",
        param2="value2"
    )

# ---
# Tests for method_name
# ---

@pytest.mark.asyncio
async def test_method_success(sample_instance, mocker: MockFixture):
    """method_name should return expected result when no exception is raised."""
    expected_result = {"status": "success"}
    mock_dependency = mocker.patch(
        "module.path.external_dependency",
        return_value=expected_result,
    )

    result = await sample_instance.method_name()

    assert result == expected_result
    mock_dependency.assert_called_once()


@pytest.mark.asyncio
async def test_method_failure(sample_instance, mocker: MockFixture):
    """method_name should handle exceptions gracefully."""
    mocker.patch(
        "module.path.external_dependency",
        side_effect=Exception("connection error"),
    )

    result = await sample_instance.method_name()

    assert result is None  # or expected error handling
```

### Complex Mocking Example (pytest-mock)
```python
import pytest
from pytest_mock import MockFixture

def test_complex_mocking(mocker: MockFixture):
    """Test with complex dependency mocking."""
    # Mock external modules
    mock_external_module = mocker.MagicMock()
    mock_external_module.ExternalClass = mocker.MagicMock()

    # Mock complex dependencies with a minimal stand-in class
    class DummyExternalClass:
        def __init__(self, *args, **kwargs):
            pass

        def method_needed_by_tests(self, *args, **kwargs):
            return {}

    mock_external_module.ExternalClass = DummyExternalClass

    # Test the actual functionality
    # ... test implementation
```

### Parameterized Testing
```python
@pytest.mark.asyncio
@pytest.mark.parametrize("input_value,expected_output", [
    ("valid_input", {"status": "success"}),
    ("invalid_input", {"status": "error"}),
    ("", {"status": "error"}),
])
async def test_method_with_different_inputs(sample_instance, input_value, expected_output):
    """Test method with various input scenarios."""
    result = await sample_instance.method(input_value)
    assert result["status"] == expected_output["status"]
```

### Exception Testing
```python
@pytest.mark.asyncio
async def test_method_raises_exception(sample_instance):
    """Test that method raises the appropriate exception."""
    with pytest.raises(ValueError, match="Invalid input") as exc_info:
        await sample_instance.method("invalid_input")

    assert "Invalid input" in str(exc_info.value)
```

### State Management
```python
@pytest.fixture(autouse=True)
def reset_state():
    """Reset global state between tests."""
    global_state.clear()
    mock_objects.reset_mock()

@pytest.fixture
def test_context():
    """Provide test context with required attributes."""
    return TestContext(
        request_id="req-1",
        tenant_id="tenant-1",
        user_id="user-1"
    )
```

## Best Practices

### 1. Test Isolation
- Each test should be independent
- Use `autouse=True` fixtures for state reset
- Mock external dependencies completely

### 2. Async Testing
- Use `@pytest.mark.asyncio` for async tests
- Use `mocker.patch` for async operations
- Use `assert_called_once()` for async assertions

### 3. Mock Design
- Create minimal viable mock objects
- Use the `DummyClass` pattern for complex dependencies
- Record method calls for verification

### 4. Test Organization
- Group related tests with comment separators
- Use descriptive test names
- Include docstrings explaining test purpose
- Split large test files into logical subdirectories

### 5. Error Handling
- Test both success and failure scenarios
- Use `pytest.raises` for exception testing
- Verify error messages and types with the `match` parameter

### 6. File Management
- Keep test files under 500 lines or 50 test methods
- Split large files by functionality, not by test type
- Use consistent naming patterns for split files
- Maintain logical grouping within directories

## Migration from unittest
- Replace `unittest.TestCase` with plain functions and pytest fixtures
- Replace `self.assertTrue()` with `assert` statements
- Replace `unittest.mock` with the `pytest-mock` plugin
- Replace `setUp`/`tearDown` with pytest fixtures
- Use `pytest.raises()` instead of `self.assertRaises()`

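A minimal before/after sketch of this migration (the `add` behavior and the `base` fixture are hypothetical examples, not from the codebase):

```python
# Before: unittest style (to be replaced)
# import unittest
#
# class TestAdd(unittest.TestCase):
#     def setUp(self):
#         self.base = 10
#
#     def test_add(self):
#         self.assertEqual(self.base + 5, 15)

# After: pytest style, with a fixture instead of setUp
import pytest

@pytest.fixture
def base():
    return 10

def test_add(base):
    assert base + 5 == 15, f"Expected 15, got {base + 5}"

def test_add_rejects_bad_input(base):
    # pytest.raises replaces self.assertRaises
    with pytest.raises(TypeError):
        base + "not a number"
```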
## Validation Checklist
- [ ] All tests use the pytest framework exclusively
- [ ] No unittest imports or usage
- [ ] External dependencies are mocked with pytest-mock
- [ ] Tests cover normal and exception flows
- [ ] Async tests use proper decorators
- [ ] Assertions are specific and descriptive
- [ ] Test names clearly describe scenarios
- [ ] Fixtures provide necessary test data
- [ ] State is properly reset between tests
- [ ] Large test files are split into logical subdirectories

.github/workflows/docker-deploy.yml

Lines changed: 4 additions & 0 deletions

@@ -89,6 +89,10 @@ jobs:
       run: |
         cd $HOME/nexent/docker
         cp .env.example .env
+
+        sed -i "s/APPID=.*/APPID=${{ secrets.VOICE_APPID }}/" .env
+        sed -i "s/TOKEN=.*/TOKEN=${{ secrets.VOICE_TOKEN }}/" .env
+
         if [ "$DEPLOYMENT_MODE" = "production" ]; then
           ./deploy.sh --mode 3 --is-mainland N --enable-terminal N --version 2 --root-dir "$HOME/nexent-production-data"
         else
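The two `sed` substitutions above overwrite placeholder keys in the copied `.env` with repository secrets. A standalone sketch of the same substitution on a throwaway file (the `VOICE_*` values here are stand-ins for the `${{ secrets.* }}` context):

```shell
# Simulate the workflow step against a temporary .env
workdir="$(mktemp -d)"
cd "$workdir"
printf 'APPID=placeholder\nTOKEN=placeholder\nOTHER=keep\n' > .env

VOICE_APPID="app-123"   # stand-in for ${{ secrets.VOICE_APPID }}
VOICE_TOKEN="tok-456"   # stand-in for ${{ secrets.VOICE_TOKEN }}

# Replace the whole line from the key onward, as the workflow does
sed -i "s/APPID=.*/APPID=${VOICE_APPID}/" .env
sed -i "s/TOKEN=.*/TOKEN=${VOICE_TOKEN}/" .env
cat .env
```

Note that `sed -i` with no suffix is GNU sed syntax (as on the Ubuntu runners); BSD/macOS sed would need `sed -i ''`.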

backend/agents/agent_run_manager.py

Lines changed: 23 additions & 16 deletions

@@ -21,39 +21,46 @@ def __new__(cls):

     def __init__(self):
         if not self._initialized:
-            # conversation_id -> agent_run_info
-            self.agent_runs: Dict[int, AgentRunInfo] = {}
+            # user_id:conversation_id -> agent_run_info
+            self.agent_runs: Dict[str, AgentRunInfo] = {}
             self._initialized = True

-    def register_agent_run(self, conversation_id: int, agent_run_info):
+    def _get_run_key(self, conversation_id: int, user_id: str) -> str:
+        """Generate unique key for agent run using user_id and conversation_id"""
+        return f"{user_id}:{conversation_id}"
+
+    def register_agent_run(self, conversation_id: int, agent_run_info, user_id: str):
         """register agent run instance"""
         with self._lock:
-            self.agent_runs[conversation_id] = agent_run_info
+            run_key = self._get_run_key(conversation_id, user_id)
+            self.agent_runs[run_key] = agent_run_info
             logger.info(
-                f"register agent run instance, conversation_id: {conversation_id}")
+                f"register agent run instance, user_id: {user_id}, conversation_id: {conversation_id}")

-    def unregister_agent_run(self, conversation_id: int):
+    def unregister_agent_run(self, conversation_id: int, user_id: str):
         """unregister agent run instance"""
         with self._lock:
-            if conversation_id in self.agent_runs:
-                del self.agent_runs[conversation_id]
+            run_key = self._get_run_key(conversation_id, user_id)
+            if run_key in self.agent_runs:
+                del self.agent_runs[run_key]
                 logger.info(
-                    f"unregister agent run instance, conversation_id: {conversation_id}")
+                    f"unregister agent run instance, user_id: {user_id}, conversation_id: {conversation_id}")
             else:
                 logger.info(
-                    f"no agent run instance found for conversation_id: {conversation_id}")
+                    f"no agent run instance found for user_id: {user_id}, conversation_id: {conversation_id}")

-    def get_agent_run_info(self, conversation_id: int):
+    def get_agent_run_info(self, conversation_id: int, user_id: str):
         """get agent run instance"""
-        return self.agent_runs.get(conversation_id)
+        run_key = self._get_run_key(conversation_id, user_id)
+        return self.agent_runs.get(run_key)

-    def stop_agent_run(self, conversation_id: int) -> bool:
-        """stop agent run for specified conversation_id"""
-        agent_run_info = self.get_agent_run_info(conversation_id)
+    def stop_agent_run(self, conversation_id: int, user_id: str) -> bool:
+        """stop agent run for specified conversation_id and user_id"""
+        agent_run_info = self.get_agent_run_info(conversation_id, user_id)
         if agent_run_info is not None:
             agent_run_info.stop_event.set()
             logger.info(
-                f"agent run stopped, conversation_id: {conversation_id}")
+                f"agent run stopped, user_id: {user_id}, conversation_id: {conversation_id}")
             return True
         return False
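The change keys each run by `user_id:conversation_id` instead of `conversation_id` alone, so two users whose conversations share a numeric id no longer collide in the registry. A minimal standalone sketch of the same pattern (simplified names; this `AgentRunInfo` is a stand-in for the project's class):

```python
import threading
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class AgentRunInfo:
    # Stand-in for the project's run descriptor: only the stop event matters here
    stop_event: threading.Event = field(default_factory=threading.Event)

class AgentRunManager:
    def __init__(self):
        self._lock = threading.Lock()
        # "user_id:conversation_id" -> AgentRunInfo
        self.agent_runs: Dict[str, AgentRunInfo] = {}

    def _get_run_key(self, conversation_id: int, user_id: str) -> str:
        return f"{user_id}:{conversation_id}"

    def register_agent_run(self, conversation_id: int, run_info: AgentRunInfo, user_id: str) -> None:
        with self._lock:
            self.agent_runs[self._get_run_key(conversation_id, user_id)] = run_info

    def stop_agent_run(self, conversation_id: int, user_id: str) -> bool:
        run_info = self.agent_runs.get(self._get_run_key(conversation_id, user_id))
        if run_info is not None:
            run_info.stop_event.set()
            return True
        return False

# Two users sharing conversation id 7 do not collide
manager = AgentRunManager()
run_a = AgentRunInfo()
manager.register_agent_run(7, run_a, user_id="alice")
manager.register_agent_run(7, AgentRunInfo(), user_id="bob")
stopped = manager.stop_agent_run(7, user_id="alice")
print(stopped, run_a.stop_event.is_set())  # True True
```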

backend/apps/agent_app.py

Lines changed: 3 additions & 2 deletions

@@ -45,11 +45,12 @@ async def agent_run_api(agent_request: AgentRequest, http_request: Request, auth


 @router.get("/stop/{conversation_id}")
-async def agent_stop_api(conversation_id: int):
+async def agent_stop_api(conversation_id: int, authorization: Optional[str] = Header(None)):
     """
     stop agent run and preprocess tasks for specified conversation_id
     """
-    if stop_agent_tasks(conversation_id).get("status") == "success":
+    user_id, _ = get_current_user_id(authorization)
+    if stop_agent_tasks(conversation_id, user_id).get("status") == "success":
         return {"status": "success", "message": "agent run and preprocess tasks stopped successfully"}
     else:
         raise HTTPException(status_code=HTTPStatus.BAD_REQUEST,

backend/consts/const.py

Lines changed: 4 additions & 0 deletions

@@ -239,3 +239,7 @@
     "PROCESS_FAILED": "PROCESS_FAILED",
     "FORWARD_FAILED": "FORWARD_FAILED",
 }
+
+# Deep Thinking Constants
+THINK_START_PATTERN = "<think>"
+THINK_END_PATTERN = "</think>"
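Markers like these typically delimit a model's reasoning segment in its raw output. How this commit consumes them isn't shown in this excerpt, but a plausible sketch of splitting the thinking from the final answer with these constants (the `split_thinking` helper is hypothetical):

```python
THINK_START_PATTERN = "<think>"
THINK_END_PATTERN = "</think>"

def split_thinking(raw: str) -> tuple[str, str]:
    """Return (thinking, answer); thinking is empty when no markers are present."""
    start = raw.find(THINK_START_PATTERN)
    end = raw.find(THINK_END_PATTERN)
    if start == -1 or end == -1 or end < start:
        return "", raw.strip()
    thinking = raw[start + len(THINK_START_PATTERN):end].strip()
    answer = (raw[:start] + raw[end + len(THINK_END_PATTERN):]).strip()
    return thinking, answer

raw_output = "<think>User asks 2+2; trivial.</think>The answer is 4."
thinking, answer = split_thinking(raw_output)
print(answer)  # The answer is 4.
```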

backend/database/tool_db.py

Lines changed: 2 additions & 1 deletion

@@ -164,7 +164,8 @@ def add_tool_field(tool_info):
     # add tool params
     tool_params = tool.params
     for ele in tool_params:
-        ele["default"] = tool_info["params"][ele["name"]]
+        param_name = ele["name"]
+        ele["default"] = tool_info["params"].get(param_name)

     tool_dict = as_dict(tool)
     tool_dict["params"] = tool_params
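Switching from `[...]` indexing to `.get()` makes the merge tolerant of a param that exists in the tool definition but is missing from the stored `tool_info`: the default becomes `None` instead of raising `KeyError`. A reduced illustration (plain dicts only, not the project's ORM objects):

```python
# Tool definition lists three params; stored config only knows two
tool_params = [{"name": "host"}, {"name": "port"}, {"name": "timeout"}]
stored = {"params": {"host": "localhost", "port": 8080}}

for ele in tool_params:
    param_name = ele["name"]
    # .get() returns None for "timeout", where [...] would raise KeyError
    ele["default"] = stored["params"].get(param_name)

print(tool_params[2])  # {'name': 'timeout', 'default': None}
```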

0 commit comments
