Commit f55d235 (initial commit, 1 parent 0c8a6c8): 5 files changed, +1729 −0 lines.

File: .cursor/bdd-rules.mdc (234 additions, 0 deletions)
---
globs: ["**/features/**/*.feature", "**/features/**/*.py", "**/steps/**/*.py"]
description: "BDD testing rules using Behave for Python projects"
---

## Behavior-Driven Development (BDD) with Behave

### Overview
Use [Behave](https://behave.readthedocs.io/en/stable/) for BDD testing in Python. Behave uses Gherkin syntax for feature files and Python for step implementations.

---
## Project Structure

Organize BDD tests following this structure:

```
features/
├── environment.py           # Hooks and fixtures (before_all, after_all, etc.)
├── steps/                   # Step definition modules
│   ├── __init__.py
│   ├── common_steps.py      # Shared steps across features
│   └── <domain>_steps.py    # Domain-specific steps
├── <feature_name>.feature   # Feature files
└── fixtures/                # Test fixtures and data (optional)
```

---
## Feature File Guidelines

### Writing Feature Files
- **Test WHAT, not HOW**: Focus on business behavior, not implementation details
- **Technology-agnostic**: Feature files should be independent of the system under test (SUT) implementation
- **Use declarative language**: Describe intended outcomes, not UI interactions

**Good Example:**
```gherkin
Feature: User Authentication
  As a registered user
  I want to sign in to my account
  So that I can access my dashboard

  Scenario: Successful login with valid credentials
    Given a registered user with email "[email protected]"
    When the user authenticates with valid credentials
    Then the user should be granted access
    And the user should see their dashboard
```

**Bad Example (too implementation-specific):**
```gherkin
Scenario: Login via UI
  Given I am on the login page
  When I type "[email protected]" into the email field
  And I click the submit button
  Then I should be redirected to "/dashboard"
```
### Scenario Organization
- Use `Background` for common preconditions shared across scenarios
- Use `Scenario Outline` with `Examples` for data-driven tests
- Keep scenarios focused on a single behavior
- Use tags for filtering and categorization (`@wip`, `@slow`, `@api`, `@ui`)
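The bullets above combine as follows; a small illustrative sketch (the feature name, steps, and data values are invented for the example):

```gherkin
@api
Feature: Cart Pricing

  Background:
    Given an empty shopping cart

  @slow
  Scenario Outline: Total price reflects quantity
    When the user adds <quantity> units of "<item>"
    Then the cart total should be <total>

    Examples:
      | item   | quantity | total |
      | widget | 2        | 10    |
      | gadget | 3        | 30    |
```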
---
## Step Definitions

### Best Practices
- **Reusable steps**: Write generic, parameterized steps that can be reused
- **Thin steps**: Keep step implementations thin; delegate to helper functions or services
- **Use context**: Store shared state on the `context` object, not in module-level variables

**Example Step Definition:**
```python
# -- FILE: features/steps/user_steps.py
from behave import given, when, then

@given('a registered user with email "{email}"')
def step_given_registered_user(context, email):
    context.user = context.user_service.create_user(email=email)

@when('the user authenticates with valid credentials')
def step_when_user_authenticates(context):
    context.auth_result = context.auth_service.authenticate(context.user)

@then('the user should be granted access')
def step_then_user_granted_access(context):
    assert context.auth_result.is_authenticated
```
### Step Parameters
- Use `{param}` for string parameters (parse expressions)
- Use `{param:d}` for integers, `{param:f}` for floats
- Use regular expressions for complex matching when needed
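With Behave's optional `"re"` step matcher, regex named groups become step arguments. The matching itself can be sketched with the standard-library `re` module alone; the step decorator is omitted so the snippet runs without Behave, and the pattern and helper names are invented for illustration:

```python
import re

# Pattern in the style of a behave "re" step matcher:
# named groups would become keyword arguments of the step function.
STEP_PATTERN = re.compile(
    r'the user waits (?P<seconds>\d+) seconds? for "(?P<event>[^"]+)"'
)

def match_step(text):
    """Return the extracted (and type-converted) step arguments, or None."""
    m = STEP_PATTERN.match(text)
    if m is None:
        return None
    # behave passes these as arguments to the step implementation
    return {"seconds": int(m.group("seconds")), "event": m.group("event")}

args = match_step('the user waits 30 seconds for "email confirmation"')
# args == {"seconds": 30, "event": "email confirmation"}
```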
---
## Environment Configuration (environment.py)

### Fixtures and Hooks
Use Behave fixtures for setup/teardown with proper cleanup:

```python
# -- FILE: features/environment.py
from behave import fixture, use_fixture

@fixture
def database_connection(context):
    """Database fixture with automatic cleanup."""
    context.db = create_test_database()  # project-specific factory
    yield context.db
    # Cleanup (after the yield) runs when the fixture's scope ends;
    # used from before_all, that means after the whole test run.
    context.db.rollback()
    context.db.close()

def before_all(context):
    """Global setup - runs once before all features."""
    use_fixture(database_connection, context)
    # Initialize services using dependency injection
    context.user_service = UserService(context.db)
    context.auth_service = AuthService(context.db)

def before_scenario(context, scenario):
    """Per-scenario setup."""
    context.db.begin_transaction()

def after_scenario(context, scenario):
    """Per-scenario cleanup."""
    context.db.rollback()
```

(`create_test_database`, `UserService`, and `AuthService` stand in for project-specific code.)
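The setup/teardown flow above can be sketched without Behave: a generator-based fixture runs its pre-`yield` code at setup and its post-`yield` code at teardown, which is essentially what `use_fixture` arranges. All names below are stand-ins, not Behave APIs:

```python
events = []

def database_connection(context):
    # Setup phase: everything before the yield
    context["db"] = "fake-connection"
    events.append("setup")
    yield context["db"]
    # Teardown phase: everything after the yield
    events.append("teardown")

def use_fixture_like(fixture_func, context):
    """Minimal stand-in for behave.use_fixture: run setup now, remember teardown."""
    gen = fixture_func(context)
    value = next(gen)                            # advance to the yield
    context.setdefault("_cleanups", []).append(gen)
    return value

context = {}
db = use_fixture_like(database_connection, context)
# ... features and scenarios would run here ...
for gen in context["_cleanups"]:
    next(gen, None)                              # resume past the yield: teardown
```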
### Dependency Injection
- **Do NOT use global singletons** in step definitions
- Initialize services in `before_all()` or `before_scenario()` hooks
- Pass dependencies through the `context` object
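A minimal sketch of that wiring, using a `SimpleNamespace` as a stand-in for Behave's `Context` (the service class and hook body are invented for illustration):

```python
from types import SimpleNamespace

class UserService:
    """Example service that receives its dependency instead of reaching for a global."""
    def __init__(self, db):
        self.db = db

    def create_user(self, email):
        return {"email": email, "db": self.db}

def before_all(context):
    # Wire dependencies once; step definitions read them from context.
    context.db = "fake-connection"
    context.user_service = UserService(context.db)

context = SimpleNamespace()   # stand-in for behave's Context object
before_all(context)
user = context.user_service.create_user("user@example.test")
```

Because steps receive `context`, swapping the real database for a test double is a one-line change in `before_all`, with no step code touched.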
---

## Test Automation Layers

### Prefer API/Model Layer Testing
- **Primary**: Test business logic via the REST API or service layer
- **Secondary**: UI testing only when specifically needed
- Reuse feature files across layers using the `--stage` option (a stage named `api` makes Behave load steps from `api_steps/` instead of `steps/`):

```bash
uv run behave --stage=api features/   # Test via API
uv run behave --stage=ui features/    # Test via UI (subset)
```
### When UI Testing is Required
Use Selenium or Splinter with fixtures:

```python
from behave import fixture, use_fixture
from selenium.webdriver import Firefox

@fixture
def browser_firefox(context):
    context.browser = Firefox()
    yield context.browser
    context.browser.quit()

def before_all(context):
    use_fixture(browser_firefox, context)
```

---
## Running Behave Tests

### Commands
```bash
# Run all BDD tests
uv run behave

# Run a specific feature
uv run behave features/authentication.feature

# Run by tag
uv run behave --tags=@api
uv run behave --tags="@critical and not @slow"

# Generate reports (HTML output requires the behave-html-formatter plugin)
uv run behave --format=json -o reports/results.json
uv run behave --format=html -o reports/results.html
```
### Configuration (behave.ini or pyproject.toml)
```ini
# behave.ini
[behave]
format = pretty
logging_level = INFO
junit = true
junit_directory = reports/
```

Or in `pyproject.toml` (supported by recent Behave releases):
```toml
[tool.behave]
format = "pretty"
junit = true
junit_directory = "reports/"
```

---
## Integration with pytest

If needed, run BDD tests under pytest either with `pytest-bdd` (a separate plugin that reuses `.feature` files with its own step definitions) or by invoking Behave as a subprocess:

```python
# tests/test_bdd.py
import subprocess

def test_bdd_features():
    result = subprocess.run(
        ["uv", "run", "behave", "--tags=@critical"],
        capture_output=True,
    )
    assert result.returncode == 0, result.stderr.decode()
```
---

## Summary

1. **Feature files**: Declarative, technology-agnostic, business-focused
2. **Step definitions**: Thin, reusable, use context for state
3. **No singletons**: Use dependency injection via environment.py hooks
4. **Test layers**: Prefer API/model testing over UI testing
5. **Run with uv**: Always use `uv run behave` for consistency
