diff --git a/.claude/skills/qa-testing/SKILL.md b/.claude/skills/qa-testing/SKILL.md new file mode 100644 index 0000000..eaaed66 --- /dev/null +++ b/.claude/skills/qa-testing/SKILL.md @@ -0,0 +1,233 @@ +--- +name: qa-testing +description: | + Comprehensive QA testing skill for e-commerce web applications and APIs using Python. + This skill should be used when the user needs help with: + (1) Writing test cases from requirements, user stories, or acceptance criteria with structured format (ID, priority, steps, expected results), + (2) Creating test plans covering scope, objectives, environment, entry/exit criteria, risks, and deliverables, + (3) Creating test strategies defining test types, automation approach, defect management, and reporting, + (4) Generating Python automation scripts using pytest, requests, Playwright, or Selenium with proper fixtures and assertions, + (5) Creating automation test reports with pass/fail metrics and recommendations, + (6) Reviewing code for testability, quality, error handling, security vulnerabilities, and best practices. + Domain focus: e-commerce (cart, checkout, payments, inventory, promotions, auth, catalog, orders), RESTful APIs, web applications. +--- + +# QA Testing + +## What This Skill Does + +- Write structured test cases from requirements, user stories, or acceptance criteria +- Create test plans and test strategies +- Generate Python automation scripts (pytest, requests, Playwright, Selenium) +- Create automation test reports with pass/fail metrics +- Review code for testability, security, error handling, and best practices + +## What This Skill Does NOT Do + +- Execute tests in live/production environments +- Set up CI/CD pipelines or infrastructure +- Manage test data in databases directly +- Perform actual penetration testing or load testing +- Deploy or configure test environments + +Domain focus is e-commerce, but patterns are adaptable to other domains by substituting the module coverage in `references/test-cases.md`. + +--- + +## Before Implementation + +Gather context to ensure successful implementation: + +| Source | Gather | +|--------|--------| +| **Codebase** | Existing test structure, frameworks in use, project conventions | +| **Conversation** | User's specific module, requirements, constraints | +| **Skill References** | Domain patterns from `references/` | +| **User Guidelines** | Project-specific conventions, team standards | + +Ensure all required context is gathered before implementing. +Only ask user for THEIR specific requirements — domain expertise is embedded in this skill's references. Do NOT research domain basics at runtime. + +--- + +## Required Clarifications + +Ask about the USER'S context before proceeding: + +1. **Task type**: "What do you need?" + - Test cases → follow Test Cases workflow + - Test plan/strategy → follow Planning workflow + - Automation scripts → follow Automation workflow + - Test report → follow Reporting workflow + - Code review → follow Review workflow + +2. **Module/feature**: "Which module or feature?" (e.g., cart, checkout, payments, auth, catalog, orders, promotions) + +3. **Existing setup**: "Any existing test framework or structure?" + - Determines whether to scaffold from scratch or integrate + +4. **Constraints**: "Any specific requirements?" (coverage targets, compliance, frameworks) + +## Optional Clarifications + +Ask only when relevant to the task: + +5. **Coverage target**: "Any minimum coverage requirement?" (default: 80%) +6. **Reporting format**: "pytest-html or allure?" 
(default: pytest-html) +7. **UI framework preference**: "Playwright or Selenium?" (default: Playwright) + +## If User Skips Clarifications + +- **Task type**: Required — explain why needed and ask again simply +- **Module**: Infer from conversation context if possible; otherwise ask +- **Existing setup**: Assume new project, scaffold from scratch +- **Constraints**: Proceed with skill defaults +- **Optional questions**: Proceed with defaults noted above + +--- + +## Workflow + +Determine task type and follow the corresponding path: + +``` +User request + │ + ├─ "Write test cases" ──────→ references/test-cases.md + │ → Structured tabular output + │ + ├─ "Create test plan" ──────→ references/test-plans.md (Test Plan section) + │ + ├─ "Create test strategy" ──→ references/test-plans.md (Strategy section) + │ + ├─ "Write automation" ──────→ references/automation.md + │ ├─ API tests? ──→ requests + pytest pattern + │ ├─ UI tests? ──→ Playwright (preferred) or Selenium + │ └─ Both? ──→ Full project structure + │ + ├─ "Create test report" ────→ references/automation.md (Report Template) + │ + └─ "Review code" ──────────→ references/code-review.md + ├─ Testability assessment + ├─ Security review (OWASP/PCI-DSS) + └─ Structured findings output +``` + +--- + +## Core Technologies + +| Tool | Purpose | When to Use | +|------|---------|-------------| +| **pytest** | Test framework | Always (fixtures, markers, parametrize) | +| **requests** | API/REST testing | HTTP endpoint testing | +| **Playwright** | UI testing (preferred) | Modern web apps, async support needed | +| **Selenium** | UI testing (alternative) | Legacy apps, user already using it | +| **jsonschema** | Response validation | API contract testing | +| **pytest-html** | Reporting | HTML test reports | + +--- + +## Standards Enforcement + +### Must Follow + +- [ ] Every test case has TC-ID, title, priority, preconditions, steps, expected results +- [ ] Test IDs follow the `TC-<MODULE>-<NNN>` convention +- [ ] Priority assigned using P0-P3 scale based on business impact +- [ ] Automation uses pytest with class-based organization +- [ ] Page Object Model for all UI tests +- [ ] Session-scoped fixtures for auth tokens +- [ ] `@pytest.mark.parametrize` for data-driven tests +- [ ] JSON schema validation for API response contracts +- [ ] Test isolation (no shared mutable state between tests) +- [ ] Boundary and negative test cases for every input + +### Must Avoid + +- Bare `except:` clauses (always catch specific exceptions) +- Hardcoded test data in test methods (use fixtures/parametrize) +- `time.sleep()` in UI tests (use explicit waits) +- Shared mutable state between tests +- Testing implementation details instead of behavior +- Skipping negative/boundary test cases +- Assertions without descriptive messages on failures + +See `references/anti-patterns.md` for detailed anti-patterns with examples.
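+
+A minimal sketch showing several of these standards together: pytest markers, `@pytest.mark.parametrize` for boundary data, the shared `api_client` fixture from `references/automation.md`, and descriptive assertion messages. The endpoint path and the quantity limits are illustrative assumptions, not values defined by this skill:
+
+```python
+import pytest
+
+@pytest.mark.api
+@pytest.mark.regression
+class TestCartQuantityBoundaries:
+    """Illustrative sketch: endpoint path and quantity limits are assumptions."""
+
+    @pytest.mark.parametrize("quantity,expected_status", [
+        (1, 201),     # minimum valid quantity
+        (0, 400),     # boundary: below minimum
+        (-1, 400),    # negative
+        (1001, 400),  # assumed above-maximum limit
+    ])
+    def test_add_item_quantity_boundaries(self, api_client, quantity, expected_status):
+        resp = api_client.post(f"{api_client.base_url}/cart/items", json={
+            "product_id": "TEST-001",
+            "quantity": quantity,
+        })
+        assert resp.status_code == expected_status, (
+            f"Quantity {quantity}: expected {expected_status}, got {resp.status_code}: {resp.text}"
+        )
+```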
+ +--- + +## Output Formats + +| Task | Format | +|------|--------| +| Test cases | Structured table (TC-ID, Title, Priority, Steps, Expected) | +| Test plans | Markdown with numbered sections | +| Test strategies | Markdown with tables for test levels and automation | +| Automation scripts | Python with pytest, proper structure and docstrings | +| Test reports | Markdown with summary metrics tables | +| Code reviews | Findings table (Critical/Major/Minor) + testability score | + +--- + +## Output Checklist + +Before delivering any output, verify: + +### Test Cases +- [ ] Every test case has all required fields (TC-ID through Expected Result) +- [ ] Includes positive, negative, and boundary cases +- [ ] Priority reflects business impact (P0 for payment/security failures) +- [ ] Steps are specific and reproducible + +### Automation Scripts +- [ ] Runs without syntax errors +- [ ] Uses fixtures for setup/teardown +- [ ] Assertions have descriptive failure messages +- [ ] Follows project structure conventions +- [ ] Includes pytest markers (smoke, regression, api, ui) + +### Test Plans/Strategies +- [ ] Entry and exit criteria defined +- [ ] Risks identified with mitigations +- [ ] All required modules covered + +### Code Reviews +- [ ] Security checklist applied (OWASP Top 10, PCI-DSS for payments) +- [ ] Findings categorized by severity +- [ ] Each finding has specific remediation + +--- + +## Official Documentation + +| Resource | URL | Use For | +|----------|-----|---------| +| pytest | https://docs.pytest.org/ | Framework reference, fixtures, markers | +| Playwright Python | https://playwright.dev/python/ | UI testing API, selectors, assertions | +| Selenium | https://www.selenium.dev/documentation/ | WebDriver API, waits, locators | +| requests | https://requests.readthedocs.io/ | HTTP client, sessions, auth | +| jsonschema | https://python-jsonschema.readthedocs.io/ | Schema validation | +| OWASP Top 10 | https://owasp.org/www-project-top-ten/ | Security testing checklist | +| PCI-DSS | https://www.pcisecuritystandards.org/ | Payment data compliance | +| pytest-html | https://pytest-html.readthedocs.io/ | HTML report generation | + +## Unlisted Scenarios + +For patterns not documented in this skill's references: + +1. Fetch from official docs (see URLs above) +2. Apply the same quality standards (Must Follow / Must Avoid) +3. Follow established test structure patterns from `references/automation.md` + +--- + +## Reference Files + +| File | When to Read | +|------|--------------| +| `references/test-cases.md` | Writing test cases (format, module coverage, examples) | +| `references/test-plans.md` | Creating test plans or test strategies | +| `references/automation.md` | Generating Python automation scripts or test reports | +| `references/code-review.md` | Reviewing code for testability, security, quality | +| `references/anti-patterns.md` | Avoiding common QA testing mistakes | diff --git a/.claude/skills/qa-testing/references/anti-patterns.md b/.claude/skills/qa-testing/references/anti-patterns.md new file mode 100644 index 0000000..d5edcf7 --- /dev/null +++ b/.claude/skills/qa-testing/references/anti-patterns.md @@ -0,0 +1,231 @@ +# QA Testing Anti-Patterns + +## Test Case Anti-Patterns + +### Vague Steps + +``` +# Bad: Ambiguous, not reproducible +Steps: "Test the cart functionality" +Expected: "Cart works correctly" + +# Good: Specific, reproducible +Steps: + 1. Navigate to /products/widget-a + 2. Click "Add to Cart" button + 3. Navigate to /cart +Expected: + 1. 
Product page loads with "Add to Cart" enabled + 2. Toast "Added to cart" appears, cart badge shows "1" + 3. Cart page shows Widget-A, qty 1, total $29.99 +``` + +### Missing Negative Cases + +``` +# Bad: Only happy path +Test: "User can log in with valid credentials" + +# Good: Cover failure modes +Tests: +- TC-AUTH-001: Login with valid credentials → success +- TC-AUTH-002: Login with wrong password → error message, no lockout +- TC-AUTH-003: Login with nonexistent email → same error (no enumeration) +- TC-AUTH-004: Login after 5 failed attempts → account locked +- TC-AUTH-005: Login with SQL injection in email → sanitized, error +``` + +### Wrong Priority Assignment + +``` +# Bad: Payment failure marked P2 +TC-PAY-001 | Payment gateway timeout | P2 + +# Good: Payment failure is always P0/P1 +TC-PAY-001 | Payment gateway timeout | P0 (revenue-blocking) +``` + +--- + +## Automation Anti-Patterns + +### Sleep Instead of Wait + +```python +# Bad: Flaky, slow +import time +time.sleep(5) +element = driver.find_element(By.ID, "result") + +# Good: Explicit wait +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +element = WebDriverWait(driver, 10).until( + EC.visibility_of_element_located((By.ID, "result")) +) +``` + +### Hardcoded Test Data + +```python +# Bad: Data buried in test +def test_add_to_cart(self, api_client): + resp = api_client.post("/cart/items", json={ + "product_id": "PROD-123", + "quantity": 2 + }) + +# Good: Data in fixtures +@pytest.fixture +def cart_item(): + return {"product_id": "PROD-123", "quantity": 2} + +def test_add_to_cart(self, api_client, cart_item): + resp = api_client.post("/cart/items", json=cart_item) +``` + +### No Assertion Messages + +```python +# Bad: Unhelpful failure output +assert resp.status_code == 201 + +# Good: Clear failure context +assert resp.status_code == 201, ( + f"Expected 201 Created, got {resp.status_code}: {resp.text}" +) +``` + +### Shared Mutable State + +```python +# Bad: Tests depend on execution order +class TestCart: + cart_id = None # Shared across tests! + + def test_create_cart(self, api_client): + resp = api_client.post("/cart") + TestCart.cart_id = resp.json()["id"] + + def test_add_item(self, api_client): + # Fails if test_create_cart didn't run first + api_client.post(f"/cart/{TestCart.cart_id}/items", ...) + +# Good: Each test creates own state +class TestCart: + @pytest.fixture + def cart_id(self, api_client): + resp = api_client.post("/cart") + return resp.json()["id"] + + def test_add_item(self, api_client, cart_id): + resp = api_client.post(f"/cart/{cart_id}/items", ...) +``` + +### Testing Implementation, Not Behavior + +```python +# Bad: Coupled to internal implementation +def test_cart_uses_redis_cache(self, api_client): + api_client.post("/cart/items", json=item) + assert redis_client.exists("cart:user123") + +# Good: Test observable behavior +def test_cart_persists_across_sessions(self, api_client): + api_client.post("/cart/items", json=item) + # New session, same user + resp = api_client.get("/cart") + assert len(resp.json()["items"]) == 1 +``` + +### No Cleanup / Teardown + +```python +# Bad: Test data leaks +def test_create_order(self, api_client): + resp = api_client.post("/orders", json=order_data) + assert resp.status_code == 201 + # Order left in DB! 
+ +# Good: Cleanup with fixture +@pytest.fixture +def created_order(self, api_client, order_data): + resp = api_client.post("/orders", json=order_data) + order_id = resp.json()["id"] + yield order_id + api_client.delete(f"/orders/{order_id}") +``` + +--- + +## Code Review Anti-Patterns + +### Bare Except + +```python +# Bad +try: + process_payment(order) +except: + pass + +# Good +try: + process_payment(order) +except PaymentGatewayError as e: + logger.error(f"Payment failed for order {order.id}: {e}") + raise +except ValidationError as e: + return {"error": str(e)}, 400 +``` + +### SQL Injection + +```python +# Bad: String concatenation +query = f"SELECT * FROM products WHERE id = '{product_id}'" +cursor.execute(query) + +# Good: Parameterized query +cursor.execute("SELECT * FROM products WHERE id = %s", (product_id,)) +``` + +### Sensitive Data Logging + +```python +# Bad: Logs credit card +logger.info(f"Processing payment: card={card_number}, cvv={cvv}") + +# Good: Mask sensitive data +logger.info(f"Processing payment: card=****{card_number[-4:]}") +``` + +--- + +## Test Plan Anti-Patterns + +### No Exit Criteria + +``` +# Bad: "Testing is done when we feel confident" + +# Good: +Exit Criteria: +- All P0/P1 test cases passing +- No open P0/P1 defects +- Code coverage ≥80% +- Performance tests within SLA thresholds +``` + +### No Risk Assessment + +``` +# Bad: No risks section + +# Good: +| Risk | Impact | Likelihood | Mitigation | +|------|--------|------------|------------| +| Payment gateway sandbox down | Blocks payment tests | Medium | Mock gateway responses | +| Test data corruption | False failures | Low | DB snapshot before each run | +| Flaky UI tests | False negatives | High | Retry mechanism + screenshot on failure | +``` diff --git a/.claude/skills/qa-testing/references/automation.md b/.claude/skills/qa-testing/references/automation.md new file mode 100644 index 0000000..38de805 --- /dev/null +++ b/.claude/skills/qa-testing/references/automation.md @@ -0,0 +1,453 @@ +# Python Automation Patterns + +## Table of Contents + +- Project Structure +- pytest Configuration +- Fixtures +- API Testing Patterns +- JSON Schema Validation +- Playwright UI Testing +- Selenium UI Testing +- Error Handling in Tests +- Test Report Template +- Dependencies + +--- + +## Project Structure + +``` +tests/ +├── conftest.py # Shared fixtures (base_url, auth_token, api_client) +├── api/ +│ ├── conftest.py # API-specific fixtures +│ ├── test_auth.py +│ ├── test_cart.py +│ └── test_orders.py +├── ui/ +│ ├── conftest.py # UI-specific fixtures (browser, page) +│ ├── pages/ # Page Object Model +│ │ ├── base_page.py +│ │ ├── login_page.py +│ │ └── cart_page.py +│ ├── test_login.py +│ └── test_checkout.py +├── data/ +│ └── test_data.json # Externalized test data +└── utils/ + ├── api_client.py # Reusable API client wrapper + └── helpers.py # Shared utilities +``` + +--- + +## pytest Configuration + +```ini +# pytest.ini +[pytest] +markers = + smoke: Quick sanity checks (< 2min total) + regression: Full regression suite + api: API tests + ui: UI tests + slow: Long-running tests (> 30s each) + security: Security-focused tests +testpaths = tests +addopts = -v --tb=short --strict-markers -ra +``` + +**Marker usage**: Always decorate tests with appropriate markers for selective execution. + +```python +@pytest.mark.smoke +@pytest.mark.api +def test_health_check(api_client): + ... 
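+
+# Selective execution by marker (standard pytest CLI usage, shown here as comments):
+#   pytest -m smoke                          # only the quick sanity checks
+#   pytest -m "api and not slow"             # markers combine with boolean expressions
+#   pytest -m regression --html=report.html  # HTML report requires pytest-html (see Dependencies)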
+``` + +--- + +## Fixtures + +### Session-Level (shared across all tests) + +```python +import pytest +import requests + +@pytest.fixture(scope="session") +def base_url(): + """Base API URL from environment or default.""" + import os + return os.getenv("TEST_BASE_URL", "https://api.staging.example.com/v1") + +@pytest.fixture(scope="session") +def auth_token(base_url): + """Authenticate once per session, reuse token.""" + resp = requests.post(f"{base_url}/auth/login", json={ + "email": "test@example.com", + "password": "TestPass123!" + }) + assert resp.status_code == 200, f"Auth failed: {resp.status_code} {resp.text}" + return resp.json()["token"] + +@pytest.fixture(scope="session") +def api_client(base_url, auth_token): + """Authenticated requests session.""" + session = requests.Session() + session.base_url = base_url + session.headers.update({ + "Authorization": f"Bearer {auth_token}", + "Content-Type": "application/json" + }) + yield session + session.close() +``` + +### Function-Level (fresh per test) + +```python +@pytest.fixture +def sample_product(): + """Standard test product data.""" + return { + "name": "Test Widget", + "price": 29.99, + "sku": "TEST-001", + "quantity": 100 + } + +@pytest.fixture +def created_cart(api_client): + """Create a cart and clean up after test.""" + resp = api_client.post(f"{api_client.base_url}/cart") + assert resp.status_code == 201, f"Cart creation failed: {resp.text}" + cart_id = resp.json()["id"] + yield cart_id + # Cleanup + api_client.delete(f"{api_client.base_url}/cart/{cart_id}") +``` + +--- + +## API Testing Patterns + +### CRUD Pattern + +```python +import pytest + +class TestCartAPI: + """Cart API endpoint tests.""" + + @pytest.mark.smoke + @pytest.mark.api + def test_add_item_to_cart(self, api_client, sample_product): + """Add a single item to cart - verify response structure and data.""" + resp = api_client.post(f"{api_client.base_url}/cart/items", json={ + "product_id": sample_product["sku"], + "quantity": 1 + }) + assert resp.status_code == 201, ( + f"Expected 201, got {resp.status_code}: {resp.text}" + ) + data = resp.json() + assert data["items"][0]["product_id"] == sample_product["sku"] + assert data["items"][0]["quantity"] == 1 + + @pytest.mark.api + def test_add_item_unauthorized(self, base_url): + """Unauthenticated request returns 401.""" + resp = requests.post(f"{base_url}/cart/items", json={ + "product_id": "TEST-001", + "quantity": 1 + }) + assert resp.status_code == 401, ( + f"Expected 401 Unauthorized, got {resp.status_code}" + ) + + @pytest.mark.api + @pytest.mark.parametrize("quantity,expected_status", [ + (0, 400), + (-1, 400), + (1001, 400), # above max + (None, 400), + ]) + def test_add_item_invalid_quantities(self, api_client, quantity, expected_status): + """Invalid quantities return 400 with error details.""" + resp = api_client.post(f"{api_client.base_url}/cart/items", json={ + "product_id": "TEST-001", + "quantity": quantity + }) + assert resp.status_code == expected_status, ( + f"Quantity {quantity}: expected {expected_status}, got {resp.status_code}" + ) +``` + +### Response Validation Pattern + +```python +def assert_error_response(resp, expected_status, expected_field=None): + """Reusable assertion for error responses.""" + assert resp.status_code == expected_status, ( + f"Expected {expected_status}, got {resp.status_code}: {resp.text}" + ) + body = resp.json() + assert "error" in body, f"Missing 'error' field in response: {body}" + if expected_field: + assert expected_field in body["error"].lower(), ( + 
f"Expected '{expected_field}' in error, got: {body['error']}" + ) +``` + +--- + +## JSON Schema Validation + +```python +from jsonschema import validate, ValidationError +import pytest + +PRODUCT_SCHEMA = { + "type": "object", + "required": ["id", "name", "price", "sku"], + "properties": { + "id": {"type": "integer"}, + "name": {"type": "string", "minLength": 1}, + "price": {"type": "number", "minimum": 0}, + "sku": {"type": "string", "pattern": "^[A-Z]+-\\d+$"} + }, + "additionalProperties": True +} + +@pytest.mark.api +def test_product_response_schema(api_client): + """Product endpoint response matches expected schema.""" + resp = api_client.get(f"{api_client.base_url}/products/1") + assert resp.status_code == 200 + try: + validate(instance=resp.json(), schema=PRODUCT_SCHEMA) + except ValidationError as e: + pytest.fail(f"Schema validation failed: {e.message}") +``` + +--- + +## Playwright UI Testing + +```python +import pytest +from playwright.sync_api import Page, expect + +class LoginPage: + """Page Object for login page.""" + + def __init__(self, page: Page): + self.page = page + self.email_input = page.locator("#email") + self.password_input = page.locator("#password") + self.login_button = page.locator("button[type='submit']") + self.error_message = page.locator(".error-message") + + def goto(self): + self.page.goto("/login") + self.page.wait_for_load_state("networkidle") + + def login(self, email: str, password: str): + self.email_input.fill(email) + self.password_input.fill(password) + self.login_button.click() + + +class TestLogin: + @pytest.mark.smoke + @pytest.mark.ui + def test_successful_login(self, page: Page): + login_page = LoginPage(page) + login_page.goto() + login_page.login("test@example.com", "ValidPass123!") + expect(page).to_have_url("/dashboard") + + @pytest.mark.ui + def test_invalid_credentials(self, page: Page): + login_page = LoginPage(page) + login_page.goto() + login_page.login("test@example.com", "wrong") + expect(login_page.error_message).to_be_visible() + expect(login_page.error_message).to_contain_text("Invalid") + + @pytest.mark.ui + def test_empty_fields_shows_validation(self, page: Page): + login_page = LoginPage(page) + login_page.goto() + login_page.login_button.click() + expect(login_page.email_input).to_have_attribute("aria-invalid", "true") +``` + +--- + +## Selenium UI Testing + +```python +import pytest +from selenium.webdriver.common.by import By +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC + +class LoginPage: + """Page Object for login page (Selenium).""" + + def __init__(self, driver): + self.driver = driver + self.wait = WebDriverWait(driver, 10) + + def goto(self, base_url): + self.driver.get(f"{base_url}/login") + + def login(self, email, password): + self.wait.until(EC.visibility_of_element_located((By.ID, "email"))) + self.driver.find_element(By.ID, "email").send_keys(email) + self.driver.find_element(By.ID, "password").send_keys(password) + self.driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click() + + @property + def error_message(self): + return self.wait.until( + EC.visibility_of_element_located((By.CLASS_NAME, "error-message")) + ).text + + +class TestLogin: + @pytest.mark.ui + def test_successful_login(self, driver, base_url): + page = LoginPage(driver) + page.goto(base_url) + page.login("test@example.com", "ValidPass123!") + WebDriverWait(driver, 10).until(EC.url_contains("/dashboard")) + assert "/dashboard" in driver.current_url + + 
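+
+    # Hedged sketch: negative-path counterpart to the Playwright example above.
+    # Assumes the same `driver`/`base_url` fixtures and that the error banner text contains "Invalid".
+    @pytest.mark.ui
+    def test_invalid_credentials(self, driver, base_url):
+        page = LoginPage(driver)
+        page.goto(base_url)
+        page.login("test@example.com", "wrong")
+        error_text = page.error_message
+        assert "Invalid" in error_text, f"Expected 'Invalid' in error message, got: {error_text}"
+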
@pytest.mark.ui + def test_screenshot_on_failure(self, driver, base_url, request): + """Example: capture screenshot on test failure.""" + page = LoginPage(driver) + page.goto(base_url) + page.login("test@example.com", "wrong") + try: + assert page.error_message == "Invalid credentials" + except AssertionError: + driver.save_screenshot(f"screenshots/{request.node.name}.png") + raise +``` + +--- + +## Error Handling in Tests + +### Retry Pattern for Flaky External Services + +```python +import pytest +from tenacity import retry, stop_after_attempt, wait_exponential + +@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, max=10)) +def call_external_api(api_client, endpoint): + """Retry flaky external API calls.""" + resp = api_client.get(f"{api_client.base_url}{endpoint}") + resp.raise_for_status() + return resp +``` + +### Soft Assertions (collect all failures) + +```python +class SoftAssert: + """Collect multiple assertion failures before failing test.""" + + def __init__(self): + self.errors = [] + + def check(self, condition, message): + if not condition: + self.errors.append(message) + + def assert_all(self): + if self.errors: + pytest.fail("Soft assertion failures:\n" + "\n".join(self.errors)) + +# Usage +def test_order_response(api_client, order_id): + resp = api_client.get(f"{api_client.base_url}/orders/{order_id}") + data = resp.json() + sa = SoftAssert() + sa.check(resp.status_code == 200, f"Status: {resp.status_code}") + sa.check("id" in data, "Missing 'id' field") + sa.check("total" in data, "Missing 'total' field") + sa.check(data.get("status") in ["pending", "confirmed"], f"Bad status: {data.get('status')}") + sa.assert_all() +``` + +--- + +## Test Report Template + +```markdown +# Test Execution Report + +## Summary +| Metric | Value | +|--------|-------| +| Total Tests | X | +| Passed | X (X%) | +| Failed | X (X%) | +| Skipped | X (X%) | +| Blocked | X (X%) | +| Duration | Xm Xs | +| Environment | staging / QA | +| Date | YYYY-MM-DD | +| Build | vX.Y.Z / commit SHA | + +## Failed Tests +| # | Test | Module | Error Summary | Priority | Jira | +|---|------|--------|---------------|----------|------| +| 1 | test_name | module | AssertionError: expected 200 got 500 | P1 | PROJ-123 | + +## Coverage by Module +| Module | Total | Passed | Failed | Skipped | Pass Rate | +|--------|-------|--------|--------|---------|-----------| +| Cart | X | X | X | X | X% | +| Auth | X | X | X | X | X% | +| Payments | X | X | X | X | X% | + +## Defect Summary +| Severity | Open | Fixed | Total | +|----------|------|-------|-------| +| S1 (Blocker) | X | X | X | +| S2 (Critical) | X | X | X | +| S3 (Major) | X | X | X | +| S4 (Minor) | X | X | X | + +## Recommendations +1. [Specific action items based on failures] +2. [Risk areas needing additional testing] +3. 
[Environment or data issues encountered] + +## Go/No-Go Assessment +- **Recommendation**: GO / NO-GO +- **Rationale**: [Based on exit criteria] +``` + +--- + +## Dependencies + +``` +pytest>=7.0 +requests>=2.28 +playwright>=1.40 +selenium>=4.0 +jsonschema>=4.0 +pytest-html>=4.0 +tenacity>=8.0 +``` diff --git a/.claude/skills/qa-testing/references/code-review.md b/.claude/skills/qa-testing/references/code-review.md new file mode 100644 index 0000000..4ad10a1 --- /dev/null +++ b/.claude/skills/qa-testing/references/code-review.md @@ -0,0 +1,215 @@ +# Code Review for Testability + +## Table of Contents + +- Review Checklist +- Security Checklist (E-Commerce / PCI-DSS) +- Good/Bad Examples +- Review Output Format +- Remediation Patterns + +--- + +## Review Checklist + +### Testability +- [ ] Functions have single responsibility +- [ ] Dependencies are injectable (no hard-coded URLs, DB connections) +- [ ] Business logic separated from I/O +- [ ] No global state mutation +- [ ] Functions return values (not just side effects) +- [ ] Complex conditionals extracted into named functions +- [ ] No hidden dependencies (imports inside functions that change behavior) + +### Error Handling +- [ ] All external calls wrapped in try/except +- [ ] Specific exception types (not bare except) +- [ ] Meaningful error messages with context +- [ ] Errors logged before re-raising +- [ ] HTTP errors return appropriate status codes +- [ ] Validation errors return field-level details +- [ ] No silent failures (catch + pass) + +### Security (E-Commerce Focus) +- [ ] No SQL injection (parameterized queries / ORM) +- [ ] No XSS (output encoding, CSP headers) +- [ ] Input validation on all user data +- [ ] Authentication on protected endpoints +- [ ] Authorization checks (user can only access own data) +- [ ] Sensitive data not logged (passwords, tokens, card numbers) +- [ ] HTTPS enforced +- [ ] Rate limiting on auth endpoints +- [ ] CSRF protection on state-changing operations +- [ ] Payment data handled per PCI-DSS (never stored raw) +- [ ] Session tokens rotated after login +- [ ] Password hashing (bcrypt/argon2, not MD5/SHA1) + +### API Best Practices +- [ ] Consistent response format (envelope pattern) +- [ ] Proper HTTP methods (GET/POST/PUT/DELETE) +- [ ] Meaningful status codes (201 for create, 204 for delete) +- [ ] Pagination on list endpoints +- [ ] Input validation with clear error messages +- [ ] Versioned endpoints +- [ ] Idempotency keys on payment operations +- [ ] CORS properly configured + +### Code Quality +- [ ] No code duplication (DRY) +- [ ] Meaningful variable/function names +- [ ] No magic numbers/strings (use constants) +- [ ] Functions under 30 lines +- [ ] No deeply nested logic (max 3 levels) +- [ ] Type hints on function signatures + +--- + +## Security Checklist (PCI-DSS for E-Commerce) + +| Requirement | Check | Severity if Missing | +|-------------|-------|---------------------| +| Card numbers never stored in logs | Grep for card patterns in log statements | Critical | +| Card data encrypted in transit | HTTPS enforced, no HTTP fallback | Critical | +| Card data not in URL parameters | No GET requests with card data | Critical | +| CVV never stored (even encrypted) | No CVV in database models | Critical | +| Access control on payment endpoints | Auth + authz checks | Critical | +| Audit trail for payment operations | Logging for payment CRUD | Major | +| Token-based card storage (Stripe/Braintree) | No raw card handling | Major | + +--- + +## Good/Bad Examples + +### Error Handling + 
+```python +# Bad: Silent failure +try: + process_payment(order) +except: + pass + +# Good: Specific handling with context +try: + process_payment(order) +except PaymentGatewayTimeout as e: + logger.error(f"Payment timeout for order {order.id}: {e}") + return {"error": "Payment service temporarily unavailable"}, 503 +except InvalidCardError as e: + logger.warning(f"Invalid card for order {order.id}: {e}") + return {"error": "Invalid card details", "field": "card_number"}, 400 +except Exception as e: + logger.exception(f"Unexpected payment error for order {order.id}") + raise +``` + +### Dependency Injection + +```python +# Bad: Hard-coded dependency +def get_product(product_id): + db = psycopg2.connect("host=prod-db.example.com dbname=store") + return db.execute("SELECT * FROM products WHERE id = %s", (product_id,)) + +# Good: Injectable dependency +def get_product(product_id, db_connection): + return db_connection.execute( + "SELECT * FROM products WHERE id = %s", (product_id,) + ) +``` + +### SQL Injection + +```python +# Bad: String interpolation +query = f"SELECT * FROM users WHERE email = '{email}'" + +# Good: Parameterized query +query = "SELECT * FROM users WHERE email = %s" +cursor.execute(query, (email,)) + +# Good: ORM +user = User.query.filter_by(email=email).first() +``` + +### Authorization + +```python +# Bad: No ownership check +@app.get("/orders/{order_id}") +def get_order(order_id): + return Order.query.get(order_id) + +# Good: Verify ownership +@app.get("/orders/{order_id}") +def get_order(order_id, current_user): + order = Order.query.get(order_id) + if order.user_id != current_user.id: + raise HTTPException(403, "Access denied") + return order +``` + +### Sensitive Data + +```python +# Bad: Logs credit card +logger.info(f"Payment: card={card_number}, amount={amount}") + +# Good: Mask sensitive data +logger.info(f"Payment: card=****{card_number[-4:]}, amount={amount}") +``` + +--- + +## Review Output Format + +```markdown +## Code Review: [File/Module] + +### Summary +[1-2 sentence overview of code quality and key concerns] + +### Findings + +#### Critical (Must Fix Before Release) +| # | Issue | Location | Recommendation | +|---|-------|----------|----------------| +| 1 | SQL injection risk | file.py:42 | Use parameterized query | + +#### Major (Fix in Current Sprint) +| # | Issue | Location | Recommendation | +|---|-------|----------|----------------| + +#### Minor (Fix When Convenient) +| # | Issue | Location | Recommendation | +|---|-------|----------|----------------| + +### Testability Score: X/10 + +| Category | Score (0-2) | Notes | +|----------|-------------|-------| +| Single Responsibility | X | | +| Dependency Injection | X | | +| State Management | X | | +| Error Handling | X | | +| Separation of Concerns | X | | + +**Overall**: X/10 — [Brief justification] +``` + +--- + +## Remediation Patterns + +| Issue | Fix | Example | +|-------|-----|---------| +| Bare except | Catch specific exceptions | `except ValueError as e:` | +| SQL injection | Parameterized queries or ORM | `cursor.execute(sql, (param,))` | +| Hardcoded secrets | Environment variables | `os.getenv("DB_PASSWORD")` | +| No input validation | Validate at boundary | Pydantic models, JSON schema | +| God function (>50 lines) | Extract into smaller functions | One function per responsibility | +| No auth check | Add middleware/decorator | `@require_auth` decorator | +| Sensitive data in logs | Mask or omit | `card=****{last4}` | +| No rate limiting | Add rate limit middleware | `@rate_limit(max=5, 
per=60)` | +| Missing CSRF | Add CSRF tokens | Framework CSRF middleware | +| Password in plaintext | Hash with bcrypt | `bcrypt.hashpw(password, salt)` | diff --git a/.claude/skills/qa-testing/references/test-cases.md b/.claude/skills/qa-testing/references/test-cases.md new file mode 100644 index 0000000..391effc --- /dev/null +++ b/.claude/skills/qa-testing/references/test-cases.md @@ -0,0 +1,202 @@ +# Test Case Patterns + +## Table of Contents + +- Test Case Structure +- E-Commerce Module Coverage +- Priority Guidelines +- Boundary & Negative Test Patterns +- Good/Bad Examples +- Example Test Cases + +--- + +## Test Case Structure + +Every test case uses this format: + +| Field | Description | +|-------|-------------| +| **TC-ID** | Unique ID: `TC-<MODULE>-<NNN>` (e.g., TC-CART-001) | +| **Title** | Action + expected outcome | +| **Priority** | P0 (blocker), P1 (critical), P2 (major), P3 (minor) | +| **Type** | Functional, API, UI, Integration, Performance, Security | +| **Preconditions** | State required before execution | +| **Test Data** | Specific input values | +| **Steps** | Numbered actions | +| **Expected Result** | Observable outcome per step | +| **Postconditions** | State after execution | + +--- + +## E-Commerce Module Coverage + +### Cart +- Add/remove/update items +- Quantity limits, stock validation +- Price calculation with discounts +- Cart persistence (logged-in vs guest) +- Empty cart handling +- Cart merge (guest → logged-in) +- Concurrent modification (two tabs) + +### Checkout +- Address validation (required fields, format, international) +- Shipping method selection and cost calculation +- Payment processing (success, decline, timeout, 3DS) +- Order summary accuracy (items, tax, shipping, total) +- Coupon/promo code application and removal +- Back navigation preserves state + +### Payments +- Credit card validation (Luhn, expiry, CVV) +- Payment gateway integration (success, failure, timeout, network error) +- Refund processing (full, partial) +- Partial payments / split payments +- Currency handling and conversion +- Idempotency (duplicate submission prevention) +- PCI-DSS compliance (card data never stored raw) + +### User Authentication +- Registration (valid/invalid data, duplicate email, password strength) +- Login (valid credentials, invalid, locked account, rate limiting) +- Password reset flow (request, email, token expiry, new password) +- Session management (timeout, concurrent sessions, remember me) +- OAuth/social login (Google, Facebook, Apple) +- MFA/2FA flows + +### Product Catalog +- Search (keyword, filter, sort, no results) +- Product detail display (images, description, price, variants) +- Inventory status (in stock, low stock, out of stock) +- Category navigation and breadcrumbs +- Pagination and infinite scroll + +### Order Management +- Order creation and confirmation email +- Order status tracking (pending, processing, shipped, delivered) +- Order history with filters +- Cancellation (before/after shipping) +- Returns and refund processing +- Email/SMS notifications at each stage + +### Promotions +- Percentage/fixed discounts +- BOGO offers +- Minimum purchase requirements +- Promo code validation (expired, invalid, already used, case sensitivity) +- Stacking rules (combinable vs exclusive) +- Time-limited promotions + +--- + +## Priority Guidelines + +| Priority | Criteria | Example | +|----------|----------|---------| +| P0 | Complete feature failure, data loss, security breach, revenue loss | Payment processing crashes, user data exposed | +| P1 |
Major feature broken, no workaround, blocks user flow | Cannot add items to cart, login fails for all users | +| P2 | Feature works with workaround or affects minor flow | Filter reset doesn't work (manual URL edit works) | +| P3 | Cosmetic, minor UX issue, no functional impact | Button alignment off, placeholder text wrong | + +### Priority Decision Tree + +``` +Is it a security vulnerability or data loss? → P0 +Does it block revenue (payment, checkout)? → P0 +Does it block a core user flow with no workaround? → P1 +Does it affect a core flow but has a workaround? → P2 +Is it cosmetic or minor UX? → P3 +``` + +--- + +## Boundary & Negative Test Patterns + +### Numeric Inputs +- Zero, negative, minimum valid, maximum valid, above maximum +- Decimal precision (0.01, 0.001) +- Integer overflow values + +### String Inputs +- Empty string, whitespace only, single character +- Maximum length, above maximum length +- Special characters: HTML/script tags (e.g., `<script>`), `'; DROP TABLE--`, `../../../etc/passwd` +- Unicode: emoji, RTL text, accented characters + +### API Inputs +- Missing required fields +- Wrong data types (string where int expected) +- Exceeding length limits +- Malformed JSON / XML +- Invalid auth tokens (expired, malformed, wrong scope) +- Empty request body +- Extra/unexpected fields + +### Date/Time Inputs +- Past dates, future dates, current date +- Leap year (Feb 29), month boundaries (Jan 31 → Feb 1) +- Timezone edge cases + +--- + +## Good/Bad Examples + +### Good Test Case + +| Field | Value | +|-------|-------| +| TC-ID | TC-CART-001 | +| Title | Add single in-stock product to empty cart updates badge and total | +| Priority | P0 | +| Type | Functional | +| Preconditions | User logged in, cart empty, product "Widget-A" in stock (qty ≥ 1) | +| Test Data | Product: Widget-A, SKU: WA-001, Price: $29.99, Qty: 1 | +| Steps | 1. Navigate to /products/widget-a
2. Click "Add to Cart" button
3. Observe cart icon badge in header
4. Navigate to /cart | +| Expected | 1. Product page loads with price $29.99 and "Add to Cart" enabled
2. Success toast "Widget-A added to cart" appears within 2s
3. Cart badge updates from "0" to "1"
4. Cart page shows: Widget-A, qty 1, subtotal $29.99, total $29.99 | +| Postconditions | Cart contains 1 item, inventory not decremented until checkout | + +### Bad Test Case (Avoid) + +| Field | Value | +|-------|-------| +| TC-ID | TC-001 (missing module prefix) | +| Title | Test cart (vague, no expected outcome) | +| Priority | P2 (wrong: cart add is P0) | +| Type | (missing) | +| Preconditions | (missing) | +| Test Data | (missing) | +| Steps | 1. Add item to cart | +| Expected | Cart works | + +**Why it's bad**: No module in ID, vague title, wrong priority, missing fields, non-reproducible steps, unmeasurable expected result. + +--- + +## Example Test Cases by Module + +### API Test Case + +| Field | Value | +|-------|-------| +| TC-ID | TC-API-AUTH-001 | +| Title | POST /auth/login with valid credentials returns 200 and JWT token | +| Priority | P0 | +| Type | API | +| Preconditions | User account exists: test@example.com / ValidPass123! | +| Test Data | `{"email": "test@example.com", "password": "ValidPass123!"}` | +| Steps | 1. Send POST to /auth/login with test data
2. Validate response status
3. Validate response body schema | +| Expected | 1. Request accepted
2. Status 200 OK
3. Body contains `token` (JWT format), `expires_in` (integer), `user.email` matches input | + +### Security Test Case + +| Field | Value | +|-------|-------| +| TC-ID | TC-SEC-PAY-001 | +| Title | Payment endpoint rejects request without authentication | +| Priority | P0 | +| Type | Security | +| Preconditions | Valid order exists, no auth token in request | +| Test Data | Order ID: ORD-12345, no Authorization header | +| Steps | 1. Send POST to /payments with order data but no auth header
2. Validate response | +| Expected | 1. Request sent
2. Status 401 Unauthorized, no payment processed, no sensitive data in response body | diff --git a/.claude/skills/qa-testing/references/test-plans.md b/.claude/skills/qa-testing/references/test-plans.md new file mode 100644 index 0000000..d58c1f4 --- /dev/null +++ b/.claude/skills/qa-testing/references/test-plans.md @@ -0,0 +1,104 @@ +# Test Plan & Strategy Patterns + +## Test Plan Structure + +```markdown +# Test Plan: [Project/Feature Name] + +## 1. Overview +- **Objective**: What is being tested and why +- **Scope**: In-scope and out-of-scope items +- **References**: Requirements docs, user stories, designs + +## 2. Test Scope +### In Scope +- List of features/modules to test +### Out of Scope +- Explicitly excluded items with rationale + +## 3. Test Types +| Type | Coverage | Tools | +|------|----------|-------| +| Functional | Business logic, workflows | pytest | +| API | Endpoints, contracts | pytest + requests | +| UI | User flows, visual | Playwright/Selenium | +| Integration | Service interactions | pytest | +| Performance | Load, stress | locust/k6 | +| Security | OWASP Top 10 | manual + automated | + +## 4. Test Environment +- **URL**: staging/QA environment URL +- **Database**: Test DB details +- **Test Data**: How test data is managed +- **Dependencies**: External services, mocks + +## 5. Entry & Exit Criteria +### Entry +- Code deployed to test environment +- Unit tests passing (>80% coverage) +- Test data prepared +### Exit +- All P0/P1 tests passing +- No open P0/P1 defects +- Test report approved + +## 6. Schedule +| Phase | Duration | Activities | +|-------|----------|------------| +| Preparation | ... | Test case design, data setup | +| Execution | ... | Test runs | +| Reporting | ... | Results analysis, report | + +## 7. Risks & Mitigations +| Risk | Impact | Mitigation | +|------|--------|------------| +| Third-party API downtime | Blocks integration tests | Use mock servers | +| Test data corruption | Re-run failures | DB snapshots before runs | + +## 8. Deliverables +- Test cases document +- Test execution report +- Defect log +- Test summary report +``` + +## Test Strategy Structure + +```markdown +# Test Strategy: [Project Name] + +## 1. Testing Approach +- Risk-based testing (prioritize by business impact) +- Shift-left: unit/integration tests in CI pipeline +- Automation-first for regression, manual for exploratory + +## 2. Test Levels +| Level | Scope | Owner | Automation | +|-------|-------|-------|------------| +| Unit | Functions/methods | Dev | 100% | +| Integration | Service interactions | Dev/QA | 80%+ | +| System | End-to-end flows | QA | 60%+ | +| Acceptance | Business requirements | QA/PO | Key flows | + +## 3. Automation Strategy +- **Framework**: pytest +- **API testing**: requests + JSON schema validation +- **UI testing**: Playwright (preferred) or Selenium +- **CI integration**: Run on every PR +- **Reporting**: pytest-html or allure + +## 4. Defect Management +- **Tracking**: Jira / GitHub Issues +- **Severity**: S1 (blocker) → S4 (cosmetic) +- **SLA**: S1: 4h, S2: 24h, S3: sprint, S4: backlog + +## 5. Test Data Strategy +- Factories/fixtures for reproducible data +- Isolated test database per run +- No production data in tests + +## 6. Reporting +- Daily: execution progress +- Per-cycle: pass/fail metrics, defect trends +- Final: coverage summary, risk assessment +``` diff --git a/qa-testing.skill b/qa-testing.skill new file mode 100644 index 0000000..72a3390 Binary files /dev/null and b/qa-testing.skill differ