2 changes: 1 addition & 1 deletion tools/scopy_dev_plugin/.claude-plugin/plugin.json
@@ -1,6 +1,6 @@
{
"name": "scopy_dev_plugin",
"version": "1.0.0",
"version": "1.1.0",
"description": "Scopy development tools: code generation, documentation, testing, quality checks, and styling for IIO plugin development.",
"author": { "name": "Muthi Ionut Adrian" }
}
4 changes: 4 additions & 0 deletions tools/scopy_dev_plugin/README.md
@@ -40,6 +40,8 @@ Both must be on your `PATH`. On Ubuntu: `sudo apt install clang-format` and `pip install clang-format`
| `/scopy_dev_plugin:verify-package <package>` | CI pre-flight validation (format + license) |
| `/scopy_dev_plugin:validate-api <plugin>` | Validate API class implementation (checks A1–A7) |
| `/scopy_dev_plugin:validate-automated-tests <plugin>` | Validate JS automated test scripts (checks T1–T7) |
| `/scopy_dev_plugin:create-unit-tests <plugin>` | Generate JS unit test scripts for IIOWidget coverage |
| `/scopy_dev_plugin:validate-unit-tests <plugin>` | Validate JS unit test scripts (checks U1–U7) |

## Knowledge Skills (auto-load)

@@ -51,6 +53,8 @@ These skills are loaded automatically when relevant context is detected:
- **scopy-doc-format** — RST documentation conventions
- **scopy-test-format** — Test case UID and RBP conventions
- **scopy-api-patterns** — API class structure and Q_INVOKABLE patterns
- **unit-test-quality-checks** — Unit test validation rules (U1–U7) for IIOWidget coverage tests
- **unit-test-patterns** — Code patterns for unit test helpers and complex test scenarios

## Hooks

179 changes: 179 additions & 0 deletions tools/scopy_dev_plugin/commands/create-unit-tests.md
@@ -0,0 +1,179 @@
# /create-unit-tests — Create JS unit test scripts for IIOWidget coverage

You are creating JavaScript unit test scripts for a Scopy plugin that test every IIOWidget attribute via `readWidget`/`writeWidget` and API getter/setter methods.

**Plugin:** `$ARGUMENTS`

## Step 0: Load context

Use the Read tool to check if a port state file exists:
- Path: `tasks/$ARGUMENTS-port-state.md`
- If the file does not exist, note "No state file — will discover from source files directly." and continue.

## Prerequisites check

Before writing tests, verify this input exists:
1. **Plugin API class** — use the Glob tool to search for `*_api.h` in `scopy/packages/$ARGUMENTS/`

If the API class does not exist, stop and tell the user to run `/create-api $ARGUMENTS` first.

## Step 1: Discovery

Read these specific files to discover all testable widgets and API methods:

1. **Plugin API header** (all `Q_INVOKABLE` methods):
- Use Glob: `scopy/packages/$ARGUMENTS/plugins/*/include/**/*_api.h`
- Extract every getter/setter pair, standalone getters, and utility methods (`calibrate()`, `refresh()`, `loadProfile()`, etc.)

2. **Widget factory source** (all IIOWidgetBuilder calls):
- Use Glob: `scopy/packages/$ARGUMENTS/plugins/*/src/**/*.cpp`
- Identify files that use `IIOWidgetBuilder`
- Extract: widget keys, UI strategy type (EditableUi, ComboUi, CheckBoxUi, RangeUi), range bounds, combo options, conversion functions

3. **EMU XML** (device structure and attribute defaults):
- Use Glob: `scopy/packages/$ARGUMENTS/emu-xml/*.xml`
- Extract: device name (for PHY prefix), channel structure, attribute names, default values, `_available` options

4. **Tool source files** (for tool names and advanced tool detection):
- Search for `switchToTool()`, `getAdvancedTabs()`, `switchAdvancedTab()` in the API header
- Check for `advanced/` subdirectory in the plugin source

5. **Existing test files** (avoid duplication):
- Use Glob: `scopy/js/testAutomations/$ARGUMENTS/`

6. **Test framework API**:
- `scopy/js/testAutomations/common/testFramework.js`

## Step 2: Classification — WAIT FOR APPROVAL

After reading all source material, present a structured classification report:

### Widget Key Prefix
```
var PHY = "<device-name>/"; // from EMU XML
```

### Basic Tool Widgets

| Section | Widget Key | Type | Min | Max | Mid | Options | Test Helper | UID |
|---------|-----------|------|-----|-----|-----|---------|-------------|-----|
| Global | ensm_mode | combo | - | - | - | ["radio_on", "radio_off"] | testCombo + API | UNIT.GLOBAL.ENSM_MODE |
| RX | voltage0_in/hardwaregain | range | 0 | 30 | 15 | - | testRange + testConversion | UNIT.RX.CH0_HARDWARE_GAIN |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |

### Advanced Tool Widgets (if applicable)

Group by tab name:

| Tab | Widget Key | Type | Min | Max | Mid | Options | Test Helper | UID |
|-----|-----------|------|-----|-----|-----|---------|-------------|-----|
| CLK Settings | adi,clocks-device-clock_khz | range | 30720 | 320000 | 122880 | - | testRange | UNIT.CLK.DEVICE_CLOCK_KHZ |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |

### API-Only Methods (no widget, tested via getter/setter)

| Method | Test Type | UID |
|--------|-----------|-----|
| getRxRssi(channel) (readonly) | testReadOnly via API | UNIT.RX.CH0_RSSI |
| calibrate() | complex test | UNIT.CAL.CALIBRATE_TRIGGER |
| ... | ... | ... |

### Widget Counts
- Basic tool: X widgets
- Advanced tool: Y widgets across Z tabs
- API-only: W methods
- Total attribute test cases: N

### Proposed Complex Tests

Scan the API header for these method signature triggers and list matching complex tests:

| # | Pattern | API Trigger | UID |
|---|---------|-------------|-----|
| C1 | Calibration Flow | `calibrate()` found | UNIT.CAL.FULL_CALIBRATION_FLOW |
| C2 | Profile Loading | `loadProfile()` found | UNIT.PROFILE.LOAD_AND_VERIFY |
| C3 | Gain Mode Interaction | `getXxxGainControlMode()` + `setXxxHardwareGain()` found | UNIT.RX.GAIN_MODE_INTERACTION |
| C4 | State Transitions | `getEnsmMode()`/`setEnsmMode()` found | UNIT.GLOBAL.ENSM_STATE_TRANSITIONS |
| C5 | DPD Operations | `dpdReset()` + `getDpdStatus()` found | UNIT.DPD.RESET_AND_STATUS_CH0 |
| C6 | Channel Independence | Setter with `int channel`, 2+ channels | UNIT.TX.CHANNEL_INDEPENDENCE |
| C7 | Phase Rotation | `getPhaseRotation()`/`setPhaseRotation()` found | UNIT.FPGA.PHASE_ROTATION_CH0 |
| C8 | Frequency Tuning | Hz-to-MHz conversion in getter/setter | UNIT.RX.LO_FREQUENCY |
| C9 | UDC LO Splitting | `hasUdc()`/`getUdcEnabled()` found | UNIT.UDC.LO_SPLITTING |
| C10 | Refresh Cycle | `refresh()` found | UNIT.UTIL.REFRESH_ALL |

Only list patterns where the API trigger was actually found.
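One matched pattern could be sketched for the user alongside the table. The following is a hedged illustration of the C4 shape only, with an in-memory stub standing in for the real API object; the method names are taken from the trigger column above, and the stub is not Scopy's implementation.

```javascript
// Stub API object; the real one is exposed by the plugin's *_api.h class.
var api = (function () {
  var mode = "radio_off";
  return {
    getEnsmMode: function () { return mode; },          // getters return strings
    setEnsmMode: function (m) { mode = String(m); }
  };
})();
function msleep(ms) { /* stub: Scopy's msleep sleeps for ms milliseconds */ }

// C4: walk through ENSM states and restore the original mode on every path.
function ensmTransitionTest() {
  var original = api.getEnsmMode();
  try {
    api.setEnsmMode("radio_on");
    msleep(500);
    if (api.getEnsmMode() !== "radio_on") return "FAIL";
    api.setEnsmMode("radio_off");
    msleep(500);
    return api.getEnsmMode() === "radio_off" ? "PASS" : "FAIL";
  } finally {
    api.setEnsmMode(original);                          // restore even on early return
    msleep(500);
  }
}
```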

### Proposed File Structure

Apply adaptive splitting:
- Always: `<plugin>_Basic_Unit_test.js` (or `<plugin>_Unit_test.js` if single file)
- If advanced tool detected: `+ <plugin>_Advanced_Unit_test.js`
- If complex tests approved: `+ <plugin>_Complex_Unit_test.js`
- If multiple files: `+ <plugin>_Unit_test.js` (combined runner)

### Variant Detection

If the plugin supports device variants (e.g., AD9371 vs AD9375), describe:
- How to detect the variant at runtime
- Which widgets/tests need skip guards

**Wait for user approval before writing any JavaScript.**

## Step 3: Interactive Complex Test Discovery

After presenting the plan, ask the user:
1. "Which complex tests should I include?" (present the matched list)
2. "Are there any plugin-specific complex scenarios not in the standard patterns?"
3. "Are there variant-specific features that need skip guards?"

## Step 4: Generate Files

Generate each file following the `unit-test-patterns` skill. Use the `file-structure.md` pattern for boilerplate.

**File locations:**
- `scopy/js/testAutomations/$ARGUMENTS/<plugin>_Basic_Unit_test.js`
- `scopy/js/testAutomations/$ARGUMENTS/<plugin>_Advanced_Unit_test.js` (if applicable)
- `scopy/js/testAutomations/$ARGUMENTS/<plugin>_Complex_Unit_test.js` (if applicable)
- `scopy/js/testAutomations/$ARGUMENTS/<plugin>_Unit_test.js` (combined runner or single file)

**Critical generation rules (non-negotiable):**

1. Every `writeWidget()` and setter call is followed by `msleep(500)`
2. Every test saves original value before modifying, and restores it in ALL code paths (normal, early return, catch)
3. Standard widget types use canonical helper functions (`testRange`, `testCombo`, `testCheckbox`, `testReadOnly`, `testConversion`)
4. Sections with 3+ widgets of the same type use `runDataDrivenTests()` with test descriptor arrays
5. Every range widget gets both `testRange()` and `testBadValueRange()` tests
6. Every combo widget gets both `testCombo()` and `testBadValueCombo()` tests
7. API getter/setter pairs with unit conversion get `testConversion()` tests
8. Files end with `TestFramework.disconnectFromDevice()`, `TestFramework.printSummary()`, `scopy.exit()`
9. UID format: `UNIT.<SECTION>.<ATTRIBUTE_NAME>` (uppercase, dots as separators)
10. Never invent API methods — only use what's in the `*_api.h` header
11. Never invent widget keys — only use keys discoverable from source code or EMU XML
12. Variant-specific tests wrapped in skip guards (e.g., `if (!isAd9375) return "SKIP"`)
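Rules 1, 2, 4, and 9 combine into a recognizable shape. The sketch below shows it with in-memory stubs in place of Scopy's `readWidget`/`writeWidget`/`msleep`; the descriptor fields and the `testComboSketch` name are illustrative assumptions, not the framework's API.

```javascript
// Stubs standing in for Scopy's IIO widget calls.
var store = { "phy/voltage0_in/hardwaregain": "15" };
function msleep(ms) { /* stub: real msleep pauses for ms milliseconds */ }
function readWidget(key) { return store[key]; }
function writeWidget(key, value) {
  store[key] = String(value);
  msleep(500);                            // rule 1: sleep after every write
}

// Rule 4: descriptor array in the shape runDataDrivenTests() would consume.
var rangeTests = [
  { uid: "UNIT.RX.CH0_HARDWARE_GAIN",     // rule 9: UNIT.<SECTION>.<ATTRIBUTE>
    key: "phy/voltage0_in/hardwaregain",
    min: "0", max: "30", mid: "15" }
];

var results = rangeTests.map(function (t) {
  var original = readWidget(t.key);       // rule 2: save before modifying
  var pass = [t.min, t.mid, t.max].every(function (v) {
    writeWidget(t.key, v);
    return readWidget(t.key) === v;       // getters return strings
  });
  writeWidget(t.key, original);           // rule 2: restore original value
  return { uid: t.uid, result: pass ? "PASS" : "FAIL" };
});
```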

## Step 5: Validate

Run the `unit-test-quality-checks` skill rules (U1–U7) against the generated files:
- [ ] [U1] Every discoverable widget has a test
- [ ] [U2] Standard helpers used (no ad-hoc logic for standard types)
- [ ] [U3] Every setter has `msleep(500)` after it
- [ ] [U4] Original values saved and restored in all code paths
- [ ] [U5] Bad value tests present for range and combo widgets
- [ ] [U6] Complex tests isolated in marked section
- [ ] [U7] File naming, license header, termination sequence correct

## Step 6: Update state file (if it exists)

```markdown
## Status
- Phase: UNIT_TESTS_COMPLETE
```

## Rules

- Do NOT modify any C++ source code
- Do NOT invent API methods that don't exist in the `*_api.h` header
- Do NOT invent widget keys — discover them from source code and EMU XML
- Getter return values are always strings — compare with `===` against string values
- Every test must restore original hardware state
- Use `"SKIP"` return for features not available on current hardware variant
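The strings-only rule is the one most easily violated, so a minimal illustration may help; the getter below is a hypothetical stub, not a real API method from any `*_api.h` header.

```javascript
// Hypothetical getter stub: real getters come from the plugin's *_api.h class
// and always return their value as a string.
function getRxHardwareGain() { return "15"; }

var gain = getRxHardwareGain();
var correct = (gain === "15");   // compare against a string literal
var broken  = (gain === 15);     // always false: "15" !== 15 under strict equality
```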
148 changes: 148 additions & 0 deletions tools/scopy_dev_plugin/commands/validate-unit-tests.md
@@ -0,0 +1,148 @@
# /validate-unit-tests — Validate JS unit test scripts for a Scopy plugin

You are validating the JavaScript unit test scripts for the Scopy plugin: `$ARGUMENTS`

The `unit-test-quality-checks` skill rules (checks U1–U7) govern this analysis.

## Step 1: Discover files

Use the Glob tool to locate:
- Unit test files: `js/testAutomations/$ARGUMENTS/*_Unit_test.js`, `js/testAutomations/$ARGUMENTS/*_Basic_Unit_test.js`, `js/testAutomations/$ARGUMENTS/*_Advanced_Unit_test.js`, `js/testAutomations/$ARGUMENTS/*_Complex_Unit_test.js`
- Plugin API header: `scopy/packages/$ARGUMENTS/plugins/*/include/**/*_api.h`
- Widget factory sources: `scopy/packages/$ARGUMENTS/plugins/*/src/**/*.cpp` (files using `IIOWidgetBuilder`)
- EMU XML: `scopy/packages/$ARGUMENTS/emu-xml/*.xml`
- Test framework: `js/testAutomations/common/testFramework.js`

If no JS unit test files are found, report "No unit test files found for `$ARGUMENTS`" and stop.

Read **all** discovered files before starting analysis.

## Step 2: Build expected widget set

From the source files, build the complete set of testable widgets and API methods:

1. **From widget factory sources**: Extract all IIOWidgetBuilder calls to get widget keys and their types (EditableUi → range, ComboUi → combo, CheckBoxUi → checkbox, read-only patterns)
2. **From API header**: Extract all `Q_INVOKABLE` getter/setter pairs and standalone getters
3. **From EMU XML**: Extract device name prefix, channel structure, attribute names

This is the "expected" set — every item should have at least one test.

## Step 3: Build actual test set

From the JS unit test files, extract:

1. **Widget keys tested**: All keys passed to `readWidget()`, `writeWidget()`, `testRange()`, `testCombo()`, `testCheckbox()`, `testReadOnly()`, `testConversion()`, `testBadValueRange()`, `testBadValueCombo()`, and `runDataDrivenTests()` calls
2. **API methods exercised**: All `<apiObject>.<method>()` calls
3. **Test UIDs**: All `TestFramework.runTest("<UID>", ...)` UIDs

## Step 4: Run checks U1–U7

### CRITICAL

**[U1] Widget Coverage**
- Compare expected widget set against actual test set
- Flag any widget key with no corresponding test
- Flag any `Q_INVOKABLE` getter/setter pair with no test exercising it
- Flag any test referencing a widget key not in the expected set
- Report coverage: `X/Y widgets tested (Z%)`

**[U2] Helper Function Usage**
- Scan each `TestFramework.runTest()` body
- For non-complex tests: if the body manually writes/reads/compares widget values when a standard helper exists for that widget type, flag it
- Verify `runDataDrivenTests()` is used for sections with 3+ widgets of the same type
- Complex tests (in sections marked `// SECTION: Complex`) are exempt

**[U3] Sleep After Setters**
- Identify every `writeWidget()`, `set*()`, or state-mutating call (`calibrate()`, `dpdReset()`, `refresh()`, `loadProfile()`)
- Check that the very next non-empty statement is an `msleep()` call of at least 500 ms
- Flag any setter not immediately followed by such a sleep
- Check both in helper definitions and in individual test bodies
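A mechanical version of this scan could look like the sketch below. It is illustrative only, assuming simple one-call-per-line test code: the regexes and the `findMissingSleeps` name are assumptions, not the actual validation procedure.

```javascript
// Flag lines whose setter call is not followed (on the next non-empty line)
// by an msleep() of at least 500 ms. Returns 1-based line numbers.
function findMissingSleeps(source) {
  var lines = source.split("\n");
  var setter = /\b(writeWidget|set[A-Z]\w*|calibrate|dpdReset|refresh|loadProfile)\s*\(/;
  var sleep = /\bmsleep\s*\(\s*(\d+)\s*\)/;
  var flagged = [];
  for (var i = 0; i < lines.length; i++) {
    if (!setter.test(lines[i])) continue;
    var j = i + 1;
    while (j < lines.length && lines[j].trim() === "") j++;   // skip blank lines
    var m = j < lines.length ? lines[j].match(sleep) : null;
    if (!m || parseInt(m[1], 10) < 500) flagged.push(i + 1);
  }
  return flagged;
}
```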

**[U4] State Restoration**
- Identify every test that modifies state (calls a setter or writeWidget)
- Check that it saves the original value before modification
- Check that it restores the original value in: normal completion, early return, and catch block
- Flag any test that modifies state without full restoration
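What U4 expects of each exit path can be sketched with stubs in place of Scopy's widget calls; the `testWithRestore` helper and the `supported` flag are illustrative assumptions.

```javascript
// Stubs standing in for Scopy's IIO widget calls.
var store = { "phy/voltage0_in/hardwaregain": "15" };
function msleep(ms) { /* stub */ }
function readWidget(key) { return store[key]; }
function writeWidget(key, value) { store[key] = String(value); }

function testWithRestore(key, newValue, supported) {
  var original = readWidget(key);         // saved before any modification
  if (!supported) return "SKIP";          // early-return path: nothing modified yet
  try {
    writeWidget(key, newValue);
    msleep(500);
    var result = readWidget(key) === newValue ? "PASS" : "FAIL";
    writeWidget(key, original);           // normal-completion restore
    msleep(500);
    return result;
  } catch (e) {
    writeWidget(key, original);           // catch-path restore
    msleep(500);
    return "FAIL";
  }
}
```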

### WARNING

**[U5] Bad Value Tests**
- Count range widgets with `testRange()` calls vs those with `testBadValueRange()` calls
- Count combo widgets with `testCombo()` calls vs those with `testBadValueCombo()` calls
- Report coverage ratio for each
- Warn if bad value test coverage drops below 80%

**[U6] Complex Test Isolation**
- Check that complex multi-step tests are in a clearly marked section
- Check each complex test has a descriptive comment (e.g., `// C1: Full Calibration Flow`)
- Check complex test UIDs follow `UNIT.<SECTION>.<DESCRIPTIVE_NAME>` format
- Flag any undocumented complex logic mixed into attribute test sections

### INFO

**[U7] File Structure**
- Verify file naming convention (`*_Basic_Unit_test.js`, `*_Advanced_Unit_test.js`, etc.)
- Verify GPL license header present in every file
- Verify termination sequence: `disconnectFromDevice()` → `printSummary()` → `scopy.exit()`
- Verify combined runner uses `evaluateFile()` and does not duplicate helpers or tests
- Report file count and structure

## Step 5: Generate report

```
## Unit Test Validation Report: $ARGUMENTS

### Summary
| Check | Severity | Result |
|-------|----------|--------|
| [U1] Widget Coverage | CRITICAL | PASS/FAIL |
| [U2] Helper Function Usage | CRITICAL | PASS/FAIL |
| [U3] Sleep After Setters | CRITICAL | PASS/FAIL |
| [U4] State Restoration | CRITICAL | PASS/FAIL |
| [U5] Bad Value Tests | WARNING | PASS/WARN |
| [U6] Complex Test Isolation | WARNING | PASS/WARN |
| [U7] File Structure | INFO | PASS/INFO |

### Critical Issues
**[U1] Missing widget coverage**
`voltage0_in/rf_bandwidth` — no test found in any unit test file.
> **Fix:** Add a testReadOnly() call for this widget.

**[U3] Missing sleep after setter**
`ad9371_Basic_Unit_test.js:142` — `writeWidget()` not followed by `msleep(500)`.
> **Fix:** Add `msleep(500);` on the next line.

### Warnings
**[U5] Bad value test coverage**
- Range widgets: 12/15 have testBadValueRange() (80%)
- Combo widgets: 3/5 have testBadValueCombo() (60%) — below 80% threshold

### Info
**[U7] File structure**
- Files found: <plugin>_Basic_Unit_test.js, <plugin>_Advanced_Unit_test.js, <plugin>_Unit_test.js
- License headers: OK
- Termination sequence: OK

### Widget Coverage Detail
| Widget Key | Type | Has Test | Has Bad Value Test |
|-----------|------|----------|-------------------|
| voltage0_in/hardwaregain | range | YES | YES |
| voltage0_in/rf_bandwidth | readonly | NO | N/A |
| ensm_mode | combo | YES | YES |
| ... | ... | ... | ... |
Coverage: X/Y widgets tested (Z%)

### API Method Coverage
| Method | Has Test |
|--------|----------|
| getRxHardwareGain(channel) | YES |
| setRxHardwareGain(channel, val) | YES |
| getRxRssi(channel) | NO |
| ... | ... |
Coverage: X/Y methods tested (Z%)

### Verdict
[PASS/FAIL] — [one sentence summary]
```

PASS = zero critical issues. FAIL = one or more critical issues.