Commit 4ecdd3c

feat: add comprehensive test design techniques guide and Wopee.io integration (#124)
- Introduced a new blog post detailing essential test design techniques for web applications, covering boundary value analysis, equivalence partitioning, risk-based prioritization, and model-driven strategies.
- Added practical examples and implementation tips to enhance testing efficiency and coverage.
- Created a follow-up post demonstrating how to apply these techniques using Wopee.io, including ready-to-use prompts for generating tests.
- Included multiple illustrative images to support the content and improve user engagement.
1 parent b0045b2 commit 4ecdd3c

9 files changed: +379 additions, −0 deletions

Lines changed: 185 additions & 0 deletions
@@ -0,0 +1,185 @@
---
slug: test-design-techniques
title: "Essential test design techniques for web apps"
description: "Learn proven test design techniques for web applications—including boundary value analysis, equivalence partitioning, risk-based prioritization, and model-driven strategies—to maximize testing efficiency and coverage."
tags: [qa, test automation, test design techniques, user stories]
image: /img/blog/test-design-techniques.png
authors: marcel
---

Want broader coverage with fewer tests?

These **test design techniques** help you create the **smallest set of the most effective tests**.

This short guide covers **boundary value analysis (BVA)**, **equivalence partitioning (EP)**, **risk-based prioritization**, and **model/state-transition testing**.

You’ll learn **when to use each technique, how to design the cases, and the minimal set** that gives maximum signal.

<!--truncate-->

## Why test design techniques matter

Without structure, testing devolves into guesswork and redundancy. Design techniques give you a repeatable way to select high-value tests and avoid wasted effort.

**What good test design delivers**

- **Eliminates redundancy** using **equivalence partitioning (EP)**
- **Catches edge bugs early** with **boundary value analysis (BVA)**
- **Focuses where it hurts** via **risk-based prioritization**
- **Covers complex flows** using **model- and state-transition testing**
- **Improves maintenance** by organizing tests around **models and risks**, not pages

---

## Use boundary & equivalence to minimize cases

**When to use**: any input validation (forms, APIs, ranges, enums, formats).

**How to design**

1. Identify input domains (range, set, format).
2. Create **partitions** (valid + invalid).
3. Pick **one representative** per partition (EP).
4. Add **BVA** around each limit: just below, at, just above.

**Minimal set example (age: 18–65 inclusive)**

| Partition        | Representative |
| ---------------- | -------------- |
| valid (in-range) | 25             |
| valid (in-range) | 45             |
| valid (in-range) | 60             |
| invalid (< 18)   | 17             |
| invalid (> 65)   | 66             |

**BVA targets**: 17, **18**, **65**, 66

> Tip: one mid-range value per valid partition is usually enough. Add more only if the code handles subranges differently.
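
The four-step recipe can be sketched as a tiny case generator. A minimal illustration in Python — the `ep_bva_cases` helper and its output shape are ours, not part of any testing framework:

```python
def ep_bva_cases(low: int, high: int) -> dict:
    """Minimal EP + BVA test set for an inclusive numeric range [low, high]."""
    return {
        # EP: one representative per partition (valid, below-range, above-range)
        "valid_representative": (low + high) // 2,
        "invalid_representatives": [low - 1, high + 1],
        # BVA: just below, at, and just above each limit
        "boundaries": [low - 1, low, high, high + 1],
    }

cases = ep_bva_cases(18, 65)  # age example: boundaries 17, 18, 65, 66
```

Feed each value into the same validation check; every case documents which partition or boundary it covers, so redundant tests are easy to spot and delete.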

## Decision table (small, real example)

Discount applies when **loyalty = gold** **and** **cart ≥ 100**.

| Loyalty | Cart ≥ 100 | Expect           |
| ------- | ---------- | ---------------- |
| bronze  | false      | no discount      |
| bronze  | true       | no discount      |
| gold    | false      | no discount      |
| gold    | true       | discount applied |

> If rules × inputs explode, consider pairwise generation or MBT to keep the table tractable.
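
In code, the four-row table is just the truth table of one predicate. A sketch with a hypothetical `discount_applies` helper (not from any real codebase):

```python
from itertools import product

def discount_applies(loyalty: str, cart_total: float) -> bool:
    # Rule from the table: discount only when loyalty is gold AND cart >= 100
    return loyalty == "gold" and cart_total >= 100

# Enumerate the full decision table: 2 loyalty tiers x 2 cart conditions
for loyalty, cart in product(["bronze", "gold"], [99.99, 100.0]):
    outcome = "discount applied" if discount_applies(loyalty, cart) else "no discount"
    print(f"{loyalty:6} | cart={cart:6} | {outcome}")
```

Enumerating conditions with `itertools.product` guarantees no combination is silently skipped; when the table grows, swap the full product for a pairwise subset.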

---

## Prioritize risks, then explore to find surprises

**When to use**: planning sprints/releases; scoping a suite under time pressure.

**How to design**

1. Score features on **Impact** (business/user) × **Likelihood** (defect/complexity/change).
2. Test **high-impact/high-likelihood** first; **high-impact/low-likelihood** next.
3. Allocate **exploratory charters** to the highest-risk zones.

**Quick 2×2**

- **High/High** → automate critical paths + explore each release
- **High/Low** → smoke checks + targeted exploration
- **Low/High** → lightweight checks, observe in prod/telemetry
- **Low/Low** → sample or defer
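
The 2×2 can be expressed as a small triage helper. A sketch — the 1–5 scoring scale and the threshold of 3 are illustrative assumptions, not a standard:

```python
def triage(impact: int, likelihood: int, threshold: int = 3) -> str:
    """Map an Impact x Likelihood score (1-5 each) to the quadrant's strategy."""
    high_impact = impact >= threshold
    high_likelihood = likelihood >= threshold
    if high_impact and high_likelihood:
        return "automate critical paths + explore each release"
    if high_impact:
        return "smoke checks + targeted exploration"
    if high_likelihood:
        return "lightweight checks, observe in prod/telemetry"
    return "sample or defer"

triage(5, 4)  # e.g. a frequently changing payments flow -> High/High
```

Re-running this over your feature list each sprint gives an ordered backlog of what to automate, explore, or defer.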

**Exploratory session charter (template)**

- **Area**: Payments → promo codes
- **Start with**: expired code, special characters, rapid apply/remove
- **Risks**: rounding, concurrency, caching
- **Timebox**: 45 minutes
- **Capture**: notes, defects, screenshots, data used

**Error-guessing checklist (starter)**

- empty/huge inputs, whitespace, special characters
- copy/paste, different keyboard layouts, IME input
- rapid submit/double-click/back/refresh sequences
- timeouts, slow network, offline/restore
- concurrency: two tabs/users editing the same entity

<img
  src={require('./prioritize-risks.png').default}
  alt="Prioritizing risks"
  style={{ width: '50%', height: 'auto', display: 'block', margin: '1rem auto' }}
/>

_Prioritizing risks_

---

## Model workflows with state transitions & BDD

**When to use**: multi-step flows, role-based behavior, approvals, wizards.

**How to design**

1. List states and allowed transitions (a sketch or bullets is enough).
2. Cover **all valid transitions** at least once; include **invalid transitions** as negative tests.
3. Use **0-switch/1-switch coverage**: single transitions and adjacent transition pairs.

**Example transitions (account recovery)**

- Valid: `Start → RequestToken → EmailSent → ResetForm → Success`
- Invalid: `Start → ResetForm` (no token) → expect a clear error
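
A transition list like this is enough to check test paths mechanically. A minimal sketch — the state names come from the example above, the helper is ours:

```python
# Allowed transitions for the account-recovery flow
TRANSITIONS = {
    "Start": {"RequestToken"},
    "RequestToken": {"EmailSent"},
    "EmailSent": {"ResetForm"},
    "ResetForm": {"Success"},
}

def is_valid_path(path):
    """Check every consecutive pair of states against the model."""
    return all(nxt in TRANSITIONS.get(cur, set()) for cur, nxt in zip(path, path[1:]))

is_valid_path(["Start", "RequestToken", "EmailSent", "ResetForm", "Success"])  # valid
is_valid_path(["Start", "ResetForm"])  # invalid: no token -> a negative test
```

Iterating over `TRANSITIONS` directly gives you 0-switch coverage (every single transition); chaining adjacent pairs gives 1-switch coverage.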
131+
132+
**BDD scenario (executable specification)**
133+
134+
```gherkin
135+
Feature: Password reset
136+
137+
Scenario: Valid token enables password reset
138+
Given a user requested a reset token
139+
And the token is valid and not expired
140+
When the user sets a new password that meets complexity
141+
Then the user can sign in with the new password
142+
```
143+
144+
**User story → tests (traceability mini-map)**
145+
146+
- Extract **happy path**, **alternate paths**, **negative paths**
147+
- Link each scenario to **acceptance criteria**
148+
- Map tests → story IDs for coverage reporting

---

## Implementation tips

- **Start each feature** with EP + BVA for inputs—keep it tiny and repeatable.
- **Re-score risks each sprint**; prune or elevate tests based on change and incident data.
- **Schedule exploratory charters** for top-risk areas every release.
- **Automate from models**: generate cases from state charts/decision tables where feasible.
- **Track four signals**:
  - **defect yield** (by technique)
  - **redundancy removed** (tests deleted/merged)
  - **flake rate** (before/after design cleanup)
  - **requirements coverage** (stories ↔ tests)
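
The last signal is simple to compute from a story → tests map. A sketch with hypothetical story IDs and test names:

```python
def requirements_coverage(story_ids, tests_by_story):
    """Share of stories that have at least one linked test (stories <-> tests)."""
    covered = sum(1 for story in story_ids if tests_by_story.get(story))
    return covered / len(story_ids)

ratio = requirements_coverage(
    ["US-101", "US-102", "US-103"],
    {"US-101": ["happy_path_reset", "expired_token_rejected"],
     "US-102": ["invalid_postal_code_blocked"]},
)  # 2 of 3 stories covered
```

Reported per sprint, this ratio shows exactly which stories still lack executable checks.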

---

## Summary

- **Strategic efficiency**: choose tests that add unique value, not volume.
- **EP + BVA**: cover representative inputs and edges with minimal cases.
- **Risk + exploration**: spend time where failures matter and surprises hide.
- **Models & states**: make complex flows testable and maintainable.
- **Traceability**: turn user stories and acceptance criteria into executable checks.

:::tip transform your test design approach 🎯

Skip manual test case creation.
[Wopee.io](https://wopee.io) maps your app, applies user story-based test generation, explores high-risk flows, and generates coverage in minutes.

**Test better. Ship faster.**

---

Tip: read more about [how to apply test design techniques](test-design-techniques-w-wopee) when you generate tests with [Wopee.io](https://wopee.io).

:::
Lines changed: 194 additions & 0 deletions
@@ -0,0 +1,194 @@
---
slug: test-design-techniques-w-wopee
title: "Test design techniques with Wopee.io"
description: "Follow-up to our test design guide: ready-to-use prompts for BVA/EP, decision tables, and risk-based testing — using examples with the Swag Labs demo project inside Wopee.io."
tags: [qa, test automation, test design techniques, prompts]
image: /blog/wopee-commander.png
authors: marcel
---

The era of manually writing tests is so 2005. We can now generate tests automatically with LLMs.

Still, it is good to have a human drive the test design process... at least until the current tools are good enough on their own.

Want to turn the theory into executable tests?

Here are a few examples with the Swag Labs demo project inside Wopee.io Commander: **copy-paste prompts** tailored to the demo project so you can generate real scenarios.

<!--truncate-->

[Test design techniques](/blog/test-design-techniques) are important to achieve comprehensive test coverage and ensure effective testing.

## Example prompts you can use

Feel free to use the prompts below as a starting point and adapt them to your needs.

All the examples below are for the user story:

> _As a shopper, I want to enter my shipping information._

To use the prompts, go to **Analysis → 4. Test Cases**, pick the user story, and paste a prompt. See the [detailed steps](#how-to-use-wopeeio-commander-quick-steps) at the end of this post.

---

### A. BVA/EP for checkout info

Use boundary value analysis (BVA) and equivalence partitioning (EP) to create a **minimal set** of positive/negative tests around the required buyer fields before moving to the overview page.

```
Use the BVA/EP test design technique to create a minimal set of positive/negative tests around the required buyer fields before moving to the overview page.

Scope: First Name, Last Name, Postal Code

Goal: Generate the smallest test set that proves required-field gating works and handles edge inputs.

Design:
- EP: {filled vs empty} for each field; {valid vs clearly invalid} for postalCode.
- BVA around length: 0, 1, typical (5), long (100+) for each field.
- Combine minimally to avoid duplicates; show which partition/boundary each case covers.
- Each test should cover the end-to-end flow: login → cart → checkout → overview.

Generate:
- A positive path that advances to overview when all fields are valid.
- Negative cases that block progression with clear errors when one or more fields are empty/invalid.
- Use emojis to make the test design more fun.
```

Why this works here: the user story explicitly requires first/last name and postal code and blocks progression on missing data; advancing shows the **order overview**.

### Generated tests for the BVA/EP example

![Wopee.io Commander](./example-bva-ep.png)

_Wopee.io Commander with test cases generated for the user story, BVA/EP_

---

### B. Decision table for “continue” gating

Model the **rules for advancing** from `checkout info` → `overview` using a compact decision table (presence/absence of required fields).

```
Use the decision table test design technique to model the rules for advancing from checkout info → overview.

Rules:
- Continue is allowed only when firstName, lastName, and postalCode are all provided.
- Otherwise: progression is blocked; show a clear error/highlight on missing fields.

Please:
1) Produce a decision table with the three conditions (FName?, LName?, Zip?) → Outcome (Advance/Blocked + which field errors).
2) Generate the minimal set of unique test cases from that table (no duplicates).
3) Include steps and expected results for each case.
4) In the test description field, add a short "Mitigates risk:" note to each test (Impact, Likelihood).
5) Use emojis extensively to make the final tests fun to read.
```

This aligns with Swag Labs behavior: missing any of the three required inputs prevents progression and highlights errors; with all present, the flow proceeds to the **overview** and later to **confirmation**.

### Generated tests for the decision table example

![Wopee.io Commander](./example-decision-table.png)

_Wopee.io Commander with test cases generated for the user story, decision table_

---

### C. Risk-based testing: prioritize what matters first

This version bakes priority **into the test names** so your suite is immediately ordered. Use the prefixes:

- `prio-H` = high impact × high likelihood (test first)
- `prio-M` = medium
- `prio-L` = low

**Risk focus for the user story**

- **High**: required-field gating (firstName, lastName, postalCode), invalid postal code formats, and the ability to continue only when all fields are valid.
- **Medium**: whitespace handling (trim vs. all-spaces), max/min lengths, state persistence when navigating back from Overview.
- **Low**: special characters, accessibility focus on the first invalid field, non-Latin characters.

Copy-paste prompt (risk-based, with priority in names):

```
Use risk-based testing with explicit priority prefixes in test names.

Goal:
- Generate a prioritized suite for the user story.
- Prefix every test title with one of these emojis: High 🔴, Medium 🟠, Low 🟢
- One deterministic automated test per risk; include preconditions, steps, and expected results.

Risk model (Impact × Likelihood)

High (prio-H) — must-have paths and hard blockers:
- prio-H - cannot continue with empty firstName
- prio-H - cannot continue with empty lastName
- prio-H - cannot continue with empty postalCode
- prio-H - rejects invalid postalCode format (e.g., "ABC", "12-3")
- prio-H - can continue when all fields are valid and reaches Overview page

Medium (prio-M) — common edge behaviors:
- prio-M - trims leading/trailing spaces before validation (" John " → "John")
- prio-M - all-whitespace is treated as empty and blocks progression
- prio-M - field length boundaries: 0, 1, typical (10), long (100+) with expected outcomes
- prio-M - values persist when navigating back from Overview to Info and then forward again

Low (prio-L) — nice-to-have quality checks:
- prio-L - handles special characters without crashing (names with hyphen/accents)
- prio-L - focus moves to the first invalid field when Continue fails
- prio-L - accepts non-Latin letters if supported; otherwise shows a clear validation error

Generation requirements
- For negatives, verify error text and that the page does NOT advance.
- For the positive case, assert navigation to Overview and presence of order summary.
- Add a short "Mitigates risk:" note to each test (Impact, Likelihood) into the test description field.
```

Note: there are no validations or any other logic for these fields in the Swag Labs demo project; this is just an example.

### Generated tests for the risk-based testing example

![Wopee.io Commander](./example-risk-based.png)

_Wopee.io Commander with test cases generated for the user story, risk-based testing_

---

## How to use Wopee.io Commander (quick steps)

### 1. Go to **Analysis → 4. Test Cases** (Commander)

![Wopee.io Commander](./wopee-commander.png)

_Step 1: Analysis → 4. Test Cases_

### 2. **Add a user story** you want to cover

![Wopee.io Commander](./wopee-commander-add-user-story.png)

_Step 2: Add a user story_

### 3. Select the user story, then paste a prompt

![Wopee.io Commander](./wopee-commander-select-user-story.png)

_Step 3: Paste a prompt_

### 4. Click **`GENERATE`** to create the test cases

:::caution heads-up

If you pick an existing user story that already has scenarios, the generator will **rewrite** them.

:::

:::tip fast path

## Generate your tests automatically too

Wopee.io maps your app. Create tests. Automate instantly.

<br />

Paste a prompt and let our AI Agent map your app and generate the rest of the test cases. You will get automated tests right in [Wopee.io](https://wopee.io) Commander.

:::
