docs/analytics-ai-root-cause-analysis.md
Our CRM application has specific failure patterns to watch for:

**Possible Categories and Descriptions:**

| Category | Description |
|----------|-------------|
| **Database Issues** | Connection timeouts, query performance, data integrity problems |
| **API Integration** | Third-party service failures, rate limiting, authentication issues |
| **UI/UX Problems** | Element not found, timing issues, responsive design failures |
| **Performance Issues** | Slow page loads, memory leaks, resource exhaustion |
| **Environment Issues** | Test data problems, configuration mismatches, infrastructure failures |
| **Authentication/Authorization** | Login failures, permission errors, session timeouts |
| **File Processing** | Upload failures, format validation, processing timeouts |
| **Network Issues** | Connectivity problems, DNS failures, proxy issues |
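As a rough illustration of how failure text might map onto the categories above, a keyword heuristic could look like the following. This is a hypothetical sketch for illustration only; the keywords and the `categorize` function are not part of the product.

```python
# Hypothetical keyword heuristic mapping failure text to the categories above.
# The keyword lists are illustrative, not the product's actual classifier.
CATEGORY_KEYWORDS = {
    "Database Issues": ["connection timeout", "deadlock", "query"],
    "API Integration": ["rate limit", "401", "salesforce"],
    "UI/UX Problems": ["element not found", "not clickable"],
    "Network Issues": ["dns", "proxy", "connection refused"],
}

def categorize(message: str) -> str:
    """Return the first category whose keywords appear in the failure message."""
    text = message.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "Uncategorized"

print(categorize("Element not found: #save-button"))  # UI/UX Problems
```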
### Step 4: Configure Intelligent Targeting
Configure intelligent targeting rules to precisely control which tests, builds, and environments are included in the analysis:

2. **Click Include (+) or Exclude (-)**: Choose whether to include or exclude matching tests
3. **Configure Multiple Criteria**: Set targeting rules for:
   - **Test Names**: Target specific test suites or test patterns
   - **Build Names**: Include or exclude builds with specific names (e.g., hourly, nightly)
   - **Test Tags**: Include or exclude tests with specific tags (e.g., playwright_test, atxHyperexecute_test)
   - **Build Tags**: Include or exclude builds with specific tags (e.g., hourly, nightly)
   - **Job Labels**: Include tests with specific job labels or tags
#### Rule Logic and Application

The intelligent targeting system applies rules using the following logic:

**Rule Evaluation Process:**

1. **Include Rules (AND Logic)**: All Include rules within the same category must match for a test to be considered
2. **Exclude Rules (OR Logic)**: Any Exclude rule that matches will immediately exclude the test from analysis
3. **Cross-Category Logic**: Include rules across different categories (Test Names, Build Tags, etc.) must ALL match
4. **Exclusion Precedence**: Exclude rules take priority over Include rules: if any Exclude rule matches, the test is excluded regardless of Include matches
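The four evaluation steps above can be sketched as a small matcher. This is an illustrative sketch only, not the product's actual implementation; the rule structure and field names (`test_name`, `build_tag`) are assumptions made for the example.

```python
import re

def is_selected(test, rules):
    """Apply Include (AND) and Exclude (OR) rules to one test record.

    `test` maps a category (e.g. "test_name", "build_tag") to its value;
    `rules` maps a category to {"include": [...], "exclude": [...]} regexes.
    """
    # Exclusion precedence: any matching Exclude rule rejects the test outright.
    for category, rule in rules.items():
        value = test.get(category, "")
        if any(re.search(p, value) for p in rule.get("exclude", [])):
            return False
    # Include rules: every Include rule in every category must match (AND logic).
    for category, rule in rules.items():
        value = test.get(category, "")
        if not all(re.search(p, value) for p in rule.get("include", [])):
            return False
    return True

# Hypothetical rule set in the spirit of the example configuration below.
rules = {
    "test_name": {"include": [r".*prod.*"], "exclude": [r".*non-critical.*"]},
    "build_tag": {"include": [r"^hourly"]},
}
print(is_selected({"test_name": "prod_checkout", "build_tag": "hourly-run-3"}, rules))   # True
print(is_selected({"test_name": "prod_non-critical_banner", "build_tag": "hourly-run-3"}, rules))  # False
```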
**Best Practices for Rule Configuration:**

- **Start Broad**: Begin with general include rules, then add specific exclusions
- **Use Specific Patterns**: Avoid overly broad regex patterns that might include unintended tests
- **Test Your Rules**: Verify rule behavior with sample test names and tags before applying
- **Regular Review**: Periodically review and update rules based on changing test patterns

#### Example Configuration for Production Test Analysis
:::tip

**Test Name:**

- **Include**: `.*prod.*` - Only analyze tests with names containing "prod"
- **Exclude**: `.*non-critical.*` - Skip tests with names containing "non-critical"

**Build Tags:**

- **Include**: `^hourly` - Only analyze builds with a tag starting with "hourly"

**Test Tags:**

- **Include**: `playwright_test|atxHyperexecute_test` - Focus on specific test frameworks
- **Exclude**: `.*smoke.*` - Skip smoke tests

**Result**: AI-powered analysis will run only on production tests (excluding non-critical ones) from hourly builds, focusing on Playwright or HyperExecute test tags, while excluding smoke tests. This configuration helps narrow the analysis to the most critical test scenarios.

:::
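Following the "Test Your Rules" practice, the example patterns can be sanity-checked against representative names and tags before saving. The sample names below are made up for illustration:

```python
import re

# Each pattern from the example, paired with one name it should match
# and one it should not (sample names are hypothetical).
samples = {
    r".*prod.*": ("prod_login_test", "staging_login_test"),
    r"^hourly": ("hourly-eu-1", "nightly-eu-1"),
    r"playwright_test|atxHyperexecute_test": ("playwright_test", "cypress_test"),
    r".*smoke.*": ("checkout_smoke", "checkout_full"),
}

for pattern, (should_match, should_not) in samples.items():
    assert re.search(pattern, should_match), f"{pattern!r} failed to match"
    assert not re.search(pattern, should_not), f"{pattern!r} matched unexpectedly"
print("all example patterns behave as expected")
```

Note that `^hourly` is anchored, so it matches only tags that *start* with "hourly", while the unanchored `.*smoke.*` matches "smoke" anywhere in the tag.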