docs/ai-test-automation/test-authoring/creating-tests/tasks.md (1 addition & 1 deletion)
@@ -58,7 +58,7 @@ It's common to have parameters within tasks, as users often need to run tests wi
3. **Test**: A test-level override supersedes the environment and task-level values. You can also define a parameter override for a specific combination of environment and test.
4. **Test Suites**: This is the highest level in the hierarchy. Parameter overrides set at the test suite level apply during test suite execution and take precedence over all other levels.
- Here is a **short video** explaining how to set overrides for a Task parameter.
+ Here is a **short video** explaining how to set overrides for a Task parameter.
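The override hierarchy described above can be sketched in code. This is an illustrative sketch only, not the Harness implementation: the level names follow the documented hierarchy, while the function and variable names are hypothetical.

```python
# Illustrative sketch (not the Harness implementation) of parameter
# override resolution: higher-precedence levels overwrite lower ones.
# Level order below is an assumption based on the documented hierarchy.

PRECEDENCE = ["task", "environment", "test", "test_suite"]  # low -> high

def resolve_parameter(name, overrides):
    """Return the value from the highest-precedence level that defines it.

    `overrides` maps a level name to a dict of parameter values.
    """
    value = None
    for level in PRECEDENCE:
        level_values = overrides.get(level, {})
        if name in level_values:
            value = level_values[name]  # later (higher) levels win
    return value

# A test-level override beats the task default, and a test-suite
# override beats both.
overrides = {
    "task": {"username": "task-default"},
    "test": {"username": "test-override"},
    "test_suite": {"username": "suite-override"},
}
print(resolve_parameter("username", overrides))  # suite-override
```

The key point the sketch illustrates: a value is only a default until a higher level in the hierarchy defines the same parameter.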
docs/ai-test-automation/test-authoring/test-parameterization/test-parameterization.md (1 addition & 1 deletion)
@@ -77,7 +77,7 @@ However, this flexibility isn't available when executing multiple tests or test
3. **Test-Level Overrides**: These take precedence over task and environment-level values. You can also define overrides specific to a combination of a test and an environment.
4. **Test Suite-Level Overrides**: At the top of the hierarchy, overrides set at the test suite level are applied during test suite execution and override values from all other levels.
- **How do I set these overrides ?**
+ **How do I set these overrides?**
Here is a short video explaining how to set up the parameter overrides.
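The combination-specific overrides mentioned above (a value tied to one test plus one environment) can be modeled as a most-specific-key lookup. This is a hedged sketch with hypothetical names and data, not the Harness API:

```python
# Illustrative sketch (hypothetical names, not the Harness API):
# an override scoped to a (test, environment) pair wins over overrides
# scoped to just the test, just the environment, or neither.

def lookup_override(param, test, env, overrides):
    """Check the most specific scope first, then fall back."""
    for key in ((test, env), (test, None), (None, env), (None, None)):
        if key in overrides and param in overrides[key]:
            return overrides[key][param]
    return None

# Hypothetical example data.
overrides = {
    (None, None): {"base_url": "https://example.test"},  # default
    (None, "staging"): {"base_url": "https://staging.example.test"},
    ("login-test", "staging"): {"base_url": "https://staging.example.test/login"},
}
print(lookup_override("base_url", "login-test", "staging", overrides))
# https://staging.example.test/login
```

Any test other than `login-test` running in `staging` would fall back to the environment-level value, and any other environment falls back to the default.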
docs/ai-test-automation/test-execution/running-tests.md (8 additions & 9 deletions)
@@ -15,21 +15,21 @@ If you have created a new test, it will be marked as a Draft after creation. You
Clicking on VALIDATE TEST will bring up the test run modal, where you can specify:
- `Environment` against which the validation run should be performed. Harness AI Test Automation will pick your Pre-Release environment by default but you can pick any environment from the pick list.
- `Execution Location`. You can run tests from our default cloud location if your test environment is accessible on the Internet. Harness AIT also supports a Private Tunnels for test environments that are behind a firewall.
- `Test Parameters` default to values entered by the user, unless those inputs are redacted. You can update the defualt values based on the data available in your test environment. The updated values will be saved for the future run.
+ - *Environment*: the environment against which the validation run should be performed. Harness AI Test Automation picks your Pre-Release environment by default, but you can choose any environment from the pick list.
+ - *Execution Location*: you can run tests from our default cloud location if your test environment is accessible on the Internet. Harness AIT also supports Private Tunnels for test environments behind a firewall.
+ - *Test Parameters*: these default to the values entered by the user, unless those inputs are redacted. You can update the default values based on the data available in your test environment; the updated values are saved for future runs.
- :::
- ### Run a Test
- Once a Test has been validated, the option to "Run Test" will automatically appear. Tests with this option have already gone through the Validation process.
- :::
+ #### Run a Test
+ Once a Test has been validated, the option to "Run Test" will automatically appear. Tests with this option have already gone through the Validation process.
### Environment Parameters
@@ -40,5 +40,4 @@ Parameters can be defined at the Application Environment Level. For example, the
Below "Environment Override" and "Test Default" are denoted by which variables will be used based on the environment chosen.
release-notes/ai-test-automation.md (20 additions & 3 deletions)
@@ -21,22 +21,39 @@ The release notes describe recent changes to Harness AI Test Automation.
:::
## September 2025
- ### New Features
+ ### 2025.09.v2
+
+ #### Enhancements and Bug Fixes
+ * **CLI Download for Test Results**: Quickly download CSV and JSON files from the CLI to get all the test results in a single file, just by clicking the link shown in the Python CLI itself after the test is run.
+ * **Better Gzip Debugging**: Troubleshooting compression-related issues is now easier with enhanced debugging support.
+ * **Timezone Accuracy for Indonesia (WIB)**: Fixed an issue where some timezone abbreviations were not recognized. Scheduling and reporting now correctly reflect local time in Indonesia, preventing errors.
+ * **Improved Filter Visibility**: Active filters now appear as chips, giving you a clear view of the criteria applied when exploring test data.
+ * **Fail Tasks Immediately on AI Command or Fast Task Errors**: Tasks now properly fail if AI Commands or Fast Tasks encounter errors. Previously, failures were only flagged as warnings, which could cause confusion.
+ * **Aligned Date Selection**: The start and end dates now default correctly and remain consistent in the interface, improving accuracy in reports and dashboards.
+ * **Overseer Task Completion Fix**: Overseer now completes tasks reliably, reducing delays caused by screenshot-based prioritization.
+ * **Smarter Element Selection**: Relicx-specific ID attributes are now ignored in `smartselector`, ensuring more reliable element detection and reducing false positives in task execution.
+
+ ### 2025.09.v1
+
+ #### New Features
- **API Response Interception**: Added capability to intercept and analyze API responses during test execution for enhanced debugging and validation
- **Pagination Enhancement**: Added pagination options to display more than 20 items per page across test listings and results
- **CSV/JSON Content Generation Control**: Introduced configurable settings to control automatic generation of CSV and JSON content during test suite execution
- **AI-Powered Parameter Generation**: Enabled 'Generate with AI' functionality in parameter creation to support deterministic value generation for dates
- **Test Case Import with Assertions**: Added support for creating assertions and parameters during the 'Import Test Case' process
- ### Enhancements
+ #### Enhancements
- **AI Thoughts Visibility**: Enhanced AI transparency by showing AI thoughts during execution of If/elseIf commands and on assertion failures
- **Download Directory Navigation**: Added support for navigating to DOWNLOAD_DIR for improved file handling workflows
- **Copilot Step Interactivity**: Made copilot steps clickable during execution in Interactive Authoring mode
- **Screenshot Retry Logic**: Implemented automatic screenshot retry mechanism when confidence levels fall below retraining threshold
- ### Bug Fixes
+ #### Bug Fixes
- **Parameter Handling**: Fixed issues where empty string values were not being properly set in parameters
- **Cursor Position**: Resolved cursor position reset issue when entering values in input fields